diff --git a/.gitattributes b/.gitattributes index cbc89d899c8eb8ebb1b6799f9b41702175ddf5f4..c5fe0d21c2dc594b4182c2b95d9c179c6a7ecf3e 100644 --- a/.gitattributes +++ b/.gitattributes @@ -228,3 +228,4 @@ data_all_eng_slimpj/shuffled/split/split_finalac/part-19.finalac filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalac/part-12.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-04.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalaa/part-10.finalaa filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalaa/part-10.finalaa b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-10.finalaa new file mode 100644 index 0000000000000000000000000000000000000000..a731c8d24141a34e1a2f6ab07648cdadbfc78cb0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-10.finalaa @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b8630d69b5cb04535eb2d34e38c47cd06ec800f4ee726b5242d8d8a813a1a09 +size 12576681195 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzptb b/data_all_eng_slimpj/shuffled/split2/finalzptb new file mode 100644 index 0000000000000000000000000000000000000000..b1b0f5afa9008925dc540d15a7467e612ec070fc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzptb @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{}}\n\\section{INTRODUCTION}\nOPERA~\\cite{OPERA} is a long baseline neutrino experiment located\n in the Gran Sasso underground laboratory (LNGS) in Italy. The collaboration is composed\nof about 200 physicists coming from 36 institutions in 13 different countries. 
\nThe experiment is a massive hybrid detector\n with nuclear emulsions used as very precise tracking devices and electronic detectors \nto locate the neutrino interaction events in the emulsions.\n It is designed primarily to search\nfor $\\nu_{\\tau}$ appearance in the CERN high energy $\\nu_{\\mu}$ beam CNGS~\\cite{CNGS} at \n730 km from the neutrino source,\nin order to establish unambiguously the origin of the neutrino\n oscillations observed at the \"atmospheric\" $\\Delta m^{2}$ scale. The preferred hypothesis \nto describe this phenomenon is $\\nu_{\\mu} \\rightarrow \\nu_{\\tau}$ oscillation.\nCombining all presently known neutrino data, \n the best fit values of a global three flavour \nanalysis of neutrino oscillations~\\cite{Fogli2008} give, for the \n $\\nu_{\\mu} \\rightarrow \\nu_{\\tau}$ oscillation parameters, \n $\\Delta m^{2}=2.39$x$10^{-3}$$ \\mathrm{eV}^{2}$ and $\\mathrm{sin}^{2}2\\theta$=0.995.\nThe range of allowed values at 3 $\\sigma$ is \n2.06x$10^{-3} < \\Delta m^{2} <$ 2.81x$10^{-3}$$ \\mathrm{eV}^{2}$.\nIn addition to the dominant $\\nu_{\\mu}\\rightarrow\\nu_{\\tau}$ oscillation in the $\\nu_{\\mu}$ beam, it is possible that a \nsub-leading $\\nu_{\\mu}\\rightarrow\\nu_{e}$ transition occurs as well.\nThis process will also be investigated by OPERA, profiting from its \nexcellent electron identification capabilities \nto assess a possible improvement in the knowledge of the third, yet unknown, mixing angle $\\theta_{13}$.\n\nThe $\\nu_{\\tau}$ direct appearance search is based on the observation\nof events produced by charged current (CC) interactions with the\n$\\tau$ decaying in leptonic and hadronic modes.\nIn order to directly observe the $\\tau$ kinematics,\nthe principle of the OPERA experiment is to observe the $\\tau$ trajectories \nand the decay products in emulsion films composed of\ntwo thin emulsion layers (44 $\\mu$m thick) put on either side of a plastic base\n (205 $\\mu$m thick). 
The detector concept, which is described in the next section, combines\nmicrometer tracking resolution and large target mass together with good lepton identification. \nThis concept allows efficient rejection of the main topological background coming from charm production in \n$\\nu_\\mu$ charged current interactions.\n\n\n\\section{DETECTOR OVERVIEW}\n\nThe OPERA detector is installed in Hall C of the Gran Sasso underground laboratory.\nFigure~\\ref{fig:opera} shows a recent picture of the detector, which is 20 m long with a \ncross section of about 8x9 $\\mathrm{m}^{2}$ and composed \nof two identical parts called super modules (SM). Each SM has a target section and\na muon spectrometer. \n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=17cm]{opera_det.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{View of the OPERA detector in Hall C of the Gran Sasso Underground Laboratory in May 2007.}\n\\label{fig:opera}\n\\vspace{0cm}\n\\end{figure*}\n\nThe spectrometer allows a determination of the charge and momentum\nof muons going through, by measuring their curvature in a \ndipolar magnet made of 990 tons of iron providing 1.53 T transverse to the \nneutrino beam axis. Each spectrometer is equipped with six vertical planes of \ndrift tubes as a precision tracker, together with 22 planes (8x8 $\\mathrm{m}^{2}$) \nof RPC bakelite chambers reaching a spatial resolution of $\\sim$1 cm and an efficiency of 96\\%.\nThe precision tracker planes are composed of 4 staggered layers of 168 aluminium tubes,\n8 m long with 38 mm outer diameter. The spatial resolution\nof this detector is better than 500 $\\mu$m.\nThe complete spectrometer should reduce the charge confusion \nto less than 0.3\\% and give a momentum resolution better than 20\\% for momenta below\n50 GeV. The muon identification efficiency reaches 95\\% when the target\ntracker information is added for the cases where the muons stop inside the target. 
\n\nThe target section is composed of 31 light vertical supporting steel structures, called walls,\ninterleaved with double layered planes of \n 6.6 m long scintillator strips in the two transverse\ndirections. The main goals\nof this electronic detector are to provide a trigger for the\nneutrino interactions and an efficient event pattern recognition which, together with the magnetic spectrometer,\nallows a clear classification of the $\\nu$ interactions\n and a precise localisation of the event.\nThe electronic target tracker spatial resolution\nreaches $\\sim$0.8 cm with an efficiency of 99\\%.\\\\\nThe walls contain the basic target detector units, called ECC bricks, sketched in Fig.~\\ref{fig:brick},\nwhich are obtained by stacking 56 lead plates with 57 emulsion films. This structure provides\nmany advantages, like a massive target coupled to a very precise tracker, as well as a standalone\ndetector to measure electromagnetic showers and charged particle momentum using the multiple\nCoulomb scattering in the lead. The ECC concept had\nalready been successfully used for the direct $\\nu_\\tau$ observation performed in 2000 by the DONUT \nexperiment~\\cite{donut}.\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=6cm]{brick_emul3.eps}\n \\includegraphics[width=6cm]{brique.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{a) Schematic structure of an ECC cell. The $\\tau$ decay kink is\nreconstructed by using four track segments in the emulsion films. b) Picture of an assembled brick. \n Each brick weighs about 8.6 kg and has a thickness of 10 radiation lengths $X_{o}$. }\n\\label{fig:brick}\n\\vspace{0cm}\n\\end{figure*}\n\n Behind each brick, an emulsion film doublet, called Changeable Sheet (CS), is attached in \n a separate envelope. 
The CS can be detached from the brick for analysis\n to confirm and locate the tracks produced in neutrino interactions.\n\nBy the time of this conference, 146500 bricks (1.25 kton of target), assembled\n underground at an average rate of about 700 bricks\/day by\na dedicated fully automated Brick Assembly Machine (BAM) with precise robotics, had been\ninstalled in the support steel structures from the sides of the walls\nusing two automated manipulator systems (BMS) running on each side of the experiment. \\\\\nWhen a candidate brick has been located by the electronic detectors, \nthe brick is removed using the BMS and the changeable\nsheet is detached and developed. The film is then scanned to \nsearch for the tracks originating from the neutrino interaction. If none\nare found, the brick is left untouched and another\none is removed. When a neutrino \n event is confirmed, the brick is exposed to cosmic rays\nto collect enough alignment tracks before development. \nAfter development the emulsions are sent to the scanning laboratories hosting\nautomated optical microscopes in Europe and Japan, each region using\na different technology~\\cite{scan1,scan2}. This step is the \nstart of the detailed analysis, consisting of finding the neutrino vertex\nand looking for a decay kink topology in the vertex region. 
\n\n\n\\section{THE CNGS BEAM STARTUP}\n The CNGS neutrino beam~\\cite{CNGS} is a high energy $\\nu_\\mu$ beam optimised\nto maximise the number of $\\nu_\\tau$ charged current interactions at Gran Sasso produced\nby the oscillation mechanism at the atmospheric $\\Delta m^{2}$.\nThe mean neutrino energy is about 17 GeV with a contamination of 2.4\\% $\\overline{\\nu}_{\\mu}$,\n 0.9\\% $\\nu_{e}$ and less than 0.06\\% of $\\overline{\\nu}_{e}$.\nUsing the CERN SPS accelerator in a shared mode with fixed target experiments and the LHC, \n4.5x$10^{19}$ protons on target (pot) per year should normally be delivered,\n assuming 200 days of operation.\nThe numbers of charged current and neutral current interactions expected in the Gran Sasso\nlaboratory from $\\nu_\\mu$\nare then about 2900 \/kton\/year and 875 \/kton\/year respectively.\nIf the $\\nu_{\\mu}\\rightarrow\\nu_{\\tau}$ oscillation hypothesis is confirmed, \nthe number of $\\tau$'s produced via\ncharged current interactions at Gran Sasso should be\nof the order of 14 \/kton\/year \nfor $\\Delta m^{2}=$2.5x$10^{-3}$$ \\mathrm{eV}^{2}$ at full mixing. \n\nA first short CNGS run took place in August 2006. The OPERA target was empty at that time\nbut the electronic detectors were taking data.\nDuring this run, 319 events correlated in time with the beam and coming from neutrino \ninteractions in the surrounding rock and inside the detector were recorded. 
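\nAs a rough illustration of the rates quoted above (our own back-of-the-envelope arithmetic, before any detection efficiency), a target of mass $M$ (in kton) exposed to one nominal year would collect about\n$$N_{CC+NC} \\simeq (2900+875)\\,M \\simeq 3775\\,M \\ \\mathrm{interactions\/year},$$\nso the full 1.25 kton target corresponds to roughly 4700 $\\nu_\\mu$ interactions per nominal year, of which only $\\sim 18$ would be $\\tau$ production events at full mixing.\n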
\n The delivered intensity corresponded to $7.6\\mathrm{x}10^{17}$ pot, with a \npeak intensity of $1.7$x$10^{13}$ pot per extraction corresponding to 70\\% of the expected nominal value.\n The reconstructed zenith angle distribution from penetrating muon tracks showed a clear\npeak centered around $3.4^{\\circ}$, as expected for neutrinos originating from CERN.\nDetails and results can be found in Ref.~\\cite{cngs2006}.\n\n\\section{FIRST NEUTRINO EVENTS AND DETECTOR PERFORMANCES}\nA second CNGS physics run took place in October\n2007 with a total of $8.24\\mathrm{x}10^{17}$ pot delivered and 369 reconstructed beam related events.\nSelection criteria similar to those of the 2006 analysis~\\cite{cngs2006},\n based on GPS timing systems and synchronisation between OPERA and CNGS, were used\nto select events compatible with the CNGS proton extraction time window. \n The OPERA target was filled with 80\\% of the first\nsupermodule, corresponding to a total target mass of 0.5 kton. \nAmong the selected beam events, 38 were recorded and reconstructed inside the OPERA target,\nfor 31.5$\\pm$6 expected.\nAmong them, 29 were classified as Charged Current (CC) and 9\nas Neutral Current (NC), in agreement with expectation. \nFor each event the electronic detector hits were used to find the most probable\nbrick where the neutrino interaction may have occurred. \nThe left part of Figure~\\ref{fig:cngs_event} shows an event display of the first neutrino \ninteraction located in the OPERA detector. The black dots represent hits in the \nelectronic detector. The event is a charged current event with a clear muon track traversing\nboth target and spectrometer sections over more than 18 m. 
The right part of the figure\nshows the result of the detailed analysis of the emulsions after scanning the identified \nbrick, where a clear reconstructed\ninteraction vertex is visible with two photon conversions compatible with \na $\\pi^{0}$ decay.\n\n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=10cm]{cngs_event.eps}\n \\includegraphics[width=7cm]{vertex.eps}\n\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{a) Charged current neutrino interaction recorded in OPERA. The event display \nshows the hits left in the electronic detectors. b) Emulsion reconstruction of the \nneutrino interaction vertex in the corresponding target brick.}\n\\label{fig:cngs_event}\n\\vspace{0cm}\n\\end{figure*}\n\nThe extensive study of the recorded events has confirmed the OPERA\nperformance and the validity of the methods and algorithms used, which provide, for example, an\nimpact parameter resolution of the order of a few microns, particle momentum estimation and \nshower detection for e\/$\\pi$ separation.\n Figure~\\ref{fig:charm2007} shows the longitudinal and \ntransverse views of another reconstructed event vertex where a clear decay topology, similar to what is expected\nfrom a $\\tau$ decay, is visible. However, the presence of a prompt muon attached to the primary vertex and the momentum\nbalance in the transverse plane are in favour of a $\\nu_\\mu^{CC}$ interaction producing a charm particle. \n \n\\begin{figure*}[htb]\n\\vspace{-0.3cm}\n\\begin{center}\n \\includegraphics[width=9cm]{charm2007_v2.eps}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{Longitudinal and transverse view of a reconstructed neutrino interaction vertex with a charm\ndecay candidate topology.}\n\\label{fig:charm2007}\n\\vspace{0cm}\n\\end{figure*}\n\n\n\\section{CONCLUSIONS}\n\nThe OPERA detector is complete and now massive, with 1.25 kton of lead-emulsion target offering\na huge and precise tracking device. 
\nWith the cosmic data taking and the first CNGS neutrino runs in 2006 and 2007,\nthe design goals and detector performances\nwere reached and the first levels of the reconstruction software and analysis tools were validated.\nThe observation in 2007 of 38 neutrino events in the target bricks, with the \nlocalization and reconstruction of neutrino vertices in emulsions, was an important phase which\nsuccessfully validated the OPERA detector concept.\\\\\nWith the full OPERA target now in place, the next important step is the 2008 CNGS neutrino run, which\nalready started in June.\nAbout $2.28$x$10^{19}$ pot are expected in 123 days of SPS running, assuming a nominal \nintensity of $2$x$10^{13}$ pot\/extraction. This intensity, when reached, should lead\n to about 20 neutrino interactions\/day in the target and eventually the \nobservation of the first $\\tau$ event candidate.\\\\\nIn 5 years of CNGS running at 4.5x$10^{19}$ pot per year, OPERA should be able \nto observe 10 to 15 $\\nu_\\tau$ events after oscillation at\nfull mixing in the range $2.5$x$10^{-3} < \\Delta m^{2} <$ 3x$10^{-3}$ $\\mathrm{eV}^{2}$,\nwith a total background of less than 0.76 events.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} The combination of control theory techniques with quantum mechanics (see, e.g., \\cite{AltaTico}, \\cite{Mikobook}, \\cite{Optrev}) has generated a rich set of control algorithms for quantum mechanical systems modeled by the \nSchr\\\"odinger (operator) equation \n\\be{Scrod11}\n\\dot X=AX+ \\sum_{j=1}^m B_j u_jX, \\qquad X(0)={\\bf 1}. \n\\end{equation} \nHere we assume that we have a finite dimensional model: $A$ and $ B_j$ are matrices in $\\mathfrak{su}(n)$ for each $j=1,...,m$, $X$ is the unitary propagator, which is equal to the identity ${\\bf 1}$ at time zero, and $u_j$ are the controls. 
These are usually electromagnetic fields, constant in space but possibly time-varying, which are the output of an appropriately engineered pulse-shaping device. Many of the proposed algorithms in the literature involve control functions which are only piecewise continuous and in fact have `jumps' at certain points of the interval of control. For example, control algorithms based on {\\it Lie group decompositions} (see, e.g., \\cite{Rama1}) involve `switches' between different Hamiltonians; algorithms based on \n{\\it optimal control}, even if they produce smooth control functions, often require a jump at the beginning of the control interval in order for the control to achieve the prescribed value in norm (assuming a bound in norm of the optimal control as in \\cite{Bosca1}). Besides the practical problem of generating (almost) instantaneous switches with pulse shapers, such discontinuities introduce undesired high frequency \ncomponents in the dynamics of the controlled system. For these reasons, it is important to have algorithms which produce {\\it smooth} control functions whose values at the beginning and the end of the control interval are equal to zero. \n\nThis paper describes a method to design control functions without discontinuities in order to drive the state of a class of quantum systems of the form (\\ref{Scrod11}) to an arbitrary final configuration. Our main example of application will be the simultaneous control of two quantum bits in zero field NMR, a system which was also considered in \\cite{Xinhua} in the context of optimal control. As compared to that paper, we abandon here the requirement of time optimality (under the requirement of bounded norm for the control) but introduce a novel \nmethod which allows us more flexibility in the control design. The result is a control algorithm that does not present discontinuities, with the control equal to zero at the beginning and at the end of the control interval. 
\n\nThe paper is organized in two main sections, each of which is divided into several subsections. In Section \\ref{GenTheor} we describe the class of systems we consider and the general theory underlying our method. We also present two simple examples of quantum systems where the theory applies. In Section \\ref{App2QB} we detail the application to the system of two spin $\\frac{1}{2}$ particles in zero field NMR mentioned above. This section includes a description of the model as well as the explicit numerical treatment of a control problem: the independent control of the two spin $\\frac{1}{2}$ particles to two different types of Hadamard gates. \n\n\n\n\\section{General Theory}\\label{GenTheor}\n\n\\subsection{Class of systems considered}\\label{Classo} \n\nConsider the class of control systems (\\ref{Scrod11}) with $A$ and $ B_j$, $j=1,...,m$, in $\\mathfrak{su}(n)$ and let ${\\cal L}$ denote the Lie algebra generated by $\\{A, B_1,...,B_m\\}$. We assume that ${\\cal L}$ is semisimple, which implies, since ${\\cal L} \\subseteq \\mathfrak{su}(n)$, that the associated Lie group $e^{\\cal L}$ is compact. The Lie algebra ${\\cal L}$ is called, in quantum control theory, the {\\it dynamical Lie algebra} associated with the system (\\ref{Scrod11}). Since $e^{\\cal L}$ is compact, the Lie group $e^{\\cal L}$ is the set of states for (\\ref{Scrod11}) reachable by changing the control \\cite{Suss}. In particular if ${\\cal L}=\\mathfrak{su}(n)$, the system is said to be {\\it controllable} because every special unitary matrix can be obtained with an appropriate control. These are known facts in quantum control theory (see, e.g., \\cite{Mikobook}). We assume that ${\\cal L}$ has a (vector space) decomposition ${\\cal L}={\\cal K} \\oplus {\\cal P}$, such that $[{\\cal K}, {\\cal K}] \\subseteq {\\cal K}$, i.e., ${\\cal K}$ is a {\\it Lie subalgebra of ${\\cal L}$}, which we also assume to be semisimple so that $e^{\\cal K}$ is compact. 
Moreover $[{\\cal K}, {\\cal P}] \\subseteq {\\cal P}$. A special case is when in addition $[{\\cal P}, {\\cal P} ] \\subseteq {\\cal K}$, in which case the decomposition ${\\cal L}={\\cal K} \\oplus {\\cal P}$ defines a symmetric space of $e^{\\cal L}$ \\cite{Helgason}. We assume, in the model (\\ref{Scrod11}), that such a decomposition exists, so that $A \\in {\\cal K}$ and $\\{B_1,...,B_m\\}$ forms a basis for ${\\cal P}$. \n\n\nUnder such circumstances, we can reduce ourselves to the case $A=0$ in (\\ref{Scrod11}), i.e., to systems of the form \n\\be{Scro}\n\\dot U= \\sum_{k=1}^m \\hat u_k B_k U, \\qquad U(0)={\\bf 1}. \n\\end{equation} \nTo see this, assume that for any fixed interval $[0,t_f]$ and any desired final condition $ U_f$, we are able to find controls $\\hat u_k$ steering the state $U$ in (\\ref{Scro}) from the identity ${\\bf 1}$ to $U_f$. Let $a_{kj}=a_{kj}(t)$, $k,j=1,...,m$, be the coefficients forming an $m \\times m$ orthogonal matrix, so that, for any $j=1,...,m$, \n\\be{transforB}\ne^{-At} B_j e^{At}=\\sum_{k=1}^m a_{kj}(t) B_k. \n\\end{equation} \nLet $X_f$ be the desired final condition for (\\ref{Scrod11}) and $\\hat u_k$ be the controls steering the state $U$ of system (\\ref{Scro}) from the identity ${\\bf 1}$ to $e^{-At_f} X_f$, in time $t_f$. Then the controls $u_j$ obtained by inverting \n\\be{tobeinverted}\n\\hat u_k(t):=\\sum_{j=1}^m a_{kj}(t)u_j(t), \n\\end{equation}\nsteer the state $X$ of (\\ref{Scrod11}) from the identity to $X_f$. This follows from the fact that, if $U=U(t)$ is the solution of (\\ref{Scro}) with the controls $\\hat u_k$, and final condition $e^{-At_f} X_f$, then $X=e^{At}U$ is a solution of (\\ref{Scrod11}), with the controls $u_j$ given by (\\ref{tobeinverted}), and therefore the final condition at $t_f$ is $X_f$. 
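\nAs a simple illustration of (\\ref{transforB}) (for an assumed single spin model, not one treated in this paper), take $A=\\frac{i\\omega}{2}\\sigma_z$, $B_1=\\frac{i}{2}\\sigma_x$ and $B_2=\\frac{i}{2}\\sigma_y$, with $\\sigma_{x,y,z}$ the Pauli matrices. The standard commutation relations give\n$$e^{-At} B_1 e^{At}=\\cos(\\omega t) B_1+\\sin(\\omega t) B_2, \\qquad e^{-At} B_2 e^{At}=-\\sin(\\omega t) B_1+\\cos(\\omega t) B_2,$$\nso that $\\{a_{kj}(t)\\}$ is a rotation matrix by the angle $\\omega t$ and the transformation (\\ref{tobeinverted}) is the familiar passage to the rotating frame.\n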
Notice that the transformation (\\ref{tobeinverted}) does not modify the smoothness properties of the control, nor does it modify the fact that the control is zero at the beginning and at the end of the control interval (or at any other point). \nTherefore in the following we shall deal with {\\it driftless systems} of the form (\\ref{Scro}) with the Lie algebra structure ${\\cal L}={\\cal K} \\oplus {\\cal P}$ described above. In particular $\\{B_1,...,B_m\\}$ forms a basis for ${\\cal P}$. \n\n\\subsection{Symmetry reduction}\n \nThe compact Lie group $e^{\\cal K}$ can be seen as a Lie \ntransformation group which acts on $e^{\\cal L}$ via conjugation $X \\in e^{\\cal L} \\rightarrow KXK^{-1}$, $ K \\in e^{\\cal K}$. Moreover this is a {\\it group of symmetries} for system (\\ref{Scro}) \nin the sense that $K B_j K^{-1} \\in {\\cal P}$ for each $j$, for every $K \\in e^{\\cal K}$. In particular let $K B_j K^{-1} :=\\sum_{k=1}^m a_{kj} B_k$ for an orthogonal matrix $\\{a_{kj}\\}$ depending on $K \\in e^{\\cal K}$ (cf. (\\ref{transforB})). If $U=U(t)$ is a trajectory corresponding to controls $\\hat u_k$, then $KUK^{-1}$ is the trajectory \ncorresponding to controls $u_k:=\\sum_{j=1}^ma_{kj} \\hat u_j$, as is easily seen from (\\ref{Scro}) and \n$$\nK\\dot U K^{-1}=\\sum_{j=1}^m \\hat u_j KB_j K^{-1} KUK^{-1}= \\sum_{j=1}^m \\hat u_j \\left( \\sum_{k=1}^m a_{kj} B_k \\right) KUK^{-1}=$$\n$$=\\sum_{k=1}^m \\left( \\sum_{j=1}^m a_{kj} \\hat u_j \\right) B_k KUK^{-1}=\\sum_{k=1}^m u_k B_k KUK^{-1}. \n$$ \nThis suggests treating the control problem on the {\\it quotient space} $e^{\\cal L}\/e^{\\cal K}$ corresponding \nto the above action of $e^{\\cal K}$ on $e^{\\cal L}$. 
\n \n \nFrom the theory of Lie transformation groups (see, e.g., \\cite{Bredon}) we know that the quotient space $e^{\\cal L}\/e^{\\cal K}$ has the structure of a {\\it stratified space} where each stratum corresponds to an {\\it orbit type}, i.e., a set of points in $e^{\\cal L}$ which have conjugate isotropy groups. The stratum corresponding to the smallest possible isotropy group, $K_{min}$, is known to be a connected manifold which is {\\it open and dense} in $e^{\\cal L}\/e^{\\cal K}$. We denote it here by $e^{\\cal L}_{reg}\/e^{\\cal K}$, where $reg$ stands for the {\\it regular} part. Its preimage in $e^{\\cal L}$, $e^{\\cal L}_{reg}$, under the natural projection $\\pi \\, : \\, e^{\\cal L} \\rightarrow e^{\\cal L}\/e^{\\cal K}$ is open and dense in $e^{\\cal L}$ \nas well. This is called the {\\it regular part} of $e^{\\cal L}\/e^{\\cal K}$, (resp. $e^{\\cal L}$). The complementary set in $e^{\\cal L}\/e^{\\cal K}$, (resp. $e^{\\cal L}$) is called the {\\it singular part}. The dimension of $e^{\\cal L}_{reg}\/e^{\\cal K}$ as a manifold is \n\\be{dimensio}\n\\dim (e^{\\cal L}_{reg}\/e^{\\cal K}) =\\dim (e^{\\cal L})-\\dim (e^{\\cal K})+\\dim K_{min}=\\dim ({\\cal L})-\\dim ({\\cal K})+(\\dim K_{min}), \n\\end{equation} \nwhere $\\dim ( K_{min})$ is the dimension of the minimal isotropy group as a Lie group.\\footnote{More discussion on these basic facts in the theory of Lie transformation groups can be found in \\cite{NOIJDCS} and references therein.} In particular, if \n$K_{min}$ is a {\\it discrete Lie group}, i.e., it has dimension zero, the right hand side of (\\ref{dimensio}) is the dimension of the subspace ${\\cal P}$. This is verified for instance in $K-P$ problems (cf., e.g., \\cite{conBenECC}) when $e^{\\cal L}=SU(n)$. We shall assume this to be the case in the following. 
\n\nAccording to a result in \\cite{conBenECC}, under the assumption that the minimal isotropy group $K_{min}$ is discrete, the restriction of $\\pi_*$ to $R_{x*} {\\cal P}$ is an isomorphism onto $T_{\\pi(x)} \\left( e^{\\cal L}_{reg}\/{e^{\\cal K}} \\right)$ for each point $x$ in the regular part, $e^{\\cal L}_{reg}$. Here, as it is often done, we have identified the Lie algebra ${\\cal L}$ with the tangent space of $e^{\\cal L}$ at the identity ${\\bf 1}$, and therefore ${\\cal P}$ is identified with a subspace of the tangent space at $\\bf{1}$. The map $R_x$ denotes the {\\it right translation} by $x$ so that $R_{x*}{\\cal P}$ is a subspace (with the same dimension) of the tangent space at $x$, $T_x {e^{\\cal L}}$.\\footnote{Recall that for a map $f \\, : \\, M \\rightarrow N$ for two manifolds $M$ and $N$, $f_*$ denotes the {\\it differential} (also called {\\it push-forward}) $f_* \\, : \\, T_xM \\rightarrow T_{f(x)}N$ between two tangent spaces. When we want to emphasize the point $x$ we write $f_*|_x$. } In Appendix B, we show that in given coordinates the determinant of the restriction of $\\pi_*$ to $R_{x*}{\\cal P}$ is invariant under the action of $e^{\\cal K}$. The above isomorphism result says that in the regular part $\\det \\pi_* \\not=0$. In this situation, for every regular point $U \\in e^{\\cal L}$, for every tangent vector $V \\in T_{\\pi(U)}(e^{\\cal L}_{reg}\/e^{\\cal K})$, we can find a tangent vector $\\pi_*^{-1} V \\in R_{U*} {\\cal P}$. Such a tangent vector is {\\it horizontal} for system (\\ref{Scro}) which means that it can be written as a linear combination of the available vector fields $\\{ B_k U \\}$ in (\\ref{Scro}). If $\\Gamma=\\Gamma(t)$ is a curve entirely contained in $e^{\\cal L}_{reg}\/e^{\\cal K}$ and $U=U(t)$ a curve in $e^{\\cal L}_{reg}$ such that $\\pi(U(t))=\\Gamma(t)$ for every $t$, i.e., $U$ is a `{\\it lift}' of $\\Gamma$, then $\\pi_*|_{U}^{-1} \\dot \\Gamma$ is a horizontal tangent vector at $U(t)$ for every $t$. 
If $\\Gamma$ joins two points $\\Gamma_0$ and $\\Gamma_1$ in $e^{\\cal L}_{reg}\/e^{\\cal K}$, in the interval $[t_0,t_1]$, and $U_0$ is such that $\\pi(U_0)=\\Gamma_0$, then the solution of the differential system \n\\be{diffsys}\n\\dot U=\\pi_*|_{U}^{-1} \\dot \\Gamma, \\qquad U(t_0)=U_0, \n\\end{equation} \nis such that $\\pi(U(t_1))=\\Gamma_1$. Therefore, once we prescribe an arbitrary trajectory $\\Gamma$ to move in the quotient space between two given orbits $\\Gamma_0$ and $\\Gamma_1$ in the regular part, the control specified by \n\\be{deficontr}\n\\pi_*|_{U}^{-1} \\dot \\Gamma=\\sum_{j=1}^m u_j B_j U\n\\end{equation}\n will allow us to move between two states $U_0$ and $U_1$ such that $\\pi(U_0)=\\Gamma_0$ and $\\pi(U_1)=\\Gamma_1$. \n \n \n \\subsection{Methodology for Control} \n The above treatment suggests a general methodology to design control laws \n for systems of the form (\\ref{Scro}) described in subsection \\ref{Classo}. In fact, \n given the freedom in the choice of the trajectory $\\Gamma=\\Gamma(t)$ above mentioned, \n we can design such controls satisfying various requirements and in particular without discontinuity. Such a methodology can be summarized as follows. \n \n First of all we need to obtain a geometric description of the orbit space $e^{\\cal L}\/e^{\\cal K}$, and in particular of its regular part $e^{\\cal L}_{reg}\/e^{\\cal K}$, and verify that the minimal isotropy group, which is the isotropy group of the elements in $e^{\\cal L}_{reg}$, is discrete so that the right hand side of \n (\\ref{dimensio}) is equal to $\\dim {\\cal P}$. This is a weak assumption, easily verified in the examples that will follow and that can be proven true in several cases \\cite{conBenECC}, \\cite{BenTesi}. Then one chooses\n coordinates for the manifold $e^{\\cal L}_{reg}\/e^{\\cal K}$. These are expressed in terms of the original coordinates in \n $e^{\\cal L}$ or, more commonly, in terms of the entries of the matrices in $e^{\\cal L}$. 
Such coordinates are a complete set of {\\it independent invariants} with respect to the (conjugacy) action of the group $e^{\\cal K}$. The word `complete' here means that the knowledge of their values uniquely determines the {\\it orbit}, i.e., a point in $e^{\\cal L}_{reg}\/e^{\\cal K}$. There are $m=\\dim({\\cal P})$ of them, as this is the dimension of $e^{\\cal L}_{reg}\/e^{\\cal K}$ (cf. (\\ref{dimensio})). Once we have coordinates $\\{x^1,...,x^m\\}$, the tangent vectors $\\left \\{ \\frac{\\partial}{\\partial x^1},...,\\frac{\\partial}{\\partial x^m}\\right\\}$ at every regular point in the quotient space determine a basis of the tangent space of $e^{\\cal L}_{reg}\/e^{\\cal K}$. For any trajectory $\\Gamma$ in \n $e^{\\cal L}_{reg}\/e^{\\cal K}$,\n we can write the tangent vector $\\dot \\Gamma(t)$ as $\\dot \\Gamma(t)=\\sum_{j=1}^m \\dot x^{j}\\frac{\\partial}{\\partial x^j}$, for some functions $\\dot x^j$. With this choice of coordinates, one then needs to calculate, for every regular point $U$, the matrix for $\\pi_*|_U$ as restricted to $R_{U*} {\\cal P}$ and its inverse $\\pi_*|_U^{-1}$. This allows us to implement \n formula (\\ref{deficontr}) to obtain the control from a given prescribed trajectory in the orbit space. \n \n \n We remark that there is an issue concerning the fact that our initial condition which is the identity ${\\bf 1}$ in (\\ref{Scro}) (and possibly the final desired condition) is not in the regular part of $e^{\\cal L}$. In fact the whole Lie group $e^{\\cal K}$ is the isotropy group of the identity. If we take for $\\Gamma$ a trajectory which starts from the class corresponding to the identity, the matrix corresponding to $\\pi_*$ may become singular as $t \\rightarrow 0$ and therefore it will be impossible to derive the control directly from formula (\\ref{deficontr}). 
This problem can be overcome by applying a {\\it preliminary control} which takes the state of system (\\ref{Scro}) out of the singular part and into the regular part of $e^{\\cal L}$. To avoid discontinuities, such a control is chosen to be zero at the beginning and at the end of the control interval. It takes the system to a point $U_0$ with $\\pi(U_0)=\\Gamma_0 \\in e^{\\cal L}_{reg}\/e^{\\cal K}$. Then we choose the trajectory in the quotient space $\\Gamma$ in the regular part of the quotient space which joins $\\Gamma_0$ and $\\Gamma_1$ where $\\Gamma_1$ is the orbit of the desired final condition. The control obtained through (\\ref{deficontr}) will steer system (\\ref{Scro}) to a state $\\hat U_f$ in the same orbit as the desired final condition. Therefore if $U_f$ is the desired final condition we will have \n $\\pi(\\hat U_f)=\\pi(U_f)=\\Gamma_1$. Notice that we also want $\\dot \\Gamma\\to 0$ at both the initial and final point so that the control is zero and can be {\\it concatenated} continuously with the preliminary control above described. \n \n \n \n \n It is possible that the desired final condition $U_f$ is also in the singular part of $e^{\\cal L}$. This problem can be tackled in two ways. We can recall that the regular part is open and dense and therefore we can always drive to a state in the regular part arbitrarily close to the desired $U_f$. This means that our algorithm will only give an approximate control, but which will steer the system arbitrarily close to $U_f$. Alternatively we can select a regular element $S \\in e^{\\cal L}$ and such that $U_f S^{-1}$ is also regular.\\footnote{Such an element $S\\in e^{\\mathcal{L}}_{\\text{reg}}$ always exists for any $U\\in e^{\\cal L}$, by the following argument: Assume that it does not exist. Then for every regular $S$, $US$ is singular. Therefore, by indicating by $L_U$ the left translation by $U$ we have, $L_U(e^{\\mathcal{L}}_{\\text{reg}})\\subseteq e^{\\mathcal{L}}_{\\text{sing}}$. 
Applying the unique bi-invariant Haar measure $\\mu$ on $e^{\\cal L}$, normalized so that $\\mu(e^{\\cal L})=1$, we obtain $\\mu(e^{\\mathcal{L}}_{\\text{reg}})=\\mu(L_U(e^{\\mathcal{L}}_{\\text{reg}}))\\leq\\mu(e^{\\mathcal{L}}_{\\text{sing}})$. On the other hand, $\\mu(e^{\\mathcal{L}}_{\\text{sing}})=0$ since $\\mu$ must also correspond to the Riemannian volume of the bi-invariant Killing metric (normalized if necessary) and each stratum in $e^{\\mathcal{L}}_{\\text{sing}}$ has dimension strictly less than the dimension of $e^{\\cal L}$ and thus has volume $0$ and therefore invariant measure $0$. But $\\mu(e^{\\mathcal{L}}_{\\text{reg}})=\\mu(e^{\\mathcal{L}})-\\mu(e^{\\mathcal{L}}_{\\text{sing}})=\\mu(e^{\\mathcal{L}})=1$. This is a contradiction.} Then we find two controls: $u_1$ driving $U$ in (\\ref{Scro}) from the identity to $S$, and $u_2$ driving $U$ in (\\ref{Scro}) from the identity to $U_fS^{-1}$. In particular, because of the right invariance of system (\\ref{Scro}), $u_2$ also drives $S$ to $U_f$. \n Therefore, the concatenation of $u_1$ (first) and $u_2$ (second) will drive the system to the desired final configuration. Thus, in the following, for simplicity, we shall assume that the final desired state is in the regular part. \n \n \n The (concatenated) control $\\hat u_j$ obtained from the tangent vector $\\dot \\Gamma$ at every time $t$ for a trajectory on the quotient space $\\Gamma$ (cf. (\\ref{deficontr})) will drive the state $U$ of (\\ref{Scro}) from the identity ${\\bf 1}$ only to a state $\\hat U_f$ which is in the same orbit as the desired final state $U_f$. \n There exists $K \\in e^{\\cal K}$ such \n that $K\\hat U_f K^{-1}=U_f$. Once such a $K$ is found, it determines, through \n $\\sum_{j=1}^m KB_j \\hat u_jK^{-1}= \\sum_{k=1}^m B_k u_k$, the actual control $\\{u_k\\}$ to apply.
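The effect of this conjugation can be illustrated with a small numerical sketch (a hypothetical Python illustration with $2\\times 2$ matrices and piecewise-constant controls, not the general setting of (\\ref{Scro})): if a control drives the identity to $\\hat U_f$, the conjugated control $K\\sigma K^{-1}$ drives the identity to $K\\hat U_f K^{-1}$.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def su2_exp(a, b, c):
    # Closed form of exp(i(a*sigma_x + b*sigma_y + c*sigma_z)).
    r = math.sqrt(a * a + b * b + c * c)
    if r < 1e-15:
        return [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    co, si = math.cos(r), math.sin(r) / r
    return [[co + 1j * si * c, si * (b + 1j * a)],
            [si * (-b + 1j * a), co - 1j * si * c]]

# A piecewise-constant control: su(2) components and durations (arbitrary sample values).
segments = [(0.3, 0.1, -0.2, 1.0), (-0.1, 0.4, 0.2, 1.0), (0.2, -0.3, 0.1, 1.0)]

def evolve(segs, K=None):
    # Solve dU/dt = sigma(t) U, U(0) = 1; if K is given, use K sigma K^{-1} instead.
    U = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    for a, b, c, dt in segs:
        E = su2_exp(a * dt, b * dt, c * dt)
        if K is not None:
            E = mat_mul(mat_mul(K, E), dagger(K))
        U = mat_mul(E, U)
    return U

K = su2_exp(0.0, 0.0, 0.7)               # an element of the symmetry group
Uf = evolve(segments)                    # state reached by the control sigma
Uf_conj = evolve(segments, K)            # state reached by K sigma K^{-1}
KUfK = mat_mul(mat_mul(K, Uf), dagger(K))
err = max(abs(Uf_conj[i][j] - KUfK[i][j]) for i in range(2) for j in range(2))
```

The identity holds segment by segment, since $K e^{A\\,dt} K^{-1}=e^{KAK^{-1}dt}$ and the conjugating factors cancel in the product.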
We remark that this transformation does not modify the smoothness properties of the control, nor the fact that it is zero at some point (in particular at the beginning and at the end of the control interval). \n\n \n\\subsection{Examples}\n \n\\subsubsection{Control of a single spin $\\frac{1}{2}$ particle}\\label{Twol}\n\nConsider the Schr\\\"odinger operator equation (\\ref{Scro}) in the form \n\\be{Scro1}\n\\dot U=\\begin{pmatrix} 0 & \\alpha(t) \\cr \n-\\alpha^*(t) & 0\n\\end{pmatrix} U, \\qquad U(0)={\\bf 1}_2, \n\\end{equation}\nwith $U$ in $SU(2)$. The complex-valued function $\\alpha$ is a \ncontrol field representing the $x$ and $y$ \ncomponents of an electromagnetic field. \nThe dynamical Lie algebra ${\\cal L}$ is $\\mathfrak{su}(2)$ and it has a decomposition \n$\\mathfrak{su}(2)={\\cal K} \\oplus {\\cal P}$ with ${\\cal K}$ the diagonal and ${\\cal P}$ the \nanti-diagonal matrices. \n The one-dimensional \nLie group of {\\it diagonal} matrices in $SU(2)=e^{\\cal L}$ is a symmetry group $e^{\\cal K}$ \nfor the above system, and the structure of the quotient space $SU(2)\/e^{\\cal K}$ is that of \na closed unit disc \\cite{NoiAutomat}. The entry $u_{1,1}$ of \n$U \\in SU(2)$, which is a complex number with absolute value $\\leq 1$, \ndetermines the orbit of the matrix $U$. The regular part \nof $SU(2)$ corresponds to matrices with $|u_{1,1}| < 1$, i.e., the interior \nof the unit disc $\\mathring D$ in the complex plane. The singular part is \nthe boundary of the unit disc. Denote by $z$ the (complex) coordinate in the \ninterior of the complex unit disc. This corresponds to two {\\it real} coordinates invariant under the action of $e^{\\cal K}$ \n (conjugation by diagonal matrices). Let $\\Gamma=\\Gamma(t)$ be a desired trajectory inside \n the unit disc, which we denote by $z_d$ in the chosen coordinates.
The tangent \nvector $\\dot \\Gamma$ in (\\ref{diffsys}) is given in complex coordinates by $\\dot{z_d} \\frac{\\partial}{\\partial z}$.\\footnote{This means $\\dot x_d \\frac{\\partial}{\\partial x} +\\dot y_d \\frac{\\partial}{\\partial y}$ where $x_d$ and $y_d$ are the real and imaginary parts of $z_d$.} In the coordinates on $SU(2)$ used in equation (\\ref{Scro}) the corresponding tangent vector for $\\dot U$ is given by (cf. (\\ref{Scro1})) $\\begin{pmatrix} 0 & \\alpha \\cr -\\alpha^* & 0 \\end{pmatrix} U$ and the value of the control $\\alpha$ is obtained by solving (\\ref{diffsys}) which gives \n\\be{diffebis}\n\\dot z_d=\\frac{d}{dt}|_{t=0} z \\left( e^{\\begin{pmatrix}0 & \\alpha \\cr -\\alpha^* & 0 \\end{pmatrix}t} U \\right), \n\\end{equation}\nwhere $z(P)$ denotes the $(1,1)$ entry of the matrix $P$. Equation (\\ref{diffebis}) gives $\\alpha=\\frac{\\dot z_d}{u_{21}}$, which, \nas expected from the isomorphism theorem of \\cite{conBenECC} recalled above, gives a one-to-one correspondence between $\\alpha$ and $\\dot z_d$ as long as $U$ is in the \nregular part of $SU(2)$, i.e., it is not diagonal, i.e., $u_{2,1} \\not=0$. \n\n\n\n\n \n\\subsubsection{Control of a three level system in the $\\Lambda$ configuration}\\label{Lambda}\nConsider a three level quantum system where the controls couple level $|1\\rangle $ to level $|2\\rangle $ and level $|1 \\rangle $ to level $|3\\rangle$ but not levels $|2\\rangle$ and $|3\\rangle$ directly. Assuming that $|1\\rangle$ is the highest energy level, the energy level diagram takes the so-called $\\Lambda$ configuration (see, e.g., \\cite{Lambda1}). The Schr\\\"odinger operator equation (\\ref{Scro}) is such that \n\\be{lambdascro}\n\\sum_{j=1}^m u_j B_j =\\sum_{j=1}^4 u_j B_j=\n\\begin{pmatrix}0 & \\alpha & \\beta \\cr -\\alpha^* & 0 & 0 \\cr \n- \\beta^* & 0 & 0 \\end{pmatrix}, \n\\end{equation}\nwith the complex control functions $\\alpha$ and $\\beta$.
Such a system admits a group of symmetries given by $e^{\\cal K}=S(U(1) \\times U(2))$, i.e., block diagonal matrices in $SU(3)$ with one block of dimension $1 \\times 1$ and one block of dimension $2 \\times 2$, and determinant equal to $1$. The Lie subalgebra ${\\cal K}$ consists of matrices in $\\mathfrak{su}(3)$ with a block diagonal structure with one block of dimension $1 \\times 1$ and one block of dimension $2 \\times 2$. The complementary subspace ${\\cal P}$ is spanned by \nanti-diagonal matrices in $\\mathfrak{su}(3)$ with the same partition of rows and columns. Such a system was studied in \\cite{ADS} in the context of optimal control and a description of the orbit space $SU(3)\/e^{\\cal K}$ was given. \nIt was shown that the regular part $SU(3)_{reg}\/e^{\\cal K}$ is \nhomeomorphic to the product of two open unit discs \n$\\mathring D_1 \\times \\mathring D_2$ in the complex plane. Up to a similarity transformation in $e^{\\cal K}=S(U(1) \\times U(2))$, a matrix $U$ in $SU(3)$ \ncan be written as \n\\be{canonicalformSU3}\nU=\\begin{pmatrix} z_1 & \\sqrt{1-|z_1|^2} & 0 \\cr \n-\\sqrt{1-|z_1|^2} & z_1^* & 0 \\cr 0 & 0 & 1 \\end{pmatrix}\n\\begin{pmatrix} 1 & 0 & 0 \\cr 0 & z_2 & w \\cr 0 & - w^* & z_2^* \\end{pmatrix}, \n\\end{equation}\nfor complex numbers $z_1,$ $z_2$ and $w$, where $|z_1|\\leq 1$ and $|z_2| \\leq 1$. Strict inequalities hold if and only if $U$ is in the regular part, in which case $z_1$ and $z_2$ can be taken as the coordinates in $SU(3)_{reg}\/e^{\\cal K}$. An alternative set of (complex) coordinates is given by $(z_1,T)$ where $T$ is the trace of the (lower) $2 \\times 2$ block of the element $U \\in SU(3)$, which is invariant (along with $z_1$) under the conjugation action of elements in $e^{\\cal K}=S(U(1) \\times U(2))$. The coordinates $(z_1,T)$ are related to the coordinates $(z_1,z_2)$ by (from (\\ref{canonicalformSU3})) $T=z_1^*z_2+z_2^*$, which is inverted as $z_2=\\frac{T^*-z_1 T}{1-|z_1|^2}$.
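The relation between the coordinate pairs $(z_1,z_2)$ and $(z_1,T)$ can be verified numerically. A minimal sketch (the sample values of $z_1$, $z_2$ and the real choice of $w$ are arbitrary, subject to $|z_2|^2+|w|^2=1$):

```python
import cmath, math

def mat_mul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# Sample coordinates in the regular part: |z1| < 1, |z2| < 1.
z1 = 0.4 + 0.3j
z2 = 0.2 - 0.5j
w = cmath.sqrt(1.0 - abs(z2) ** 2)   # a real choice of w is enough for the check
s = math.sqrt(1.0 - abs(z1) ** 2)

A = [[z1, s, 0.0], [-s, z1.conjugate(), 0.0], [0.0, 0.0, 1.0]]
B = [[1.0, 0.0, 0.0], [0.0, z2, w], [0.0, -w.conjugate(), z2.conjugate()]]
U = mat_mul3(A, B)                   # the canonical form of the matrix

T = U[1][1] + U[2][2]                # trace of the lower 2x2 block
T_formula = z1.conjugate() * z2 + z2.conjugate()
z2_back = (T.conjugate() - z1 * T) / (1.0 - abs(z1) ** 2)   # inversion formula
```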
A desired trajectory $\\Gamma$ in $SU(3)_{reg}\/S(U(1)\\times U(2))$ is written in these coordinates as $(z_{1d},T_d):=(z_{1d}(t),T_d(t))$. The associated tangent vector $\\dot \\Gamma$ in (\\ref{diffsys}) is \n $\\dot z_{1d}(t) \\frac{\\partial}{\\partial z_{1d}}+\\dot T_{d} \\frac{\\partial}{\\partial T}$. By applying $\\pi_*|_{U}\\dot U$ in (\\ref{diffsys}) to $z_1$ and $T$ with the restriction that $\\dot U$ is of the form $\\begin{pmatrix}0 & \\alpha & \\beta \\cr -\\alpha^* & 0 & 0 \\cr \n- \\beta^* & 0 & 0 \\end{pmatrix}U$, we obtain two equations for $\\alpha$ and $\\beta$, \n$$\n\\alpha u_{2,1}+\\beta u_{3,1}=\\dot z_{1d}, \\qquad -\\alpha u_{1,2}^*-\\beta u_{1,3}^*=\\dot T_{d}^*. \n$$ \nThese are solved, using $\\hat D:=u_{1,3}^*u_{2,1}-u_{3,1} u_{1,2}^*$, by \n\\be{alphabetasol}\n\\alpha=\\frac{u_{1,3}^* \\dot z_{1,d}+ u_{3,1} \\dot T_d^*}{\\hat D}, \\qquad \n\\beta=-\\frac{u_{1,2}^* \\dot z_{1,d}+ u_{2,1} \\dot T_d^*}{\\hat D}. \n\\end{equation}\n\nThe quantity $\\hat D$ is different from zero if and only if the matrix $U$ is in the regular part of $SU(3)$. This can be shown in two steps: First one shows that $\\hat D$ is invariant under the action of $S(U(1) \\times U(2))$ by writing a matrix in \n$S(U(1) \\times U(2))$ with an Euler-type decomposition \nas $F_1 R F_2$ with $F_1$ and $F_2$ diagonal and $R$ of the form \n$\nR=\\begin{pmatrix} 1 & 0 & 0 \\cr \n0 & \\cos(\\theta) & \\sin (\\theta) \\cr \n0 & -\\sin(\\theta) & \\cos(\\theta) \n\\end{pmatrix}, \n$\nand verifying that conjugation by each factor in $F_1 R F_2$ leaves $\\hat D$ unchanged. The second step is to verify that $\\hat D$ for the matrix $U$ in the form \n(\\ref{canonicalformSU3}) is different from zero if and only if $|z_1| \\not=1$ and \n$|z_2| \\not=1$. This gives a quick \ntest to check whether a matrix is in the regular part, i.e., whether its isotropy group is the smallest possible one, which, in this case, is the group of scalar matrices in $SU(3)$.
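Formula (\\ref{alphabetasol}) can be sanity-checked by substituting the computed $\\alpha$ and $\\beta$ back into the two linear equations. A minimal sketch (the regular sample point and the sample velocities are arbitrary choices):

```python
import cmath, math

def mat_mul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# A regular point of SU(3), built from the canonical form with |z1| < 1, |z2| < 1.
z1, z2 = 0.3 + 0.2j, -0.1 + 0.4j
w = cmath.sqrt(1.0 - abs(z2) ** 2)
s = math.sqrt(1.0 - abs(z1) ** 2)
U = mat_mul3([[z1, s, 0], [-s, z1.conjugate(), 0], [0, 0, 1]],
             [[1, 0, 0], [0, z2, w], [0, -w.conjugate(), z2.conjugate()]])

zdot, Tdot = 0.05 - 0.02j, -0.03 + 0.04j    # prescribed velocities of z_1 and T

D_hat = U[0][2].conjugate() * U[1][0] - U[2][0] * U[0][1].conjugate()
alpha = (U[0][2].conjugate() * zdot + U[2][0] * Tdot.conjugate()) / D_hat
beta = -(U[0][1].conjugate() * zdot + U[1][0] * Tdot.conjugate()) / D_hat

# Residuals of the two linear equations for alpha and beta.
res1 = abs(alpha * U[1][0] + beta * U[2][0] - zdot)
res2 = abs(-alpha * U[0][1].conjugate() - beta * U[0][2].conjugate() - Tdot.conjugate())
```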
\n{This fact also follows from the result in Appendix B which shows in general that $\\det(\\pi_*)$, with $\\pi_* :\\, R_{p *}{\\cal P} \\rightarrow T_{\\pi(p)} e^{\\cal L}_{reg}\/e^{\\cal K}$ is invariant under the action of $e^{\\cal K}$. }\n\nAs always, we have the problem that the initial point ${\\bf 1}$ is in the singular part of the orbit space decomposition and therefore $\\hat D$ in (\\ref{alphabetasol}) is zero at time $0$. As suggested in the previous section, we can however apply a preliminary control to \nsteer away from the singular part. \n\n\n\n\n \n \n\n\n \n\n\\section{Simultaneous control of two independent spin $\\frac{1}{2}$ particles}\\label{App2QB}\n\n\\subsection{The model}\nThe dynamics of two spin $\\frac{1}{2}$ particles with different gyromagnetic ratios in zero field NMR can be described by the Schr\\\"odinger equation (\\ref{Scro}) (after appropriate normalization) where \n\\be{Hamilto}\n\\sum_{k=1}^m \\hat u_k B_k:=\\sum_{x,y,z} u_{x,y,z}(t)( i\\sigma_{x,y,z} \\otimes {\\bf 1}+ \\gamma\n i {\\bf 1} \\otimes \\sigma_{x,y,z}). \n\\end{equation} \nHere $u_{x,y,z}$ are the controls representing the $x,y,z$ components of the electromagnetic field, and $\\sigma_{x,y,z}$ are the Pauli matrices defined as \n\\be{Pauli}\n\\sigma_x:= \\begin{pmatrix} 0 & 1 \\cr 1 & 0 \\end{pmatrix}, \\qquad\n\\sigma_y:= \\begin{pmatrix} 0 & -i \\cr i & 0 \\end{pmatrix}, \\qquad\n\\sigma_z:= \\begin{pmatrix} 1 & 0 \\cr 0 & -1 \\end{pmatrix}. \\qquad\n\\end{equation}\nThe parameter $\\gamma$ is the ratio of the two gyromagnetic ratios and \nwe shall assume that $|\\gamma|\\not=1$. 
Under this assumption, the dynamical Lie algebra ${\\cal L}$ for system (\\ref{Scro}), (\\ref{Hamilto}) is the $6-$dimensional Lie algebra spanned by $\\{\\sigma_1 \\otimes {\\bf 1} + {\\bf 1} \\otimes \\sigma_2 \\, | \\, \\sigma_1, \\sigma_2 \\in \\mathfrak{su}(2)\\}$.\\footnote{This Lie algebra is isomorphic to $\\mathfrak{so}(4)$.} The corresponding Lie group $e^{\\cal L}$, which is the set of reachable states for system (\\ref{Scro}), (\\ref{Hamilto}), is $\\{X_1 \\otimes X_2 \\, | \\, X_1, X_2 \\in SU(2)\\}$, i.e. the tensor product $SU(2)\\otimes SU(2)$. It is convenient to slightly relax the description of the state space and look at system (\\ref{Scro}), (\\ref{Hamilto}) as a system on the Lie group $SU(2) \\times SU(2)$, i.e., the Cartesian direct product of $SU(2)$ with itself, and the dynamical equations (\\ref{Scro}), (\\ref{Hamilto}) \nreplaced by\n\\be{reple}\n\\dot U= \\sigma(t)U, \\quad U(0)={\\bf 1}, \\qquad \\dot V= \\gamma \\sigma(t)V, \\quad V(0)={\\bf 1}, \n\\end{equation} \nwith $\\sigma(t):=\\sum_{x,y,z} i u_{x,y,z}(t) \\sigma_{x,y,z}$. \nThe controls that drive system (\\ref{reple}) to $(\\pm U_f, \\pm V_f)$ drive system (\\ref{Scro}), (\\ref{Hamilto}) to the state $U_f \\otimes V_f$. Therefore we shall focus on the steering problem for system (\\ref{reple}) which consists of steering one spin to $U_f$ and the other to $V_f$, simultaneously. Since $|\\gamma| \\not=1$, the dynamical Lie algebra associated with (\\ref{reple}) is spanned by the pairs $(\\sigma_1, \\sigma_2)$ with $\\sigma_1$ and $\\sigma_2$ in $\\mathfrak{su}(2)$. Such a Lie algebra can be written as ${\\cal K} \\oplus {\\cal P}$ with ${\\cal K}$ spanned by elements of the form \n$(\\sigma, \\sigma)$ with $\\sigma \\in \\mathfrak{su}(2)$ and ${\\cal P}$ spanned by elements of the \nform $(\\sigma, \\gamma \\sigma)$ with $\\sigma \\in \\mathfrak{su}(2)$. At every $p \\in G=SU(2) \\times SU(2)$, the vector fields in (\\ref{reple}) belong to $R_{p*} {\\cal P}$. 
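System (\\ref{reple}) is easy to simulate. The sketch below assumes a constant control (an illustrative special case; the values and step count are arbitrary), integrates both equations with the same $\\sigma$ once scaled by $\\gamma$, and compares the results with the closed-form exponentials $(e^{\\sigma T}, e^{\\gamma \\sigma T})$:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def smul(t, A):
    return [[t * A[i][j] for j in range(2)] for i in range(2)]

def su2_exp(a, b, c):
    # Closed form of exp(i(a*sigma_x + b*sigma_y + c*sigma_z)).
    r = math.sqrt(a * a + b * b + c * c)
    co, si = math.cos(r), (math.sin(r) / r if r > 1e-15 else 1.0)
    return [[co + 1j * si * c, si * (b + 1j * a)],
            [si * (-b + 1j * a), co - 1j * si * c]]

GAMMA = 2.5
a, b, c = 0.4, -0.2, 0.3                         # a constant control u_x, u_y, u_z
S = [[1j * c, b + 1j * a], [-b + 1j * a, -1j * c]]   # i(a sx + b sy + c sz)

def rk4(scale, T=1.0, n=2000):
    # Integrate dM/dt = scale * S * M from the identity.
    M = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    dt = T / n
    A = smul(scale, S)
    for _ in range(n):
        k1 = mat_mul(A, M)
        k2 = mat_mul(A, add(M, smul(dt / 2, k1)))
        k3 = mat_mul(A, add(M, smul(dt / 2, k2)))
        k4 = mat_mul(A, add(M, smul(dt, k3)))
        M = add(M, smul(dt / 6, add(add(k1, smul(2, k2)), add(smul(2, k3), k4))))
    return M

U_num, V_num = rk4(1.0), rk4(GAMMA)
U_cf = su2_exp(a, b, c)                          # closed form for T = 1
V_cf = su2_exp(GAMMA * a, GAMMA * b, GAMMA * c)
err = max(abs(X[i][j] - Y[i][j])
          for X, Y in ((U_num, U_cf), (V_num, V_cf))
          for i in range(2) for j in range(2))
```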
\n\n\\subsection{Symmetries and the structure of the quotient space} \n\nThe Lie group $SU(2)$ acts on $G:=SU(2) \\times SU(2)$ by {\\it simultaneous conjugation}, that is, for $K \\in SU(2)$, $(U_f, V_f) \\rightarrow (K U_f K^{-1}, K V_f K^{-1})$, and this \nis a group of symmetries for system (\\ref{reple}) in that, if $\\sigma=\\sigma(t)$ is the control steering to $(U_f,V_f)$, then $K\\sigma K^{-1}$ is the control steering to $(KU_f K^{-1}, KV_f K^{-1})$. The quotient space of \n$SU(2) \\times SU(2)$ under this action, $(SU(2) \\times SU(2))\/SU(2)$, was described in \\cite{Xinhua} as follows. \n\nConsider \na pair $(U_f, V_f)$ and let $\\phi \\in [0,\\pi]$ be a real number so that the two eigenvalues of $U_f$ are $e^{i\\phi}$ \nand $e^{-i\\phi}$. If $0< \\phi < \\pi$ then $U_f \\not= \\pm {\\bf 1}$ and there exists a unitary \nmatrix $S$ such that $SU_f S^{\\dagger}:=D_f$ is diagonal. Therefore the pair $(U_f, V_f)$ is in the same orbit as $(D_f, SV_f S^\\dagger)$. The parameter $\\phi$ determines the orbit, along with the $(1,1)-$entry of $SV_f S^\\dagger$, which does not depend on the choice of $S$.\\footnote{All the possible diagonalizing matrices differ by a diagonal factor that does not affect the $(1,1)$ entry of $SV_f S^\\dagger$.} Such a $(1,1)$-entry has absolute value $\\leq 1$ and therefore it is an element of the unit disc in the complex plane. The orbits corresponding to the values $0< \\phi < \\pi$ (for the eigenvalues of the first matrix) are in one-to-one correspondence with the points of a solid cylinder, with $\\phi$ as the height coordinate. When $\\phi=0$ (or $\\phi=\\pi$), the matrix $U_f$ is $\\pm$ the identity and therefore the equivalence class is determined by the eigenvalues of the matrix $V_f$, which are $e^{\\pm i \\psi}$ for $\\psi\\in [0, \\pi]$. In the geometric description, the solid cylinder degenerates at the two ends to become a segment $[0,\\pi]$.
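These orbit invariants can be computed directly, and their invariance under simultaneous conjugation checked numerically. A minimal sketch (the sample matrices are arbitrary; the eigenvector of $U_f$ for the eigenvalue $e^{i\\phi}$ is obtained from the adjugate of $U_f-e^{i\\phi}{\\bf 1}$ rather than a library eigensolver):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def su2_exp(a, b, c):
    # Closed form of exp(i(a*sigma_x + b*sigma_y + c*sigma_z)).
    r = math.sqrt(a * a + b * b + c * c)
    co, si = math.cos(r), (math.sin(r) / r if r > 1e-15 else 1.0)
    return [[co + 1j * si * c, si * (b + 1j * a)],
            [si * (-b + 1j * a), co - 1j * si * c]]

def orbit_invariants(U, V):
    # phi in (0, pi) from the spectrum of U; then v^dagger V v for the unit
    # eigenvector v of U with eigenvalue e^{i phi} (independent of phase).
    cos_phi = (U[0][0] + U[1][1]).real / 2.0
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))
    lam = complex(math.cos(phi), math.sin(phi))
    v = (U[1][1] - lam, -U[1][0])        # column of adj(U - lam*1): in the kernel
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    v = (v[0] / n, v[1] / n)
    Vv = (V[0][0] * v[0] + V[0][1] * v[1], V[1][0] * v[0] + V[1][1] * v[1])
    return phi, v[0].conjugate() * Vv[0] + v[1].conjugate() * Vv[1]

U = su2_exp(0.5, -0.3, 0.8)
V = su2_exp(-0.2, 0.7, 0.1)
K = su2_exp(0.3, 0.4, -0.5)
phi1, inv1 = orbit_invariants(U, V)
phi2, inv2 = orbit_invariants(mat_mul(mat_mul(K, U), dagger(K)),
                              mat_mul(mat_mul(K, V), dagger(K)))
```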
The regular part $G_{reg}$ corresponds to the points in the interior of the solid cylinder. Such points \ncorrespond to pairs \n$$\n\\left( \\begin{pmatrix} e^{i \\phi} & 0 \\cr 0 & e^{-i\\phi} \\end{pmatrix}, \\begin{pmatrix} z & w \\cr -w^* & z^* \\end{pmatrix}\\right)\n$$ \nwith $\\phi \\in (0,\\pi)$ and $|z|<1$. For these pairs, the isotropy group is the discrete group $\\{ \\pm {\\bf 1} \\}$. In general, points in the singular part correspond to pairs of matrices $(U_f, V_f)$ which can be simultaneously diagonalized. Therefore \nthe condition that they commute, \n\\be{commut34}\nU_f V_f=V_fU_f, \n\\end{equation} \nis necessary and sufficient for a pair $(U_f,V_f)$ to belong to the singular part. \n\n\nAssume that $p$ is a regular point in $G_{reg}$ for this problem and $\\pi$ is the natural projection $\\pi: G_{reg} \\rightarrow G_{reg}\/SU(2)$. Then, from the theory in the previous section, the differential $\\pi_*|_{p}$ is an isomorphism from $R_{p*}{\\cal P}$ to $T_{\\pi(p)} G_{reg}\/SU(2)$. Let us choose a basis for ${\\cal P}$ given by $(i\\sigma_{x,y,z}, \\gamma i\\sigma_{x,y,z})$. To choose the three coordinates in $G_{reg}\/SU(2)$, we consider a general element $p$ in $SU(2) \\times SU(2)$ written as \n\\be{puntop}\np:=(U_f,V_f):=\\left( \\begin{pmatrix} x & y \\cr -y^* & x^* \\end{pmatrix}, \\begin{pmatrix} z & w \\cr \n- w^* & z^* \\end{pmatrix}\\right). \n\\end{equation} \nFor a complex number $q$ we shall denote $q_R:=\\texttt{Re}(q)$ and $q_I:=\\texttt{Im}(q)$. Notice that in (\\ref{puntop}) we have \n$$\nx_R^2+x_I^2+y_R^2+y_I^2=z_R^2+z_I^2+w_R^2+w_I^2=1. \n$$ \nCoordinates in $G_{reg}\/SU(2)$ must be independent invariant functions of $(U_f,V_f)$ in (\\ref{puntop}).
We choose \n\\be{ccordinates}\nx^1:=x_R, \\qquad x^2:=z_R, \\qquad x^3:=x_I z_I+w_R y_R+ w_I y_I.\n\\end{equation}\nIt can be checked directly that at any point $p \\in SU(2) \\times SU(2)$, $x^1$, $x^2$ and $x^3$ are unchanged by the (double conjugation) action of $SU(2)$, i.e., they are invariant. We also remark that we can consider two unit \nvectors $\\vec V:=(x_R,x_I, y_R, y_I)$, and $\\vec W:=(z_R,z_I, w_R, w_I)$, and, if we do that, \n $x^3=\\vec V \\cdot \\vec W-x_Rz_R$. \n\n\\subsection{Choice of invariants}\nWe pause a moment to detail how the invariant coordinates in (\\ref{ccordinates})\n were chosen. We do this because the method can be used for other examples. We consider the vectors $\\vec V:=[x_R, x_I, y_R, y_I]^T$ and \n $\\vec W:=[z_R, z_I, w_R, w_I]^T$ and the adjoint action of $SU(2)$ on $SU(2) \\times SU(2)$, which gives a linear action on $\\vec Q:=[\\vec V^T, \\vec W^T]^T$. We are looking for functions $f=f(\\vec V, \\vec W)$ invariant under this action. Given that every element of $SU(2)$ can be written according to Euler's decomposition as $e^{i \\sigma_z \\alpha} e^{i \\sigma_y \\theta} e^{i \\sigma_z \\beta}$, for real parameters $\\alpha, \\beta$ and $\\theta$, it is enough that $f$ is invariant with respect to transformations of the form \n$e^{i \\sigma_z \\beta}$ and $e^{i \\sigma_y \\theta}$, for general real $\\beta$ and $\\theta$, in order for $f$ to be invariant with respect to all of $SU(2)$. If $X_z:=X_z(\\beta):=e^{i \\sigma_z \\beta}$ then $Ad_{X_z}$ acting on $[\\vec V^T, \\vec W^T]^T$ is \n\\be{adXz}\nAd_{X_z(\\beta)}:=\\begin{pmatrix} 1 & 0 & 0 & 0 \\cr 0 & 1 & 0 & 0 \\cr \n0 & 0 & \\cos(\\beta) & - \\sin (\\beta) \\cr \n0 & 0 & \\sin(\\beta) & \\cos(\\beta) \n\\end{pmatrix}.
\n\\end{equation}\nIf $X_y:=X_y(\\theta):=e^{i \\sigma_y \\theta}$ then $Ad_{X_y}$ acting on $[\\vec V^T, \\vec W^T]^T$ is \n\\be{adXy}\nAd_{X_y(\\theta)}:=\\begin{pmatrix} 1 & 0 & 0 & 0 \\cr 0 & \\cos(\\theta) & 0 & \\sin(\\theta) \\cr \n0 & 0 & 1 & 0 \\cr \n0 & -\\sin(\\theta) & 0 & \\cos(\\theta) \n\\end{pmatrix}. \n\\end{equation}\nWe first look for {\\it linear invariants}, i.e., invariant functions $f$ of the form \n$f(\\vec V, \\vec W):=\\vec a^{\\, T} \\vec V + \\vec b^{\\, T} \\vec W$. From the condition\n$$\n\\vec a^{\\, T} \\vec V + \\vec b^{\\, T} \\vec W=\\vec a^{\\, T} Ad_{X}\\vec V + \\vec b^{\\, T} Ad_{X}\\vec W, \n$$ \nwhere $X$ may be equal to $X_z(\\beta)$ or $X_{y}(\\theta)$, with arbitrary $\\beta$ and $\\theta$, we find that the last three components of $\\vec a$ and $\\vec b$ must be zero. Therefore all linear invariants $f$ must be of the form \n$f=a_1 x_R+b_1 z_R$, from which we get the invariants $x_R$ and $z_R$ in (\\ref{ccordinates}). \n\n\n\nWe then try to find {\\it quadratic invariants}, and therefore an $8 \\times 8$ symmetric matrix $P$ so that\n$f(\\vec V, \\vec W)=[\\vec V^T, \\vec W^T] P [\\vec V^T, \\vec W^T]^T$ and \n$$\n[\\vec V^T, \\vec W^T] P [\\vec V^T, \\vec W^T]^T=[\\vec V^T, \\vec W^T] \\begin{pmatrix}Ad_X^T & 0 \\cr 0 & Ad_X^T \\end{pmatrix}P \\begin{pmatrix}Ad_X & 0 \\cr 0 & Ad_X \\end{pmatrix} [\\vec V^T, \\vec W^T]^T, \n$$\nfor $X=X_z(\\beta)$ and $X=X_{y}(\\theta)$ as defined in (\\ref{adXz}) and (\\ref{adXy}) for every $\\beta$ and $\\theta$ (and for every $\\vec V$ and $\\vec W$). This leads to the condition \n$$\n\\begin{pmatrix} Ad_X & 0 \\cr \n0 & Ad_X \\end{pmatrix} P=P \\begin{pmatrix} Ad_X & 0 \\cr \n0 & Ad_X \\end{pmatrix}.
\n$$ \nFrom this, we find that the matrix $P$ must be of the form \n$$\nP=\\begin{pmatrix} e & 0 & 0 & 0& d & 0& 0 & 0\\cr \n 0 & c & 0 & 0 &0 & g & 0 & 0 \\cr \n 0 & 0 & c & 0 & \n 0 & 0 & g & 0 \\cr \n 0 & 0 & 0 & c & 0 & 0 & 0 & g \\cr \n d & 0 & 0 & 0 & r & 0 & 0 & 0\\cr \n 0 & g & 0 & 0 & 0 & h & 0 & 0 \\cr \n 0 & 0 & g & 0 & 0 & 0 & h& 0 \\cr \n 0 & 0 & 0 & g & 0 & 0 & 0 & h \\end{pmatrix}. \n$$\nIt follows that all quadratic invariants must be of the form \n$$\nf=ex_R^2+2dx_R z_R + r z_R^2+c(x_I^2+y_R^2+y_I^2) + h (z_I^2+w_R^2+w_I^2)+ 2g(x_I z_I+w_R y_R+ w_I y_I). \n$$\nBecause of $x_R^2+x_I^2+y_R^2+y_I^2=z_R^2+z_I^2+w_R^2+w_I^2=1$, all terms can be written in terms of the (linear) invariant $x_R$ and $z_R$ except the last one which we choose as the third coordinate in (\\ref{ccordinates}). \n\n\\subsection{Algorithm for control}\\label{AlgoCon}\n\nAt the point $\\pi(p) \\in G_{reg}\/SU(2)$, the tangent vectors \n$\\frac{\\partial}{\\partial x^j},$ $j=1,2,3$ span $T_{\\pi(p)} G_{reg}\/SU(2)$, so that a general tangent vector at $\\pi(p)$ can be written as $a_1\\frac{\\partial}{\\partial x^1}+a_2 \\frac{\\partial}{\\partial x^2}+ a_3 \\frac{\\partial}{\\partial x^3}$. We calculate the matrix of the isomorphism $\\pi_*|_{p}$ mapping \nthe coordinates $\\alpha_x, \\alpha_y, \\alpha_z,$ in $R_{p*} (\\sigma, \\gamma \\sigma):=R_{p*}(\\alpha_x(i \\sigma_x, \\gamma i \\sigma_x) +\\alpha_y (i \\sigma_y, \\gamma i \\sigma_y)+ \\alpha_z (i \\sigma_z, \\gamma i \\sigma_z) ) \\in R_{p*} {\\cal P}$ to $(a_1,a_2,a_3)$, (cf. \n(\\ref{diffsys})). Denote this matrix by $\\Pi_{j,l}:=\\Pi_{j,l}(p)$ with $j=1,2,3$ and $l=x,y,z$. \nWe have \n$$\n\\Pi_{j,l}(p)= \\pi_{*}|_p R_{p*} (i \\sigma_l, i \\gamma \\sigma_l) x^j. \n$$\nFor the sake of illustration, let us calculate $\\Pi_{1,x}(p)$. 
This is given by (recall $p$ is defined in (\\ref{puntop})) \n$$\n\\Pi_{1,x}(p):=\\pi_{*}|_p R_{p*} (i \\sigma_x, i \\gamma \\sigma_x) x^1=R_{p*} (i \\sigma_x, i \\gamma \\sigma_x) (x^1 \\circ \\pi)=\\frac{d}{dt}|_{t=0} x^1\\circ \\pi \\left( e^{i \\sigma_x t} \\begin{pmatrix} x & y \\cr -y^* & x^*\\end{pmatrix}\\, , \\,e^{i \\gamma \\sigma_x t} \\begin{pmatrix} z & w \\cr -w^* & z^*\\end{pmatrix} \\right). \n$$\nThis simplifies because $x^1 \\circ \\pi $ does not depend on the second factor. Therefore the entry $\\Pi_{1,x}(p)$ is the derivative at $t=0$ of the real part of the $(1,1)$ entry of the matrix \n$$\ne^{i \\sigma_x t} \\begin{pmatrix} x & y \\cr -y^* & x^*\\end{pmatrix} \n= \n\\begin{pmatrix} \\cos(t) & i \\sin(t) \\cr i \\sin(t) & \\cos(t) \\end{pmatrix} \\begin{pmatrix} x & y \\cr -y^* & x^*\\end{pmatrix}.\n$$ \nThis leads to the result \n$$\n\\Pi_{1,x}(p)=-y_I. \n$$\n\n\nThe quantities \n\\be{D1D2D3}\n\\Delta_1:=z_I y_R- x_I w_R, \\qquad \\Delta_2:=z_I y_I-x_Iw_I, \\qquad \\Delta_3:=w_Ry_I-w_Iy_R, \n\\end{equation}\nappear routinely in the calculations that follow. \n\n\n\n\nSimilar calculations to the ones above for $\\Pi_{1,x}(p)$ lead to the full matrix $\\Pi(p)$, which is given by \n{\n\\be{PiofP}\n\\Pi(p):=\n\\begin{pmatrix} - y_I & -y_R & -x_I \\cr \n- \\gamma w_I & - \\gamma w_R & - \\gamma z_I \\cr \n(\\gamma -1) \\Delta_1 +\\gamma z_R y_I \n+w_I x_R \n& (1-\\gamma) \\Delta_2 \n+ w_R x_R \n+ \\gamma z_R y_R \n &\n(\\gamma-1) \\Delta_3 \n+x_R z_I \n+ \\gamma z_R x_I\n\\end{pmatrix}. \n\\end{equation} }\nThe determinant of this matrix is different from zero if and only if $p$ is in the regular part, \nand it is another invariant under the action of $SU(2)$ on $SU(2) \\times SU(2)$ (cf. Appendix B). It can be explicitly computed as \n\\be{determinante}\n\\det (\\Pi (p))=\\gamma(\\gamma-1)(\\Delta_1^2+\\Delta_2^2+\\Delta_3^2), \n\\end{equation}\nwhich can be seen to be equal to zero if and only if condition (\\ref{commut34}) is verified.
The invariant \n$\\Delta:=\\Delta_1^2+\\Delta_2^2+\\Delta_3^2$ can be expressed in terms of the (minimal) invariants $x_R$, $z_R$ and $x^3$ in (\\ref{ccordinates}) as\\footnote{This can be seen by expanding the left hand side using the definitions of $\\Delta_{1,2,3}$ (\\ref{D1D2D3}) and the right hand side using the definition of $x^3$ (\\ref{ccordinates}), so that (\\ref{delt}) reduces to $y_I^2w_R^2+w_I^2y_R^2+y_R^2 z_I^2+x_I^2 w_R^2+x_I^2 w_I^2+y_I^2 z_I^2=(1-x_R^2)(1-z_R^2)-z_I^2x_I^2-y_R^2 w_R^2-y_I^2 w_I^2$, and writing $(1-x_R^2)=x_I^2+y_R^2+y_I^2$ and $(1-z_R^2)=z_I^2+w_R^2+w_I^2$, we obtain an identity.}\n\\be{delt}\n\\Delta=\\Delta_1^2+\\Delta_2^2+\\Delta_3^2=(1-x_R^2)(1-z_R^2)-(x^3)^2. \n\\end{equation}\n\n\n\n\n\nWhen we design a control law, the components $a_1,a_2,a_3$ of the tangent vector at every time $t$ in the tangent space at $\\pi(p(t))$ are given by the derivatives $\\dot x^1$, $\\dot x^2$, $\\dot x^3$ of the desired trajectory in the quotient space. The corresponding components, $\\alpha_x$, $\\alpha_y$ and $\\alpha_z$, of the tangent vector in $R_{p(t)*}{\\cal P}$ give the appropriate control functions $(u_x, u_y, u_z)$. The matrix $\\Pi(p)$ in (\\ref{PiofP}) gives the map from the control to trajectories. Since we want to specify trajectories and compute the corresponding controls, we need the inverse of the matrix $\\Pi(p)$ (cf. (\\ref{deficontr})). 
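Both the invariance of the coordinates (\\ref{ccordinates}) under simultaneous conjugation and the identity (\\ref{delt}) are easy to confirm numerically before inverting $\\Pi(p)$; a minimal sketch with arbitrary sample points:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def su2_exp(a, b, c):
    # Closed form of exp(i(a*sigma_x + b*sigma_y + c*sigma_z)).
    r = math.sqrt(a * a + b * b + c * c)
    co, si = math.cos(r), (math.sin(r) / r if r > 1e-15 else 1.0)
    return [[co + 1j * si * c, si * (b + 1j * a)],
            [si * (-b + 1j * a), co - 1j * si * c]]

def coords(U, V):
    # The invariants x^1, x^2, x^3 read off the first rows of U and V.
    x, y, z, w = U[0][0], U[0][1], V[0][0], V[0][1]
    return (x.real, z.real,
            x.imag * z.imag + w.real * y.real + w.imag * y.imag)

def delta(U, V):
    # Delta_1^2 + Delta_2^2 + Delta_3^2 from the definitions.
    x, y, z, w = U[0][0], U[0][1], V[0][0], V[0][1]
    d1 = z.imag * y.real - x.imag * w.real
    d2 = z.imag * y.imag - x.imag * w.imag
    d3 = w.real * y.imag - w.imag * y.real
    return d1 * d1 + d2 * d2 + d3 * d3

U, V = su2_exp(0.4, 0.2, -0.6), su2_exp(-0.3, 0.5, 0.1)
K = su2_exp(0.7, -0.2, 0.3)
c_before = coords(U, V)
c_after = coords(mat_mul(mat_mul(K, U), dagger(K)),
                 mat_mul(mat_mul(K, V), dagger(K)))
x1, x2, x3 = c_before
gap = abs(delta(U, V) - ((1 - x1 ** 2) * (1 - x2 ** 2) - x3 ** 2))
inv_err = max(abs(p - q) for p, q in zip(c_before, c_after))
```

The identity is an instance of Lagrange's identity applied to the last three components of $\\vec V$ and $\\vec W$.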
This is found from (\\ref{PiofP}) to be \n\\be{Piinverse}\n\\det(\\Pi(p)) \\Pi^{-1}(p):=\n\\begin{pmatrix}\n\\gamma(\\gamma -1)(-w_R \\Delta_3 - z_I \\Delta_2) + \\gamma^2z_R \\Delta_1 &\n(\\gamma -1)(x_I \\Delta_2 + y_R \\Delta_3)+ x_R \\Delta_1 & \\gamma \\Delta_1 \\cr \n\\gamma (\\gamma -1)(w_I \\Delta_3 - z_I \\Delta_1)- \\gamma^2 z_R \\Delta_2 & \n(\\gamma -1) (x_I \\Delta_1 - y_I \\Delta_3) - x_R \\Delta_2 & \n- \\gamma \\Delta_2 \\cr \n\\gamma ( \\gamma-1) ( w_I \\Delta_2 + w_R \\Delta_1) + \\gamma^2 z_R \\Delta_3 \n& \n-(\\gamma-1)(y_I \\Delta_2 +y_R \\Delta_1) + x_R \\Delta_3 \n& \n\\gamma \\Delta_3\n\\end{pmatrix}. \n\\end{equation}\n\n\nWe remark that $\\Pi^{-1}(p)$ is not defined if we are in the singular part of the space $G=SU(2) \\times SU(2)$, as the determinant of $\\Pi$ is zero there. This is in particular true at the beginning, as the initial point $p \\in SU(2) \\times SU(2)$ is the identity. In order to follow a prescribed trajectory in the quotient space $G_{reg}\/SU(2)$, we therefore need to apply a preliminary control to drive the state to an arbitrary point in $G_{reg}$, and after that we \nshall apply the control corresponding to a prescribed trajectory in the quotient space. \n\n\n\nThe preliminary control in an interval $[0,T_1]$ to move the state from the singular part of the quotient space has to involve at least two different directions in the tangent space. In other words, if we use $\\sigma(t)=u(t) \\sigma$ for some function $u=u(t)$ and a constant matrix $\\sigma \\in \\mathfrak{su}(2)$ we remain in the singular part. To see this, notice that if \n$d:=\\int_0^{T_1} u(s)ds$, then the solution of (\\ref{reple}) will be $(U_f, V_f)=(e^{d \\sigma}, e^{\\gamma d \\sigma})$, a pair that satisfies the condition (\\ref{commut34}). Therefore the simplest control strategy of moving in one direction only will not work if we want to move the state from the singular part.
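This observation can be confirmed numerically: for a control along a single fixed direction, the resulting pair commutes and the quantity $\\Delta_1^2+\\Delta_2^2+\\Delta_3^2$ built from (\\ref{D1D2D3}) vanishes. A minimal sketch (the direction, $d$ and $\\gamma$ are arbitrary sample values):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def su2_exp(a, b, c):
    # Closed form of exp(i(a*sigma_x + b*sigma_y + c*sigma_z)).
    r = math.sqrt(a * a + b * b + c * c)
    co, si = math.cos(r), (math.sin(r) / r if r > 1e-15 else 1.0)
    return [[co + 1j * si * c, si * (b + 1j * a)],
            [si * (-b + 1j * a), co - 1j * si * c]]

GAMMA, d = 2.0, 0.8
a, b, c = 0.3, -0.5, 0.2                       # one fixed control direction sigma
U = su2_exp(d * a, d * b, d * c)               # e^{d sigma}
V = su2_exp(GAMMA * d * a, GAMMA * d * b, GAMMA * d * c)   # e^{gamma d sigma}

UV, VU = mat_mul(U, V), mat_mul(V, U)
comm_err = max(abs(UV[i][j] - VU[i][j]) for i in range(2) for j in range(2))

# Delta_1^2 + Delta_2^2 + Delta_3^2: zero exactly on the singular part.
x, y, z, w = U[0][0], U[0][1], V[0][0], V[0][1]
Delta = ((z.imag * y.real - x.imag * w.real) ** 2
         + (z.imag * y.imag - x.imag * w.imag) ** 2
         + (w.real * y.imag - w.imag * y.real) ** 2)
```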
Furthermore, we want $u_x(0)=u_y(0)=u_z(0)=0$ and $u_x(T_1)=u_y(T_1)=u_z(T_1)=0$ to avoid discontinuities at the initial time $t=0$ and at the time of concatenation with the second portion of the control. We propose to prescribe a trajectory \nfor $U=U(t)$ in (\\ref{reple}) and, from that trajectory, to derive the control to be used in the equation for $V=V(t)$ in (\\ref{reple}). We choose a smooth function $\\delta:=\\delta(t)$ such that \n$\\delta(0)=0$ and $\\delta(T_1)=\\delta_0 \\not=0 $, and $\\dot \\delta(0)=\\dot \\delta(T_1)=0$. \nWe also choose a smooth function $\\epsilon:=\\epsilon(t)$, with $\\epsilon(0)=\\epsilon_0 \\not=0 $ and $\\epsilon(T_1)=0$, and $\\dot \\epsilon(0)=\\dot \\epsilon(T_1)=0$. We choose for $U=U(t)$ in (\\ref{reple}) \n\\be{Uoft}\nU(t)=\\begin{pmatrix} \\cos(\\delta(t)) & e^{i \\epsilon(t)} \\sin(\\delta(t)) \\cr \n-e^{-i\\epsilon(t)} \\sin(\\delta(t)) & \\cos(\\delta(t)) \\end{pmatrix}, \n\\end{equation}\nwhich at time $T_1$ gives \n\\be{Poil}\nU(T_1)=\\begin{pmatrix} \\cos(\\delta_0) & \\sin(\\delta_0) \\cr \n-\\sin(\\delta_0) & \\cos(\\delta_0) \\end{pmatrix}. \n\\end{equation}\nThe corresponding control $\\sigma$ is $\\sigma(t)=\\dot U U^\\dagger$, which is \n\\be{controlSS}\n\\sigma(t)=\\begin{pmatrix} i \\dot \\epsilon \\sin^2(\\delta) & \\dot \\delta e^{i \\epsilon}+ \\frac{i}{2} \\, \\dot \\epsilon \\,\n{\\sin(2 \\delta)}\\, e^{i\\epsilon} \\cr \n- \\dot \\delta e^{-i \\epsilon}+ \\frac{i}{2} \\dot \\epsilon \\, {\\sin(2 \\delta)} \\, e^{-i\\epsilon} & -i \\dot \\epsilon \\sin^2(\\delta) \\end{pmatrix}.\n\\end{equation}\nPlacing this in the second equation of (\\ref{reple}) and integrating numerically we obtain the values for $V(T_1)$, the second component of $(U,V)$, and therefore the values of $(z_R,z_I,w_R,w_I)$. 
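Formula (\\ref{controlSS}) can be sanity-checked by comparing it with a finite-difference approximation of $\\dot U U^\\dagger$. The sketch below uses smoothstep polynomials for $\\delta$ and $\\epsilon$ on $[0,1]$; these particular profiles are an assumed concrete choice satisfying the endpoint conditions above, not part of the construction:

```python
import math, cmath

D0, E0 = 0.5, 1.0        # delta_0 and epsilon_0 (assumed sample values)

def profiles(t):
    # delta(0)=0, delta(1)=D0, eps(0)=E0, eps(1)=0, flat derivatives at both ends.
    s, sdot = 3 * t ** 2 - 2 * t ** 3, 6 * t - 6 * t ** 2
    return D0 * s, D0 * sdot, E0 * (1 - s), -E0 * sdot

def U_of(t):
    delta, _, eps, _ = profiles(t)
    return [[math.cos(delta), cmath.exp(1j * eps) * math.sin(delta)],
            [-cmath.exp(-1j * eps) * math.sin(delta), math.cos(delta)]]

def sigma_of(t):
    # The closed-form sigma(t) = dU/dt U^dagger.
    delta, ddot, eps, edot = profiles(t)
    e_p = cmath.exp(1j * eps)
    return [[1j * edot * math.sin(delta) ** 2,
             (ddot + 0.5j * edot * math.sin(2 * delta)) * e_p],
            [(-ddot + 0.5j * edot * math.sin(2 * delta)) / e_p,
             -1j * edot * math.sin(delta) ** 2]]

# Central finite-difference check at a sample time.
t0, h = 0.37, 1e-6
Up, Um, U0 = U_of(t0 + h), U_of(t0 - h), U_of(t0)
Ud = [[U0[j][i].conjugate() for j in range(2)] for i in range(2)]
dU = [[(Up[i][j] - Um[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
S_fd = [[sum(dU[i][k] * Ud[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
S_an = sigma_of(t0)
err = max(abs(S_fd[i][j] - S_an[i][j]) for i in range(2) for j in range(2))
```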
Using these values and the expression for $(x_R,x_I,y_R,y_I)$ in (\\ref{Poil}), and using the formula for $\\Delta$ given in \n(\\ref{delt}), we obtain \n\\be{accor5}\n\\Delta=\\Delta_1^2+\\Delta_2^2+\\Delta_3^2=\\sin^2(\\delta_0)(1-z_R^2(T_1)-w_R^2(T_1))=\\sin^2(\\delta_0)(z^2_I(T_1)+w^2_I(T_1)),\n\\end{equation}\nwhich has to be different from zero in order for the state to be in the regular part. \n\n\nThe second portion of the control depends on the trajectory followed, $(x^1,x^2,x^3)=(x^1(t),x^2(t),x^3(t))$, and it is obtained by multiplying $(\\dot x^1,\\dot x^2, \\dot x^3)^T$ by $\\Pi^{-1}$ in (\\ref{Piinverse}). The trajectory $(x^1,x^2,x^3)$ is almost completely arbitrary. However, it has to satisfy certain conditions which we now discuss. Let us denote the interval where the second part of the control is used by $[0,T_2]$. The initial condition $(x^1(0),x^2(0),x^3(0))$ has to agree with the one given by the previous interval of control. The final condition $(x^1(T_2),x^2(T_2),x^3(T_2))$ has to agree with the orbit of the desired final condition. Moreover, care has to be taken to make sure that the trajectory is such that $\\Delta$ in (\\ref{delt}) is never zero, because this would create a singularity in $\\Pi^{-1}(p)$. Furthermore, \nwe need $\\dot x^1(0)=\\dot x^2(0)=\\dot x^3(0)=0$, which gives $\\sigma(0)=0$, \n to ensure continuity with the control in the previous interval, and we also choose $\\dot x^1(T_2)=\\dot x^2(T_2)=\\dot x^3(T_2)=0$ to ensure that the control is switched off at the end of the procedure. Finally, the functions $(x^1, x^2, x^3)$ have to be representative of a possible trajectory for special unitary matrices. This means that, with $\\vec V:=(x_R, x_I, y_R, y_I)^T$ and $\\vec W:=(z_R, z_I, w_R, w_I)^T$, \n $\\|\\vec V (t)\\|^2=\\|\\vec W(t)\\|^2=1$, at every $t$.
Therefore $|x_R(t)| < 1$ and $|z_R(t)| < 1$ at every $t$ (to avoid singularities), and from the Schwarz inequality $|\\vec V \\cdot \\vec W |\\leq 1$ we must also have $|x_R z_R + x^3|\\leq 1$ and therefore \n\\be{condx3} \n -1-x_R z_R \\leq x^3 \\leq 1 - x_R z_R. \n\\end{equation} \n \n \nOnce the functions $(\\dot x^1, \\dot x^2,\\dot x^3)$ are chosen, the system to integrate numerically is (\\ref{reple}) with $(u_x,u_y,u_z)$ given by $\\Pi^{-1}(p)(\\dot x^1, \\dot x^2,\\dot x^3)^T$. By expanding $(u_x,u_y,u_z)$ using the explicit expression of $\\Pi^{-1}$ given in (\\ref{Piinverse}) and replacing into (\\ref{reple}), it is possible to obtain a simplified system of differential equations for $(x_R,x_I,...,z_R,z_I,...,w_I)$ without implementing the preliminary step of calculating the control. We found this system to be more stable in numerical integration with MATLAB and report it in Appendix A for future use. \n\n\n\n\\subsection{Numerical example: Driving to two different Hadamard gates} \n\nWe now apply the above technique to a specific numerical example: The problem is to drive \nthe system (\\ref{Hamilto}) so that the first spin performs the Hadamard-type gate \n\\be{Hada1}\nH_1:=\\begin{pmatrix} \\frac{1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} \\cr \n\\frac{-1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}}\\end{pmatrix}\n\\end{equation}\nand the second spin performs the Hadamard gate\n\\be{Hada2}\nH_2:=\\begin{pmatrix} \\frac{1}{\\sqrt{2}} & \\frac{-i}{\\sqrt{2}} \\cr \n\\frac{-i}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}}\\end{pmatrix}. \n\\end{equation}\nWe want to drive system (\\ref{reple}) to $(U_f, V_f)=(H_1, H_2)$. The orbit of the desired final condition is characterized by the invariant coordinates \n\\be{finalorbit}\nx^1=x_R=\\frac{1}{\\sqrt{2}}, \\qquad x^2=z_R=\\frac{1}{\\sqrt{2}}, \\qquad x^3=x_I z_I+ y_R w_R+ y_I w_I=0. \n\\end{equation}\nWe take a physical value for the ratio $\\gamma$ between the two gyromagnetic ratios.
In particular, we choose \n $\\gamma\\approx \\frac{1}{0.2514}$ which corresponds to the Hydrogen-Carbon ($^{1} H-^{13} C$) system also considered in \\cite{Xinhua}. \n\nWe first consider the control that moves the state away from the singular part, in a time interval $[0,1]$. We choose \n$\\sigma$ in (\\ref{controlSS}) with the functions \n$\\delta$ and $\\epsilon$ as follows: \n\\be{deltaandepsilon}\n\\delta=6 \\delta_0 \\left( \\frac{t^2}{2} -\\frac{t^3}{3} \\right), \\qquad \\epsilon=\\epsilon_0 + 6 \\epsilon_0\n\\left( \\frac{t^3}{3} -\\frac{t^2}{2} \\right). \n\\end{equation}\nWith these functions $\\delta$ and $\\epsilon$, $\\sigma$ satisfies all the requirements described above. \nFrom (\\ref{deltaandepsilon}) and (\\ref{controlSS}) we obtain the controls $u_x,u_y,u_z$ which, substituted into (\\ref{reple}), give the dynamics in the interval $[0,T_1]=[0,1]$. Numerical integration with the values of the parameters $\\delta_0=0.5$ and $\\epsilon_0=1$ gives the following conditions at time $T_1=1$ (cf. (\\ref{Poil})) \n\\be{condizioni}\nU(1)=\\begin{pmatrix} \\cos(0.5) & \\sin(0.5) \\cr \n-\\sin(0.5) & \\cos(0.5) \\end{pmatrix}, \\qquad V(1) \\approx \\begin{pmatrix} -0.3472+i0.7769 & -0.5123 -i 0.1157\\cr \n0.5123-i0.1157 & -0.3472-i0.7769 \\end{pmatrix}. \n\\end{equation}\nThe value of $\\Delta$ is, according to (\\ref{accor5}), \n$\\Delta \\approx \\sin^2(0.5)\\left( (0.7769 )^2+(0.1157)^2 \\right) \\not= 0$, as desired. 
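As a quick numerical cross-check of these printed values (our sketch in Python, not part of the original computation), the rounded entries of $V(1)$ do form a special unitary matrix up to rounding error, and the resulting $\\Delta$ is bounded well away from zero:

```python
import numpy as np

# V(1) as printed above, entries rounded to four decimals.
V1 = np.array([[-0.3472 + 0.7769j, -0.5123 - 0.1157j],
               [ 0.5123 - 0.1157j, -0.3472 - 0.7769j]])

# Deviation from special unitarity caused only by the rounding.
unitarity_err = np.abs(V1 @ V1.conj().T - np.eye(2)).max()
det_err = abs(np.linalg.det(V1) - 1.0)

# Delta = sin^2(delta_0) (z_I(T_1)^2 + w_I(T_1)^2) with delta_0 = 0.5.
Delta = np.sin(0.5) ** 2 * (0.7769 ** 2 + 0.1157 ** 2)
print(unitarity_err, det_err, Delta)
```

Both errors are of the order of the rounding of the printed digits, and $\\Delta \\approx 0.1418$, so the state is safely inside the regular part.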
\n\n\n\nThe values of the variables to be used as initial conditions in the integration in the subsequent interval of the procedure are \n$\nx_R(1)=x^1(1)=\\cos(0.5), \\quad x_I(1)=0, \\quad y_R(1)=\\sin(0.5), \\quad y_I(1)=0, \\quad \nz_R(1)=x^2(1)=-0.3472 , \\quad z_I(1)=0.7769 , \\quad w_R(1)=-0.5123, \\quad w_I(1)=-0.1157, \n$ \nand $x^3(1)=x_I(1) z_I(1)+y_R(1) w_R(1)+y_I(1) w_I(1)=\\sin(0.5) \\times (-0.5123)\\approx -0.2456.$ For the subsequent \n interval $[0,T_2]$ we choose the trajectory $x^1(t),x^2(t),x^3(t)$ in the quotient space as follows: \n$T_2=10$ and the trajectory in the interval $[0,T_2]$ is \n\\be{x1t}\nx^1(t)=-\\frac{1}{500} \\left( \\frac{1}{\\sqrt{2}}-\\cos(0.5)\\right)t^3+\\frac{3}{100} \\left( \\frac{1}{\\sqrt{2}} - \\cos(0.5) \\right)t^2+ \\cos(0.5);\n\\end{equation}\n\\be{x2t}\nx^2(t)=-\\frac{1}{500} \\left( \\frac{1}{\\sqrt{2}}+0.3472 \\right) t^3+\\frac{3}{100} \\left( \\frac{1}{\\sqrt{2}}+\n0.3472 \\right)t^2-0.3472; \n\\end{equation} \n\\be{x3t}\nx^3(t):=\\frac{-0.2456}{500} t^3-\\frac{3 \\times (-0.2456)}{100} t^2-0.2456, \n\\end{equation}\nwhich are easily seen to satisfy the conditions at the endpoints. Moreover, by plotting $x^1$ and $x^2$ we see that \n$|x^1(t)|\\leq 1$ and $|x^2(t)|\\leq 1$ for every $t \\in [0,10]$ (Figure \\ref{Fig1}). \nBy plotting $x^3$ vs $1-x^1 x^2$ and $-1-x^1x^2$ (Figure \\ref{Fig2}) we find that $-1-x^1 x^2(t) \\leq x^3(t) \\leq 1-x^1 x^2(t)$ for every $t \\in [0,10]$ as required by condition (\\ref{condx3}). \nBy plotting $\\Delta=\\Delta(t)$ in $[0,10]$ we verify that $\\Delta(t) \\not=0$ for every $t \\in [0,10]$ (Figure \\ref{Fig3}). Therefore the whole trajectory is in the regular part. 
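The endpoint and interval conditions that the plots verify can also be checked numerically. The sketch below (ours; the constant terms are fixed by the initial conditions $x^1(0)=\\cos(0.5)$, $x^2(0)=-0.3472$, $x^3(0)=-0.2456$ stated above) confirms that each cubic starts at its inherited value, reaches the target orbit (\\ref{finalorbit}) at $T_2=10$ with zero derivative at both endpoints, and that condition (\\ref{condx3}) holds along the whole interval:

```python
import numpy as np

T2 = 10.0
c1 = 1 / np.sqrt(2) - np.cos(0.5)   # total change of x^1 over the interval
c2 = 1 / np.sqrt(2) + 0.3472        # total change of x^2 over the interval
c3 = -0.2456                        # x^3(0), inherited from the first interval

def x1(t): return -c1 / 500 * t**3 + 3 * c1 / 100 * t**2 + np.cos(0.5)
def x2(t): return -c2 / 500 * t**3 + 3 * c2 / 100 * t**2 - 0.3472
def x3(t): return  c3 / 500 * t**3 - 3 * c3 / 100 * t**2 + c3

def dx1(t): return -3 * c1 / 500 * t**2 + 6 * c1 / 100 * t
def dx2(t): return -3 * c2 / 500 * t**2 + 6 * c2 / 100 * t
def dx3(t): return  3 * c3 / 500 * t**2 - 6 * c3 / 100 * t

# Endpoints: start where the first interval ended, end on the target orbit.
assert abs(x1(0.0) - np.cos(0.5)) < 1e-9 and abs(x1(T2) - 1 / np.sqrt(2)) < 1e-9
assert abs(x2(0.0) + 0.3472) < 1e-9 and abs(x2(T2) - 1 / np.sqrt(2)) < 1e-9
assert abs(x3(0.0) - c3) < 1e-9 and abs(x3(T2)) < 1e-9
# Zero derivative at both endpoints (control continuity / switch-off).
assert all(abs(d(t)) < 1e-9 for d in (dx1, dx2, dx3) for t in (0.0, T2))
# The constraint -1 - x^1 x^2 <= x^3 <= 1 - x^1 x^2, checked on a fine grid.
t = np.linspace(0.0, T2, 2001)
assert np.all(-1 - x1(t) * x2(t) <= x3(t)) and np.all(x3(t) <= 1 - x1(t) * x2(t))
```

For this particular trajectory the margins in the last constraint are large, which is consistent with the plots described above.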
\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=0.4\\textheight]{Fig1}\n\\vspace*{-16mm}\n\\caption{Prescribed $x^1$ and $x^2$ variables in the (second) interval $[0,10]$.}\n\\label{Fig1}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{Fig2}\n\\vspace*{-16mm}\n\\caption{$x^3=x^3(t)$ vs $-1-x^1(t)x^2(t)$ and $ 1-x^1(t)x^2(t)$ in the (second) interval $[0,10]$.}\n\\label{Fig2}\n\\end{figure}\n\n\n\n\n \n \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{Fig3}\n\\vspace*{-16mm}\n\\caption{$\\Delta=\\Delta(t)=\\Delta_1^2(t)+\\Delta_2^2(t)+\\Delta_3^2(t)$ in the (second) interval $[0,10]$.}\n\\label{Fig3}\n\\end{figure}\n\nThe {\\it full} trajectory, in the union of the two intervals, and with the concatenation \nof the two controls is depicted in Figure \\ref{Fig4}. Let us denote the full control by $\\hat \\sigma=\\hat \\sigma(t)=\nu_x i\\sigma_x+ u_y i \\sigma_y+ u_z i \\sigma_z$. The final condition $(\\hat U_f, \\hat V_f)$ is given by \n\\be{Finalprel}\n\\hat U_f=\\begin{pmatrix} 0.7071-0.2795i & 0.5913+0.2685i \\cr \n-0.5913+0.2685i & 0.7071+0.2795i \\end{pmatrix}, \n\\qquad \\hat V_f=\\begin{pmatrix} 0.7071+0.2708i & 0.3718-0.5369i \\cr \n -0.3718-0.5369i & 0.7071-0.2708i \\end{pmatrix}. \n\\end{equation} \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{BeforeConjugation}\n\\vspace*{-16mm}\n\\caption{Trajectory under the preliminary control in the total interval $[0,11]$.}\n\\label{Fig4}\n\\end{figure}\nThis condition, as expected, is in the same orbit as the desired final condition $(H_1,H_2)$ in (\\ref{Hada1}), (\\ref{Hada2}), that is, there exists a matrix $K \\in SU(2)$ such that\n$K\\hat U_fK^\\dagger=H_1$ and $K\\hat V_fK^\\dagger=H_2$. 
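The orbit claim can be verified directly from the printed entries: reading off $(x_R,x_I,y_R,y_I)$ from the first row of $\\hat U_f$ and $(z_R,z_I,w_R,w_I)$ from the first row of $\\hat V_f$ (as in the numerical values listed after (\\ref{condizioni})), the invariant coordinates must reproduce (\\ref{finalorbit}). A small Python check (ours):

```python
import numpy as np

# First rows of \hat U_f and \hat V_f as printed above (four-decimal rounding).
xR, xI = 0.7071, -0.2795
yR, yI = 0.5913,  0.2685
zR, zI = 0.7071,  0.2708
wR, wI = 0.3718, -0.5369

x1 = xR                              # invariant x^1
x2 = zR                              # invariant x^2
x3 = xI * zI + yR * wR + yI * wI     # invariant x^3

# Same orbit as (H_1, H_2): x^1 = x^2 = 1/sqrt(2), x^3 = 0, up to rounding.
assert abs(x1 - 1 / np.sqrt(2)) < 1e-3
assert abs(x2 - 1 / np.sqrt(2)) < 1e-3
assert abs(x3) < 1e-3
```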
The matrix $K$ solving these equations is found to be \n$$\nK=\\begin{pmatrix} 0.1485-0.2460i & -0.2444+0.9260i \\cr \n 0.2444+0.9260i & 0.1485+0.2460i\\end{pmatrix}. \n$$\nIn particular, to find $K$ one can diagonalize $\\hat U_f$ and $H_1$, i.e., $\\hat U_f=P \\Lambda P^\\dagger$ and $H_1=R\\Lambda R^\\dagger$ for a diagonal matrix $\\Lambda$, so that, from $KP\\Lambda P^\\dagger K^\\dagger=R \\Lambda R^\\dagger$, we find that $R^\\dagger K P=D$ for some diagonal matrix $D$. This matrix is found by solving $DP^\\dagger \\hat V_f P=R^ \\dagger H_2 R D$. The control $K \\hat \\sigma K^\\dagger$ then steers the system to the desired final condition. The resulting trajectory leading to the desired final condition (\\ref{Hada1}), (\\ref{Hada2}) is given in Figure \\ref{Fig5}. \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=\\textwidth, height=.4\\textheight]{AfterConjugation}\n\\vspace*{-16mm}\n\\caption{Trajectory under control algorithm leading to the desired final condition (\\ref{Hada1}) and (\\ref{Hada2}).}\n\\label{Fig5}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\n The notion of the complexity $c(M)$ of a compact $3$-manifold $M$\nwas introduced in \\cite{Matveev-1990}. The complexity is defined as\nthe minimal possible number of true vertices of an almost simple\nspine of $M$. If $M$ is closed and irreducible and $c(M)>0$, then\n$c(M)$ is the minimal number of tetrahedra needed to obtain $M$ by\ngluing together their faces. The problem of calculating the\ncomplexity $c(M)$ is very difficult. The exact values of the\ncomplexity are presently known only for certain infinite series of\nirreducible boundary irreducible $3$-manifolds \\cite{FMP,\nAnisov-2005, Jaco-Rubinstein-Tillmann-2009}. In addition, this\nproblem is solved for all closed orientable irreducible manifolds up\nto complexity $12$ (see \\cite{Matveev-2005}). 
Note that the table\ngiven in \\cite{Matveev-2005} contains $36833$ manifolds and is only\navailable in electronic form \\cite{Atlas}.\n\n The task of finding an upper bound for the complexity of a manifold\n$M$ does not present any particular difficulties. To do that it\nsuffices to construct an almost simple spine $P$ of $M$. The number\nof true vertices of $P$ will serve as an upper bound for the\ncomplexity. It is known \\cite[2.1.2]{Matveev-2003} that an almost\nsimple spine can be easily constructed from practically any\nrepresentation of a manifold. The rather large number of manifolds\nin \\cite{Atlas} gives rise to a new task of finding potentially\nsharp upper bounds for the complexity, i.e. upper bounds that would\nyield the exact value of the complexity for all manifolds from the\ntable \\cite{Atlas}. An important result in this direction was\nobtained by Martelli and Petronio \\cite{Martelli-Petronio-2004}.\nThey found a potentially sharp upper bound for the complexity of all\nclosed orientable Seifert manifolds. Similar results for infinite\nfamilies of graph manifolds can be found in\n\\cite{Fominykh-Ovchinnikov-2005, Fominykh-2008}.\n\n An upper bound $h(r\/s, t\/u, v\/w)$ for the complexity of hyperbolic\nmanifolds obtained by surgeries on the link $6^3_1$ (in Rolfsen's\nnotation \\cite{Rolfsen-1976}) with rational parameters $(r\/s, t\/u,\nv\/w)$ is given by Martelli and Petronio in\n\\cite{Martelli-Petronio-2004}. It turns out that the bound is not\nsharp for a large number of manifolds, as the following two examples\nshow. First, the value of $h$ is equal to $10$ only for $13$ of $24$\nmanifolds of complexity $10$ obtained by surgeries on $6^3_1$ (see\n\\cite{Martelli-Petronio-2004}). Second, on analyzing the table\n\\cite{Atlas} we noticed that the bound is not sharp for $44$ of $46$\nmanifolds of the type $6^3_1(1, 2, v\/w)$ with complexity less or\nequal to $12$. 
Denote by $4_1(p\/q)$ the closed orientable\n$3$-manifold obtained from the figure eight knot $4_1$ by\n$p\/q$-surgery. Since the manifolds $4_1(p\/q)$ and $6^3_1(1, 2,\np\/q+1)$ are homeomorphic, a potentially sharp upper bound for the\ncomplexity of such manifolds becomes important.\n\n The following theorem is the main result of the paper. To give an\nexact formulation, we need to introduce a certain\n$\\mathbb{N}$-valued function $\\omega(p\/q)$ on the set of\nnon-negative rational numbers. Let $p\\geqslant 0$, $q\\geqslant 1$ be\nrelatively prime integers, let $[p\/q]$ be the integer part of $p\/q$,\nand let $rem(p,q)$ be the remainder of the division of $p$ by $q$.\nAs in \\cite{Matveev-2003}, we denote by $S(p,q)$ the sum of all\npartial quotients in the expansion of $p\/q$ as a regular continued\nfraction. Now we define:\n $$\\omega(p\/q) = a(p\/q) + \\max\\{[p\/q]-3, 0\\} + S(rem(p,q),q),$$\nwhere\n $$a(p\/q) = \\left\\{\n\\begin{array}{ll}\n 6, & \\hbox{if } p\/q=4, \\\\\n 7, & \\hbox{if } p\/q\\in \\mathbb{Z} \\hbox{ and } p\/q\\neq 4,\\\\\n 8, & \\hbox{if } p\/q\\not\\in \\mathbb{Z}.\\\\\n\\end{array}\n\\right.$$\n\n\\begin{theorem}\n For any two relatively prime integers $p\\geqslant 0$ and\n$q\\geqslant 1$ we have the inequality $c(4_1(p\/q))\\leqslant\n\\omega(p\/q)$. Moreover, if $\\omega(p\/q)\\leqslant 12$, then\n$c(4_1(p\/q)) = \\omega(p\/q)$.\n\\end{theorem}\n\n Note that the restrictions $p\\geqslant 0$ and $q\\geqslant 1$ in the\nabove theorem are inessential, since the knot $4_1$ is equivalent to\nits mirror image, which implies $4_1(-p\/q)$ is homeomorphic to\n$4_1(p\/q)$.\n\n\n\\section{Preliminaries}\n\n In this section we recall some known definitions and facts that\nwill be used in the paper.\n\n\\subsection{Theta-curves on a torus}\n\n By a theta-curve $\\theta\\subset T$ on a torus $T$ we mean a graph\nthat is homeomorphic to a circle with a diameter and such that\n$T\\setminus \\theta$ is an open disc. 
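The function $\\omega$ defined above is elementary to evaluate; the following Python sketch (ours, not part of the paper) computes $S(p,q)$ via the Euclidean algorithm and reproduces $\\omega(p\/q)=7$ for the five integer slopes $0,\\ldots,4$, the value used for the non-hyperbolic cases in the proof of the Theorem.

```python
from math import gcd

def S(p, q):
    """Sum of all partial quotients of the regular continued fraction of p/q."""
    total = 0
    while q:
        total += p // q
        p, q = q, p % q
    return total

def omega(p, q):
    """The function omega(p/q) for coprime p >= 0, q >= 1."""
    assert p >= 0 and q >= 1 and gcd(p, q) == 1
    if q == 1:
        a = 6 if p == 4 else 7     # integer slope
    else:
        a = 8                      # non-integer slope
    return a + max(p // q - 3, 0) + S(p % q, q)

# Example: S(7, 2) = 3 + 2 = 5 since 7/2 = [3; 2].
assert S(7, 2) == 5
# The five exceptional (non-hyperbolic) integer slopes all give omega = 7.
assert [omega(k, 1) for k in range(5)] == [7, 7, 7, 7, 7]
```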
It is well known\n\\cite{Martelli-Petronio-2004, Anisov-1994} that any two theta-curves\non $T$ can be transformed into each other by isotopies and by a\nsequence of flips (see Fig.~\\ref{flip-transformation}). Let us endow\nthe set $\\Theta(T)$ of theta-curves on $T$ with the distance\nfunction $d$ defining for given $\\theta, \\theta'\\in \\Theta(T)$ the\ndistance $d(\\theta, \\theta')$ between them as the minimal number of\nflips required to transform $\\theta$ into $\\theta'$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{flip.eps}\n\\caption{A flip-transformation} \\label{flip-transformation}\n\\end{figure}\n\n For calculating the distance between two theta-curves on a torus\nwe use the classical ideal triangulation $\\mathbb{F}$ (Farey\ntesselation) of the hyperbolic plane $\\mathbb{H}^2$. If we view the\nhyperbolic plane $\\mathbb{H}^2$ as the upper half plane of\n$\\mathbb{C}$ bounded by the circle $\\partial \\mathbb{H}^2 =\n\\mathbb{R}\\cup \\{\\infty\\}$, then the triangulation $\\mathbb{F}$ has\nvertices at the points of $\\mathbb{Q}\\cup \\{1\/0\\}\\subset \\partial\n\\mathbb{H}^2$, where $1\/0=\\infty$, and its edges are all the\ngeodesics in $\\mathbb{H}^2$ with endpoints the pairs $a\/b$, $c\/d$\nsuch that $ad-bc=\\pm 1$. For convenience, the images of the\nhyperbolic plane $\\mathbb{H}^2$ and of the triangulation\n$\\mathbb{F}$ under the mapping $z\\to (z-i)\/(z+i)$ are shown in\nFig.~\\ref{triangulation}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{circle.eps}\n\\caption{The ideal Farey triangulation of the hyperbolic plane}\n\\label{triangulation}\n\\end{figure}\n\n Fix some coordinate system $(\\mu, \\lambda)$ on a torus $T$. We now\nconstruct a map $\\Psi_{\\mu, \\lambda}$ from $\\Theta(T)$ to the set of\ntriangles of $\\mathbb{F}$. 
To do that we consider the map\n$\\psi_{\\mu, \\lambda}$ that assigns to each nontrivial simple closed\ncurve $\\mu^{\\alpha}\\lambda^{\\beta}$ on $T$ the point $\\alpha\/\n\\beta\\in \\partial\\mathbb{H}^2$. Note that each theta-curve $\\theta$\non $T$ contains three nontrivial simple closed curves $\\ell_1$,\n$\\ell_2$, $\\ell_3$, that are formed by the pairs of edges of\n$\\theta$. Since the intersection index of every two curves $\\ell_i$,\n$\\ell_j$, $i\\neq j$, is equal to $\\pm 1$, the points $\\psi_{\\mu,\n\\lambda}(\\ell_1)$, $\\psi_{\\mu, \\lambda}(\\ell_2)$, $\\psi_{\\mu,\n\\lambda}(\\ell_3)$ are the vertices of a triangle $\\triangle$ of the\nFarey triangulation, and we define $\\Psi_{\\mu, \\lambda}(\\theta)$ to\nbe $\\triangle$.\n\n Denote by $\\Sigma$ the graph dual to the triangulation\n$\\mathbb{F}$. This graph is a tree because the triangulation is\nideal. We now define the distance between any two triangles of\n$\\mathbb{F}$ to be the number of edges of the only simple path in\n$\\Sigma$ that joins the corresponding vertices of the dual graph.\nThe key observation used for the practical calculations is that for\nany coordinate system $(\\mu, \\lambda)$ on $T$ the distance between\nany two theta-curves $\\theta$, $\\theta'$ is equal to the distance\nbetween the triangles $\\Psi_{\\mu, \\lambda}(\\theta)$, $\\Psi_{\\mu,\n\\lambda}(\\theta')$ of the Farey triangulation. 
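The Farey structure used here is easy to check mechanically: two slopes $a\/b$ and $c\/d$ span an edge of $\\mathbb{F}$ exactly when $|ad-bc|=1$, so three pairwise-adjacent slopes form a triangle of the triangulation. A small Python illustration (ours), checking the triangles with vertices $i$, $i+1$ and $1\/0=\\infty$ that appear later in the paper:

```python
def farey_edge(a, b, c, d):
    """Slopes a/b and c/d are joined by an edge of F iff |ad - bc| = 1."""
    return abs(a * d - b * c) == 1

def is_farey_triangle(v1, v2, v3):
    """Three slopes (given as pairs) form a triangle iff pairwise adjacent."""
    return (farey_edge(*v1, *v2) and farey_edge(*v2, *v3)
            and farey_edge(*v1, *v3))

# Triangles with vertices i, i+1 and infinity = 1/0.
for i in range(4):
    assert is_farey_triangle((i, 1), (i + 1, 1), (1, 0))

# 1/1 and 3/1 are not adjacent: |1*1 - 1*3| = 2.
assert not farey_edge(1, 1, 3, 1)
```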
The reason is that if\n$\\theta'$ is obtained from $\\theta$ via a flip, the corresponding\ntriangles have a common edge.\n\n\\subsection{Simple and special spines}\n\n A compact polyhedron $P$, following Matveev \\cite{Matveev-2003}, is\ncalled simple if the link of each point $x\\in P$ is homeomorphic to\none of the following $1$-dimensional polyhedra:\n\\begin{itemize}\n \\item[(a)] a circle (the point $x$ is then called nonsingular);\n \\item[(b)] a circle with a diameter (then $x$ is a triple point);\n \\item[(c)] a circle with three radii (then $x$ is a true vertex).\n\\end{itemize}\n\n The components of the set of nonsingular points are said to\nbe the $2$-components of $P$, while the components of the set of\ntriple points are said to be the triple lines of $P$. A simple\npolyhedron is special if each of its triple lines is an open\n$1$-cell and each of its $2$-components is an open $2$-cell.\n\n A subpolyhedron $P$ of a $3$-manifold $M$ is a spine of $M$ if\n$\\partial M\\neq\\emptyset$ and the manifold $M\\setminus P$ is\nhomeomorphic to $\\partial M \\times (0,1]$, or $\\partial M=\\emptyset$\nand $M\\setminus P$ is an open ball. A spine of a $3$-manifold is\ncalled simple or special if it is a simple or special polyhedron,\nrespectively.\n\n\n\n\\subsection{Relative spines}\n\n A manifold with boundary pattern, following Johannson\n\\cite{Johannson-1979}, is a $3$-manifold $M$ with a fixed graph\n$\\Gamma \\subset \\partial M$ that does not have any isolated\nvertices. A manifold $M$ with boundary pattern $\\Gamma$ can be\nconveniently viewed as a pair $(M, \\Gamma)$. The case $\\Gamma =\n\\emptyset$ is also allowed.\n\n\\begin{definition}\n Let $(M, \\Gamma)$ be a $3$-manifold with boundary pattern. 
Then a\nsubpolyhedron $P\\subset M$ is called a relative spine of $(M,\n\\Gamma)$ if the following holds:\n \\begin{enumerate}\n \\item $M\\setminus P$ is an open ball;\n \\item $\\partial M \\subset P$;\n \\item $\\partial M \\cap Cl(P\\setminus\\partial M) = \\Gamma$.\n \\end{enumerate}\n\\end{definition}\n\nA relative spine is simple if it is a simple polyhedron.\nObviously, if $M$ is closed, then any relative spine of $(M,\n\\emptyset)$ is a spine of $M$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{2blocks.eps}\n\\caption{Examples of simple relative spines} \\label{blocks}\n\\end{figure}\n\n\\begin{example}\n Let $V$ be a solid torus with a meridian $m$. Choose a\nsimple closed curve $\\ell$ on $\\partial V$ that intersects $m$ twice\nin the same direction. Note that $\\ell$ decomposes $m$ into two\narcs. Consider a theta-curve $\\theta_V\\subset \\partial V$ consisting\nof $\\ell$ and an arc (denote it by $\\gamma$) of $m$. Then the\nmanifold $(V, \\theta_V)$ has a simple relative spine without\ninterior true vertices. This spine is the union of $\\partial V$, a\nM\\\"{o}bius strip inside $V$, and a part of the meridional disc bounded by\n$\\gamma$ (Figure~\\ref{blocks}a).\n\n Note that among the three nontrivial simple closed curves contained in\n$\\theta_V$, none is isotopic to the meridian $m$ of $V$. On the\nother hand, applying the flip to $\\theta_V$ along $\\gamma$, we get a\ntheta-curve $\\theta_m\\subset \\partial V$ containing $m$.\n\\end{example}\n\n\\begin{example}\n Let $\\theta$, $\\theta'$ be two theta-curves on a torus $T$ such that\n$\\theta'$ is obtained from $\\theta$ by exactly one flip. Then the\nmanifold\n $$(T\\times [0, 1], (\\theta\\times \\{0\\})\\cup (\\theta'\\times \\{1\\}))$$\nhas a simple relative spine $R$ with one interior true vertex (in\nFigure~\\ref{blocks}b the torus $T$ is represented as a square with\nthe sides identified). 
Note that $R$ satisfies the following\nconditions:\n \\begin{itemize}\n \\item[(1)] for each $t\\in [0, 1\/2)$ a theta-curve $\\theta_t$, where\n $R\\cap (T\\times \\{t\\}) = \\theta_t\\times \\{t\\}$, is isotopic to\n $\\theta$;\n \\item[(2)] for each $t\\in (1\/2, 1]$ the theta-curve $\\theta_t$ is isotopic to\n $\\theta'$;\n \\item[(3)] $R\\cap (T\\times \\{1\/2\\})$ is a wedge of two circles.\n \\end{itemize}\n\\end{example}\n\n\\subsection{Assembling of manifolds with boundary patterns}\n\n Denote by $\\mathscr{T}$ the class of all manifolds $(M,\n\\Gamma)$ such that any component $T$ of $\\partial M$ is a torus and\n$T \\cap \\Gamma$ is a theta-curve. Let $(M, \\Gamma)$ and $(M',\n\\Gamma')$ be two manifolds in $\\mathscr{T}$ with nonempty\nboundaries. Choose two tori $T\\subseteq \\partial M$, $T'\\subseteq\n\\partial M'$ and a homeomorphism $\\varphi: T\\to T'$ taking the theta-curve\n$\\theta = T\\cap \\Gamma$ to the theta-curve $\\theta' = T'\\cap\n\\Gamma'$. Then we can construct a new manifold $(W, \\Xi)\\in\n\\mathscr{T}$, where $W = M\\cup_\\varphi M'$, and $\\Xi =\n(\\Gamma\\setminus \\theta)\\cup (\\Gamma'\\setminus \\theta')$. 
In this\ncase we say that the manifold $(W, \\Xi)$ is obtained by assembling $(M,\n\\Gamma)$ and $(M', \\Gamma')$ \\cite{Martelli-Petronio-2001}.\n\n Note that if manifolds $(M, \\Gamma)$ and $(M', \\Gamma')$ have\nsimple relative spines denoted $P$ and $P'$ respectively, with $v$\nand $v'$ interior true vertices, then the manifold $(W, \\Xi)$ has a\nsimple relative spine $R$ with $v + v'$ interior true vertices.\nIndeed, $R$ can be obtained by gluing $P$ and $P'$ along $\\varphi$\nand removing the open disc in $P\\cup_\\varphi P'$ that is obtained by\nidentifying $T\\setminus \\theta$ with $T'\\setminus \\theta'$.\n\n To prove the main theorem of the paper we generalize the notion of assembling\nby removing the restriction $\\varphi(\\theta) = \\theta'$.\n\n\\begin{lemma}\n \\label{assembling}\n Let $(M, \\Gamma)$ and $(M', \\Gamma')$ be two manifolds in\n$\\mathscr{T}$ with nonempty boundaries that admit simple relative\nspines with $v$ and $v'$ interior true vertices respectively. Then\nfor any homeomorphism $\\varphi: T\\to T'$ of a torus $T\\subseteq\n\\partial M$ onto a torus $T'\\subseteq \\partial M'$ there exists a simple\nrelative spine of a manifold $(W, \\Upsilon)$, where $W =\nM\\cup_\\varphi M'$ and $\\Upsilon = (\\Gamma\\setminus \\theta)\\cup\n(\\Gamma'\\setminus \\theta')$, with $v + v' + d(\\varphi(\\theta),\n\\theta')$ interior true vertices.\n\\end{lemma}\n\n\\begin{proof}\n First, by induction on the number $n = d(\\varphi(\\theta), \\theta')$\nwe prove that there exists a simple relative spine of the manifold\n $$(M'', \\Gamma'') = (T'\\times [0, 1], (\\varphi(\\theta)\\times \\{0\\})\\cup\n (\\theta'\\times \\{1\\}))$$\nwith $n$ interior true vertices. If $n=0$, i.e. the theta-curve\n$\\varphi(\\theta)$ is isotopic to the theta-curve $\\theta'$, the\ndesired spine is isotopic to the polyhedron\n$(\\varphi(\\theta)\\times [0, 1])\\cup \\partial M''$. Suppose that $n\n> 0$. 
As noted at the beginning of Section 1.1, there exists a sequence $\\{\\theta_i\\}_{i=0}^n$ of\npairwise distinct theta-curves on the torus $T'$ such that\n$\\theta_0 = \\varphi(\\theta)$, $\\theta_n = \\theta'$, and $\\theta_i$\nis obtained from $\\theta_{i-1}$ by a flip, for $i=1,\\ldots, n$. The\ninduction assumption implies that the manifold\n \\begin{equation}\n \\label{m1}\n (T'\\times [0, 1\/2], (\\theta_0\\times \\{0\\})\\cup\n (\\theta_{n-1}\\times \\{1\/2\\}))\n \\end{equation}\nhas a simple relative spine with $n-1$ interior true vertices.\nFurthermore, the simple relative spine of the manifold\n \\begin{equation}\n \\label{m2}\n (T'\\times [1\/2, 1], (\\theta_{n-1}\\times \\{1\/2\\})\\cup\n (\\theta_n\\times \\{1\\}))\n \\end{equation}\nwith one interior true vertex is described in Example 2. Then\nthe desired spine of the manifold $(M'', \\Gamma'')$ is obtained by\nassembling the manifolds (\\ref{m1}) and (\\ref{m2}) along the\nidentity map on $T'\\times \\{1\/2\\}$.\n\n Now, note that the consecutive assemblings of the manifolds $(M,\n\\Gamma)$, $(M'', \\Gamma'')$ and $(M', \\Gamma')$ along natural\nhomeomorphisms that take each point $x\\in T$ to the point\n$(\\varphi(x), 0)\\in T'\\times \\{0\\}$, and each point $(y, 1)\\in\nT'\\times \\{1\\}$ to the point $y\\in T'$, yield the manifold $(W,\n\\Upsilon)$ and its simple relative spine with $v + v' +\nd(\\varphi(\\theta), \\theta')$ interior true vertices.\n\\end{proof}\n\n\n\\section{Relative spines of the figure eight knot complement}\n\n In this section we construct some simple relative spines of\nthe figure eight knot complement $E(4_1)$. Let us fix a canonical\ncoordinate system on the boundary torus $\\partial E(4_1)$ consisting\nof oriented closed curves $\\mu$, $\\lambda$ such that the meridian\n$\\mu$ generates $H_1(E(4_1); \\mathbb{Z})$ and the longitude\n$\\lambda$ bounds a surface in $E(4_1)$. 
This system determines the\nmap $\\Psi_{\\mu, \\lambda}$ from $\\Theta(T)$ to the set of triangles\nof the Farey triangulation. Denote by $\\triangle^{(i)}$ the triangle\nof $\\mathbb{F}$ with the vertices at $i$, $i+1$, and $\\infty$.\n\n\\begin{proposition}\n \\label{spine}\n For any $i\\in\\{ 0, 1, 2, 3\\}$ there exists a theta-curve\n$\\theta^{(i)}$ on the torus $\\partial E(4_1)$ such that the\nmanifold $(E(4_1), \\theta^{(i)})$ has a simple relative spine with\n$10$ interior true vertices and $\\Psi_{\\mu, \\lambda}(\\theta^{(i)})\n= \\triangle^{(i)}$.\n\\end{proposition}\n\n\\begin{proof}\n Step 1. Let $P$ be a special spine of an arbitrary compact orientable\n$3$-manifold $M$ whose boundary is a torus, and let $\\theta$ be a\ntheta-curve on $\\partial M$. We begin the proof by describing a\nmethod for constructing a simple relative spine $R(P, \\theta)$ of\nthe manifold $M$.\n\n By Theorem 1.1.7 \\cite{Matveev-2003}, $M$ can be identified with\nthe mapping cylinder of a local embedding $f:\\partial M\\to P$.\nDenote by $f_{|\\theta}:\\theta\\to P$ the restriction to $\\theta$ of\nthe map $f$. Then the union $R(P, \\theta)$ of the mapping cylinder\nof $f_{|\\theta}$ and of $\\partial M$ is a relative spine of $M$,\nsince $\\partial M \\subset R(P, \\theta)$, $\\partial M \\cap Cl(R(P,\n\\theta)\\setminus\\partial M) = \\theta$, and $M\\setminus R(P, \\theta)$\nis homeomorphic to the direct product of the open disc $\\partial\nM\\setminus \\theta$ with an interval. In general, $R(P, \\theta)$ just\nconstructed is not necessarily a simple polyhedron. This can be\ndealt with by introducing the notion of general position. 
We say\nthat a theta-curve $\\theta\\subset \\partial M$ is in general position\nwith respect to the map $f$, if the image $f(\\theta)$ satisfies the\nfollowing conditions.\n\\begin{enumerate}\n \\item $f(\\theta)$ contains no true vertices of $P$.\n \\item For any intersection point $x$ of $f(\\theta)$ with\n the triple lines of $P$ there exists a neighborhood $U(x)\\subset P$\n such that the intersection $U(x)\\cap f(\\theta)$ is an arc\n meeting the set of the triple lines of $P$ transversally exactly at $x$.\n \\item For any intersection point $x$ of the set $f(\\theta)$ with\n the $2$-components of $P$ its inverse image $f^{-1}_{|\\theta}(x)$ consists of at most\n two points of $\\theta$. Moreover, if $f^{-1}_{|\\theta}(x)$ consists of exactly\n two points, then there exists a neighborhood $U(x)\\subset P$ such that\n the inverse image $f^{-1}_{|\\theta}(U(x)\\cap f(\\theta))$ of the\n intersection $U(x)\\cap f(\\theta)$ is the disjoint union of two\n arcs $\\gamma_1$, $\\gamma_2$ of $\\theta$, and the images $f(\\gamma_1)$, $f(\\gamma_2)$\n intersect each other transversally at exactly one point $x$.\n Such a point $x$ is called the self-intersection point of the image $f(\\theta)$\n of $\\theta$.\n\\end{enumerate}\n\nObviously, if a theta-curve $\\theta$ is in general position with\nrespect to the map $f$, then the relative spine $R(P, \\theta)$ of\nthe manifold $M$ is simple.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.6]{P_2.eps}\n\\caption{A minimal spine of the complement of the figure eight knot}\n\\label{minspine}\n\\end{figure}\n\n Step 2. We consider now the minimal special spine $P$ of the manifold\n$M = E(4_1)$ shown in Figure \\ref{minspine} (see\n\\cite[2.4.2]{Matveev-2003}). To construct the theta-curves\n$\\theta^{(i)}$, $i\\in\\{ 0, 1, 2, 3\\}$, we need to describe certain\ncell decompositions of the torus $T=\\partial M$ and of its universal\ncovering $\\tilde{T}$. 
The local embedding $f:T\\to P$ determines a\ncell decomposition of $T$ as follows.\n\\begin{enumerate}\n \\item The inverse image $f^{-1}(C)$ of every open $k$-dimensional\n cell $C$ of $P$ consists of two open $2$-cells if $k=2$,\n three open arcs if $k=1$, and four points if $k=0$.\n \\item The restriction of $f$ to each of these cells\n is a homeomorphism onto the corresponding cell of $P$.\n\\end{enumerate}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.7]{decompositions.eps}\n\\caption{Cell decompositions of $\\tilde{T}$ (left) and $T$ (right)}\n\\label{decomposition}\n\\end{figure}\n\n Construct the universal covering $\\tilde{T}$ of $T$. It can\nbe presented as a plane decomposed into hexagons, see Fig.\n\\ref{decomposition}a. The group of covering translations is\nisomorphic to the group $\\pi_1(T) = H_1(T; \\mathbb{Z})$. We choose a\nbasis $\\tilde{\\mu}$, $\\tilde{\\lambda}$ as shown in Fig.\n\\ref{decomposition}a. It is easy to see that the corresponding\nelements of $\\pi_1(T)$ (which can be also viewed as oriented loops)\nform the canonical coordinate system $(\\mu, \\lambda)$ on $T$. If we\nfactor this covering by the translations $\\tilde{\\mu}$,\n$\\tilde{\\lambda}$, we recover $T$. If we additionally identify the\nhexagons marked by the letter $A$ with respect to the composition of\nthe symmetry in the dotted diagonal of the hexagon and the\ntranslation by $-\\tilde{\\mu} + \\tilde{\\lambda}\/2$, and do the same\nfor the hexagons marked by the letter $B$, we obtain $P$. The torus\n$T$ is shown in Fig. \\ref{decomposition}b as a polygon $D$ composed\nof four hexagons. Each side of $D$ is identified with some other one\nvia the translation along one of the three vectors $\\tilde{\\mu}$,\n$-2\\tilde{\\mu}+\\tilde{\\lambda}$, and $-\\tilde{\\mu}+\\tilde{\\lambda}$.\nThe spine $P$ can be presented as the union of two hexagons, see\nFig. \\ref{theta0} (right). The edges of the hexagons are decorated\nwith four different patterns. 
To recover $P$, one should identify\nthe edges having the same pattern.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta0_2.eps}\n\\caption{The theta-curve $\\theta^{(0)}$} \\label{theta0}\n\\end{figure}\n\n Step 3. Now for each $i\\in\\{ 0, 1, 2, 3\\}$ we exhibit a theta-curve\n$\\theta^{(i)}\\subset \\partial M$ such that the simple relative\nspine $R(P, \\theta^{(i)})$ of $M$ has $10$ interior true vertices\nand $\\Psi_{\\mu, \\lambda}(\\theta^{(i)}) = \\triangle^{(i)}$.\n\n Consider the wedge of the three arcs on $\\tilde{T}$, see Fig.\n\\ref{theta0} (left). The projections of the arcs onto $T$ yield a\ntheta-curve that we denoted by $\\theta^{(0)}$. It can be checked\ndirectly that $\\theta^{(0)}$ is in general position with respect to\nthe map $f$, and $\\Psi_{\\mu, \\lambda}(\\theta^{(0)}) =\n\\triangle^{(0)}$. It remains to note that the set of the interior\ntrue vertices of $R(P, \\theta^{(0)})$ consists of (a) the two true\nvertices of the special polyhedron $P$, (b) the images under $f$ of\nthe two vertices of $\\theta^{(0)}$, (c) the five intersection points\nof the set $f(\\theta^{(0)})$ with the triple lines of $P$, see Fig.\n\\ref{theta0} (left), and (d) one self-intersection point of the\nimage $f(\\theta^{(0)})$ of $\\theta^{(0)}$ (shown in Fig.\n\\ref{theta0} (right) as a fat gray dot).\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta1_2.eps}\n\\caption{The theta-curve $\\theta^{(1)}$} \\label{theta1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta2_2.eps}\n\\caption{The theta-curve $\\theta^{(2)}$} \\label{theta2}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{theta3_2.eps}\n\\caption{The theta-curve $\\theta^{(3)}$} \\label{theta3}\n\\end{figure}\n\n The theta-curves $\\theta^{(1)}$, $\\theta^{(2)}$, $\\theta^{(3)}$\nsatisfying the conclusion of the Proposition are shown in Fig.\n\\ref{theta1}, \\ref{theta2}, \\ref{theta3}. 
We point out that among\nthe $10$ interior true vertices of $R(P, \\theta^{(3)})$ there are\n$6$ intersection points of the set $f(\\theta^{(3)})$ with the triple\nlines of $P$, see Fig. \\ref{theta3} (left), while there are no\nself-intersection points of the image $f(\\theta^{(3)})$ of\n$\\theta^{(3)}$, see Fig. \\ref{theta3} (right).\n\\end{proof}\n\n\n\\section{Proof of the main theorem}\n\n Let $p\\geqslant 0$ and $q\\geqslant 1$ be two relatively prime integers.\nTo prove the inequality $c(4_1(p\/q))\\leqslant \\omega(p\/q)$ it\nsuffices to construct a simple spine of the manifold $4_1(p\/q)$ with\n$\\omega(p\/q)$ true vertices.\n\n Thurston \\cite{Thurston-1978} proved that the manifold $4_1(p\/q)$ is\nhyperbolic except for $p\/q\\in \\{0, 1, 2, 3, 4, \\infty\\}$. The case\n$p\/q=\\infty$ does not satisfy the assumptions of the Theorem. In\neach of the five remaining cases the non-hyperbolic manifold\n$4_1(p\/q)$ has complexity $7$ and $\\omega(p\/q)=7$.\n\n Let us construct a simple spine of the hyperbolic manifold\n$4_1(p\/q)$. Recall that the meridian $m$ and the theta-curve\n$\\theta_m$ on the boundary of $(V, \\theta_V)$ were fixed in Example\n1. Let $(\\mu, \\lambda)$ be the canonical coordinate system on the\nboundary torus $\\partial E(4_1)$ of the figure eight knot complement\n$E(4_1)$. Among all homeomorphisms $\\partial V \\to\n\\partial E(4_1)$ that take $m$ to the curve $\\mu^p\\lambda^q$, we\nchoose a homeomorphism $\\varphi$ such that the distance between the\ntheta-curves $\\varphi(\\theta_m)$ and $\\theta^{(0)}$ be as small as\npossible. For convenience denote by $z$ the number $\\min\\{[p\/q],\n3\\}$. By the Proposition, the manifold $(E(4_1), \\theta^{(z)})$ has\na simple relative spine with $10$ interior true vertices. 
Since\n$4_1(p\/q) = V\\cup_\\varphi E(4_1)$, it follows from the Lemma that the\nmanifold $(4_1(p\/q), \\emptyset)$ has a simple relative spine\n$Q_{p\/q}$ with $10 + d(\\varphi(\\theta_V), \\theta^{(z)})$ interior\ntrue vertices. Moreover, $Q_{p\/q}$ is a spine of $4_1(p\/q)$, since\n$\\partial 4_1(p\/q) = \\emptyset$.\n\n Now let us prove that $d(\\varphi(\\theta_V), \\theta^{(z)}) = -2 +\n\\max\\{[p\/q]-3, 0\\} + S(rem(p,q),q)$. Recall that for each $i\\in\\{ 0,\n1, 2, 3\\}$ the map $\\Psi_{\\mu, \\lambda}$ takes $\\theta^{(i)}$ to the\ntriangle $\\triangle^{(i)}$ of the Farey triangulation with the\nvertices at $i$, $i+1$, and $\\infty$. Denote by $\\triangle_V$ and\n$\\triangle_m$ the triangles $\\Psi_{\\mu, \\lambda}(\\varphi(\\theta_V))$\nand $\\Psi_{\\mu, \\lambda}(\\varphi(\\theta_m))$, respectively. Since\nthe distance between theta-curves on $\\partial E(4_1)$ is equal to\nthe distance between the corresponding triangles of $\\mathbb{F}$, it\nis sufficient to find $d(\\triangle_V, \\triangle^{(z)})$.\n\n The choice of $\\varphi$ guarantees that $\\triangle_m$ is\nthe closest triangle to $\\triangle^{(0)}$ among all the triangles\nwith a vertex at $p\/q$. This implies (see \\cite[Proposition\n4.3]{Martelli-Petronio-2004} and \\cite[Lemma 2]{Fominykh-2008}) that\n$d(\\triangle_m, \\triangle^{(0)}) = S(p,q)-1$. Since the theta-curve\n$\\theta_V$ is obtained from $\\theta_m$ by exactly one flip and\n$\\theta_V$ does not contain the meridian $m$, the triangle\n$\\triangle_V$ has a common edge with $\\triangle_m$ and $p\/q$ is not\na vertex of $\\triangle_V$. Hence, $d(\\triangle_V, \\triangle^{(0)}) =\nS(p,q)-2$. Analyzing the Farey triangulation, we observe that\n$d(\\triangle_V, \\triangle^{(z)}) = d(\\triangle_V, \\triangle^{(0)}) -\nd(\\triangle^{(z)}, \\triangle^{(0)})$. 
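Before assembling the final formula, the quantity $S(p,q)$ can be made concrete. Assuming $S(p,q)$ denotes the sum of the partial quotients of the continued-fraction expansion of $p/q$ (an assumption consistent with the recursion $S(p,q)=[p/q]+S(\mathrm{rem}(p,q),q)$ appearing in this proof), the resulting count of interior true vertices can be sketched as:

```python
def sum_partial_quotients(p, q):
    """S(p, q): sum of the partial quotients of the continued-fraction
    expansion of p/q (assumed definition; it satisfies the recursion
    S(p, q) = [p/q] + S(rem(p, q), q) used in the proof)."""
    total = 0
    while q:
        total += p // q
        p, q = q, p % q
    return total


def interior_true_vertices(p, q):
    """10 + d(phi(theta_V), theta^{(z)}) for relatively prime p, q,
    as computed in this proof (a numerical sketch)."""
    d = -2 + max(p // q - 3, 0) + sum_partial_quotients(p % q, q)
    return 10 + d
```

For instance, `sum_partial_quotients(7, 2)` returns 5, matching $[7/2] + S(1,2) = 3 + 2$.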
Taking into account that\n$d(\\triangle^{(z)}, \\triangle^{(0)})=z$, $S(p,q) = [p\/q] +\nS(rem(p,q),q)$ and $[p\/q] - \\min\\{[p\/q], 3\\} = \\max\\{[p\/q]-3, 0\\}$,\nwe get the equality $d(\\varphi(\\theta_V), \\theta^{(z)}) =\nd(\\triangle_V, \\triangle^{(z)}) = -2 + \\max\\{[p\/q]-3, 0\\} +\nS(rem(p,q),q)$.\n\n Note that if $p\/q\\not\\in \\mathbb{Z}$, then $Q_{p\/q}$ is the desired\nspine, since it contains $\\omega(p\/q)$ true vertices. On the other\nhand, if $p\/q\\in \\mathbb{Z}$, the spine $Q_{p\/q}$ contains\n$\\omega(p\/q)+1$ true vertices. In this case $Q_{p\/q}$ can be\ntransformed into another simple spine $Q_{p\/q}'$ of $4_1(p\/q)$ by a\nsequence of moves along boundary curves of length $4$ (similar\narguments can be found in \\cite[page 81]{Matveev-2003}). The spine\n$Q_{p\/q}'$ has the same number of true vertices but possesses a\nboundary curve of length $3$, hence it can be simplified. The result\nis a new spine of $4_1(p\/q)$ with $\\omega(p\/q)$ true vertices.\n\n To conclude the proof of the theorem, it remains to note that the\ntable \\cite{Atlas} contains $46$ hyperbolic manifolds of the type\n$4_1(p\/q)$ satisfying $\\omega(p\/q)\\leqslant 12$. For each of them\nour upper bound is sharp.\n\n\n\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
We propose to employ the Poisson learning model to capture the relations between the few labeled and unlabeled data, which results in a more stable and informative classifier than previous semi-supervised few-shot models. Moreover, we propose to adopt unsupervised contrastive learning to improve the generality of the embedding on novel classes, which avoids the possible over-fitting problem when training with few labeled samples. \nIntegrating the two modules, the proposed PTN can fully explore the unlabeled auxiliary information, boosting the performance of few-shot learning.\nExtensive experiments indicate that PTN outperforms state-of-the-art few-shot and semi-supervised few-shot methods.\n\n\n\\section{Experiments}\n\\subsection{Datasets}\nWe evaluate the proposed PTN on two few-shot benchmark datasets: miniImageNet and tieredImageNet. The miniImageNet dataset \\cite{vinyals2016matching} is a subset of ImageNet, consisting of 100 classes, and each class contains 600 images of size 84$\\times$84. We follow the standard split of 64 base, 16 validation, and 20 test classes \\cite{vinyals2016matching,tian2020rethinking}. The tieredImageNet \\cite{ren2018meta} is another subset but with 608 classes instead. We follow the standard split of 351 base, 97 validation, and 160 test classes for the experiments \\cite{ren2018meta,liu2018learning}. We resize the images from tieredImageNet to 84$\\times$84 pixels, and randomly select $C$ classes from the novel classes to construct the few-shot task. Within each class, $K$ examples are selected as the labeled data, and $V$ examples from the rest as queries. The extra $N$ unlabeled samples are selected from the $C$ classes or from the rest of the novel classes. We set $C=5, K= \\{1,5\\}, V=15$ and study different sizes of $N$. 
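The episode construction just described ($C=5$, $K\in\{1,5\}$, $V=15$ queries per class, plus $N$ extra unlabeled samples) and the reporting of mean accuracy with a 95\% interval over tasks can be sketched as follows; the sampling details are our reading of the standard protocol, not code from the paper:

```python
import random
from statistics import mean, stdev

def sample_episode(class_to_images, C=5, K=1, V=15, N=100):
    """Sample one C-way-K-shot episode with V queries per class and N
    extra unlabeled images drawn from the same C classes (a sketch;
    the unlabeled pool may also come from other novel classes)."""
    classes = random.sample(sorted(class_to_images), C)
    support, query = [], []
    for label, c in enumerate(classes):
        imgs = random.sample(class_to_images[c], K + V)
        support += [(x, label) for x in imgs[:K]]
        query += [(x, label) for x in imgs[K:]]
    pool = [x for c in classes for x in class_to_images[c]]
    unlabeled = random.sample(pool, min(N, len(pool)))
    return support, query, unlabeled

def mean_with_ci95(task_accuracies):
    """Mean accuracy over tasks and the 95% confidence half-width."""
    m = mean(task_accuracies)
    half = 1.96 * stdev(task_accuracies) / len(task_accuracies) ** 0.5
    return m, half
```

Running 600 such episodes and passing the per-task accuracies to `mean_with_ci95` reproduces the "mean accuracy $\pm$ 95\% interval" numbers reported in the tables.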
We run 600 few-shot tasks and report the mean accuracy with the 95\\% confidence interval.\n\n\\subsection{Implementation Details}\nFollowing previous works~\\cite{rusu2018meta,dhillon2019baseline,liu2019prototype,tian2020rethinking,yu2020transmatch}, we adopt the wide residual network (WRN-28-10) \\cite{zagoruyko2016wide} as the backbone of our base model $W_{\\phi} \\circ f_{\\theta_0}$, and we follow the protocols in \\cite{tian2020rethinking,yu2020transmatch}, fusing the base and validation classes to train the base model from scratch. We set the batch size to 64, with an SGD learning rate of 0.05 and a weight decay of $5e^{-4}$. We reduce the learning rate by 0.1 after 60 and 80 epochs. The base model is trained for 100 epochs.\n\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{2.0\\columnwidth}{!}{%\n\\begin{tabular}{lcccc}\n\\hline\n\\multicolumn{1}{c}{\\multirow{2}{*}{Methods}} & \\multirow{2}{*}{Type} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{miniImageNet} \\\\ \\cline{4-5} \n\\multicolumn{1}{c}{} & & & 1-shot & 5-shot \\\\ \\hline\nPrototypical-Net \\cite{snell2017prototypical} & Metric, Meta & ConvNet-256 & 49.42$\\pm$0.78 & 68.20$\\pm$0.66 \\\\\nRelation Network \\cite{sung2018learning} & Metric, Meta & ConvNet-64 & 50.44$\\pm$0.82 & 65.32$\\pm$0.70 \\\\\nTADAM \\cite{oreshkin2018tadam} & Metric, Meta & ResNet-12 & 58.50$\\pm$0.30 & 76.70$\\pm$0.30 \\\\\nDPGN \\cite{yang2020dpgn} & Metric, Meta & ResNet-12 & 67.77$\\pm$0.32 & 84.60$\\pm$0.43 \\\\\nRFS \\cite{tian2020rethinking} & Metric, Transfer & ResNet-12 & 64.82$\\pm$0.60 & 82.14$\\pm$0.43 \\\\ \\hline\nMAML \\cite{finn2017model} & Optimization, Meta & ConvNet-64 & 48.70$\\pm$1.84 & 63.11$\\pm$0.92 \\\\\nSNAIL \\cite{mishra2018simple} & Optimization, Meta & ResNet-12 & 55.71$\\pm$0.99 & 68.88$\\pm$0.92 \\\\\nLEO \\cite{rusu2018meta} & Optimization, Meta & WRN-28-10 & 61.76$\\pm$0.08 & 77.59$\\pm$0.12 \\\\\nMetaOptNet \\cite{lee2019meta} & Optimization, Meta & ResNet-12 & 64.09$\\pm$0.62 & 
80.00$\\pm$0.45 \\\\ \\hline\nTPN \\cite{liu2018learning} & Transductive, Meta & ConvNet-64 & 55.51$\\pm$0.86 & 69.86$\\pm$0.65 \\\\\nBD-CSPN \\cite{liu2019prototype} & Transductive, Meta & WRN-28-10 & 70.31$\\pm$0.93 & 81.89$\\pm$0.60 \\\\\nTransductive Fine-tuning \\cite{dhillon2019baseline} & Transductive, Transfer & WRN-28-10 & 65.73$\\pm$0.68 & 78.40$\\pm$0.52 \\\\\nLaplacianShot \\cite{ziko2020laplacian} & Transductive, Transfer & DenseNet & 75.57$\\pm$0.19 & 84.72$\\pm$0.13 \\\\ \\hline\nMasked Soft k-Means \\cite{ren2018meta} & Semi, Meta & ConvNet-128 & 50.41$\\pm$0.31 & 64.39$\\pm$0.24 \\\\\nTPN-semi \\cite{liu2018learning} & Semi, Meta & ConvNet-64 & 52.78$\\pm$0.27 & 66.42$\\pm$0.21 \\\\\nLST \\cite{li2019learning} & Semi, Meta & ResNet-12 & 70.10$\\pm$1.90 & 78.70$\\pm$0.80 \\\\ \\hline\nTransMatch \\cite{yu2020transmatch} & Semi, Transfer & WRN-28-10 & 62.93$\\pm$1.11 & 82.24$\\pm$0.59 \\\\\nDPN (Ours) & Semi, Transfer & WRN-28-10 & \\multicolumn{1}{l}{79.67$\\pm$1.06} & \\multicolumn{1}{l}{86.30$\\pm$0.95} \\\\\nPTN (Ours) & Semi, Transfer & WRN-28-10 & \\textbf{82.66$\\pm$0.97} & \\textbf{88.43$\\pm$0.67} \\\\ \\hline \\hline\n\\multicolumn{1}{c}{\\multirow{2}{*}{Methods}} & \\multirow{2}{*}{Type} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{tieredImageNet} \\\\ \\cline{4-5} \n\\multicolumn{1}{c}{} & & & 1-shot & 5-shot \\\\ \\hline\nPrototypical-Net \\cite{snell2017prototypical} & Metric, Meta & ConvNet-256 & 53.31$\\pm$0.89 & 72.69$\\pm$0.74 \\\\\nRelation Network \\cite{sung2018learning} & Metric, Meta & ConvNet-64 & 54.48$\\pm$0.93 & 71.32$\\pm$0.78 \\\\\nDPGN \\cite{yang2020dpgn} & Metric, Meta & ResNet-12 & 72.45$\\pm$0.51 & 87.24$\\pm$0.39 \n \\\\\nRFS \\cite{tian2020rethinking} & Metric, Transfer & ResNet-12 & 71.52$\\pm$0.69 & 86.03$\\pm$0.49 \\\\ \\hline\nMAML \\cite{finn2017model} & Optimization, Meta & ConvNet-64 & 51.67$\\pm$1.81 & 70.30$\\pm$1.75 \\\\\nLEO \\cite{rusu2018meta} & Optimization, Meta & WRN-28-10 & 66.33$\\pm$0.05 & 
81.44$\\pm$0.09 \\\\\nMetaOptNet \\cite{lee2019meta} & Optimization, Meta & ResNet-12 & 65.81$\\pm$0.74 & 81.75$\\pm$0.53 \\\\ \\hline\nTPN \\cite{liu2018learning} & Transductive, Meta & ConvNet-64 & 59.91$\\pm$0.94 & 73.30$\\pm$0.75 \\\\\nBD-CSPN \\cite{liu2019prototype} & Transductive, Meta & WRN-28-10 & 78.74$\\pm$0.95 & 86.92$\\pm$0.63 \\\\\nTransductive Fine-tuning \\cite{dhillon2019baseline} & Transductive, Transfer & WRN-28-10 & 73.34$\\pm$0.71 & 85.50$\\pm$0.50 \\\\\nLaplacianShot \\cite{ziko2020laplacian} & Transductive, Transfer & DenseNet & 80.30$\\pm$0.22 & 87.93$\\pm$0.15 \\\\ \\hline\nMasked Soft k-Means \\cite{ren2018meta} & Semi, Meta & ConvNet-128 & 52.39$\\pm$0.44 & 69.88$\\pm$0.20 \\\\\nTPN-semi \\cite{liu2018learning} & Semi, Meta & ConvNet-64 & 55.74$\\pm$0.29 & 71.01$\\pm$0.23 \\\\\nLST \\cite{li2019learning} & Semi, Meta & ResNet-12 & 77.70$\\pm$1.60 & 85.20$\\pm$0.80 \\\\ \\hline\nDPN (Ours) & Semi, Transfer & WRN-28-10 & \\multicolumn{1}{l}{82.18$\\pm$1.06} & \\multicolumn{1}{l}{88.02$\\pm$0.72} \\\\\nPTN (Ours) & Semi, Transfer & WRN-28-10 & \\textbf{84.70$\\pm$1.14} & \\textbf{89.14$\\pm$0.71} \\\\ \\hline\n\\end{tabular} }\n\\caption{The 5-way, 1-shot and 5-shot classification accuracy (\\%) on the two datasets with 95\\% confidence interval. The best results are in bold. The upper and lower parts of the table show the results on miniImageNet and tieredImageNet, respectively.}\n\\label{Res1}\n\\end{table*}\n\nIn unsupervised embedding transfer, the data augmentation $T$ is defined the same as in \\cite{lee2019meta,tian2020rethinking}. For fair comparisons against TransMatch~\\cite{yu2020transmatch}, we also augment each labeled image 10 times by random transformations and generate the prototypes of each class as labeled samples. We apply an SGD optimizer with a momentum of 0.9. The learning rate is initialized as $1e^{-3}$, and the cosine learning rate scheduler is used for 10 epochs. We set the batch size to 80 with $\\lambda = 1$ in Eq. 
(\\ref{UT}).\nFor Poisson inference, we construct the graph by connecting each sample to its $K$-nearest neighbors with Gaussian weights. We set $K = 30$, and the weight matrix $W$ is symmetrized with $w_{ii} = 0$, which accelerates the convergence of the iteration in Algorithm \\ref{algorithm} without changing the solution of Equation \\ref{PO}. We set the maximum $tp = 100$ in step 7 of Algorithm \\ref{algorithm} by referring to the stop constraint discussed in the Proposed Algorithm section. \nWe set hyper-parameters $\\mu = 1.5, M_1 = 20, M_2 = 40$ and $M_3 = 100$ empirically. Moreover, we set ${\\varphi} = 10, \\upsilon_{\\alpha}=0.5, \\upsilon_{\\sigma}=1.0$.\n\\subsection{Experimental Results}\n\\subsubsection{Comparison with the State-Of-The-Art}\nIn our experiments, we group the compared methods into five categories, and the experimental results on two datasets are summarized in Table \\ref{Res1}. \nWith the auxiliary unlabeled data available, our proposed PTN outperforms the metric-based and optimization-based few-shot models by large margins, indicating that the proposed model effectively utilizes the unlabeled information for assisting few-shot recognition.\nBy integrating the unsupervised embedding transfer and PoissonMBO classifier, PTN achieves superior performance over both transductive and existing SSFSL approaches. Specifically, under the 5-way-1-shot setting, the classification accuracies are 81.57\\% vs. 63.02\\%~TransMatch \\cite{yu2020transmatch}, 84.70\\% vs. 80.30\\%~LaplacianShot \\cite{ziko2020laplacian} on miniImageNet and tieredImageNet, respectively; under the 5-way-5-shot setting, the classification accuracies are 88.43\\% vs. 78.70\\%~LST \\cite{li2019learning}, 89.14\\% vs. 
81.89\\%~BD-CSPN \\cite{liu2019prototype} on miniImageNet and tieredImageNet, respectively.\nThese results demonstrate the superiority of PTN for SSFSL tasks.\n\\subsubsection{Different Extra Unlabeled Samples}\n\\begin{table}[]\n\\centering\n\\begin{tabular}{@{}cccc@{}}\n\\toprule\nMethods & Num\\_U & 1-shot & 5-shot \\\\ \\midrule\n~~PTN$^*$ & \\multicolumn{1}{c}{0} & 76.20$\\pm$0.82 & 84.25$\\pm$0.61 \\\\\nPTN & \\multicolumn{1}{c}{0} & 77.01$\\pm$0.94 & 85.32$\\pm$0.68 \\\\\nPTN & \\multicolumn{1}{c}{20} & 77.20$\\pm$0.92 & 85.93$\\pm$0.82 \\\\\nPTN & \\multicolumn{1}{c}{50} & 79.92$\\pm$1.06 & 86.09$\\pm$0.75 \\\\\nPTN & \\multicolumn{1}{c}{100} & 81.57$\\pm$0.94 & 87.17$\\pm$0.58 \\\\\nPTN & \\multicolumn{1}{c}{200} & \\textbf{82.66$\\pm$0.97} & \\textbf{88.43$\\pm$0.76} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{The 5-way, 1-shot and 5-shot classification accuracy (\\%) with different numbers of extra unlabeled samples on miniImageNet. PTN$^*$ denotes that we adopt PTN as the transductive model without fine-tuning the embedding. Best results are in bold.}\n\\label{Res2}\n\\end{table}\n\nWe show the results of using different numbers of extra unlabeled instances in Table \\ref{Res2}. For Num\\_U = 0, PTN$^*$ can be viewed as the transductive model without extra unlabeled data, where we treat query samples as the unlabeled data, and we do not fine-tune the embedding with query labels for fair comparisons. 
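The graph used for the Poisson inference in the implementation details ($K$-nearest neighbors, Gaussian weights, $w_{ii}=0$) can be constructed as below; the local bandwidth choice is a common convention we assume, not necessarily the paper's exact one:

```python
import numpy as np

def knn_gaussian_graph(X, k=30):
    """Symmetric k-NN affinity matrix with Gaussian weights and zero
    diagonal, as used for the Poisson inference (a sketch; the
    bandwidth is set per point to the squared distance of its k-th
    neighbour, which is one common choice)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    order = np.argsort(d2, axis=1)                       # order[i, 0] == i
    sigma2 = d2[np.arange(n), order[:, k]]               # local bandwidth
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = order[i, 1:k + 1]                         # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / sigma2[i])
    W = np.maximum(W, W.T)                               # symmetrize
    np.fill_diagonal(W, 0.0)                             # enforce w_ii = 0
    return W
```

Symmetrizing and zeroing the diagonal preserves the structure the Poisson iteration relies on while removing trivial self-loops.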
Contrary to PTN$^*$, the proposed PTN model utilizes the query samples to fine-tune the embedding when Num\\_U=0.\nIt can be observed that our PTN model achieves better performance with more extra unlabeled samples, which indicates the effectiveness of PTN in mining the unlabeled auxiliary information for the few-shot problem.\n\n\n\\subsubsection{Results with Distractor Classes}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.95\\linewidth]{.\/images\/Few-shot.pdf}\n\\end{center}\n\\caption{The 5-way, 1-shot and 5-shot classification accuracy (\\%) with different numbers of extra unlabeled samples on miniImageNet. w\/D means with distractor classes.}\n\\label{dist}\n\\end{figure}\n\nInspired by \\cite{ren2018meta,liu2018learning,yu2020transmatch}, we further investigate the influence of distractor classes, where the extra unlabeled data are collected from classes with no overlap with the labeled support samples. We follow the settings in \\cite{ren2018meta,liu2018learning}. As shown in Figure \\ref{dist}, even with distractor class data, the proposed PTN still outperforms other SSFSL methods by a large margin, which indicates the robustness of the proposed PTN in dealing with distractor unlabeled data.\n\n\\subsection{Ablation Study}\n\\begin{table}[t]\n\\begin{minipage}{16.5cm}\n\\begin{tabular}{@{}lcc@{}}\n\\toprule\nMethods & 1-shot & 5-shot \\\\ \\midrule\nTransMatch & 62.93$\\pm$1.11 & 82.24$\\pm$0.59 \\\\\nLabel Propagation (LP) & 74.04$\\pm$1.00 & 82.60$\\pm$0.68 \\\\\nPoissonMBO & 79.67$\\pm$1.02 & 86.30$\\pm$0.65 \\\\\nDPN & 80.00$\\pm$0.83 & 87.17$\\pm$0.51 \\\\\nUnsup Trans+LP \\footnote{Unsup Trans means Unsupervised Embedding Transfer.} & 75.65$\\pm$1.06 & 84.46$\\pm$0.68 \\\\\nUnsup Trans+PoissonMBO & 80.73$\\pm$1.11 & 87.41$\\pm$0.63 \\\\\nUnsup Trans+PTN \\footnote{PTN consists of Unsup Trans and DPN.} & \\textbf{82.66$\\pm$0.97} & \\textbf{88.43$\\pm$0.76} \\\\ \\bottomrule\n\\end{tabular}\n\\end{minipage}\n\\caption{Ablation studies 
of the proposed PTN. All methods are based on a pre-trained embedding with 200 extra unlabeled samples per class on miniImageNet for 5-way, 1-shot and 5-shot classification (\\%). Best results are in bold.}\n\\label{aby}\n\\end{table}\n\nWe analyze different components of the PTN and summarize the results in Table~\\ref{aby}. All compared approaches are based on the pre-trained WRN-28-10 embedding. \n\nFirst of all, we investigate the graph propagation component (classifier).\nIt can be observed that graph-based models such as Label Propagation~\\cite{zhou2004learning} and PoissonMBO~\\cite{calder2020poisson} outperform the inductive model TransMatch~\\cite{yu2020transmatch}, which is consistent with previous studies~\\cite{zhu2005semi,liu2018learning,ziko2020laplacian}. Compared to directly applying PoissonMBO on few-shot tasks, the proposed DPN \\textit{\\textbf{(without Unsupervised Embedding Transfer)}} achieves better performance, which indicates that it is necessary to perform the feature calibration to eliminate the cross-class biases between support and query data distributions before label inference.\n\nFor investigating the proposed unsupervised embedding transfer in representation learning, we observe that all the graph-based models achieve clear improvement after incorporating the proposed transfer module. For instance, the Label Propagation obtains 1.61\\%, 1.86\\% performance gains on 5-way-1-shot and 5-way-5-shot miniImageNet classification. These results indicate the effectiveness of the proposed unsupervised embedding transfer. \nFinally, by integrating the unsupervised embedding transfer and graph propagation classifier, the PTN model achieves the best performance among all compared approaches in Table \\ref{aby}. 
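The cross-class bias elimination that distinguishes DPN from plain PoissonMBO can be illustrated with a simple mean-shift calibration of the query features; this is a minimal sketch of the idea, not necessarily the exact formula used in the paper:

```python
import numpy as np

def calibrate_query_features(support_feats, query_feats):
    """Shift query features so their mean matches the support mean,
    removing the cross-class distribution bias between the support
    and query sets (a sketch of the calibration idea)."""
    bias = support_feats.mean(axis=0) - query_feats.mean(axis=0)
    return query_feats + bias
```

After this shift, the support and query distributions are centered identically, so the subsequent graph-based label inference is not dominated by a global feature offset.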
\n\n\\subsection{Inference Time}\nWe conduct inference time experiments to investigate the computational efficiency of the proposed Poisson Transfer Network (PTN) on the \\textit{mini}ImageNet~\\cite{vinyals2016matching} dataset. Following \\cite{ziko2020laplacian}, we compute the average inference time required for each 5-shot task. The results are shown in Table~\\ref{time}. Compared with inductive models, the proposed PTN requires more time due to the graph-based Poisson inference. However, our model achieves better classification performance than inductive ones and other transductive models, with affordable inference time. \n\n\\begin{table}[h]\n\\caption{Average inference time (in seconds) for the 5-shot tasks on the \\textit{mini}ImageNet dataset.}\n\\begin{tabular}{@{}lc@{}}\n\\toprule\nMethods & Inference Time \\\\ \\midrule\nSimpleShot~\\cite{wang2019simpleshot} & 0.009 \\\\\nLaplacianShot~\\cite{ziko2020laplacian} & 0.012 \\\\\nTransductive fine-tune~\\cite{dhillon2019baseline} & 20.7 \\\\\nPTN (Ours) & 13.68 \\\\ \\bottomrule\n\\end{tabular}\n\n\\label{time}\n\\end{table}\n\n\\begin{table*}[t!]\n\\centering\n\\caption{Accuracy with various extra unlabeled samples for different semi-supervised few-shot methods on the \\textit{mini}ImageNet dataset. All results are averaged over 600 episodes with 95\\% confidence intervals. 
The best results are in bold.}\n\\begin{tabular}{@{}lccccc@{}}\n\\toprule\n & \\multicolumn{5}{c}{\\textit{mini}ImageNet 5-way-1-shot} \\\\ \\midrule\n & 0 & 20 & 50 & 100 & 200 \\\\ \\midrule\nTransMatch~\\cite{yu2020transmatch} & - & 58.43$\\pm$0.93 & 61.21$\\pm$1.03 & 63.02$\\pm$1.07 & 62.93$\\pm$1.11 \\\\\nLabel Propagation~\\cite{zhou2004learning} & 69.74$\\pm$0.72& 71.80$\\pm$1.02& 72.97$\\pm$1.06 & 73.35$\\pm$1.05 & 74.04$\\pm$1.00 \\\\\nPoissonMBO~\\cite{calder2020poisson} & 74.79$\\pm$1.06 & 76.01$\\pm$0.99 & 76.67$\\pm$1.02 & 78.28$\\pm$1.02 & 79.67$\\pm$1.02 \\\\\nDPN (Ours) & 75.85$\\pm$0.97 & 76.10$\\pm$1.06 & 77.01$\\pm$0.92& 79.55$\\pm$1.13 & 80.00$\\pm$0.83 \\\\\nPTN (Ours) & \\textbf{77.01$\\pm$0.94} & \\textbf{77.20$\\pm$0.92} & \\textbf{79.92$\\pm$1.06} & \\textbf{81.57$\\pm$0.94} & \\textbf{82.66$\\pm$0.97} \\\\ \\midrule\n & \\multicolumn{5}{c}{\\textit{mini}ImageNet 5-way-5-shot} \\\\ \\midrule\n & 0 & 20 & 50 & 100 & 200 \\\\ \\midrule\nTransMatch & - & 76.43$\\pm$0.61 & 79.30$\\pm$0.59 & 81.19$\\pm$0.59 & 82.24$\\pm$0.59 \\\\\nLabel Propagation & 75.50$\\pm$0.60 & 78.47$\\pm$0.60 & 80.40$\\pm$0.61 & 81.65$\\pm$0.59 & 82.60$\\pm$0.68 \\\\\nPoissonMBO & 83.89$\\pm$0.66 & 84.43$\\pm$0.67 & 84.94$\\pm$0.82 & 85.51$\\pm$0.81 & 86.30$\\pm$0.65 \\\\\nDPN (Ours) & 84.74$\\pm$0.63 & 85.04$\\pm$0.66 & 85.36$\\pm$0.60 & 86.09$\\pm$0.63 & 87.17$\\pm$0.51 \\\\\nPTN (Ours) & \\textbf{85.32$\\pm$0.68} & \\textbf{85.93$\\pm$0.82} & \\textbf{86.09$\\pm$0.75} & \\textbf{87.17$\\pm$0.58} & \\textbf{88.43$\\pm$0.76} \\\\ \\bottomrule\n\\label{unlab}\n\\end{tabular}\n\\end{table*}\n\n\\subsection{Results with Different Extra Unlabeled}\nWe conduct further experiments to investigate the current semi-supervised few-shot methods in mining the value of the unlabeled data. All approaches are based on a pre-trained WRN-28-10~\\cite{zagoruyko2016wide} model for fair comparisons. 
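The Poisson label inference that PoissonMBO, DPN, and PTN build on can be sketched following Calder et al. (2020): the labels enter as zero-mean point sources at the labeled nodes, and the solution is obtained by a fixed-point iteration with the graph Laplacian. The MBO thresholding steps and the stopping rule are omitted in this sketch:

```python
import numpy as np

def poisson_learning(W, labeled_idx, labels, C, n_iter=300):
    """Graph Poisson learning (a sketch after Calder et al. 2020):
    approximately solve L u = B, where B places (e_y - y_bar) at each
    labeled node, via the iteration u <- u + D^{-1} (B - L u)."""
    n = W.shape[0]
    D = W.sum(axis=1)
    L = np.diag(D) - W                        # unnormalized graph Laplacian
    onehot = np.eye(C)[labels]                # labels as standard basis vectors
    B = np.zeros((n, C))
    B[labeled_idx] = onehot - onehot.mean(axis=0)  # zero-mean point sources
    u = np.zeros((n, C))
    for _ in range(n_iter):
        u = u + (B - L @ u) / D[:, None]
    return u.argmax(axis=1)                   # predicted class per node
```

Because the sources sum to zero, the iteration propagates label information across the whole graph instead of developing the localized spikes of Laplace learning, which is the stability property the paper exploits at low label rates.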
As indicated in Table \\ref{unlab}, with more unlabeled samples, all the models achieve higher classification performances. However, our proposed PTN model achieves the highest performance among the compared methods, which validates the superior capacity of the proposed model in using the extra unlabeled information for boosting few-shot methods.\n\n\\begin{table*}[t]\n\\centering\n\\caption{Semi-supervised comparison on the \\textit{mini}ImageNet dataset.}\n\\begin{threeparttable}\n\\begin{tabular}{@{}lcccc@{}}\n\\toprule\nMethods & 1-shot & 5-shot & \\multicolumn{1}{l}{1-shot w\/D} & \\multicolumn{1}{l}{5-shot w\/D} \\\\ \\midrule\nSoft K-Means~\\cite{ren2018meta} & 50.09$\\pm$0.45 & 64.59$\\pm$0.28 & 48.70$\\pm$0.32 & 63.55$\\pm$0.28 \\\\\nSoft K-Means+Cluster~\\cite{ren2018meta} & 49.03$\\pm$0.24 & 63.08$\\pm$0.18 & 48.86$\\pm$0.32 & 61.27$\\pm$0.24 \\\\\nMasked Soft k-Means~\\cite{ren2018meta} & 50.41$\\pm$0.31 & 64.39$\\pm$0.24 & 49.04$\\pm$0.31 & 62.96$\\pm$0.14 \\\\\nTPN-semi~\\cite{liu2018learning} & 52.78$\\pm$0.27 & 66.42$\\pm$0.21 & 50.43$\\pm$0.84 & 64.95$\\pm$0.73 \\\\\nTransMatch~\\cite{yu2020transmatch} & 63.02$\\pm$1.07 & 81.19$\\pm$0.59 & 62.32$\\pm$1.04 & 80.28$\\pm$0.62 \\\\ \\midrule\nPTN (Ours) & \\textbf{82.66$\\pm$0.97} & \\textbf{88.43$\\pm$0.67} & \\textbf{81.92$\\pm$1.02} & \\textbf{87.59$\\pm$0.61} \\\\ \\bottomrule\n\\end{tabular}\n\\begin{tablenotes}\n\\item[$\\divideontimes$]``w\/D\" means with distraction classification. In this setting, many extra unlabeled samples are from the distraction classes, which is different from the support labeled classes. All results are averaged over 600 episodes with 95\\% confidence intervals. The best results are in bold. 
\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{mini}\n\\end{table*}\n\n\\begin{table*}[!hpbt]\n\\centering\n\\caption{Semi-supervised comparison on the \\textit{tiered}ImageNet dataset.}\n\\begin{threeparttable}\n\\begin{tabular}{@{}lcccc@{}}\n\\toprule\nMethods & 1-shot & 5-shot & \\multicolumn{1}{l}{1-shot w\/D} \n& \\multicolumn{1}{l}{5-shot w\/D} \n\\\\ \\midrule\nSoft K-Means~\\cite{ren2018meta} & 51.52$\\pm$0.36 & 70.25$\\pm$0.31 & 49.88$\\pm$0.52 & 68.32$\\pm$0.22 \\\\\nSoft K-Means+Cluster~\\cite{ren2018meta} & 51.85$\\pm$0.25 & 69.42$\\pm$0.17 & 51.36$\\pm$0.31 & 67.56$\\pm$0.10 \\\\\nMasked Soft k-Means~\\cite{ren2018meta} & 52.39$\\pm$0.44 & 69.88$\\pm$0.20 & 51.38$\\pm$0.38& 69.08$\\pm$0.25 \\\\\nTPN-semi~\\cite{liu2018learning} & 55.74$\\pm$0.29 & 71.01$\\pm$0.23 & 53.45$\\pm$0.93& 69.93$\\pm$0.80 \\\\ \\midrule\nPTN (Ours) & \\textbf{84.70$\\pm$1.14} & \\textbf{89.14$\\pm$0.71} & \\textbf{83.84$\\pm$1.07} & \\textbf{88.06$\\pm$0.62} \\\\ \\bottomrule\n\\end{tabular}\n\\begin{tablenotes}\n\\item[$\\divideontimes$]``w\/D\" means with distraction classification. In this setting, many extra unlabeled samples are from the distraction classes, which is different from the support labeled classes. All results are averaged over 600 episodes with 95\\% confidence intervals. The best results are in bold. \n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tiered}\n\\end{table*}\n\n\n\\subsection{Results with Distractor Classification}\nWe report the results of the proposed PTN on both \\textit{mini}ImageNet~\\cite{vinyals2016matching} and \\textit{tiered}ImageneNet~\\cite{ren2018meta} datasets under different settings in Table~\\ref{mini} and Table~\\ref{tiered}, respectively. It can be observed that the classification results of all semi-supervised few-shot models are degraded due to the distractor classes. However, the proposed PTN model still outperforms other semi-supervised few-shot methods with a large margin. 
This also indicates the superiority of the proposed PTN model in dealing with semi-supervised few-shot classification tasks over previous approaches.\n\n\n\n\\section{Introduction}\n\\noindent\nFew-shot learning \\cite{miller2000learning,fei2006one,vinyals2016matching} aims to learn a model that generalizes well with a few instances of each novel class.\nIn general, a few-shot learner is first trained on a substantial annotated dataset, also known as the base-class set, and then adapted to unseen novel classes with a few labeled instances.\nDuring the evaluation, a set of few-shot tasks are fed to the learner, where each task consists of a few support (labeled) samples and a certain number of query (unlabeled) data.\nThis research topic has proved immensely appealing in the past few years, as a large number of few-shot learning methods have been proposed from various perspectives. Mainstream methods can be roughly grouped into two categories. The first one is learning from episodes \\cite{vinyals2016matching}, also known as meta-learning, which adopts the base-class data to create a set of episodes. Each episode is a few-shot learning task, with support and query samples that simulate the evaluation procedure.\nThe second type is the transfer-learning based method, which focuses on learning a decent classifier by transferring the domain knowledge from a model pre-trained on the large base-class set~\\cite{chen2018closer,qiao2018few}. This paradigm decouples the few-shot learning process into representation learning and classification, and has shown favorable performance against meta-learning methods in recent works \\cite{tian2020rethinking,ziko2020laplacian}. 
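In its simplest form, the transfer paradigm just described amounts to freezing the pre-trained embedding and fitting a lightweight classifier on the few support features; a nearest-prototype rule is used below purely as an illustration (our choice, not a specific method from the cited works):

```python
import numpy as np

def prototype_classifier(support_feats, support_labels, query_feats, C):
    """Classify queries by the nearest class prototype (the mean of
    each class's support embeddings), computed on top of a frozen,
    pre-trained embedding (an illustrative sketch)."""
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in range(C)])
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

The decoupling is explicit here: all learning capacity lives in the embedding, while classification reduces to a parameter-free distance rule over the $C$ prototypes.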
Our method shares a similar motivation with transfer-learning based methods and proposes to utilize the extra unlabeled novel-class data and a pre-trained embedding to tackle the few-shot problem.\n\nCompared with collecting labeled novel-class data, it is much easier to obtain abundant unlabeled data from these classes. Therefore, semi-supervised few-shot learning (SSFSL)~\\cite{ren2018meta,liu2018learning,li2019learning,yu2020transmatch} is proposed to combine the auxiliary information from labeled base-class data and extra unlabeled novel-class data to enhance the performance of few-shot learners.\nThe core challenge in SSFSL is how to fully explore the auxiliary information from these unlabeled samples.\nPrevious SSFSL works indicate that graph-based models~\\cite{liu2018learning,ziko2020laplacian} can learn a better classifier than inductive ones~\\cite{ren2018meta,li2019learning,yu2020transmatch}, since these methods directly model the relationship between the labeled and unlabeled samples during the inference.\nHowever, current graph-based models adopt Laplace learning~\\cite{zhu2003semi} to conduct label propagation. The solutions of Laplace learning develop localized spikes near the labeled samples but are almost constant far from them, \\textit{i.e.,} label values are not propagated well, especially with few labeled samples. Therefore, these models suffer from an underdeveloped message-passing capacity for labels.\nOn the other hand, most SSFSL methods adapt the feature embedding pre-trained on base-class data (meta- or transfer- pre-trained) as the novel-class embedding. 
This may lead to the embedding degeneration problem: since the pre-trained model is designed for base-class recognition, it tends to learn an embedding that represents only base-class information and to lose information that might be useful outside the base classes.\n\nTo address the above issues, we propose a novel transfer-learning based SSFSL method, named Poisson Transfer Network (PTN). Specifically, \\textbf{\\textit{to improve the capacity of graph-based SSFSL models in message passing}}, we propose to revise the Poisson model, tailoring it to few-shot problems by incorporating the query feature calibration and the Poisson MBO model. Poisson learning~\\cite{calder2020poisson} is provably more stable and informative than traditional Laplace learning in low label rate semi-supervised problems. However, directly employing Poisson MBO for SSFSL may suffer from the cross-class bias due to the data distribution drift between the support and query data. Therefore, we improve the Poisson MBO model by explicitly eliminating the cross-class bias before label inference. \n\\textbf{\\textit{To tackle the novel-class embedding degeneration problem}}, we propose to transfer the pre-trained base-class embedding to the novel-class embedding by adopting unsupervised contrastive training \\cite{he2020momentum,chen2020simple} on the extra unlabeled novel-class data. By constraining the distances between augmented positive pairs while pushing negative ones apart, the proposed transfer scheme captures the novel-class distribution implicitly. 
This strategy effectively avoids the possible overfitting of retraining the feature embedding on the few labeled instances.\n\nBy integrating the Poisson learning and the novel-class specific embedding, the proposed PTN model can fully explore the auxiliary information of extra unlabeled data for SSFSL tasks.\nThe contributions are summarized as follows:\n\\begin{itemize}\n\\item We propose a Poisson learning based model to improve the capacity of mining the relations between the labeled and unlabeled data for graph-based SSFSL.\n\n\\item We propose to adopt unsupervised contrastive learning in representation learning with extra unlabeled data to improve the generality of the pre-trained base-class embedding for novel-class recognition. \n\n\\item \nExtensive experiments are conducted on two benchmark datasets to investigate the effectiveness of PTN, and PTN achieves state-of-the-art performance.\n\\end{itemize}\n\\section{Methodology}\n\\subsection{Problem Definition}\\label{def}\nIn the standard few-shot learning, there exists a labeled support set $S$ of $C$ different classes, ${ S } = \\left\\{ \\left({x} _ { s } , {y} _ { s } \\right) \\right\\} _ { s = 1 } ^ { K \\times C }$, where $x_s$ is the labeled sample and $y_s$ denotes its label. We use the standard basis vector $\\mathbf{e}_{i} \\in \\mathbb{R}^{C}$ to represent the $i$-th class, \\textit{i.e.}, $y_s \\in \\left\\{\\mathbf{e}_{1}, \\mathbf{e}_{2}, \\ldots, \\mathbf{e}_{C}\\right\\}$. Given an unlabeled query sample $x_q$ from the query set ${ Q } = \\left\\{ {x} _ { q } \\right\\} _ { q=1 } ^ { V }$, the goal is to assign the query to one of the $C$ support classes. The labeled support set and unlabeled query set share the same label space, and the novel-class dataset $\\mathcal{D}_{novel}$ is thus defined as $\\mathcal{D}_{novel} = S \\cup Q$. 
If $S$ contains $K$ labeled samples for each of the $C$ categories, the task is denoted as a $C$-way-$K$-shot problem.\nIt is hardly possible to obtain an ideal classifier with only the limited annotated set $S$. Therefore, few-shot models usually utilize a fully annotated dataset, which has a similar data distribution to but a disjoint label space from $\\mathcal{D}_{novel}$, as an auxiliary dataset $\\mathcal{D}_{base}$, denoted as the base-class set.\n\nFor semi-supervised few-shot learning (SSFSL), we have an extra unlabeled support set ${ U } = \\left\\{ {x} _ { u } \\right\\} _ { u = 1 } ^ { N }$. These additional $N$ unlabeled samples are usually drawn from each of the $C$ support classes in the standard setting, or from other novel classes under distractor classification settings. The new novel-class dataset $\\mathcal{D}_{novel}$ is then defined as $\\mathcal{D}_{novel} = S \\cup Q \\cup U$. The goal of SSFSL is to maximize the value of the extra unlabeled data to improve few-shot methods.\n\nFor a clear understanding, the details of the proposed PTN are introduced as follows: we first introduce the proposed Representation Learning, and then we illustrate the proposed Poisson learning model for label inference.\n\n\\subsection{Representation Learning}\nThe representation learning aims to learn a well-generalized novel-class embedding through Feature Embedding Pre-training and Unsupervised Embedding Transfer. \n\\subsubsection{Feature Embedding Pre-training}\nOn the left side of Figure \\ref{framework}, the first part of PTN is the feature embedding pre-training.\n\nBy employing the cross-entropy loss between predictions and ground-truth labels in $\\mathcal{D}_{base}$, we train the base encoder $f_{\\theta_0}$ in a fully-supervised way, which is the same as \\cite{chen2018closer,yu2020transmatch,tian2020rethinking}. 
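For illustration only, this pre-training stage is plain supervised classification with a linear head on top of the encoder output. The following minimal NumPy toy sketch uses random stand-in features and hypothetical shapes, and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, labels):
    # Mean softmax cross-entropy; labels are integer class ids.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy stand-ins: `feats` plays the role of encoder outputs on the base set,
# and W plays the role of the linear classifier head.
n, dim, n_base_classes = 32, 16, 5
feats = rng.normal(size=(n, dim))
labels = rng.integers(0, n_base_classes, size=n)
W = np.zeros((dim, n_base_classes))

for _ in range(200):                       # plain gradient descent
    logits = feats @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(n), labels] -= 1.0     # d(loss)/d(logits)
    W -= 0.1 * feats.T @ probs / n

loss = cross_entropy(feats @ W, labels)
```

In the paper the encoder parameters are trained jointly with the head; this sketch freezes the features and only fits the head to keep the example short.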
\nThis stage can generate a powerful embedding for the downstream few-shot learner.\n\n\\subsubsection{Unsupervised Embedding Transfer} \\label{UET}\nDirectly employing the pre-trained base-class embedding for the novel classes may suffer from the degeneration problem. However, retraining the base-class embedding with the limited labeled instances easily leads to overfitting. How can we train a novel-class embedding to represent things beyond labels when our only supervision is the limited labels? Our solution is unsupervised contrastive learning.\nUnsupervised learning, especially contrastive learning~\\cite{he2020momentum,chen2020simple}, has recently shown great potential in representation learning for various downstream vision tasks, and most of these works train a model from scratch. \nHowever, unsupervised pre-trained models perform worse than fully-supervised pre-trained models. \nUnlike previous works, we propose to adopt contrastive learning to retrain the pre-trained embedding with the unlabeled novel data. In this way, we can learn a decent novel-class embedding by integrating the fully-supervised pre-training scheme with unsupervised contrastive fine-tuning. \n\nSpecifically, for a minibatch of $n$ examples from the unlabeled novel-class subset ${ U_i = \\{x_u\\}_{u=1}^n }$, randomly sampling two data augmentation operators $t,t'\\in {T}$, we can generate a new feature set $Z = \\{ Z_t = \\{f_{\\theta_0} \\circ t(x_u)\\}_{u=1}^n \\} \\cup \\{ Z_{t'} = \\{f_{\\theta_0} \\circ t'(x_u)\\}_{u=1}^n \\}$, resulting in $n$ pairs of feature points. We treat each feature pair from the same raw data input as a positive pair, and the other $2(n-1)$ feature points as negative samples. 
Then the contrastive loss for the minibatch is defined as\n\\begin{equation}\n\\ell_{cont}=- \\sum_{i,j = 1}^{n} \\log \\frac{\\exp \\left(\\operatorname{cosine}\\left({z}_{i}, {z}_{j}\\right) \/ \\tau\\right)}{\\sum_{k \\neq i} \\exp \\left(\\operatorname{cosine}\\left({z}_{i}, {z}_{k}\\right) \/ \\tau\\right)},\n\\label{NCE}\n\\end{equation}\n\nwhere $z_i,z_j$ denote a positive feature pair from $Z$, $\\tau$ is a temperature parameter, and $\\operatorname{cosine}(\\cdot)$ represents the cosine similarity. Then, we adopt a Kullback-Leibler divergence ($\\ell_{KL}$) between the two feature subsets $Z_t$ and $Z_{t'}$ as a regularization term. Therefore, the final unsupervised embedding transfer loss $\\ell_{UT}$ is defined as \n\\begin{equation}\n\\ell_{UT} = \\ell_{cont} + \\lambda \\ell_{KL}(Z_t ~\\|~ Z_{t'}).\n\\label{UT}\n\\end{equation}\nBy training on the extra unlabeled data with this loss, we can learn a robust novel-class embedding $f_{\\theta}$ from $f_{\\theta_0}$.\n\n\\subsection{Poisson Label Inference}\nPrevious studies \\cite{zhu2003semi,zhou2004learning,zhu2005semi,liu2018learning,ziko2020laplacian} indicate that graph-based few-shot classifiers show superior performance over inductive ones. Therefore, we propose constructing the classifier with a graph-based Poisson model, which adopts a different optimization strategy from the representation learning. \nThe Poisson model~\\cite{calder2020poisson} has been proven superior to traditional Laplace-based graph models~\\cite{zhu2003semi,zhou2004learning} both theoretically and experimentally, especially for the low label rate semi-supervised problem. \nHowever, directly applying this model to the few-shot task will suffer from a cross-class bias challenge, caused by the data distribution bias between the support data (including labeled support and unlabeled support data) and the query data. \n\nTherefore, we revise this powerful model into our classifier by eliminating the support-query bias. 
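For concreteness, the contrastive term $\ell_{cont}$ in Eq. (\ref{NCE}) can be sketched in NumPy as follows. This is a minimal toy illustration under common NT-Xent conventions, not the authors' implementation, and the KL regularizer of Eq. (\ref{UT}) is omitted:

```python
import numpy as np

def contrastive_loss(z_t, z_tp, tau=0.5):
    """NT-Xent-style sketch of the contrastive loss above: z_t and z_tp are
    the (n, d) features of the two augmented views; (i, i + n) is the
    positive pair after stacking the two views."""
    z = np.concatenate([z_t, z_tp], axis=0)           # 2n feature points
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine sim = dot product
    sim = z @ z.T / tau                               # scaled similarities
    n2 = len(z)
    loss = 0.0
    for i in range(n2):
        pos = sim[i, (i + n2 // 2) % n2]              # similarity to positive
        denom = np.exp(np.delete(sim[i], i)).sum()    # sum over k != i
        loss += np.log(denom) - pos
    return loss / n2
```

Minimizing this pulls the two augmented views of each sample together while pushing the remaining $2(n-1)$ points apart.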
We explicitly propose a query feature calibration strategy before the final Poisson label inference. \nIt is worth noticing that the proposed graph-based classifier can be directly appended to the pre-trained embedding without adopting the unsupervised embedding transfer training. We dub this baseline model the \\textit{Decoupled Poisson Network} (\\textit{DPN}). \n\n\\subsubsection{Query Feature Calibration}\nThe support-query data distribution bias, also referred to as the cross-class bias~\\cite{liu2019prototype}, is one of the reasons for the degeneracy of the few-shot learner. In this paper, we propose a simple but effective method to eliminate this distribution bias for Poisson graph inference. For an SSFSL task, we fuse the labeled support set $S$ and the extra unlabeled set $U$ as the final support set $ B = S \\cup U$. We denote the normalized embedded support feature set and query feature set as $Z_b = \\{z_b\\}$ and $Z_q = \\{z_q\\}$;\nthe cross-class bias is defined as \n\\begin{equation}\n\\begin{split}\n& \\Delta_{\\text {cross}}=\\mathbb{E}_{z_{b} \\sim p_{\\mathcal{B}}}\\left[z_{b}\\right]-\\mathbb{E}_{z_{q} \\sim p_{\\mathcal{Q}}}\\left[z_{q}\\right] \\\\\n& \\quad\\quad~ = \\frac{1}{|\\mathcal{B}|} \\sum_{b=1}^{|\\mathcal{B}|} {z}_{b}-\\frac{1}{|\\mathcal{Q}|} \\sum_{q=1}^{|\\mathcal{Q}|} {z}_{q}.\n\\end{split}\n\\label{QFR}\n\\end{equation}\nWe then add the bias $\\Delta_{cross}$ to the query features. In this way, the support-query bias is largely eliminated. After that, a Poisson MBO model is adopted to infer the query labels.\n\n\\subsubsection{The Poisson Merriman\u2013Bence\u2013Osher Model}\nWe denote the embedded feature set as $Z_{novel} = Z_b \\cup Z_q = \\{z_1, z_2, \\dots, z_m\\}$ ($m=K\\times C + N + V$), where the first $K \\times C$ feature points belong to the labeled support set, the last $V$ feature points belong to the query set, and the remaining $N$ points denote the unlabeled support set. 
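The calibration of Eq. (\ref{QFR}) amounts to a mean shift of the query features toward the support distribution; a minimal sketch (toy code, not the authors' implementation):

```python
import numpy as np

def calibrate_queries(Z_b, Z_q):
    """Shift query features by the support/query mean gap (the cross-class
    bias defined above). Z_b: (|B|, d) support features, Z_q: (|Q|, d)."""
    delta_cross = Z_b.mean(axis=0) - Z_q.mean(axis=0)  # support mean - query mean
    return Z_q + delta_cross                           # calibrated query features
```

After calibration, the query mean coincides exactly with the support mean, which removes the first-order component of the distribution drift.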
We build a graph with the feature points as the vertices, and the edge weight $w_{ij}$ is the similarity between feature\npoints $z_i$ and $z_j$, defined as $w_{i j}=\\exp \\left(-4\\left|z_{i}-z_{j}\\right|^{2} \/ d_{K}\\left(z_{i}\\right)^{2}\\right)$, where $d_{K}\\left(z_{i}\\right)^{2}$ is the squared distance between $z_i$ and its $K$-th nearest neighbor. We set $w_{ij} \\ge 0$ and $w_{ij} = w_{ji}$. Correspondingly, we define the weight matrix as $W=[w_{ij}]$, the degree matrix as $D=\\operatorname{diag}([d_i=\\sum_{j=1}^{m}w_{ij}])$, and the unnormalized Laplacian as $L = D - W$. \nAs the first $K\\times C$ feature points have ground-truth labels, we use $\\bar y = \\frac{1}{K \\times C} \\sum_{s=1}^{K\\times C} y_s $ to denote the average label vector, and we let the indicator $\\mathbb{I}_{ij} = 1 $ if $i=j$, and $\\mathbb{I}_{ij} = 0 $ otherwise. The goal of this model is to learn a classifier $g: z \\rightarrow \\mathbb{R}^{C}$.\nBy solving the Poisson equation:\n\\begin{equation}\n\\begin{split}\n& L g\\left(z_{i}\\right)=\\sum_{j=1}^{K\\times C}\\left(y_{j}-\\bar{y}\\right) \\mathbb{I}_{ij} \\quad \\text { for } i=1, \\ldots, m, \n\\end{split}\n\\label{PO}\n\\end{equation}\nsatisfying $\\sum_{i=1}^{m} \\sum_{k=1}^{m}w_{ik} g\\left(z_{i}\\right)=0$, we obtain the label prediction function $g(z_i)=(g_1(z_i),g_2(z_i),\\dots,g_C(z_i))$. The predicted label $\\hat{y_i}$ of vertex $z_i$ is then determined as $\\hat{y_i} = {\\arg \\max_{j \\in\\{1, \\ldots, C\\}} }\\left\\{g_{j}(z_i)\\right\\}$. Let $G$ denote the $ m \\times C $ matrix of label predictions for all the data. We concatenate the support labels to form a label matrix $Y = [y_s] \\in \\mathbb{R}^{C \\times (K\\times C)} $. Let $A = [Y - \\bar y, \\mathbf{0}^{C \\times (m-K\\times C)}]$ denote the initial labels of all the data, in which the labels of all unlabeled data are zero. The query label of Eq. 
(\\ref{PO}) can be determined by\n\\begin{equation}\n G^{tp+1} = G^{tp} + D^{-1} ( A^T - LG^{tp}),\n \\label{POSO}\n\\end{equation}\nwhere $G^{tp}$ denotes the predicted labels of all data at time step $tp$.\nWe can obtain a stable classifier $g$ with a certain number of iterations of Eq. (\\ref{POSO}). After that, we adopt a graph-cut method to improve the inference performance by incrementally adjusting the classifier's decision boundary. The graph-cut problem is defined as\n\\begin{equation}\n\\min_{g: Z\\rightarrow H \\atop(g)_{z}=o}\\left\\{ g^T L g -\\mu \\sum_{i=1}^{K \\times C}\\left(y_{i}-\\bar{y}\\right) \\cdot g\\left(z_{i}\\right)\\right\\},\n\\label{POMBO}\n\\end{equation}\nwhere $H = \\{ \\mathbf{e}_{1}, \\mathbf{e}_{2}, \\ldots, \\mathbf{e}_{C} \\}$ denotes the annotated samples' label set, $(g)_z = \\frac{1}{m}\\sum_{i=1}^m g(z_i)$ is the fraction of vertices assigned to each of the $C$ classes, and $o =[o_1,o_2,\\dots,o_C]^T \\in \\mathbb{R}^{C}$ is the prior knowledge of the class size distribution, where $o_i$ is the fraction of data belonging to class $i$. With the constraint $(g)_z = o$, we can encode this prior \nknowledge into the Poisson model.\nThe term $g^T L g = \\frac{1}{2} \\sum_{i, j=1}^{m} w_{i j}(g(z_i)-g(z_j))^{2}$ is the graph-cut energy of the classification given by $g=[g(z_1),g(z_2),\\dots,g(z_m)]^T$, widely used in semi-supervised graph models~\\cite{zhu2003semi,zhu2005semi,zhou2004learning}.\n\nIn Eq. 
(\\ref{POMBO}), the solution is constrained to discrete values, which makes the problem hard to solve.\nTo relax this problem, we use the Merriman-Bence-Osher (MBO) scheme~\\cite{garcia2014multiclass} by replacing the graph-cut energy with the Ginzburg-Landau approximation:\n\n\\begin{equation}\n\\begin{split}\n& \\min _{g\\in \\mathrm{SP}\\{Z\\rightarrow \\mathbb{R}^C\\} \\atop(g)_{z}=o}\\left\\{ \\mathrm{GL}_{\\tau'} (g) -\\mu \\sum_{i=1}^{K \\times C}\\left(y_{i}-\\bar{y}\\right) \\cdot g\\left(z_{i}\\right)\\right\\},\\\\\n& \\mathrm{GL}_{\\tau'}(g)= g^T L g +\\frac{1}{\\tau'} \\sum_{i=1}^{m} \\prod_{j=1}^{C}\\left|g\\left(z_{i}\\right)-\\mathbf{e}_{j}\\right|^{2}.\n\\end{split}\n\\label{GLPOMBO}\n\\end{equation}\n\nIn Eq. (\\ref{GLPOMBO}), $\\mathrm{SP}\\{Z\\rightarrow \\mathbb{R}^C\\}$ represents the space of projections $g: Z\\rightarrow \\mathbb{R}^C$, which allows the classifier $g$ to take on any real values, instead of the discrete values from $H$ in Eq. (\\ref{POMBO}). More importantly, this leads to a more efficient computation of the Poisson model. Eq. (\\ref{GLPOMBO}) can be efficiently solved with an alternating \ngradient descent strategy, as shown in lines 9-20 of Algorithm \\ref{algorithm}. \n\n\\begin{algorithm}[t]\n\\caption{PTN for SSFSL}\\label{algorithm}\n\\SetKwData{Left}{left}\\SetKwData{This}{this}\\SetKwData{Up}{up}\n \\SetKwFunction{Union}{Union}\\SetKwFunction{FindCompress}{FindCompress}\n \\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n \\SetKwProg{PoissonMBO}{$PoissonMBO$}{}{$G\\leftarrow G[m-V:m,:]$;}\n \\Input{$\\mathcal{D}_{base}$, $\\mathcal{D}_{novel}=S\\cup U \\cup Q$,\\\\ $o$, $\\mu$, $M_{1}, M_{2}, M_{3}$}\n \\Output{Query samples' label prediction $G$}\n\nTrain a base model $\\mathbf{W}_{\\phi} \\circ f_{\\theta_0} (x)$ with all samples and labels from $\\mathcal{D}_{base}$;\n\nApply the unsupervised embedding transfer method to fine-tune $f_{\\theta_0}$ with the novel unlabeled data $U$ by using $\\ell_{UT}$ in Eq. 
(\\ref{UT}), resulting in $f_{\\theta}$;\n\nApply $f_{\\theta}$ to extract features on $D_{novel}$ as $Z_{novel}$;\n\nApply query feature calibration using Eq. (\\ref{QFR});\n\nCompute $W, D, L, A$ according to $Z_{novel}$, $G \\leftarrow \\mathbf{0}^{m \\times C}$\n\n\\PoissonMBO{}{\n\nUpdate $G$ using Eq. (\\ref{POSO}) for a given number of steps\n\n$\\mathrm{d}_{mx} \\leftarrow 1 \/ \\max _{1 \\leq i \\leq m} {D}_{i i}$, $G \\leftarrow \\mu G$\n\n\\For{ $i=1$ \\KwTo $M_{1} $}{\n\\For{ $j=1$ \\KwTo $M_{2} $}{${G} \\leftarrow {G}-\\mathrm{d}_{mx}\\left({L} {G}-\\mu {A}^{T}\\right)$}\n$r \\leftarrow \\textbf{ones}(1,C)$\\\\\n\\For{ $j=1$ \\KwTo $M_3$}{$\\hat{o} \\leftarrow \\frac{1}{n} \\mathbf{1}^{T} \\mathbf{P r o j}_{H}({G} \\cdot \\operatorname{diag}({r}))$\\\\\n${r} \\leftarrow \\max \\left(\\min \\left({r}+{\\varphi} \\cdot ({o}-\\hat{o}), \\upsilon_{\\alpha}\\right), \\upsilon_{\\sigma }\\right)$\n}\n$G \\leftarrow \\mathbf{P r o j}_{H}({G} \\cdot \\operatorname{diag}({r}))$ }\n}\n\\end{algorithm}\n\\subsection{Proposed Algorithm}\nThe overall proposed algorithm is summarized in Algorithm \\ref{algorithm}. Given the base-class set $\\mathcal{D}_{base}$, the novel-class set $\\mathcal{D}_{novel}$, the prior class distribution $o$, and other parameters as input, PTN predicts the query samples' labels $G \\in \\mathbb{R}^{V \\times C}$. The query label $\\hat{y}_q$ is then determined as $\\hat{y}_q = \\arg \\max _{1 \\leq j \\leq C} G_{qj}$. \nMore specifically, once the encoder $f_{\\theta_0}$ is learned using the base set $\\mathcal{D}_{base}$, we employ the proposed unsupervised embedding transfer method in step 2 of Algorithm \\ref{algorithm}. After that, we build the graph with the feature set $Z_{novel}$ and compute the related matrices $W, D, L, A$ in steps 3-5. 
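Steps 5-7 can be sketched end-to-end on a toy problem as follows. This is hypothetical NumPy code under the stated definitions of $W$, $D$, $L$, and $A$, not the authors' implementation, and the MBO refinement of steps 9-19 is omitted:

```python
import numpy as np

def build_graph(Z, k=3):
    # w_ij = exp(-4 |z_i - z_j|^2 / d_K(z_i)^2), symmetrized, zero diagonal
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    dK2 = np.sort(d2, axis=1)[:, k]          # squared distance to k-th neighbor
    W = np.exp(-4.0 * d2 / dK2[:, None])
    W = 0.5 * (W + W.T)                      # enforce w_ij = w_ji
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)                        # degrees d_i
    L = np.diag(d) - W                       # unnormalized Laplacian L = D - W
    return W, d, L

def poisson_propagate(L, d, A, steps=300):
    # Iterate G^{tp+1} = G^{tp} + D^{-1} (A^T - L G^{tp})  (the Poisson update)
    G = np.zeros((L.shape[0], A.shape[0]))
    for _ in range(steps):
        G = G + (A.T - L @ G) / d[:, None]
    return G

# Toy demo: two 2-D clusters, one labeled point per class.
# (In the paper the K*C labeled points come first; here they are z_0 and z_5.)
rng = np.random.default_rng(1)
Z = np.vstack([rng.normal(0.0, 0.3, (5, 2)), rng.normal(4.0, 0.3, (5, 2))])
A = np.zeros((2, 10))                        # A = [Y - ybar, 0], ybar = [.5, .5]
A[:, 0] = [0.5, -0.5]                        # z_0 labeled with e_1
A[:, 5] = [-0.5, 0.5]                        # z_5 labeled with e_2
W, d, L = build_graph(Z)
pred = poisson_propagate(L, d, A).argmax(axis=1)
```

On this toy graph the propagated labels recover the two clusters from a single labeled point each, which is the low-label-rate setting the Poisson model is designed for.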
In the label inference stage in steps 6-20, we first apply the Poisson model to robustly propagate the labels in step 7, and then solve the graph-cut problem by using the MBO scheme in several steps of gradient descent to boost the classification performance. The stop condition in step 7 follows the constraint: $\\left\\|\\mathbf{sp}_{tp}-{W} \\mathbf{1} \/\\left(\\mathbf{1}^{T} {W} \\mathbf{1}\\right)\\right\\|_{\\infty} \\leq 1 \/ m$, where $\\mathbf{1}$ is an all-ones \ncolumn vector, $ \\mathbf{sp}_{tp}={W} {D}^{-1} \\mathbf{sp}_{tp-1}$, and $\\mathbf{sp_{0}}$ is an $m$-dimensional column vector with ones in the first $K\\times C$ positions and zeros elsewhere. \nSteps 9-19 aim to solve the graph-cut problem in Eq. (\\ref{GLPOMBO}). \nTo solve it, we first divide Eq. (\\ref{GLPOMBO}) into $E_1 = g^TLg-\\mu \\sum_{i=1}^{K \\times C}\\left(y_{i}-\\bar{y}\\right) \\cdot g\\left(z_{i}\\right)$ and $E_2=\\frac{1}{\\tau'} \\sum_{i=1}^{m} \\prod_{j=1}^{C}\\left|g\\left(z_{i}\\right)-\\mathbf{e}_{j}\\right|^{2}$, and then \nemploy gradient descent alternately on these two energy functions. \nSteps 10-12 optimize $E_1$.\nWe optimize $E_2$ in steps 14-17, where $\\mathbf{Proj}_{H}: \\mathbb{R}^{C} \\rightarrow H$ is the closest point projection, $r =[r_1,\\dots,r_C]^T$ ($r_i > 0$), ${\\varphi}$ is the time step, and $\\upsilon_{\\alpha}, \\upsilon_{\\sigma }$ are the clipping values. \nBy adopting the gradient descent scheme in steps 14-17, a vector $r$ is generated that also satisfies the constraint $(g)_z = o$ in Eq. (\\ref{GLPOMBO}). After obtaining the PoissonMBO solution $G$, the query samples' label prediction matrix is obtained in step 20. \n\nThe main inference complexity of PTN is $\\mathcal{O}(M_1 M_2 E)$\n, where $E$ is the number of edges in the graph. As a graph-based model, PTN's inference complexity is heavier than that of inductive models. 
However, previous studies \\cite{liu2018learning,calder2020poisson} indicate that this complexity is affordable for few-shot tasks since the data scale is not very large. Moreover, we do not claim that our model is the final solution for SSFSL. We aim to design a new method to make full use of the extra unlabeled information. We report inference time comparison experiments in Table~\\ref{time}. The average inference time of PTN is 13.68s.\n\n\n\n\\section{Related Work}\n\\subsection{Few-Shot Learning} \nAs a representative of learning methods with limited samples, \\textit{e.g.,} weakly supervised learning~\\cite{lan2017robust,zhang2018adversarial} and semi-supervised learning~\\cite{zhu2003semi,calder2019properly}, \nfew-shot learning can be roughly grouped into two categories: meta-learning models and transfer-learning models. Meta-learning models adopt the episode training mechanism~\\cite{vinyals2016matching}, of which metric-based models optimize the transferable embedding of both auxiliary and target data, and queries are identified according to the embedding distances~\\cite{sung2018learning,li2019distribution,Simon_2020_CVPR,zhang2020sgone}. Meanwhile, meta-optimization models~\\cite{finn2017model,rusu2018meta} aim to design optimization-centered algorithms to adapt the knowledge from meta-training to meta-testing. \nInstead of separating base classes into a set of few-shot tasks, transfer-learning methods~\\cite{qiao2018few,gidaris2018dynamic,chen2018closer,qi2018low} utilize all base classes to pre-train the few-shot model, which is then adapted to novel-class recognition. \nMost recently, Tian \\textit{et al.} \\cite{tian2020rethinking} decouple the learning procedure into base-class embedding pre-training and novel-class classifier learning. By adopting multivariate logistic regression and knowledge distillation, their proposed model outperforms the meta-learning approaches. 
\nOur proposed method is inspired by the transfer-learning framework, where we adapt this framework to semi-supervised few-shot learning by exploring both unlabeled novel-class data and base-class data to boost the performance of few-shot tasks.\n\n\\subsection{Semi-Supervised Few-shot Learning (SSFSL)}\nSSFSL aims to leverage the extra unlabeled novel-class data to improve few-shot learning. \\citeauthor{ren2018meta} \\cite{ren2018meta} propose a meta-learning based framework by extending the prototypical network \\cite{snell2017prototypical} with unlabeled data to refine class prototypes. LST \\cite{li2019learning} re-trains the base model using the unlabeled data with generated pseudo labels. During evaluation, it dynamically adds unlabeled samples with high prediction confidence into testing. In \\cite{yu2020transmatch}, TransMatch proposes to initialize the novel-class classifier with pre-trained feature imprinting, and then employs MixMatch \\cite{berthelot2019mixmatch} to fine-tune the whole model with both labeled and unlabeled data.\nAs research closely related to SSFSL, transductive few-shot approaches~\\cite{liu2018learning,kim2019edge,ziko2020laplacian} also attempt to utilize unlabeled data to improve the performance of few-shot learning. These methods adopt the entire query set as the unlabeled data and perform inference on all query samples together. For instance, TPN \\cite{liu2018learning} employs graph-based transductive inference to address the few-shot problem, and a semi-supervised extension model is also presented in their work.\n\nUnlike the above approaches, in this paper, we adopt the transfer-learning framework and propose to fully explore the extra unlabeled information in both classifier learning and embedding learning with different learning strategies. 
\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth]{.\/images\/flowchart.pdf}\n\\end{center}\n \\caption{The overview of the proposed PTN. We first pre-train a feature embedding $f_{\\theta_0}$ from the base-class set using standard cross-entropy loss. This embedding is then fine-tuned with the external novel-class unlabeled data by adopting unsupervised transferring loss $\\ell_{UT}$ to generate $f_{\\theta}$. Finally, we revise a graph model named PoissonMBO to conduct the query label inference.}\n\\label{framework}\n\\end{figure*}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nDynamics of liquid drops is familiar in daily life: we observe rain drops\nrolling on a new umbrella, honey dripping off from a spoon, and oil droplets\nfloating on the surface of vegetable soup and so on. Such everyday phenomena\nare in fact important not only in physical sciences\n\\cite{RichardClanetQuere2002,DoshiCohenZhangSiegelHowellBasaranNagel2003,CouderProtiereFortBoudaoud2005,RistenpartBirdBelmonteDollarStone2009,PhysRevLett.79.1265,BirdDeCourbinStone2010,EtienneWedge2014}\nbut also in a variety of practical issues such as ink-jet printing\n\\cite{Calvert2001}, microfluidics manipulations\n\\cite{SquiresQuake2005,MathileTabeling2014}, and emulsification, formation of\nspray and foams \\cite{DynamicsDroplets,PhysicsFoams,Sylvie}. From such\nphenomena familiar to everybody, researchers have successfully extracted a\nnumber of scaling laws representing the essential physics \\cite{CapilaryText},\nwhich include scaling laws associated with the lifetime of a bubble in viscous\nliquid \\cite{DebregeasGennesBrochard-Wyart1998,EriOkumura2007} and contact\ndynamics of a drop to another drop\n\\cite{AartsLekkerkerkerGuoWegdamBonn2005,YokotaPNAS2011} or to a solid plate\n\\cite{BirdMandreStone2008,DavidCoalPRE}. 
Here, we report on a crossover of two\nscaling regimes experimentally revealed for viscous friction acting on a fluid\ndrop in a confined space. In particular, we study the descending motion (due\nto gravity) of an oil droplet surrounded by another immiscible oil in a\nHele-Shaw cell. The friction law thus revealed is nonlinear and replaces the\nwell-known Stokes' law in the Hele-Shaw cell geometry.\n\nA closely related topic of the rising bubble in a Hele-Shaw cell is\ntheoretically discussed by Taylor and Saffman in a pioneering paper\n\\cite{TAYLORSAFFMAN1959} in 1958 (earlier than the Bretherton's paper on\nbubbles in tubes \\cite{Bretherton,Clanet2004}). The solution of Taylor and\nSaffman was further discussed by Tanveer \\cite{Tanveer1986}. There are many\nother theoretical works on fluid drops in the Hele-Shaw cell geometry, notably\nin the context of the topological transition associated with droplet breakup\n\\cite{Eggers1997,ConstantinDupontGoldsteinKadanoffShelleyZhou1993,GoldsteinPesciShelley1995,Howell1999}%\n. \\ As for experimental studies, a number of researchers have investigated the\nrising motion of a bubble in a Hele-Shaw cell\n\\cite{Maxworthy1986,Kopf-SillHomsy1988,MaruvadaPark1996}. 
However, unlike the\npresent study, systematic and quantitative studies in a constant velocity\nregime have mostly been concerned with the case in which there is a forced flow in\nthe outer fluid phase, and most of the studies have been performed with the\ncell strongly inclined, nearly to a horizontal position (one of a few examples\nof the case with the cell set in the upright position but with external flow\n\\cite{HeleShawPetroleum2010} demonstrates the relevance of the present work to\nimportant problems in the petroleum industry, such as the suction of crude oil\nfrom the well).\\ \n\nOne of the features of the present study compared with most previous ones\non the dynamics of fluid drops in a Hele-Shaw cell is that in the present case\nthe existence of a thin liquid film surrounding a fluid drop plays a crucial\nrole: in many previous works, the existence of such thin films is not\nconsidered. In this respect, the present problem is closely related to\ndynamics governed by thin-film dissipation, such as the imbibition of textured\nsurfaces\n\\cite{StoneNM2007,IshinoReyssatEPL2007,ObaraPRER2012,TaniPlosOne2014,TaniSR2015,DominicVellaImbibition2016}%\n. In this sense, our problem is quasi two-dimensional, although the geometry\nof the Hele-Shaw cell is often associated with a purely two-dimensional problem.\n\n\\section*{Experiment}\n\nWe fabricated a Hele-Shaw cell of thickness $D$\n\\cite{EriOkumura2007,EriOkumura2010,YokotaPNAS2011} and filled the cell with\nolive oil (150-00276, Wako; kinematic viscosity $\\nu_{ex}=60$ cS and density\n$\\rho_{ex}=910$ kg\/m$^{3}$). This oil plays the role of an external surrounding\nliquid for a drop of poly(dimethylsiloxane) (PDMS) to be inserted at the top\nof the cell using a syringe (SS-01T, Termo). We observe the inserted drop\ngoing down in the cell, as illustrated in Fig. \\ref{Fig1}(a), because of the\ndensity difference $\\Delta\\rho=\\rho_{in}-\\rho_{ex}>0$. 
The drop density\n$\\rho_{in}$ depends on its kinematic viscosity $\\nu_{in}$ only slightly (see\nMethods for details). The drop size is characterized by the cell thickness\n$D$ and the width $R_{T}$, i.e., the size in the direction transverse to that\nof gravity (see Fig. \\ref{Fig1}(b)), which is slightly smaller than the size\nin the longitudinal direction, $R_{L}$. As shown in Fig. \\ref{Fig1}(c), a thin\nfilm of olive oil exists between a cell plate and the surface of the drop. We\ncan think of two limiting cases for the distribution of liquid flow: (1)\nInternal Regime: The velocity gradient is predominantly created on the\ninternal side of the droplet, as in the left illustration. (2) External Regime:\nThe gradient predominantly exists on the external side of the droplet, as in\nthe right illustration.\n\nThe width and height of the cell are 10 cm and 40 cm, respectively, and are\nmuch larger than the drop size to remove any finite-size effects in the\ndirections of width and height. The cell is made of acrylic plates of thickness\n5 mm, to avoid thinning deformation of the cell due to the effect of capillary\nadhesion \\cite{CapilaryText}.\n\nWe took snapshots of the descending drop at a regular time interval using a\ndigital camera (Lumix DMC-G3, Panasonic) and a camera controller (PS1,\nEtsumi). The obtained data were analyzed with the software Image J to obtain\nthe position as a function of time and determine the descending velocity of the\ndrop. Some examples are shown in Fig. \\ref{Fig1}(d). This plot shows the\nfollowing facts. (1) The descending motion can be characterized by a\nwell-defined constant velocity (to guarantee a long stationary regime, the\ncell height is made significantly larger (40 cm) than the drop size; because\nof a small density difference, the constant-velocity regime starts after a\nlong transient regime). 
(2) The descending velocity is dependent on the\nkinematic viscosity of the internal liquid of the drop $\\nu_{in}$ for the\nthinner cell ($D=0.7$ mm), as predicted in the previous study\n\\cite{EriSoftMat2011}, which is not the case for the thicker cell ($D=1.5$\nmm); these examples clearly demonstrate the existence of a novel scaling\nregime different from the one discussed in the previous study\n\\cite{EriSoftMat2011}.\n\nIn the present study, the dependence of the descending velocity on the drop\nsize is negligible. In the previous study \\cite{EriSoftMat2011}, it was found\nthat the descending speed of drops is dependent on $R_{T}$ for $R_{T}\/D<10$ if\na glycerol drop goes down in PDMS oil. However, in the present combination\n(i.e., a PDMS drop going down in olive oil), we do not observe a significant\ndependence on $R_{T}$ in our data even for fairly small drops, whereas $R_{T}$\nis in the range $1.31k_{3}D\/\\kappa^{-1}$.\nIn other words, the phase boundary between the internal and external regimes\nis given by\n\\begin{equation}\n\\eta_{ex}\/\\eta_{in}=k_{3}D\/\\kappa^{-1}\\label{eq10}%\n\\end{equation}\nwith $k_{3}=k_{1}^{3}\/k_{in}$. This means that the phase boundary between the\ninternal and external regimes is a straight line with the slope $k_{3}$ on the\nplot of $\\eta_{ex}\/\\eta_{in}$ as a function of $D\/\\kappa^{-1}$.\n\n\\section*{Experiment and theory}\n\nThe experimental data for the descending velocity of drops $V$ are plotted as\na function of $\\Delta\\rho gD^{2}\/\\eta_{in}$ in Fig. \\ref{Fig2}(a). In view of\nEq. (\\ref{eq4}), the data points in the internal regime would be on a straight\nline of slope 1. This is almost true: there is a series of data well on the\ndashed line of slope close to one. 
Naturally, there is a slight deviation from\nthe theory: the slope of the straight dashed line obtained by a numerical\nfitting is in fact $1.24\\pm0.06$, a value slightly larger than one, but the\ncorresponding coefficient $k_{in}$ is $0.150\\pm0.015$, the order of magnitude\nof which is consistent with the scaling arguments.\n\nSome detailed remarks on the above arguments are as follows. (1) Even in the\nprevious study \\cite{EriSoftMat2011}, in which the internal scaling regime was\nconfirmed for the first time, the scaling regime described by Eq. (\\ref{eq4})\nwas shown with some deviations, similarly to the present case (whereas another\nscaling regime first established in the previous paper \\cite{EriSoftMat2011}\nis almost perfectly demonstrated). (2) We note here that the data represented\nby the red filled circle and red filled inverse triangle are exceptional ones,\nand their seemingly strange behavior will be explained in Discussion. (3) We\nhave confirmed that even if we replace $D$ with $D-2h$ in the analysis (by\nusing the thickness $h$ estimated from Eq. (\\ref{eq6})) when $D$ is used as a\nlength scale characterizing the viscous gradient (i.e., when $D$ is used in\nthe expression $V\/D$ in Eq. (2)), no visible differences are introduced\ninto the plots given in Fig. \\ref{Fig2} (this correction could be motivated by\nconsidering the existence of thin films surrounding the drops as in Fig.\n\\ref{Fig1}(c), as mentioned above).\n\nIn Fig. \\ref{Fig2}(b), it is shown that some of the data we obtained clearly\nsatisfy Eq. (\\ref{eq8}), which describes the external regime. In Fig.\n\\ref{Fig2}(b), we collected the data points that are off the dashed line of\nslope close to one in Fig. \\ref{Fig2}(a) and that are thus ruled out from the\ninternal regime. The data thus selected and plotted in Fig. \\ref{Fig2}(b) are\nalmost on the straight line of slope 3, in accordance with Eq. (\\ref{eq8}). 
The\nstraight line is obtained by a numerical fitting with the slope fixed to 3.0;\nas a result of this fitting, the coefficient is given as $k_{1}=0.167\\pm\n0.003$, the order of magnitude of which is consistent with the scaling arguments.\n\nWe confirm this scaling law in Eq. (\\ref{eq8}) also in Fig. \\ref{Fig2}(a). In\nlight of Eq. (\\ref{eq8}), the data in the external regime for a given $D$\nshould take almost the same values, because $\\eta_{ex}$ and $\\gamma$ are both\nconstant and $\\kappa^{-1}$ is almost constant (note that $\\Delta\\rho$ is\nalmost constant) in the present study. In fact, in Fig. \\ref{Fig2}(a), the\ndata points for a fixed $D$ that are off the dashed line, which are shown\nto be in the external regime in Fig. \\ref{Fig2}(b), take almost a constant\nvalue, that is, they are located almost on a horizontal line. This fact also\nconfirms that the data in question are independent of $\\eta_{in}$, that is,\nthey are certainly not in the internal regime. Strictly speaking, the data\nlabeled as a given $D$ can have slightly different measured values of $D$ (see\nMethods), which is the main reason the data for a \"given\" $D$ that are off the\ndashed line in Fig. \\ref{Fig2}(a) slightly deviate from the straight\nhorizontal line corresponding to that $D$ value.\n\nThe scaling law in Eq. (\\ref{eq8}) can be confirmed in Fig. \\ref{Fig2}(a) in\nstill another way. The open marks of the same shape, say diamond, but with\ndifferent colors (that are the data for a given $\\nu_{in}$ but with different\n$D$) are almost on a straight line of a slope close to one (this slope may\nseem to be slightly larger than one, which may be because of the uncertainty\nin the cell spacing $D$, as already mentioned in the last sentence of the\nparagraph just above this one, or because the exponent 3 in Eq. (\\ref{eq8})\nmay in fact be slightly larger than 3 in a more complete theory beyond the\npresent arguments at the level of scaling laws). 
For such a series of data,\nthe velocity $V$ scales with $D^{3}$ according to Eq. (\ref{eq8}); thus, when\nplotted as a function of $D^{2}$ as in Fig. \ref{Fig2}(a), the quantity\nscales linearly with $D$, as is reasonably well confirmed.\n\nThe phase diagram based on Eq. (\ref{eq10}) is shown in Fig. \ref{Fig2}(c), in\nwhich we plot all the data (except for the special data mentioned above), to\ndemonstrate further consistency of the present arguments. As expected from Eq.\n(\ref{eq10}), we can indeed draw a straight line of slope 1 on Fig.\n\ref{Fig2}(c), which divides the internal and external regimes: above the\nstraight line of slope 1 in Fig. \ref{Fig2}(c) lie the data in the internal\nregime described by Eq. (\ref{eq4}), i.e., the data on the straight dashed\nline in Fig. \ref{Fig2}(a); below the straight line in Fig. \ref{Fig2}(c) lie\nthe data in the external regime described by Eq. (\ref{eq8}), i.e., the data\non the straight line in Fig. \ref{Fig2}(b). The coefficient $k_{3}$ of Eq.\n(\ref{eq10}), i.e., of the line dividing the two regimes shown in Fig. \ref{Fig2}(c),\nis $k_{3}=0.017$, the order of magnitude of which is consistent with the\nscaling arguments in a deeper sense: the numerical coefficients $k_{in}$,\n$k_{1}$, and $k_{3}$ are predicted to satisfy the relation $k_{3}=k_{1}^{3}\/k_{in}$,\nand this relation is satisfied at a quantitative level in the\npresent analysis (0.017 vs $(0.167)^{3}\/0.15\simeq0.031$). This quantitative\nagreement is indeed quite satisfactory, considering the slight deviations of\nthe data from the predicted theory. For example, the value $0.15$ used in the\nestimation in the parentheses is not the value of $k_{in}$ itself (the precise\ndefinition of $k_{in}$ is the coefficient appearing in Eq.
(\ref{eq4}),\n$V_{in}=k_{in}\Delta\rho gD^{2}\/\eta_{in}$, but the value of $k_{in}$, 0.15,\nused above is in fact the value of the coefficient $k_{in}^{\prime}$\nappearing in the relation $V_{in}=(k_{in}^{\prime}\Delta\rho gD^{2}\/\eta\n_{in})^{\alpha}$ obtained when the data corresponding to the internal regime\nin Fig. \ref{Fig2}(a) are numerically fitted by this relation with $\alpha$\ndetermined to be not equal to one but close to 1.24, as mentioned in the first\nparagraph of Experiment and Theory). In addition, the exponent in Eq. (\ref{eq8})\nmight also deviate slightly from 3, as suggested in the paragraph just\nabove.\n\nThe crossover from the internal to the external regime can be seen explicitly in\nthe data for $D=1.0$ mm (red data) in Fig. \ref{Fig2}(a). As $\eta_{in}$\ndecreases from the left-most data for $\nu_{in}=30000$ cS (red open diamonds)\nto the data for $\nu_{in}=5000$ cS (red open inverse triangle), the velocity\nis independent of $\nu_{in}$, which reveals that the three data points on the\nhorizontal line are in the external regime. However, the data for $\nu\n_{in}=1000$ cS and $\nu_{in}=500$ cS are on the straight dashed line with a\nslope close to one, which confirms that these two data points are in the internal\nregime. Since the phase boundary expressed by Eq. (\ref{eq10}) is obtained\nalso by equating $V_{in}$ and $V_{ex}$ in Eqs. (\ref{eq4}) and (\ref{eq8}),\nthe crossover between the two regimes occurs in Fig. \ref{Fig2}(a) near the\ncrossing point between the horizontal line connecting the data in the external\nregime for a given $D$ and the straight dashed line of slope close to one\nrepresenting the internal regime.\n\nThe behavior of the data close to the crossover points is quite intriguing.\nThe data for $D=2.0$ mm and 3.0 mm at $\nu_{in}=1000$ cS (green filled square\nand purple filled square) are located close to the phase\nboundary in Fig.
\ref{Fig2}(c) (and these data have already been confirmed to be\nin the internal regime in Fig. \ref{Fig2}(a): in that plot, the data points\nare reasonably well on the dashed line). We have confirmed that, when these\ntwo data points are plotted in Fig. \ref{Fig2}(b), they are nearly on the straight\nline of slope 3. The two points can thus be\ndescribed by both Eqs. (\ref{eq4}) and (\ref{eq8}), which is reasonable\nbecause they are nearly on the phase boundary. However, this is not always the\ncase. The data for $D=0.7$ mm and $\nu_{in}=5000$ cS (black filled inverse\ntriangle) and for $D=1.5$ mm and $\nu_{in}=3000$ cS (blue open triangle) are\nalso positioned close to the phase boundary in Fig. \ref{Fig2}(c). However,\nthe former is rather in the internal regime and the latter rather in the\nexternal regime. This is in a sense logical, because the blue open triangle is\nrather far from the crossover point for $D=1.5$ mm in Fig. \ref{Fig2}(a), but\nthis is not the case for the black filled inverse triangle. In general, how\nquickly the crossover occurs seems to be a subtle problem.\n\n\section*{Discussion}\n\nThe direct measurement of the thickness $h$ supports the above analysis. We\nused a laser distance sensor (ZS-HLDS2+ZS-HLDC11+Smart Monitor Zero Pro.,\nOmron), as illustrated in Fig. \ref{Fig3}(a). The measurement is extremely\ndelicate and difficult, because we have six reflective planes, I to VI, with\nsignificantly different reflection strengths, where the two target\nreflections, II and III, are the smallest and second smallest among them\n(see Fig. \ref{Fig3}(b)). The six surfaces are the front and back surfaces of\nthe front cell plate (interfaces I and II), the front and back interfaces\nbetween olive oil and the PDMS drop (interfaces III and IV), and the front and\nback surfaces of the back cell plate (interfaces V and VI).
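The relative weakness of reflections II and III follows directly from the refractive indices quoted below (air 1, acrylic 1.491, olive oil 1.47, PDMS 1.403) via the normal-incidence Fresnel reflectance $R=((n_1-n_2)/(n_1+n_2))^2$; the short check below is our own illustration, not part of the original analysis:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance between media of indices n1, n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Refractive indices quoted in the text.
n_air, n_acr, n_olive, n_pdms = 1.0, 1.491, 1.47, 1.403

r_I = fresnel_reflectance(n_air, n_acr)       # front surface of the front plate
r_II = fresnel_reflectance(n_acr, n_olive)    # plate / olive-oil interface
r_III = fresnel_reflectance(n_olive, n_pdms)  # olive-oil / PDMS-drop interface

# The plate/oil reflection (II) is far weaker than the oil/drop reflection
# (III), and both are far weaker than the air/plate reflection (I).
print(f"R_I = {r_I:.1e}, R_II = {r_II:.1e}, R_III = {r_III:.1e}")
```

Because the acrylic and olive-oil indices nearly match, interface II is the weakest reflector of all, which is what makes the thickness measurement so delicate.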
To determine $h$,\nwe need to detect the reflections from interfaces II and III, where the\nreflection from II is small compared with that of III (see Fig.\n\ref{Fig3}(b)) and significantly smaller than that of I, because the\nrefractive index of olive oil is $n_{olive}=1.47$, that of the acrylic plate is\n$n_{acr}=1.491$, that of PDMS oil is $n_{PDMS}=1.403$, and that of air is\n$n_{air}=1$. Furthermore, the object (the descending drop) is moving. In spite\nof these experimental difficulties, we obtained a reasonably good correlation\nbetween the measured thickness and the theoretically estimated value, as shown\nin Fig. \ref{Fig3}(c), by virtue of various efforts (for example, in the\nscreen shot in Fig. \ref{Fig3}(b), the two target peaks are intentionally\npositioned off-center because the measurement precision is maximized\nwhen the reflection angle is largest). Here, the slope of the line\nobtained by a numerical fitting is $0.749\pm0.027$ (the slope here is not an\nexponent but the coefficient of the linear relationship), the order of\nmagnitude of which is consistent with the scaling argument.\n\nThe exceptional data mentioned above reveal an intriguing phenomenon. In Fig.\n\ref{Fig2}(a), the data for $D=1$ mm and $\nu_{in}=10000$ cS are\nrepresented by two different marks, the red filled circle and the red open\ncircle, with the former described by the internal regime and the latter by the\nexternal regime. The data for $D=1$ mm and $\nu_{in}=5000$ cS are likewise\nsplit into filled and open symbols. The experimental difference between these\ntwo types of data, obtained for identical drop viscosity and cell spacing, is\nthat, when the drop goes down the same path multiple times in the same cell,\nthe first drop is in the external regime (open marks) whereas every drop going\ndown after the first one is in the internal regime (filled marks).
This apparently\nmysterious effect is quite reproducible and is understood by considering the\npossibility of mixing of olive oil and PDMS at the surface of the drops. For\nthe first drop, such a mixing effect is negligible and the drop is governed by\nthe dynamics of the external regime. However, after the first one, because of\nthe mixing effect, the viscosity of the thin film surrounding the drop\nincreases (because $\nu_{in}\gg\nu_{ex}$), so that sustaining a velocity\ngradient in the external thin film is no longer energetically favorable;\ninstead, the velocity gradient develops inside the drop, realizing the\ndynamics of the internal regime. For this reason, the red filled circles and the\nred filled inverse triangles are not shown in the phase diagram given in Fig.\n\ref{Fig2}(c). This seemingly mysterious behavior tends to be suppressed if\nthe viscosity is too small (because the "external" viscosity does not become\nsufficiently large), or too large (because the mixing is not sufficiently\neffective). This is why we observed this phenomenon only for these two values of the viscosity.\n\nThe present study suggests that Stokes' drag friction $F=6\pi\eta_{ex}VR$ for\na solid sphere of radius $R$ surrounded by a viscous liquid of viscosity\n$\eta_{ex}$ is replaced in the external regime of the Hele-Shaw cell geometry\nby\n\begin{equation}\nF_{ex}\simeq\eta_{ex}VR_{T}R_{L}\/h\simeq\eta_{ex}Ca^{-2\/3}VR_{T}R_{L}%\n\/\kappa^{-1}.\label{eq11}%\n\end{equation}\nThis expression possesses a nonlinear dependence on the velocity $V$ due to\nthe extra $V$ dependence contained in the capillary number $Ca$.
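To make the nonlinearity explicit: with the standard definition $Ca=\eta_{ex}V/\gamma$ (implicit in Eq. (\ref{eq8})), Eq. (\ref{eq11}) gives $F_{ex}\propto\eta_{ex}^{1/3}\gamma^{2/3}V^{1/3}$, i.e., an effective drag exponent of $1/3$. A minimal numeric sketch of this (all parameter values are purely illustrative, not the paper's data):

```python
def drag_external(v, eta_ex=0.06, gamma=0.78e-3,
                  r_t=1e-3, r_l=1e-3, kappa_inv=1.2e-3):
    """Eq. (11): F_ex ~ eta_ex * Ca**(-2/3) * V * R_T * R_L / kappa_inv,
    with Ca = eta_ex * V / gamma.  Illustrative parameter values only."""
    ca = eta_ex * v / gamma
    return eta_ex * ca ** (-2.0 / 3.0) * v * r_t * r_l / kappa_inv

# The extra V hidden in Ca makes the drag scale as V**(1/3):
# multiplying the velocity by 8 only doubles the drag force.
ratio = drag_external(8e-4) / drag_external(1e-4)
print(round(ratio, 6))  # → 2.0, since 8**(1/3) = 2
```

The prefactors cancel in the ratio, so the $V^{1/3}$ scaling is independent of the illustrative values chosen above.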
This is\nstrikingly different from the two other expressions for the drag force,\n$F_{in}\simeq\eta_{in}VR_{T}R_{L}\/D$ and $F_{bubble}\simeq\eta_{ex}VR_{T}%\n^{2}\/D$, which are both linear in velocity; the former corresponds to the\ninternal regime in the present study, whereas the latter corresponds to the\ncase in which the dominant dissipation is the one associated with the velocity\ngradient $V\/D$ in the surrounding external liquid \cite{EriSoftMat2011}. The\nviscous friction forces, including the nonlinear friction in Eq. (\ref{eq11}),\nare relevant to the dynamics of emulsions, foams, antifoams and soft gels\n\cite{Sylvie,AnnLaureSylvieSM2009,DominiqueMicrogravity2015}, and in particular\nto the nonlinear rheology of such systems\n\cite{DenkovSoftMat2009,DurianPRL10,CloitreNP2011}.\n\nWe intentionally used, several times, the phrase \textquotedblleft the order of\nmagnitude of which is consistent with the scaling argument,\textquotedblright%\n\ which may seem vague compared with an expression like \textquotedblleft being\nof order one further supports the scaling argument.\textquotedblright\ The\nreason we used the seemingly vague expression is that whether a coefficient\nfor a scaling law is of the order of one or not is in fact a subtle issue.\nDepending on the problem or on the definition of the coefficient, the order\nof magnitude can be considerably larger or smaller than one. An example of such a\ncase can be given by exploiting the relation $k_{3}=k_{1}^{3}\/k_{in}$ given\nabove: the three coefficients $k_{1}$, $k_{3}$, and $k_{in}$ are all\ncoefficients of scaling laws, so that, for example, $k_{1}$ and $k_{in}$\ncan be 5 and 1, respectively, but this example implies that $k_{3}$ is much larger\nthan one ($k_{3}=5^{3}$).\n\nIn the present study, the consistency of the whole set of scaling arguments is\nchecked in several ways, which clearly deepens our physical understanding.
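The coefficient bookkeeping above is simple enough to reproduce in a few lines; the sketch below (our own check, using the fitted values quoted in the text) verifies both the relation $k_{3}=k_{1}^{3}/k_{in}$ and the remark that coefficients of scaling laws need not be of order one:

```python
# Fitted coefficients quoted in the text.
k_in = 0.150     # internal regime, Eq. (4)
k_1 = 0.167      # external regime, Eq. (8)
k_3_fit = 0.017  # phase boundary, Eq. (10)

# Predicted relation k3 = k1**3 / k_in.
k_3_pred = k_1 ** 3 / k_in
print(round(k_3_pred, 3))  # → 0.031, the same order of magnitude as 0.017

# Coefficients of scaling laws need not be of order one:
# if k1 and k_in were instead 5 and 1, the relation would give k3 = 125.
print(int(5 ** 3 / 1))  # → 125
```

The factor-of-two gap between 0.017 and 0.031 is the "quantitative level" of agreement discussed in the text.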
For\nexample, a new scaling regime is demonstrated through a clear data collapse\n(Fig. \ref{Fig2}(b)), and the crossover of this regime to another is shown\n(Fig. \ref{Fig2}(a)), which is completed by the phase diagram (Fig.\n\ref{Fig2}(c)) and a separate measurement of the thin-film thickness (Fig.\n\ref{Fig3}(c)). In addition, the data arrangements in the crossover diagram (Fig.\n\ref{Fig2}(a)) are interpreted from various viewpoints, confirming the\nconsistency of the arguments.\n\n\section*{Conclusion}\n\nIn summary, we show in Fig. \ref{Fig2}(b) the existence of a novel scaling\nregime for the descending velocity of a drop surrounded by a thin external fluid\nfilm in the Hele-Shaw cell, a regime in which the viscous dissipation in the thin\nfilm is essential. This regime corresponds to a nonlinear form of viscous drag\nfriction. In this regime, the thickness of the film is determined by the law\nof LLD, as directly confirmed in Fig. \ref{Fig3}(c). The crossover between\nthis regime and another regime, in which the viscous dissipation on the\ninternal side of the drop governs the dynamics, is shown in Fig. \ref{Fig2}(a).\nThe phase boundary between the two regimes is given in Fig. \ref{Fig2}(c).\n\nThere are some other scaling regimes for the viscous drag friction in the\nHele-Shaw cell geometry in the presence of thin films surrounding a fluid\ndrop. For example, the dissipation associated with the velocity gradient $V\/D$\nin the internal drop liquid has been revealed to be important for a rising\nbubble in the Hele-Shaw cell \cite{EriSoftMat2011}. The dissipation associated\nwith the dynamic meniscus (in the context of LLD theory\n\cite{LandauLevich,Derjaguin1943,CapilaryText}) formed in the external\nthin film has been found to be important in a non-Hele-Shaw cell geometry\n\cite{PascalEPL2002}.
In addition, the present external regime will give\nanother scaling law if the capillary length $\kappa^{-1}$ is, unlike in the\npresent study, larger than the cell thickness $D$.\n\nConfirmation of such other regimes for viscous drag friction in the Hele-Shaw\ncell geometry, as well as crossovers among the various scaling regimes, should be\nexplored in future studies. The simple friction laws for confined fluid drops\nand the crossovers between them revealed in the present study (and in future\nstudies) are relevant to fundamental issues, including the rheology of foams and\nemulsions, as well as to applications such as microfluidics.\n\n\section*{Methods}\n\nThe density of PDMS oil $\rho_{in}$ depends slightly on the viscosity: (1) 970\nkg\/m$^{3}$ for the kinematic viscosities $\nu_{in}=500,1000,$ and $3000$ cS\n(SN-4, SN-5, and SN-6, As One). (2) 975 kg\/m$^{3}$ for $\nu_{in}=5000$ and\n$10000$ cS (SN-7 and SN-8, As One). (3) 976 kg\/m$^{3}$ for $\nu_{in}=30000$ cS\n(KF-96H, ShinEtsu).\n\nThe cell thickness $D$ is controlled by spacers, and is directly measured using\na laser distance sensor (ZS-HLDS5, Omron) for most of the cells. In all the\nfigures of the present study, for simplicity, the cell thickness $D$ is\nrepresented by an approximate value, which is slightly different from the measured\nvalues. For some of the data the measurement of $D$ was not performed; in\nsuch cases an approximate value of $D$ is used, instead of measured values,\nto plot the data points, which does not cause serious difficulties in\nanalyzing and interpreting the data. This is because the difference between\nthe $D$ value used for labeling and the measured value of the cell thickness\nis rather small.\n\nThe interfacial tension between PDMS and olive oil was measured by using\npendant drop tensiometry.
It has recently been discussed that measured values for\npendant drops depend on the Bond number and the Worthington number, both of which\nscale with $B=\Delta\rho gR_{0}^{2}\/\gamma$ ($R_{0}$: the drop radius at the\napex of the pendant drop) when the drop size is of the same order of magnitude\nas the needle diameter, and that the measured value approaches the correct value\nas $B$ approaches one \cite{PendantDrop2015} (one could expect that the\nexperimental precision will be optimized when the drop is most "swelled," that\nis, when the droplet is on the verge of detaching from the needle tip due\nto gravity, that is, when $B=1$). We measured the value of the tension as a\nfunction of $B$ by using the software OpenDrop, developed by Michael Neeson,\nJoe Berry and Rico Tabor. We extrapolated the data thus obtained to the value\nat $B=1$ to obtain a pragmatic value, $\gamma=0.78$ mN\/m, because it was\nexperimentally difficult to approach $B=1$. This is possibly because the\ntension is quite small, which might lead to an extra error in the measurement.\n\nEven though the measurement of the interfacial tension contains an extra error\nand our analysis numerically depends on the measured value, this does not\nintroduce any uncertainty into the present arguments at the level of scaling laws.\nWe explain this with an example. Introducing the experimentally measured value\nof the surface tension $\gamma_{m}$, we define a numerical coefficient $\beta$ as\n$\gamma=\beta^{2}\gamma_{m}$ and the corresponding capillary length\n$\kappa^{-1}=\beta\kappa_{m}^{-1}$. With these "measured" quantities, Eq.\n(\ref{eq8}) can be expressed as $\eta_{ex}V_{ex}\/\gamma_{m}=k_{1,m}%\n^{3}(D\/\kappa_{m}^{-1})^{3}$ with $k_{1,m}^{3}=k_{1}^{3}\/\beta$. By noting\nthat the values of the interfacial tension and capillary length used in Fig.\n\ref{Fig2}(c) that experimentally confirms the relation Eq.
(\\ref{eq8}) are in\nfact not $\\gamma$ and $\\kappa^{-1}$ but $\\gamma_{m}$ and $\\kappa_{m}^{-1}$,\nrespectively, the coefficient we determined from Fig. \\ref{Fig2}(c) is in fact\nnot $k_{1}$ but $k_{1,m}$. However, since Eq. (\\ref{eq10}) can be expressed as\n$\\eta_{ex}\/\\eta_{in}=k_{3,m}D\/\\kappa_{m}^{-1}$ with $k_{3,m}=k_{3}\/\\beta$, the\nphase boundary line $\\eta_{ex}\/\\eta_{in}=k_{3,m}D\/\\kappa_{m}^{-1}$ on the\n$(\\eta_{ex}\/\\eta_{in},D\/\\kappa_{m}^{-1})$ space and the line $\\eta_{ex}%\n\/\\eta_{in}=k_{3}D\/\\kappa^{-1}$ on the $(\\eta_{ex}\/\\eta_{in},D\/\\kappa^{-1})$\nspace have the same physical meaning. From these reasons, a special care is\nneeded when one compares the numerical coefficient obtained experimentally in\nthe present study with more sophisticated experiments or calculations.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzsha b/data_all_eng_slimpj/shuffled/split2/finalzsha new file mode 100644 index 0000000000000000000000000000000000000000..9ce76858c55a26b5cad2f79a11fec6c48ca0d5ec --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzsha @@ -0,0 +1,5 @@ +{"text":"\\subsection{\\@startsection{subsection}{1}%\n \\z@{.7\\linespacing\\@plus\\linespacing}{.5\\linespacing}%\n {\\normalfont\\bfseries\\centering}\n\\makeatother\n\n\\setcounter{tocdepth}{3\n\\let\\oldtocsection=\\tocsection\n\\let\\oldtocsubsection=\\tocsubsection\n\\let\\oldtocsubsubsection=\\tocsubsubsection\n\\renewcommand{\\tocsection}[2]{\\hspace{0em}\\oldtocsection{#1}{#2}}\n\\renewcommand{\\tocsubsection}[2]{\\hspace{1em}\\oldtocsubsection{#1}{#2}}\n\\renewcommand{\\tocsubsubsection}[2]{\\hspace{2em}\\oldtocsubsubsection{#1}{#2}}\n\n\n\\setlength{\\textwidth}{\\paperwidth}\n\\addtolength{\\textwidth}{-2.5in}\n\\calclayout\n\n\\begin{document}\n\\title[]{Adams-type maps are not stable under composition}\n\n\\author{Robert Burklund}\n\\address{Department of Mathematics, MIT, Cambridge, MA, 
USA}\n\\email{burklund@mit.edu}\n\n\\author{Ishan Levy}\n\\address{Department of Mathematics, MIT, Cambridge, MA, USA}\n\\email{ishanl@mit.edu}\n\n\\author{Piotr Pstr\\k{a}gowski}\n\\address{Department of Mathematics, Harvard, Cambridge, MA, USA}\n\\email{pstragowski.piotr@gmail.com}\n\n\\begin{abstract}\nWe give a simple counterexample to the plausible conjecture that Adams-type maps of ring spectra are stable under composition. We then show that over a field, this failure is quite extreme, as any map of $\\mathbb{E}_{\\infty}$-algebras is a transfinite composition of Adams-type maps. \n\\end{abstract}\n\n\\maketitle \n\n\n\n\nThe Adams spectral sequence is a fundamental tool in stable homotopy theory which, given a map $A \\to B$ of ring spectra, lets us compute the homotopy groups of $A$ in terms of information living over $B$. \nUnfortunately, the $E_{2}$-page of the Adams spectral sequence can be difficult to identify in general. \nThe standard additional assumption which makes the $E_2$-page computable in terms of homological algebra is that the map $A \\to B$ is flat in one of the senses below.\n\n\\begin{definition}\nWe say a map $A \\rightarrow B$ of $\\mathbb{E}_{1}$-rings is \n\\begin{enumerate}\n \\item \\emph{Adams-type} if we can write $B \\simeq \\varinjlim B_{\\alpha}$ as a filtered colimit of perfect $A$-modules with the property that $\\pi_{*}(B \\otimes_{A} B_{\\alpha})$ is projective as a $\\pi_*B$-module, \n \\item \\emph{descent-flat}\\footnote{In older literature, what we call descent-flat is often simply referred to as \\emph{flat}. We avoid the latter term as it is also often used to refer to maps such that $\\pi_*A \\rightarrow \\pi_*B$ is flat, a much stronger condition than the one we work with.} if $\\pi_{*}(B \\otimes_{A} B)$ is flat as a left $\\pi_*B$-module. 
\n\end{enumerate}\n\nIf $A \rightarrow B$ is descent-flat, then for any pair $M$, $N$ of $A$-modules such that $\pi_{*}(B \otimes_{A} N)$ is a projective $\pi_*B$-module, the associated Adams spectral sequence computing homotopy classes of $A$-module maps from $M$ to $N$ has signature\n\[\n\Ext^{s, t}_{(\pi_*B,\, \pi_*(B \otimes_A B))} \left(\pi_{*}(B \otimes_{A} N),\, \pi_{*}(B \otimes_{A} M)\right) \Longrightarrow [N,\, M]_{A}.\n\]\nIf $A \rightarrow B$ is furthermore Adams-type, then one can construct an Adams spectral sequence with the above signature with no projectivity assumption on $\pi_{*}(B \otimes_{A} N)$, as in the work of Devinatz \cite[\S 1]{dev_morava}.\n\nAs both of the above notions are a form of flatness, it is natural to expect that they are stable under composition, and the authors of this note have spent a considerable amount of time trying to show that this is indeed the case. To our surprise, this is very far from being true; our first result is the following simple counterexample. \n\n\begin{theorem}\n\label{theorem:main_theorem}\nBoth of the maps $\mathbb{S} \rightarrow \mathrm{MU}$ and $\mathrm{MU} \rightarrow \mathbb{Z}$ are Adams-type, but the composite $\mathbb{S} \rightarrow \mathbb{Z}$ is not even descent-flat. In particular, neither Adams-type nor descent-flat maps of $\mathbb{E}_{\infty}$-ring spectra are stable under composition.\n\end{theorem}\n\n\begin{proof}\nIt is well-known that $\mathbb{S} \rightarrow \mathrm{MU}$ is Adams-type \cite[13.4(iv)]{adams1995stable}. Similarly, $\mathbb{Z}_{*} \mathbb{Z}$ is known to contain torsion elements in positive degrees and so is not flat over $\mathbb{Z}_{*}$ \cite{cartan1955seminaire},\nhence $\mathbb{S} \rightarrow \mathbb{Z}$ is not descent-flat.
\n\nTo see that $\\mathrm{MU} \\to \\mathbb{Z}$ is Adams-type, note that if we write $\\mathrm{MU}_{*} \\simeq \\mathbb{Z}[b_{1}, b_{2}, \\ldots]$ for some choice of generators $b_{i}$, then \n\\[\n\\mathbb{Z} = \\colim_{n} \\ \\bigotimes^{\\mathrm{MU}}_{1 \\leq i \\leq n}\\mathrm{MU}\/b_i\n\\]\nin $\\mathrm{MU}$-modules. We claim this is the needed expression of $\\mathbb{Z}$ as a filtered colimit. To see this, note that we have an equivalence \n\\[\n\\mathbb{Z} \\otimes_{\\mathrm{MU}} \\mathrm{MU}\/b_{i} \\simeq \\mathbb{Z}\/b_{i} \\simeq \\mathbb{Z} \\oplus \\Sigma^{|b_{i}|+1} \\mathbb{Z}\n\\]\nas $b_{i}$ vanishes in $\\mathbb{Z}$ and hence for each $n$ we have \n\\[\n\\mathbb{Z} \\otimes_{\\mathrm{MU}} \\bigotimes^{\\mathrm{MU}}_{1 \\leq i \\leq n}\\mathrm{MU}\/b_i \\simeq \\bigotimes^{\\mathbb{Z}}_{1 \\leq i \\leq n} \\mathbb{Z} \\oplus \\Sigma^{|b_{i}|+1} \\mathbb{Z}\n\\]\nwhich is a perfect $\\mathbb{Z}$-module whose homotopy groups form a finitely generated free abelian group, as needed. \n\\end{proof}\n\n\n\n\nThe counterexample of \\cref{theorem:main_theorem} was contrary to our expectations, and led us to ask which maps of ring spectra can be written as compositions (perhaps infinite) of Adams-type maps. In \\cref{theorem:a_map_of_commutative_k_algebras_a_tf_composite_of_adams_type_maps} below we show that, surprisingly, at least for $\\mathbb{E}_{\\infty}$-algebras over a field, \\emph{any} map of ring spectra has this property.\n\n\n\n\n\\begin{lemma}\\label{lem:adamstypestability}\nLet $R \\to R'$ be a map of $\\mathbb{E}_{2}$-algebras. If $A$ is an $\\mathbb{E}_{1}$-$R$-algebra such that the unit $R \\rightarrow A$ is of Adams-type, then so is $R' \\rightarrow R' \\otimes_{R} A$. 
\n\end{lemma}\n\n\begin{proof}\n We start with an observation about the Adams-type condition:\n if $B \to C$ is a map of $\mathbb{E}_1$-algebras, and $M$ is a $B$-module, then \n $\pi_*(C \otimes_B M)$ is projective as a $\pi_*C$-module iff the $C$-module $C \otimes_B M$ is a retract of a sum of suspensions of copies of $C$. We call a $B$-module which satisfies this condition $C$-projective.\n \n Now we prove the lemma.\n Write $A$ as a filtered colimit $A \simeq \colim A_{\alpha}$ of $A$-projective $R$-modules.\n Then $R' \otimes_{R} A \simeq \colim R' \otimes_{R} A_{\alpha}$ is a filtered colimit with the required property, as \n \[ (R' \otimes_R A) \otimes_{R'} (R' \otimes_R A_\alpha) \simeq R' \otimes_R (A \otimes_R A_\alpha) \]\n which implies that $R' \otimes_R A_\alpha$ is $(R' \otimes_R A)$-projective.\n\end{proof}\n\n\begin{corollary} \label{cor:po-at}\nAdams-type maps of $\mathbb{E}_{\infty}$-ring spectra are stable under pushouts. \n\end{corollary}\n\n\n\begin{lemma}\n\label{lemma:adams_type_if_source_or_target_is_a_field}\nLet $A \rightarrow B$ be a map of $\mathbb{E}_{1}$-rings such that either $\pi_{*}A$ or $\pi_{*}B$ is a graded skew-field (that is, every non-zero homogeneous element is invertible). Then $A \rightarrow B$ is Adams-type. \n\end{lemma}\n\n\begin{proof}\nSuppose that the first condition holds. Then every $A$-module is a direct sum of shifts of $A$, and hence for any way to write $B \simeq \varinjlim B_{\alpha}$ as a filtered colimit of perfect $A$-modules, $B \otimes_{A} B_{\alpha}$ is a finite direct sum of shifts of $B$ and hence $\pi_{*}(B \otimes_{A} B_{\alpha})$ is a free $B_{*}$-module.
\n\nSimilarly, if $\pi_{*} B$ is a graded skew-field, then the perfect $B$-modules $B \otimes_{A} B_{\alpha}$ are also finite sums of shifts of $B$, as needed.\n\end{proof}\n\n\begin{theorem}\n\label{theorem:a_map_of_commutative_k_algebras_a_tf_composite_of_adams_type_maps}\nLet $k$ be a field and $A \rightarrow B$ a map of $\mathbb{E}_{\infty}$-$k$-algebras. Then $A \rightarrow B$ can be factored as a transfinite composite of Adams-type maps.\n\end{theorem}\n\n\begin{proof}\nLet $k \{ x_{n} \}$ denote the free $\mathbb{E}_{\infty}$-$k$-algebra on a variable of degree $| x_{n} | = n$. Let $\mathcal{G}$ denote the set of maps $i_{n}: k \rightarrow k \{ x_{n} \}$ together with the maps $p_{n}: k \{ x_{n} \} \rightarrow k$ determined by $x_{n} \mapsto 0$. As a consequence of \cref{lemma:adams_type_if_source_or_target_is_a_field}, all of the $i_{n}$ and $p_{n}$ are Adams-type, as either the source or the target is a field. By the small object argument, the given map $A \rightarrow B$ can be factored as \n\[\nA \rightarrow B^{\prime} \rightarrow B\n\]\nwhere $B^{\prime} \rightarrow B$ has the right lifting property with respect to the maps in $\mathcal{G}$, and $A \rightarrow B^{\prime}$ is a transfinite composition of maps $A_{\alpha} \to A_{\alpha+1}$ which fit into pushout squares \n\begin{center}\n\begin{tikzcd}\n\tS \ar[r,"f"]\ar[d]& \ar[d]T \\\n\tA_{\alpha}\ar[r] & A_{\alpha+1}\arrow[ul, phantom, "\ulcorner", very near start]\n\end{tikzcd}\n\end{center}\nwhere $f \in \mathcal{G}$. Each such map is Adams-type by \cref{cor:po-at}, and it follows that $A \rightarrow B^{\prime}$ is a transfinite composite of such maps.
On the other hand, one may observe that the right lifting property with respect to $i_n$ implies surjectivity on $\pi_n(-)$ and the right lifting property with respect to $p_n$ implies injectivity on $\pi_n(-)$; therefore $B^{\prime} \rightarrow B$ is an equivalence.\n \qedhere\n\end{proof}\n\n\begin{remark}\nBy using a more careful argument, where we also allow pushouts along maps $k\{S\} \to k$ and $k \to k\{S\}$ for $S$ a (graded) set of generators, one can show that in the context of \cref{theorem:a_map_of_commutative_k_algebras_a_tf_composite_of_adams_type_maps} the given map factors as an $\omega$-indexed composite of Adams-type maps. \n\end{remark}\n\n\nWe believe the above two results show that the somewhat unexplored theory of Adams-type ring spectra still has a few surprises up its sleeve. To further emphasize this point, we share with the reader a few natural questions which we believe are open.\n\n\begin{enumerate}\n \item Does there exist a descent-flat map $A \rightarrow B$ which is not Adams-type\footnote{If we had to guess, we expect that a descent-flat map of ring spectra which is not Adams-type does indeed exist, although we couldn't find one. A curious variant of this question (which we also do not know the answer to) would be to ask whether there exists a descent-flat map $A \rightarrow B$, not of Adams type, such that the associated homology theory $\pi_{*}(B \otimes_{A} -): \Mod(A) \rightarrow \euscr{C}\mathrm{omod}_{\pi_{*}(B \otimes_{A} B)}$ is adapted in the sense of \cite[Definition 2.19]{patchkoria2021adams}. In other words, does possessing a modified Adams spectral sequence based on $\pi_{*}(B \otimes_{A} B)$-comodules characterize Adams-type maps?}?\n \item Is every map of $\mathbb{E}_{1}$-algebras (over the sphere) a transfinite composite of Adams-type maps? Is the same true for every map of $\mathbb{E}_{\infty}$-algebras?
\n \\item Let $A \\rightarrow B$ be an Adams-type map and $M$ be an $A$-module such that $\\pi_{*}(B \\otimes_{A} M)$ is a flat $B_{*}$-module. Can we write $M \\simeq \\varinjlim M_{\\alpha}$ as a filtered colimit of perfect $A$-modules such that $\\pi_{*}(B \\otimes_{A} M_{\\alpha})$ is a projective $B_{*}$-module?\n\\end{enumerate}\n\n\\bibliographystyle{amsalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nNowadays commercial quantum computers,\nwhich strive for near-term intermediate-scale quantum advantages~\\cite{PreskillNISQ,PreskillSupremacy},\nare accessed by the cloud~\\cite{Devitt}\nraising the problem of a client\nClara (C)\nnot trusting the remote server(s).\nRisk mitigation strategies include blind quantum computing (BQC)\nfor weakly quantum clients\n(using either single-server prepare-and-send\nor receive-and-measure protocols)~\\cite{Childs, BFK09, Fitzsimons, FK17,BKB+12, GKK19}\nand for purely classical clients\n(using multi-server entanglement-based protocols)~\\cite{GKK19, HZM+17,CCKW21} and quantum homomorphic encryption~\\cite{BJ15,OTF18,TFB+20,OTFR20}\nwhereby~C delegates quantum computation to one or more remote servers, who\nare denied key information about the computation~\\cite{Fitzsimons}.\nBuilding on successful experimental factorization\nof the odd semiprime\n(odd integer~$N=pq$ for~$p,q\\in\\mathbb{P}$ and~$p\\neq q$)\n$N=15$~\\cite{HZM+17},\nwe devise a protocol for C\nto delegate secure factorization of $N=21$~\\cite{Martin}\nto two remote quantum servers\ncalled Alice (A) \\&\\ Bob (B).\n\nOur approach extends the\nBQC factorization of~15\nin two ways~\\cite{HZM+17}.\nFirst we increase~$N$ from~15 to\nthe next odd semiprime number~21.\nSecond,\nwe choose a harder base $a$~\\cite{SSV13}\nfor modular exponentiation (modexp) $f(x):=a^x \\bmod N$\n(where $\\gcd(a,N)=1$ with~$\\gcd$ denoting greatest common divisor).\nThe period~$r$ of~$f(x)$\nyields a 
solution~$p=\gcd(a^{\nicefrac{r}2}+1, N)$ when\nthe following two conditions are simultaneously met:\n(i) either~$r$ is even, or~$r$ is odd and~$a$ is a perfect square,\nand (ii)~$a^{\nicefrac{r}2}\not\equiv-1\bmod{N}$.\nPeriod finding is sped up\nsubexponentially by quantum computing~\cite{Shors}.\nFor~$N=15$ with~$a=11$,\n$r=2$ was achieved experimentally~\cite{HZM+17};\nin contrast, we treat the hard case~$N=21$ with~$a=4$, for which~$r=3$.\nThis harder~$a$ requires incorporating a non-Clifford operator,\nfor which we employ the controlled-controlled-not (C$^2$NOT or Toffoli) gate~\cite{Shi03}.\n\nThe remainder of our paper is organised as follows:\nIn Sec.~\ref{sec:Background}, we summarily recall\nsome relevant prerequisites to our work\nand point to comprehensive resources on key topics.\nIn Sec.~\ref{sec:Approach}, we describe our methodology to construct\na blind quantum factorization scheme for given~$N$ and~$a$.\nIn Sec.~\ref{sec:Results} we present a formal algorithm,\nalong with a function library, to design blind quantum factorization\ncircuits for arbitrary~$N$ and~$a$,\nand also present the resulting circuits for two cases of~$N=21, a=4$.\nWe finish with a discussion of the significance of our results\nin Sec.~\ref{sec:Disc} and a conclusion in Sec.~\ref{sec:Conc}.\n\n\n\section{Background}\n\label{sec:Background}\n\nIn this section we briefly review\nthe state of the art in BQC,\nblind quantum factorization,\nand some other key concepts\nthat are fundamental to our work.\nBQC is a quantum cryptographic protocol that\nallows clients with limited or no quantum hardware\nto outsource a computation to remote quantum server(s)\nwithout revealing information about the computation itself to the server(s)~\cite{BFK09}.\nSeveral BQC protocols have already been developed\nand demonstrated for weakly quantum clients~\cite{Childs, BFK09, Fitzsimons, FK17},\nbut a purely classical client communicating only classically\nwith a single quantum
server might not be able to achieve secure BQC~\\cite{Aaronson}.\nNevertheless, this obstacle is overcome\nif multiple servers sharing non-local resources are employed~\\cite{RUV}.\nA brief overview of verifiable BQC can be found in Ref.~\\cite{GKK19}.\n\n\nSecure BQC for completely classical clients, thus,\nwarrants the remote and classical leveraging of\nquantum-advantageous algorithms,\nlike Shor's factorization~\\cite{Shors}\nor Grover's search~\\cite{Grover},\nwhich serve as prime candidates for delegation~\\cite{BKB+12, HZM+17}.\nDelegated Shor's factorization is known to be feasible\nin the measurement-based quantum computation model~\\cite{BFK09}\nand has been demonstrated experimentally for $N=15$\nin the quantum circuit model~\\cite{HZM+17}.\nThe approach in both these works comprises C\ndelegating the quantum period-finding subroutine,\nwhich computes the period~$r>1$ of~$f(x)$\nfor a given odd semiprime~$N=pq$ with unknown~$p$ \\&~$q$,\nof Shor's algorithm to remote server(s).\nFrom the outputs returned by the servers to her,\nC classically computes the factors as\n\\begin{equation}\n\tp,q=\\gcd(a^{\\nicefrac{r}2}\\pm 1, N).\n\\end{equation}\n\nA proof-of-principle implementation of BQC for a completely classical client\nhas been demonstrated in Ref.~\\cite{HZM+17},\nwherein Shor's algorithm~\\cite{Shors} is executed for factorizing\n$N=15$ using verifiable BQC based on the RUV protocol~\\cite{RUV}.\nThis blind quantum factorization was performed for the choice of base~$a=11$,\nwhich results in~$r=2$, thus\nmaking the experimental demonstration sufficiently challenging for a proof-of-concept\nbut not as realistic as, for example, the case~$a=7$ and~$r=4$ would be.\nThis is because~$r=2$ implies that the quantum period-finding circuit\nhas reduced to a classical coin-toss experiment~\\cite{SM09}---an anomaly\nthat can be rationalised as the choice of base~$a=11$\n(implicitly) assuming pre-knowledge of the factors~\\cite{SM09}.\n\nA pre-knowledgeless 
factorization scheme would have to\nchoose a random base~$a$ from some set of allowed bases.\nWithout any prior ansatz,\nsuch a choice would yield a hard base with high probability;\na hard base implies a period~$r>2$\nand a period-finding circuit requiring the multi-qubit Toffoli gate,\nwhich is a non-Clifford operator.\nIntroducing\na non-Clifford operator\nbrings\nthe quantum resource called ``magic''\ninto play~\\cite{MagicState}.\n``Magic'' enables quantum circuits to violate conditions for efficient classical simulatability~\\cite{Gottesman1, Gottesman2}\nso its inclusion is important for\nscaling considerations concerning BQC factorization's quantum advantage.\nTo this end,\nwe now succinctly summarize the Clifford hierarchy of unitary operators.\n\nThe~$n$-qubit Pauli group is\n\\begin{equation}\n\\bm{C}^{(1)}_n := \\{\\pm1, \\pm\\text{i}\\}\\times \\{ I, X, Y, Z \\}^{\\otimes n},\n\\end{equation}\nqubits being two-level systems spanned by logical states~$\\ket0,\\ket1\\in\\mathscr{H}_2$,\nand~$\\mathscr{H}_d$ a $d$-dimensional Hilbert space.\nLogical states are~$Z$-eigenstates\nand comprise our computational basis,\nand\n\\begin{equation}\nX,Y,Z\\in\\mathcal{U}(\\mathscr{H}_2)\n\\end{equation}\nare the single-qubit Pauli operators,\nwith~$\\mathcal{U}(\\mathscr{H}_d)$ the group of unitary operators on~$\\mathscr{H}_d$.\nThe $n$-qubit Clifford group $\\bm{C}^{(2)}_n$\nis the normalizer\nof the Pauli group, i.e.,\n\\begin{equation}\n\\bm{C}^{(2)}_n:=\\{ u\\in \\mathcal{U}(\\mathscr{H}_{2^n}) ; u \\,\\bm{C}^{(1)}_n u^\\dagger\\subseteq\\bm{C}^{(1)}_n \\}.\n\\end{equation}\nThis group is generated by the Hadamard, phase (phase-shift by~$\\pi\/2$) and controlled-not (CNOT) gates.\n\nThe Pauli and Clifford groups constitute\nthe first two levels of the Clifford hierarchy,\nand the subsequent levels~$\\bm{C}^{(k>2)}_n$ are\ndefined recursively by~\\cite{Gottesman3}\n\\begin{equation}\n\\bm{C}_n^{(k)} := \\{ u\\in \\mathcal{U}(\\mathscr{H}_{2^n}); 
u\\,\\bm{C}^{(1)}_n u^\\dagger\n\\subseteq\\bm{C}_n^{(k-1)}\\}.\n\\end{equation}\nConjugation with~$\\bm{C}^{(2)}_n$ maps~$\\bm{C}^{(1)}_n$ into itself,\nso Clifford operators can be blindly delegated\nusing one-time Pauli pads~\\cite{Childs}.\nHowever, $\\bm{C}^{(2)}_n$ does not constitute a universal gate-set.\nMoreover, stabilizer quantum circuits,\nwhich comprise only Clifford operators\nand computational basis measurements\n(corresponding to a projective-valued measure\n$\\ket{\\epsilon}\\bra{\\epsilon}$\nfor $\\ket\\epsilon$\na computational basis state)~\\cite{Clifford2},\ncan be simulated\nefficiently (polynomial-time)\nclassically~\\cite{Gottesman1, Gottesman2}.\nThe experimental blind quantum factorization of~15\nrequires only a stabilizer circuit in its simplest form~\\cite{HZM+17}.\n\nIn contrast, the C$^2$NOT gate\nalong with~$\\bm{C}^{(2)}_n$\nconstitutes a universal gate-set~\\cite{BMPRV00,Unweyling}.\nA circuit comprising both Clifford and C$^2$NOT gates\nalso circumvents the simulatability theorem~\\cite{Gottesman1, Gottesman2}.\nHowever, the inclusion of ``magic''\nentails significant resource costs~\\cite{Shi03, BJ15}.\nImportantly, for our protocol,\nonly one of A \\&\\ B needs ``magic''\nwhereas the other executes a stabilizer circuit,\nthereby simplifying the scheme for experimental realization.\n\n\\section{Approach}\n\\label{sec:Approach}\n\nIn this section we explain our approach\nto solving the blind quantum factorization of~21.\nFirst we describe the setup for blind quantum factorization,\nnamely the classical client, the bipartite quantum server\nand their collective resources.\nNext we describe our mathematical representation\nof the computation circuit~$\\mathcal{C}$ to be delegated to the servers\nand~$\\mathcal{C}$'s associated representations.\nOur scheme for blind quantum factorization\nrelies upon computation by teleportation\non maximally entangled states, as identified in~\\cite{Gottesman3};\nin spirit, this is similar to the 
RUV protocol~\\cite{RUV}\nbut we highlight some important distinctions in Sec.~\\ref{sec:Disc}.\nNext we establish the mathematical backbone\nof our scheme---a procedure to obtain two blind circuits, one for each server,\nfrom the circuit the classical client wishes to execute---via\nLemma~\\ref{lemma:concat=simult} and Fact~\\ref{fact:transpose=reverse}.\nWe conclude this section\nby reviewing the full procedure to obtain the blind quantum circuits\nfrom the input quantum circuit.\n\nSimilar to the BQC factorization of~15,\nwhich we summarize in Fig.~\\ref{fig:schemefor15}~\\cite{HZM+17},\nour scheme is based on the\nReichardt-Unger-Vazirani (RUV) protocol~\\cite{RUV}.\nThe RUV protocol is a multi-round, two-server BQC scheme for a classical client\n(as single-server BQC is not secure for classical clients~\\cite{Aaronson, Morimae}).\nIn each round of our protocol, servers~A \\&~B\nreceive~$n$ copies of the entangled two-qubit pair\n$\\ket{\\Phi}:=\\ket{00}+\\ket{11}\\in\\mathscr{H}_{4}$\n(for~$\\ket{00}\\equiv\\ket0\\otimes\\ket0$,\nand implied normalization employed throughout)\nfrom a periodic source of entanglement, Deborah (D).\nEach server receives one qubit from each copy of~$\\ket\\Phi$\nand, thus, A \\&\\ B collectively share the resource\n\\begin{equation}\n\\label{eq:sharedresource}\n\\ket{\\Phi}^{\\otimes n}\n=(\\mathds1\\otimes\\mathds1 + X\\otimes X)^{\\otimes n}\n\\ket{00}^{\\otimes n}\n\\in\\mathscr{H}_{4^n}\n\\end{equation}\nbut have no other means for communicating~\\cite{RUV}.\nWe index the~$2n$ qubits in~$\\ket\\Phi^{\\otimes n}$\nas shown in Fig.~\\ref{fig:schemefor15} for~$n=3$.\n\n\nTo delegate an~$n$-qubit quantum circuit~$\\mathcal{C}$\nin the RUV protocol,\nC instructs each server to either compute,\nby executing a quantum circuit~($\\mathcal{A}$ for A,~$\\mathcal{B}$ for B),\nor perform the measurement part of a Clauser-Horne-Shimony-Holt (CHSH) test~\\cite{CHSH}.\nThere are, thus, four distinct subprotocols:\nA~\\&~B could both compute 
(computational subprotocol),\nor both measure (CHSH subprotocol),\nor else one computes while the other measures\n(two tomography subprotocols)~\\cite{RUV}.\nWhen both compute,\nA~\\&~B report to C their~$i^\\text{th}$ $Z$-measurement outcomes\n\\begin{equation}\n\\{(a_i,b_i);a_i,b_i\\in\\{0,1\\},i\\in[n]:=\\{1, \\dots, n\\}\\}.\n\\end{equation}\nFrom their combined outcomes, C recovers the output from~$\\mathcal{C}$\nwhereas the output from either~$\\mathcal{A}$ or~$\\mathcal{B}$\nalone yields no information about~$\\mathcal{C}$ except depth,\nthereby blinding~A~\\&~B.\n\n\\begin{figure}\n\\begin{tikzpicture}\n\t\\node [anchor=south west,inner sep=0] (image) at (0,0) {\\includegraphics[width=6 cm]{Fig1Fact15.eps}};\n\t\\begin{scope}[x={(image.south east)},y={(image.north west)}]\n\t\t\\draw [thick, red,rounded corners=3] (-0.01, -0.03) rectangle (0.4, 0.47);\n\t\t\\draw [thick, red,rounded corners=3] (0.59, -0.03) rectangle (1.01, 0.47);\n\t\t\\node [black] at (-0.06, 0.24) {$\\mathcal{A}$};\n\t\t\\node [black] at (1.06, 0.24) {$\\mathcal{B}$};\n\t\t\\node [black] at (0.0173, 0.419) {\\scriptsize 1};\n\t\t\\node [black] at (0.017, 0.27) {\\scriptsize 2};\n\t\t\\node [black] at (0.017, 0.122) {\\scriptsize 3};\n\t\t\\node [black] at (0.983, 0.419) {\\scriptsize 4};\n\t\t\\node [black] at (0.983, 0.27) {\\scriptsize 5};\n\t\t\\node [black] at (0.983, 0.122) {\\scriptsize 6};\n\t\\end{scope}%\n\\end{tikzpicture}%\n\\caption{%\n\tBQC scheme for factorizing~15.\n\tQuantum servers A \\&\\ B\n\tjointly compute circuits~$\\mathcal{A}$ \\&~$\\mathcal{B}$\n\t(rounded rectangles), respectively,\n\ton state~$\\ket{\\Phi}^{\\otimes 3}$\n\tsupplied by entanglement source D,\n\tand report outcomes to classical client C.\n\tEach of $\\mathcal{A}$ \\&~$\\mathcal{B}$ involves\n\ta CNOT and a Hadamard gate ($H$), and~$Z$-basis measurements.\n\tSolid and dotted arrows represent\n\tclassical and quantum communication,\n\trespectively,\n\tarrowheads indicate directionality,\n\tand 
numbers represent indices for qubits.}%\n\\label{fig:schemefor15}%\n\\end{figure}\n\nNow we discuss the underlying primitive gates\nfor~$\\mathcal{C}$ over~$n$ qubits.\nEach computational cycle,\nwith execution by circuit component~$\\mathcal{C}_\\nu$,\nallows one or more of the following primitive gates\noperating in parallel:\n\\begin{enumerate}[(i)]\n\\item single-qubit Hadamard~($H$),\n\\item single-qubit NOT~($X$),\n\\item two-qubit controlled-rotation~(CR$^k$),\nand\n\\item multi-controlled NOT~(C$^l$NOT,\nwhere~$l\\in[n-1]$ denotes the number of controls),\nalso known as the multi-controlled Toffoli gate.\n\\end{enumerate}\nOur~CR$^k$ gates are restricted to rotations~$\\mathrm{R}^k$ with~$0\\leq k$.\nWe first partition the depth-$d$ circuit~$\\mathcal{C}$ into two subcircuits~$\\mathcal{C}_<$ \\&~$\\mathcal{C}_>$;\nthen we convert the sequential computation~$\\mathcal{C}_>\\circ\\, \\mathcal{C}_<$\ninto a bipartite computation~$\\mathcal{A}\\otimes\\mathcal{B}$\non~$\\ket\\Phi^{\\otimes n}$.\n\nIn our scheme we require\n$\\mathcal{A}$ to be a stabilizer circuit,\nand~$\\mathcal{B}$ to have the minimum possible depth.\nThus, before partitioning~$\\mathcal{C}$,\nwe first minimize the reduced depth~$d_>$\n(defined to be\nthe number of cycles including\nthe first non-Clifford cycle and\nthen all subsequent cycles, whether Clifford or not)\nover all circuits that are permutations of the cycles of~$\\mathcal{C}$\nand are~$\\mathcal{C}$-equivalent (i.e., map input to the same output as~$\\mathcal{C}$).\nIn case the minimum reduced depth~$d^*_>$ is not achieved uniquely,\nwe choose an optimal circuit~$\\mathcal{C}^*$.\nThen,~$\\mathcal{C}_<$ is\nthe composition of the first~$d-d_>^*$ cycles in~$\\mathcal{C}^*$\nand~$\\mathcal{C}_>$ is\nthe composition of the last~$d^*_>$ cycles in~$\\mathcal{C}^*$.\nThus,\nfor\n\\begin{equation}\n\\mathcal{C}^* = \\bigcircle_{\\nu=1}^d \\mathcal{C}^*_\\nu \\, ,\n\\end{equation}\nwe partition as\n\\begin{equation}\n\\label{eq:circuitpartition}\n\\mathcal{C}_< := \\bigcircle_{\\nu=1}^{d-d^*_>} \\mathcal{C}^*_\\nu\n\\quad \\&\\ 
\\quad\n\\mathcal{C}_> := \\bigcircle_{\\nu=d-d^*_>+1}^{d} \\mathcal{C}^*_\\nu \\, ,\n\\end{equation}\nso that\n\\begin{equation}\n\\mathcal{C}^* = \\mathcal{C}_>\\circ\\mathcal{C}_< .\n\\end{equation}\nThe bit strings~$\\bm{B}(\\mathcal{C}_<)$\n\\&~$\\bm{B}(\\mathcal{C}_>)$\nrepresenting~$\\mathcal{C}_<$ \\&~$\\mathcal{C}_>$,\nrespectively,\nare determined by first permuting the component bit strings of~$\\bm{B}(\\mathcal{C})$\nand then partitioning into $\\bm{B}(\\mathcal{C}_<)\\Vert\\bm{B}(\\mathcal{C}_>)$\nfollowing Eq.~\\eqref{eq:circuitpartition};\nwe denote this operation by the bit-string function \\textsc{part}.\n\nTo establish the second step in our procedure,\nwe first introduce some notation,\nprove a lemma and state a fact.\nBelow we denote transposition in the computational basis by~${}^\\top$.\nFor~$x=(x_{n}\\cdots x_1)\\in\\{0,1\\}^n$, we define\n\\begin{equation}\n\\label{eq:Xx}\nX^x:=X_{n}^{x_{n}} \\otimes \\cdots \\otimes X_1^{x_1} \\in\\mathcal{U}(\\mathscr{H}_{2^n}) ,\n\\end{equation}\nwhere the subscripts below~$X$\nindicate the index of the qubit being targeted.\nThus, a computational basis state is\n\\begin{equation}\n\\ket{x}=X^x\\ket0^{\\otimes n}\\in \\mathscr{H}_{2^n} \\, .\n\\end{equation}\n\\begin{lemma}\n\\label{lemma:concat=simult}\nFor any~$x\\in\\{0,1\\}^n$\nand unitary operators~$G_<, G_>, G_A, G_B \\in\\mathcal{U}(\\mathscr{H}_{2^n})$,\nthe mapping\n\\begin{equation}\n\t\\label{eq:mapforparallelization}\n\tX^x G_<^\\top \\mapsto G_A \\, , \\quad X^x G_>\\mapsto G_B\n\\end{equation}\nleads to the equality\n\\begin{equation}\n\t\\label{eq:equationforparallelization}\n\tX^x G_> G_< X^x\\ket0^{\\otimes n} = \\left( \\bra0^{\\otimes n} G_A \\otimes G_B\\right)\\ket\\Phi^{\\otimes n} .\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFrom the ``ricochet'' property~\\cite{Wilde17},\n\\begin{align}\n\t\\label{eq:ricochetproperty}\n\t&(\\ket0^{\\otimes n} \\bra0^{\\otimes n} G_A \\otimes G_B) \\left(\\mathds1\\otimes\\mathds1 + X\\otimes 
X\\right)^{\\otimes n} \\nonumber\\\\\n\t=&\\left [ \\mathds 1 \\otimes \\left (G_B G_A^\\top \\ket0^{\\otimes n} \\bra0^{\\otimes n} \\right) \\right] \\left(\\mathds1\\otimes\\mathds1 + X\\otimes X\\right)^{\\otimes n} ,\n\\end{align}\nso assign\n$G_A\\gets X^x G_<^\\top$ and\n$G_B\\gets X^x G_>$.\n\\end{proof}\n\\begin{fact}\n\\label{fact:transpose=reverse}\nAs each of our primitive gates admits\na symmetric matrix representation in the computational basis,\nthe operator $G_<^\\top$ represents a circuit~$\\mathcal{C}_<^\\top$\nthat consists of the components of~$\\mathcal{C}_<$ executed in reverse order, i.e.,\n\\begin{equation}\n\t\\label{eq:transposedcircuit}\n\t\\mathcal{C}_<^\\top := \\bigcircle_{\\nu=d-d^*_>}^{1} \\mathcal{C}^*_\\nu \\, .\n\\end{equation}\nWe denote the operation\nof obtaining~$\\bm{B}(\\mathcal{C}_<^\\top)$\nby reversing the order of components\nin~$\\bm{B}(\\mathcal{C}_<)$\nby the bit-string function~\\textsc{rev}.\n\\end{fact}\n\n\nWe now explain how we use Lemma~\\ref{lemma:concat=simult} and Fact~\\ref{fact:transpose=reverse}\nto convert~$\\mathcal{C}_>\\circ\\, \\mathcal{C}_<$\ninto~$\\mathcal{A}\\otimes\\mathcal{B}$.\nLet~$G_<, G_>, G_A$ and~$G_B$ be unitary operators\nrepresenting~$\\mathcal{C}_<, \\mathcal{C}_>, \\mathcal{A}$ and~$\\mathcal{B}$, respectively,\nso that Map~\\eqref{eq:mapforparallelization} implies\n\\begin{equation}\n\\mathcal{A}=X^x \\circ \\mathcal{C}^\\top_< \\quad \\text{\\&} \\quad \\mathcal{B}= X^x \\circ \\mathcal{C}_> \\, .\n\\end{equation}\nAlso, consider any~$x\\in\\{0,1\\}^{t+L}$\nwith~$x_i=0$ for all~$i\\in[t]$.\nThen,\n\\begin{equation}\nX^xG_>G_< X^x =G_>G_<=G,\n\\end{equation}\nbecause the second register of~$\\mathcal{C}$\nis operated on by only C$^l$NOTs~\\cite{Shors},\nand Eq.~\\eqref{eq:equationforparallelization} simplifies to\n\\begin{equation}\n\\label{eq:unitaryparallelization}\nG \\ket 0^{\\otimes n} = \\bra{x} G_<^\\top \\otimes X^x G_> \\ket\\Phi^{\\otimes n}.%\n\\end{equation}\nAs the 
output from only the first register of~$\\mathcal{C}^*$\nis used to compute~$r$ \\cite{Shors},\nthe~$X^x$ in Eq.~\\eqref{eq:unitaryparallelization} can be ignored.\nFinally, we have\n\\begin{equation}\n\\mathcal{A}=\\mathcal{C}_<^\\top \\quad \\text{\\&} \\quad \\mathcal{B}=\\mathcal{C}_> \\,,\n\\end{equation}\nand B's outcomes~$\\{b_i; i\\in[t]\\}$ are identical\nto the output from the first register of~$\\mathcal{C}$\nwhenever A reports~$a_i=0$ for all~$i\\in[t]$.\nThis completes our description of\nthe procedure to obtain~$\\mathcal{A}~\\&~\\mathcal{B}$\nfrom~$\\mathcal{C}$.\n\n\\section{Results}\n\\label{sec:Results}\n\nIn this section we present our results, which are twofold.\nFirstly, we present a scalable algorithm to design circuits~$\\mathcal{A}~\\&~\\mathcal{B}$\nfor blind quantum factorization of an arbitrary odd semiprime~$N$ using an arbitrary base~$a$.\nSecondly, we present the outputs of this algorithm,\ni.e., the factorization circuits~$\\mathcal{A}~\\&~\\mathcal{B}$,\nalong with the rest of the blind factorization scheme,\nfor two cases corresponding to~$N=21$ and~$a=4$.\nOur algorithm requires certain standard functions, so we start by specifying a function library,\nthough we do not reproduce the corresponding algorithms here as they are standard in the literature.\n\n\n\nWe now introduce our concept\nfor the function library~\\textsc{funcLib}\nfor designing circuits~$\\mathcal{A}$\n\\&~$\\mathcal{B}$.~\\textsc{funcLib}\ncomprises four functions,\nwith two of them~(\\textsc{part} \\&~\\textsc{rev}) already discussed and\nthe other two well established\nin the literature on factorization~\\cite{BCDP, Shors, SSV13}.\nThe integer function\n\\begin{equation}\n\\label{eq:maximumdepthshorcircuitfunction}\n\\textsc{maxDep}(N, a) = 96 \\lfloor\\log{a}\\rfloor \\lfloor\\log{N}\\rfloor^2\n\\end{equation}\nyields an upper bound\non factorization-circuit depth, given~$N$ and~$a$,\nbased on complexity arguments for scaling modular exponentiation~\\cite{BCDP}.\nThe bit-string 
function~\\textsc{ShorCir}\nreturns a bit string representing the compiled factorization circuit,\ngiven~$N, a, t$ and~$n$~\\cite{Shors,SSV13}.\nOur procedure for designing\ncircuits~$\\mathcal{A}$ \\&~$\\mathcal{B}$\nis described in Alg.~\\ref{algorithm:circuitdesign},\nwhere we employ `type' notation USINT for nonnegative integers and\nBIN for bit strings (with~[\\,] denoting array size).\n\n\\begin{algorithm}[H]\n\\caption{Parallelizing Factorization}\n\\label{algorithm:circuitdesign}\n\\begin{algorithmic}[1]\n\t\\Require\n\t\\Statex USINT \\textsc{num}\n\t\\Comment $N=pq$,\\; $p\\neq q$,\\; $p,q\\in\\mathbb{P}\\setminus\\{2\\}$ \\;\n\t\\Statex USINT \\textsc{base}\n\t\\Comment Base~$a$: $a<N$,\\; $\\gcd(a,N)=1$\n\\end{algorithmic}\n\\end{algorithm}\n\nFor~$t=2$,~$\\mathcal{C}^*$\nhas~$d=8$,~$d^*_>=5$ and incorporates three non-Clifford gates\n(one~CR$^1$ and two~C$^2$NOTs);\n$\\mathcal{A}$ \\&~$\\mathcal{B}$ thus\nhave depths of three \\& five, respectively.\nFor~$t=3$,~$\\mathcal{C}^*$\nhas~$d=17$,~$d^*_>=13$ and incorporates eight non-Clifford gates\n(one~CR$^2$, two~CR$^1$s,\nthree~C$^2$NOTs and two~C$^3$NOTs);\n$\\mathcal{A}$ \\&~$\\mathcal{B}$ thus\nhave depths of four \\& thirteen, respectively.\nIn both cases,~$\\mathcal{A}$\nis conveniently a stabilizer circuit\nwhereas~$\\mathcal{B}$ incorporates\nall non-Clifford gates in~$\\mathcal{C}^*$.\n\nFor~$t=2$,\ndespite following Shor's algorithm~\\cite{Martin},\n$\\mathcal{C}^*$ never yields the correct period\ndue to insufficient space in the first register;\nhence, $\\mathcal{A}$ \\&~$\\mathcal{B}$\nfail to factorize~21.\nHowever, the output from~$\\mathcal{C}^*$\nis sufficient to establish a proof-of-concept as in Ref.~\\cite{Martin}.\nFurther, photonic implementations of~$\\mathcal{A}$ \\&~$\\mathcal{B}$,\nwhich entail scaling up from\none C$^2$NOT to two~\\cite{Lanyon}\nand from three EPR pairs to five~\\cite{HZM+17},\nare more feasible\nin this case compared to~$t=3$.\nFor~$t=3$, $\\mathcal{C}^*$\ndelivers the correct period\nwith probability~0.47\nso~$\\mathcal{A}$ \\&~$\\mathcal{B}$\nsucceed in factorizing 
with probability~0.058,\nbut~$\\mathcal{B}$ requires significant resources.\n\nAnother advantage of our blind quantum factorization scheme\nis the low qubit count required to factorize~$N$.\nOur scheme requires, at worst,~$\\mathrm{O}(\\log N)$ qubits to guarantee factorization\ncompared to the~$\\mathrm{O}((\\log N)^2)$ qubits\na literal adaptation of the RUV protocol to quantum factorization would require.\nFinally, we remedy RUV's open problem of tomographic verifiability\nof non-Clifford computations in the context of factorization\nby declaring B to be dishonest if C finds A honest\n(which requires only CHSH measurements)\nbut still does not receive the correct period\n(which can be checked efficiently classically)\nfrom B.\n\n\n\\begin{figure}\n\\begin{tikzpicture}\n\t\\node[anchor=south west,inner sep=0] (image) at (0,0) {\\includegraphics[width=0.8\\columnwidth]{Fig5BlindFact21.eps}};\n\t\\begin{scope}[%\n\t\tx={(image.south east)},\n\t\ty={(image.north west)}\n\t\t]\n\t\t\\node [black] at (-0.06, 0.5) {\\footnotesize $\\mathcal{A}$};\n\t\t\\node [black] at (1.06, 0.5) {\\footnotesize $\\mathcal{B}$};\n\t\t\\draw [thick,red,rounded corners=3] (-0.02, -0.04) rectangle (0.454, 1.04);\n\t\t\\draw [thick,red,rounded corners=3] (0.501, -0.04) rectangle (1.02, 1.04);\n\t\\end{scope}%\n\\end{tikzpicture}%\n\\caption{%\n\tBlind circuits~$\\mathcal{A}$ \\&~$\\mathcal{B}$\n\tfor $N=21, a=4$, $t=2$ and $L=3$.\n\t$\\mathcal{A}$ \\&~$\\mathcal{B}$ (rounded rectangles)\n\teach act on input registers initialized to one half of the bipartite state~$\\ket\\Phi^{\\otimes 5}$ (red dots).\n\t$\\mathcal{A}$ comprises NOT, CNOT, and Hadamard~($H$) gates\n\twhereas~$\\mathcal{B}$ comprises C$^2$NOT, controlled-phase~($S$) and Hadamard gates.\n\tAll measurements are in the~$Z$ basis.}\n\\label{fig:blindfact21}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:Conc}\n\nHere\nwe have developed a BQC scheme\nfor factorizing~21.\nOur multi-round protocol is accessible\nto a 
classical client,\nwho communicates with\ntwo remote quantum servers.\nThe servers send\nthe client~$Z$-measurement results\nfor each round.\nBy processing these data,\nthe client determines candidates\nfor factors of~21\nor verifies honesty of the servers,\nall while concealing the actual task.\nOur choice of hard~$a$ implies that\nservers employ non-Clifford gates,\nwhich is a non-trivial requirement\nunseen for $N=15$~\\cite{HZM+17};\nour non-Clifford analysis\nestablishes a foundation for\nfuture BQC factorization protocols.\nFinally, our protocol for~$t=2$ motivates\na challenging but feasible photonic experiment\nthat would set a milestone towards\nsecure quantum computation\nfor classical clients.\n\n\\begin{acknowledgments}\nAritra Das thanks and acknowledges financial support from the Shastri Indo-Canadian Institute through their Shastri Research Student Fellowship program\nand from the Australian Research Council Centre of Excellence CE170100012.\nAritra Das is also grateful to the Indian Institute of Technology Kanpur,\nwhere a majority of this work was undertaken.\nFinally, Aritra Das and Barry C. Sanders acknowledge the traditional owners of the land on which this work was undertaken at the University of Calgary: the Treaty 7 First Nations.\n\\end{acknowledgments}\n\n\\bibliographystyle{apsrev4-2}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter*{Streszczenie}\n\nCelem niniejszej pracy doktorskiej jest badanie zbior\u00f3w $\\K$-prostokre\u015blnych w geometrii afinicznej. \n\nPierwszy rozdzia\u0142 jest po\u015bwi\u0119cony w\u0142asno\u015bci $\\K$-prostokre\u015blno\u015bci oraz wa\\-run\\-kom r\u00f3w\\-no\\-wa\u017c\\-nym.\nNiech $\\K$ b\u0119dzie cia\u0142em algebraicznie domkni\u0119tym. Krzyw\u0105 $\\Gamma$ nazywamy parametryczn\u0105 je\u017celi jest obrazem cia\u0142a $\\K$ po\\-przez odwzorowanie\nregularne. 
Definicj\u0105 \\ref{k-uniruleddef} wprowadzamy stopie\u0144 $\\K$-pros\\-to\\-kre\u015bl\\-no\u015bci dla rozmaito\u015bci afinicznych. Rozmaito\u015b\u0107 afiniczna $X$ \nma stopie\u0144 $\\K$-pros\\-to\\-kre\u015bl\\-no\u015bci co najwy\u017cej $d$ je\u017celi jest pokryta krzywymi pa\\-ra\\-me\\-try\\-cznymi stopnia co najwy\u017cej $d$. \nW Propozycji \\ref{k-uniruledofdeg} podajemy dwa warunki r\u00f3w\\-no\\-wa\u017c\\-ne. W pierwszym z nich \u017c\u0105damy tylko aby g\u0119sty zbi\u00f3r by\u0142 pokryty \ntakimi krzywymi. Kolejny warunek r\u00f3wnowa\u017cny zapewnia istnienie dominuj\u0105cego, generycznie sko\u0144czonego odwzorowania wielomianowego o stopniu ze wzgl\u0119du \nna pierwsz\u0105 wsp\u00f3\u0142\\-rz\u0119d\\-n\u0105 co najwy\u017cej $d$ z cylindra $\\K\\times W$ w rozmaito\u015b\u0107 $X$. Wprowadzamy now\u0105 Definicj\u0119 \\ref{k-uniruleddef} rozmaito\u015bci \n$\\K$-prostokre\u015blnych, kt\u00f3ra ma t\u0105 zalet\u0119 w stosunku do poprzedniej, \u017ce dzi\u0119ki Propozycji \\ref{indep} nie zale\u017cy od cia\u0142a podstawowego. W sekcji \n\\ref{rr} przedstawiamy analogiczne wyniki dla cia\u0142a liczb rzeczywistych.\n\nTwierdzenie \\ref{Sfkuni} m\u00f3wi, \u017ce je\u017celi $f:X\\rightarrow Y$ jest generycznie sko\u0144\\-czo\\-nym odwzorowaniem regularnym pomi\u0119dzy rozmaito\u015bciami \nafi\\-nicz\\-ny\\-mi oraz rozmaito\u015b\u0107 $X$ jest $\\K$-prostokre\u015blna, to zbi\u00f3r $\\s_f$ punkt\u00f3w niew\u0142a\u015bciwo\u015bci odwzorowania $f$ jest r\u00f3wnie\u017c $\\K$-prostokre\u015blny. \nW drugim rozdziale zajmujemy si\u0119 ograniczeniem od g\u00f3ry stopnia $\\K$-prostokre\u015blno\u015bci zbioru $\\s_f$ w zale\u017cno\u015bci od stopnia odwzorowania $f$ oraz \nstopnia $\\K$-pro\\-sto\\-kre\u015bl\\-no\\-\u015bci rozmaito\u015bci $X$. Dla cia\u0142a liczb zespolonych $\\C$ przedstawiamy kompletne wyniki Twierdzenia \\ref{cn} i \\ref{multc}. 
\nPierwsze z nich daje ograniczenie przez iloczyn stopnia odwzorowania $f$ oraz stopnia $\\C$-prostokre\u015blno\u015bci rozmaito\u015bci $X$. Natomiast drugie m\u00f3wi, \n\u017ce dla $f:\\C^n\\rightarrow\\C^m$ stopnia $d$ stopie\u0144 $\\C$-prostokre\u015blno\u015bci $\\s_f$ wynosi co najwy\u017cej $d-1$. Przed\\-sta\\-wia\\-my r\u00f3wnie\u017c wyniki dla dowolnego \ncia\u0142a algebraicznie domkni\u0119tego $\\K$ - Twierdzenia \\ref{kn}, \\ref{kw}, \\ref{kkw} oraz dla cia\u0142a liczb rzeczywistych $\\R$ - Twier\\-dze\\-nia \n\\ref{rn1}, \\ref{rxw1}, \\ref{multc1}.\n\nW ostatnim rozdziale dowodzimy $\\K$-prostokre\u015blno\u015bci zbior\u00f3w zwi\u0105\\-za\\-nych ze zbiorem punkt\u00f3w sta\u0142ych dzia\u0142ania grupy algebraicznej. \nDok\u0142adnie w Twierdzeniu \\ref{glowne} dowodzimy, \u017ce je\u017celi nietrywialna, sp\u00f3jna grupa unipotentna dzia\u0142a efektywnie na rozmaito\u015bci afinicznej, \nto zbi\u00f3r pun\\-kt\u00f3w sta\u0142ych tego dzia\u0142ania jest $\\K$-prostokre\u015blny. W szczeg\u00f3lno\u015bci nie zawiera izolowanych punkt\u00f3w. \nNatomiast w Twierdzeniu \\ref{glowne1} pokazujemy, \u017ce gdy niesko\u0144czona grupa algebraiczna dzia\u0142a efektywnie na przestrzeni afinicznej $\\K^n$, to ka\u017cda hiperpowierzchnia \nzawarta w zbiorze punkt\u00f3w sta\u0142ych jest $\\K$-prostokre\u015blna.\n\n\\pagebreak\n\\addcontentsline{toc}{chapter}{Abstract}\n\\chapter*{Abstract}\n\nThe main goal of this thesis is to study $\\K$-uniruled sets that appear in affine geometry. \n\nThe first Chapter discusses the property of $\\K$-uniruledness and its equi\\-valent conditions.\nLet $\\K$ be an algebraically closed field. A curve $\\Gamma$ is called parametric if it is \nan image of the field $\\K$ under a regular map. In \\ref{k-uniruleddef} we define the degree of $\\K$-uniruledness \nfor an affine variety $X$. 
The degree of $\\K$-uniruledness \nof an affine variety $X$ is at most $d$ if $X$ is covered by parametric curves of degree at most $d$. \nIn Proposition \\ref{k-uniruledofdeg} we prove two equivalent conditions. In the first one we require that only a\ndense subset of the variety $X$ is covered with such curves. Another equivalent condition asserts the existence of an \naffine variety $W$ with $\\dim W=\\dim X-1$, and a dominant polynomial map $\\phi:\\K\\times W\\rightarrow X$ of degree in the first coordinate\nat most $d$. We state a new Definition \\ref{k-uniruleddef} according to which $\\K$-uniruled \nvarieties are those with a finite degree of $\\K$-uniruledness. One of the advantages of the new definition is that by \nProposition \\ref{indep} it does not depend on the base field. In Section \\ref{rr} we present analogous results for the field of real \nnumbers.\n\nTheorem \\ref{Sfkuni} asserts that if $f:X\\rightarrow Y$ is a generically finite regular mapping between affine \nvarieties and $X$ is $\\K$-uniruled, then the set $\\s_f$ of points at which $f$ is not proper is also $\\K$-uniruled. \nIn the second Chapter we bound from above the degree of $\\K$-uniruledness of the set $\\s_f$ in terms of the degree of \nthe mapping $f$ and the degree of $\\K$-uniruledness of the variety $X$. For the field of complex numbers $\\C$ we get \nthe best possible results, Theorems \\ref{cn} and \\ref{multc}. The first one gives a bound by the product of the degree \nof $f$ and the degree of $\\C$-uniruledness of the variety $X$. The second theorem bounds the degree of $\\C$-uniruledness \nof the set $\\s_f$ by $d-1$ for a mapping $f:\\C^n\\rightarrow\\C^m$ of degree $d$. 
We also present results for an arbitrary \nalgebraically closed field $\\K$ - Theorems \\ref{kn}, \\ref{kw}, \\ref{kkw} and for the field of real numbers $\\R$ - Theorems \\ref{rn1}, \\ref{rxw1}, \\ref{multc1}.\n\nIn the last Chapter we show that some sets associated with the set of fixed points of an algebraic group action are $\\K$-uniruled. \nIn Theorem \\ref{glowne} we prove that if a nontrivial connected unipotent group acts effectively on an affine variety, then \nthe set of fixed points of this action is a $\\K$-uniruled variety. In particular, there are no isolated fixed points. \nIn Theorem \\ref{glowne1}, on the other hand, we show that if an arbitrary infinite algebraic group acts effectively on an affine space $\\K^n$, \nthen every hypersurface contained in the set of fixed points is $\\K$-uniruled.\n\n\\pagebreak\n\\addcontentsline{toc}{chapter}{Acknowledgements}\n\\chapter*{Acknowledgements}\n\nFirst and foremost I would like to thank my supervisor Zbigniew Jelonek for his constant and enormous help during my work on this thesis. Most of the\nresults presented here come from our two joint papers. It was a great pleasure and opportunity to learn from him. Throughout the thesis he provided\nme with encouragement and useful ideas. I am sincerely grateful for his effort.\n\nI would also like to express my gratitude to professors Rosa-Maria Mir\u00f3-Roig, Jaros\u0142aw Wi\u015bniewski and Mikhail Zaidenberg for many interesting and \nfruitful discussions. I also thank S\u0142awomir Cynk, with whom I wrote my master thesis, and who introduced me to algebraic geometry.\n\nI acknowledge the administrative help of Anna Poczma\u0144ska.\n\nWriting this thesis would not have been possible without the constant support of my family. 
I thank my mum a lot for her love and motivation.\n\nI thank my friends: Piotr Achinger, Agnieszka Bodzenta, Maria Donten-Bury, Micha\u0142 Farnik, Wojciech Lubawski, Mateusz Micha\u0142ek and Karol Palka \nfor interesting discussions, questions, answers and a great working atmosphere.\n\nI acknowledge the support of the Polish Ministry of Science and Higher Education, grant N N201 413139.\n\\pagebreak\n\n\\chapter{$\\K$-uniruled varieties}\n\nThis Chapter can be treated as a preliminary to the rest of the thesis.\n\nWe are going to recall an old, well-known definition of $\\K$-uniruled varieties and introduce our new one. The relation between both definitions will \nbe given. Additionally, for $\\K$-uniruled varieties we introduce a degree of $\\K$-uniruledness. \n\nWe end the Chapter by stating a few open problems concerning bounding the degree of $\\K$-uniruledness. \n\nIn the whole Chapter, unless stated otherwise, $\\K$ is assumed to be an arbitrary algebraically closed field.\n\n\\section{Introduction}\n\nWe begin with a presentation of some necessary preliminaries.\n\n\\begin{proposition}\\label{birrat} Let $\\Gamma \\subset \\K^m$ be an affine curve. The following conditions are equivalent:\n\\begin{enumerate}\n\\item there exists a regular dominant map $\\varphi:\\K \\rightarrow \\Gamma$,\n\\item there exists a regular birational map $\\varphi: \\K \\rightarrow \\Gamma$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nIt is an immediate consequence of L\\\"{u}roth's Theorem \\ref{luroth}. 
\n\\end{proof}\n\n\\begin{theorem}[L\\\"{u}roth]\\label{luroth}\nSuppose $\\Le$ and $\\M$ are arbitrary fields (not necessarily algebraically closed), and $\\xi$ is transcendental over $\\Le$, such that \n$$\\Le\\subsetneq\\M\\subset\\Le(\\xi).$$ Then $\\M=\\Le(\\eta)$ for some $\\eta$ in $\\Le(\\xi)$.\n\\end{theorem}\n\n\\begin{proof}(see \\cite{pi})\nSuppose $\\eta\\in\\Le(\\xi)\\setminus\\Le$; then \n$$\\eta=\\frac{f(\\xi)}{g(\\xi)}$$\nfor some polynomials $f,g\\in\\Le[X]$, at least one of which is of positive degree. Moreover we may assume that $f$ and $g$ have no common factor in $\\Le[X]$ of positive degree. Since $\\eta\\notin\\Le$, we have\n$$\\deg(f(X)-g(X)\\eta)=\\max(\\deg(f),\\deg(g)).$$\nClearly $f(\\xi)-g(\\xi)\\eta=0$, so $\\xi$ is algebraic over $\\Le(\\eta)$, hence $\\eta$ is transcendental over $\\Le$. As a consequence $f(X)-g(X)\\eta$ is irreducible in $\\Le(\\eta)[X]$, and\n$$\\max(\\deg(f),\\deg(g))=[\\Le(\\xi):\\Le(\\eta)].$$\n\nChoosing $\\eta$ from $\\M\\setminus\\Le$ shows that $\\Le(\\xi)$ is algebraic of finite degree over $\\M$. Denote $[\\Le(\\xi):\\M]=n$; then the minimal polynomial of $\\xi$ over $\\M$ can be written as $\\frac{p(\\xi)}{q(\\xi)}\\sum_{i=0}^{n}f_i(\\xi)X^i$, so that\n$$\\frac{p(\\xi)}{q(\\xi)}\\sum_{i=0}^{n}f_i(\\xi)\\xi^i=0,$$\nwhere $$p,q,f_i\\in\\Le[X],\\text{ }\\frac{p(\\xi)}{q(\\xi)}f_i(\\xi)\\in\\M,$$\nand the polynomials $f_i$ have no common factor of positive degree. We have $f_n(\\xi)\\neq 0$, and since $\\xi$ is not algebraic over $\\Le$,\n$$\\eta:=\\frac{f_j(\\xi)}{f_n(\\xi)}\\notin\\Le$$\nfor some $j$. Clearly $\\eta\\in\\M$. Write \n$$\\eta=\\frac{f(\\xi)}{g(\\xi)}$$\nas in the first part of the proof, with $f$ and $g$ coprime. Since $\\xi$ is a root of $f(X)-g(X)\\eta\\in\\M[X]$, this polynomial is divisible in $\\M[X]$ by $\\frac{p(\\xi)}{q(\\xi)}\\sum_{i=0}^{n}f_i(\\xi)X^i$. Therefore \n$$f(X)g(\\xi)-g(X)f(\\xi)=r(X,\\xi)\\sum_{i=0}^{n}f_i(\\xi)X^i,\\;\\;\\;\\;\\;\\;\\;\\;(\\star)$$\nfor some polynomial $r\\in\\Le[X,Y]$. \n\nSay the degree of $f(X)g(\\xi)-g(X)f(\\xi)$ in $\\xi$ is $m$.
This is the same as the degree in $X$, so $n\\leq m$. Additionally\n$$m\\leq\\max(\\deg(f),\\deg(g))\\leq\\max(\\deg(f_j),\\deg(f_n))\\leq\\max(\\deg(f_0),\\dots,\\deg(f_n)).$$\nBy $(\\star)$, there must be equalities, and $r$ must be constant in $\\xi$. But now $f(X)$ and $g(X)$ are both divisible by $r(X)$, hence $r$ is also constant in $X$. We get that $n=m$, and $\\M=\\Le(\\eta).$ \n\\end{proof}\n\n\\begin{definition}\nAn affine curve $\\Gamma\\subset\\K^m$ is called a \\textup{parametric curve}, if the equivalent conditions from Proposition \\ref{birrat} hold.\n\\end{definition}\n\nWe are ready to present the previously known definition of $\\K$-uniruled varieties, which was introduced in \\cite{st}.\n\n\\begin{definition}\nAn affine variety $X\\subset\\K^m$ is called \\textup{$\\K$-uniruled}, if for every point $x\\in X$ there exists a parametric curve $l_x\\subset X$ passing through $x$. In other words, $X$ is covered by parametric curves.\n\\end{definition}\n\nA parametric curve is the image of the affine line $\\K$, which is irreducible, so it is irreducible as well. Hence each parametric curve on a variety is contained in some irreducible component. So a variety is $\\K$-uniruled if and only if all its irreducible components are $\\K$-uniruled.\n\nThe following characterization of components of $\\K$-uniruled varieties is known \\cite{st}.\n\n\\begin{proposition}\\label{k-uniruled}\nLet $\\K$ be an uncountable field and let $X\\subset \\K^m$ be an irreducible affine variety. The following conditions are equivalent:\n\\begin{enumerate}\n\\item $X$ is $\\K$-uniruled,\n\\item there exists an open, non-empty subset $U\\subset X$, such that for every point $x\\in U$ there exists a parametric curve $l_x\\subset X$ passing through $x$, \n\\item there exists an affine variety $W$ with $\\dim W = \\dim X-1$, and a regular dominant map $\\phi: \\K\\times W\\rightarrow X$. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nImplication $(1)\\Rightarrow (2)$ is trivial.
\n\nTo prove implication $(2)\\Rightarrow (3)$ let us define\n$$S_d=\\{\\varphi:\\K\\rightarrow\\K^m\\text{ such that }\\varphi(\\K)\\subset X\\text{ and }\\deg\\varphi=d\\}.$$ \nEach $\\varphi=(\\varphi_1,\\dots,\\varphi_m)$, where $\\varphi_i(t)=\\sum_{j=0}^{d}a_{j}^{i} t^{j}$, corresponds to a point \n$$(a^1_0,\\dots,a^m_d)\\in(\\K^{d+1})^m\\setminus(\\K^d\\times\\{0\\})^m.$$ \nThe condition $\\varphi(\\K)\\subset X$ is given by polynomial equations, so $S_d$ is a quasiprojective variety. Consider the morphism: \n$$F_d:\\K\\times S_d\\ni (t,\\varphi)\\rightarrow \\varphi(t)\\in X.$$ \nLet us denote the image by $X_d:=F_d(\\K\\times S_d)$. We know that $U\\subset\\bigcup_{d\\in\\N}X_d$, since for every point $x\\in U$ there is a parametric curve $l_x\\subset X$ passing through $x$. Let $\\overline{X_d}$ be the closure of $X_d$ in $X$. From Baire's Theorem \\ref{baire} for the Zariski topology there exists $d$ such that $\\overline{X_d}=X$. Now the map \n$$F_d:\\K\\times S_d\\rightarrow X$$\nis dominant, so when we restrict it to some irreducible component $Y$ (suppose $Y\\subset \\K^M$) of $S_d$ the map $\\Phi=F_d\\vert_{\\K\\times Y}$ is \nstill dominant. Suppose $\\dim X=n$ and $\\dim Y=s$; on an open subset of $X$ the fibers of the map $\\Phi$ are of dimension $s+1-n$. Let $x$ be one of \nsuch points. From the construction of the set $S_d$ we know that the fiber $F=\\Phi^{-1}(x)$ does not contain any line of the type $\\K\\times\\{y\\}$, \nso in particular the image $F'$ of the fiber $F$ under the projection $\\K\\times Y\\rightarrow Y$ has the same dimension. For a general linear subspace \n$L\\subset \\K^M$ of dimension $M+n-s-1$ the dimension of $L\\cap F'$ equals $0$. Let us fix such $L$, and let $R$ be any irreducible component of \n$L\\cap Y$ intersecting $F'$.
Now the mapping\n$$\\Phi\\vert_{\\K\\times R}:\\K\\times R\\rightarrow X$$\nconfirms the assertion, since it has one fiber of dimension $0$ (at $x$), the dimension of $R$ is $n-1$ (at most $n-1$, because of the $0$ dimensional fiber, \nat least $n-1$ because of the small dimension of $L$), so as a consequence it is dominant. \n\nTo prove implication $(3)\\Rightarrow (1)$ we note that for \n$$\\phi: \\K\\times W\\ni (t,w)\\to \\phi(t,w)\\in X$$ \nthere exists $d\\in\\N$ such that $\\deg_t \\phi \\leq d$. Next we use implication $(3)\\Rightarrow (1)$ from Proposition \\ref{k-uniruledofdeg}, which gives us condition $(1)$. \n\\end{proof}\n\nBaire's Theorem reads as follows.\n\n\\begin{theorem}[Baire]\\label{baire}\nLet $\\K$ be an uncountable field, let $X\\subset\\K^m$ be an irreducible affine variety, and let $X_d$ for $d\\in\\N$ be its closed subsets. If $\\bigcup_{d\\in\\N}X_d$ contains a non-empty open subset $U$ of $X$, then $X_d=X$ for some $d$. \n\\end{theorem}\n\n\\begin{proof}\nWe are going to prove it by induction on the dimension of $X$. If $\\dim X=0$, then $X$ is just a point, and the assertion is clearly trivial. \n\nWhen $\\dim X>0$, there exists a regular function $f:\\K^m\\rightarrow \\K$ non-constant on $X$ (one of the coordinates will do). \n\nAssume that the assertion is false, which means that all $X_d$ are of a lower dimension than $X$, and consider the sets \n$$X^c=X\\cap\\{f=c\\}$$ \nfor $c\\in\\K$, which are of pure codimension one. When $U\\cap X^c\\neq\\emptyset$, one of the irreducible \ncomponents $R$ of $X^c$ satisfies the conditions of the theorem with the sets $R\\cap X_d$ for $d\\in\\N$. From the inductive \nassumption we know that $R$ equals $R\\cap X_d$ for some $d$, and hence equals one of the irreducible components of $X_d$ (they are of the same dimension).
\nBut there are only countably many irreducible components of the sets $X_d$, $d\\in\\N$, and uncountably many $c\\in\\K$ for \nwhich $U\\cap X^c\\neq\\emptyset$, since the open set $U$ intersects almost all $X^c$ (all except finitely many). This contradiction \nshows that the assumption was false.\n\\end{proof}\n\n\\section{The degree of $\\K$-uniruledness}\n\nFor an uncountable field $\\K$ there is a nice characterization of $\\K$-uniruled varieties, namely Proposition \\ref{k-uniruled}. However, to work over an arbitrary algebraically closed field we need a refined version of the definition. We introduced it in our papers \\cite{jela}, \\cite{jela2}. It coincides with the older one for uncountable fields.\n\nMoreover, with the new definition of $\\K$-uniruledness we are able to measure the degree of $\\K$-uniruledness of $\\K$-uniruled varieties.\n\n\\begin{definition}\nAn affine curve $\\Gamma\\subset \\K^m$ is called a \\textup{parametric curve of degree at most $d$}, if there exists a polynomial dominant map $f:\\K\\rightarrow \\Gamma$ of degree at most $d$ (by degree of $f=(f_1,\\dots,f_m)$ we mean $\\deg f:=\\max_i \\deg f_i$).\n\\end{definition}\n\nNow we prove the following:\n\n\\begin{proposition}\\label{k-uniruledofdeg}\nLet $X\\subset \\K^m$ be an irreducible affine variety of dimension $n$, and let $d$ be an integer. The following conditions are equivalent:\n\\begin{enumerate}\n\\item for every point $x\\in X$ there exists a parametric curve $l_x\\subset X$ of degree at most $d$ passing through $x$, \n\\item there exists a subset $U\\subset X$, dense in the Zariski topology, such that for every point $x\\in U$ there exists a parametric curve $l_x\\subset X$ of degree at most $d$ passing through $x$, \n\\item there exists an affine variety $W$ with $\\dim W = \\dim X-1$, and a dominant polynomial map $\\phi: \\K\\times W\\ni (t,w)\\to \\phi(t,w)\\in X$, such that $\\deg_t \\phi \\leq d$.
\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nImplication $(1)\\Rightarrow (2)$ is obvious. \n\nWe prove $(2)\\Rightarrow (1)$. Suppose that \n$$X=\\{x\\in \\K^m:f_1(x)=0,\\dots,f_r(x)=0\\}.$$\nFor $a=(a_1,\\dots,a_m)\\in \\K^m$ and $b=(b_{1,1}:\\dots:b_{m,d})\\in\\p^M$, where $M=dm-1$, let $$\\varphi_{a,b}:\\K\\ni t\\rightarrow (a_1+b_{1,1}t+\\dots+b_{1,d}t^d,\\dots,a_m+b_{m,1}t+\\dots+b_{m,d}t^d)\\in\\K^m$$\nbe a parametric curve of degree at most $d$. Consider the variety and the projection\n$$\\K^m\\times\\p^M\\supset V=\\{(a,b)\\in \\K^m\\times\\p^M:\\forall_{t,i}\\;f_i(\\varphi_{a,b}(t))=0\\}\\ni(a,b)\\rightarrow a\\in \\K^m.$$\nThe definition of the set $V$ says that the parametric curves $\\varphi_{a,b}$ are contained in $X$. Hence the image of the projection is contained in $X$ and contains the dense set $U$, since through every point of $U$ passes a parametric curve of degree at most $d$. But since $\\p^M$ is complete and $V$ is closed we have that the image is closed, and hence it is the whole set $X$. \n\nLet us prove $(2)\\Rightarrow (3)$. For some affine chart $V_j=V\\cap \\{b_j=1\\}$ the above map is dominant. We consider the following dominant mapping\n$$\\Phi: \\K\\times V_j\\ni (t,\\phi)\\rightarrow \\phi(t)\\in X.$$\nAfter replacing $V_j$ by one of its irreducible components $Y\\subset \\K^M$ the map remains dominant. Suppose $\\dim X=n$ and $\\dim Y=s$; on an open subset of $X$ the fibers of the map $\\Phi'=\\Phi\\vert_{\\K\\times Y}$ are of dimension $s+1-n$. Let $x$ be one of such points. From the construction of the set $V$ we know that the fiber $F=\\Phi'^{-1}(x)$ does not contain any line of the type $\\K\\times\\{y\\}$, so in particular the image $F'$ of the fiber $F$ under the projection $\\K\\times Y\\rightarrow Y$ has the same dimension. For a general linear subspace $L\\subset \\K^M$ of dimension $M+n-s-1$ the dimension of $L\\cap F'$ equals $0$. Let us fix such $L$, and let $R$ be any irreducible component of $L\\cap Y$ intersecting $F'$.
Now the mapping \n$$\\Phi'\\vert_{\\K\\times R}:\\K\\times R\\rightarrow X$$ \nconfirms the assertion, since it has one fiber of dimension $0$ (at $x$), the dimension of $R$ is $n-1$ (at most $n-1$, because of the $0$ dimensional fiber, at least $n-1$ because of the small dimension of $L$), so as a consequence it is dominant. \n\nTo prove the implication $(3)\\Rightarrow (2)$ it is enough to notice that for each $w\\in W$ the map \n$$\\phi_w: \\K\\ni t\\rightarrow\\phi(t,w)\\in X$$\nis a parametric curve of degree at most $d$ or it is constant. The image of $\\phi$ contains an open dense subset, so after excluding points with \ninfinite preimages (a closed set of positive codimension) we get an open set $U$ with the required properties. \n\\end{proof}\n\nWe are ready to define the degree of $\\K$-uniruledness, a parameter which measures the degree of the parametric curves covering a variety.\n\n\\begin{definition}\\label{k-uniruleddef}\nWe say that an affine variety $X$ has \\textup{degree of $\\K$-uni\\-ruled\\-ness at most $d$}, if all its irreducible components satisfy the above conditions. The \\textup{degree of $\\K$-uniruledness} is the minimum $d$ for which it has degree of $\\K$-uni\\-ruled\\-ness at most $d$. An affine variety is called \\textup{$\\K$-uniruled}, if it is $\\K$-uniruled of some degree.\n\\end{definition}\n\nThe condition (3) from Propositions \\ref{k-uniruled} and \\ref{k-uniruledofdeg} is clearly the same, so both definitions of \n$\\K$-uniruled varieties coincide for an uncountable field $\\K$. From now on we are going to use only the second one, Definition \\ref{k-uniruleddef}.\n\n\\begin{example}\nLet $X\\subset \\K^n$ be a hypersurface of degree $d1$, $f:X\\rightarrow Y$ is a regular dominant map, and let $y\\in \\s_f$, then there exists an affine variety $X'\\subset X$, such that $$\\dim X'=\\dim X-1\\text{ and }y\\in \\s_{f\\vert_{X'}}$$ Then the assertion follows from the induction.\n\nSuppose $X\\subset\\K^m$.
Without loss of generality we can assume $X$ is irreducible. From Proposition \\ref{graph} we know that \n$$y\\in \\s_f=p_2(\\overline{\\graph(f)}\\setminus \\graph(f)).$$ \nSo there exists\n$$x\\in\\overline{\\graph(f)}\\setminus\\graph(f)=\\overline{\\graph(f)}\\cap (\\p^m\\setminus\\K^m),$$\nsuch that $(x,y)\\in\\overline{\\graph(f)}$.\nConsider irreducible hypersurfaces $H$ in $\\overline{\\graph(f)}$ passing through $(x,y)$ and different \nfrom $\\overline{\\graph(f)}\\setminus\\graph(f)$. Take any such $H$; then $X'$, equal to the projection of $H\\cap\\K^m$ to $X$, satisfies our requirements.\n\\end{proof}\n\nThe two most important properties of the set $\\s_f$ are the following:\n\n\\begin{theorem}[\\cite{je99}, page $5$, Theorem $3.8$]\\label{hiper}\nLet $X,Y$ be affine varieties, and let $f:X\\rightarrow Y$ be a regular generically finite map. Then the set $\\s_f$ is a hypersurface in $\\overline{f(X)}$ or it is empty. \n\\end{theorem}\n\n\nThe second property is that if $X$ is additionally $\\K$-uniruled, then the set $\\s_f$ is also $\\K$-uniruled. We are going to prove it. First we will consider the case of surfaces.\n\nLet $X$ be a smooth projective surface and let $D=\\sum_{i=1}^n D_i$ be a simple normal crossing divisor on $X$ (here we consider only reduced divisors). Let $\\graph(D)$ be the graph of $D$, with vertices $D_i$, and an edge between $D_i$ and $D_j$ for each point of intersection of $D_i$ and $D_j$. \n\n\\begin{definition}\nWe say that a simple normal crossing divisor $D$ on a smooth surface $X$ is a tree if $\\graph(D)$ is a tree (it is connected and acyclic).\n\\end{definition}\n\nThe following fact is obvious.\n\n\\begin{proposition}\\label{acykl}\nLet $X$ be a smooth projective surface and let $D\\subset X$ be a divisor which is a tree.
If $D', D''\\subset D$ are connected divisors without common components, then $D'$ and $D''$ have at most one common point.\n\\end{proposition}\n\n\\begin{theorem}\\label{Sfsurface}\nLet $\\Gamma$ be an affine curve, and let $f:\\Gamma\\times\\K\\rightarrow\\K^m$ be a generically finite mapping. Then the set $\\s_{f}$ is $\\K$-uniruled or it is empty.\n\\end{theorem}\n\n\\begin{proof}\nTake an affine normalization $\\nu:\\Gamma^{\\nu}\\rightarrow\\Gamma$ (see \\cite{sh}) of the curve $\\Gamma$. From Proposition \\ref{zlozeniesk} we get that $\\s_f=\\s_{\\nu\\circ f}$, because the normalization is a finite mapping. Hence we can assume that the curve $\\Gamma$ is smooth. \n\nLet $\\overline{\\Gamma}$ be a smooth completion of $\\Gamma$, and let us denote $\\overline{\\Gamma}\\setminus\\Gamma=\\{a_1,\\dots,a_l\\}$. Let $X=\\Gamma\\times\\K$ and let $\\overline{X}= \\overline{\\Gamma}\\times\\p^1$ be the projective closure of $X$. The divisor \n$$D=\\overline{X}\\setminus X=\\overline{\\Gamma}\\times\\infty+\\sum_{i=1}^l \\{a_i\\}\\times\\p^1$$\nis a tree.
We can resolve points of indeterminacy of the rational map $f:\\overline{X}\\dashrightarrow\\p^m$.\n\\vspace{5mm}\n\n\\begin{center}\n\\begin{picture}(240,160)(-40,40)\n\\put(-20,117.5){\\makebox(0,0)[l]{$\\pi\\left\\{\\rule{0mm}{2.7cm}\\right.$}}\n\\put(0,205){\\makebox(0,0)[tl]{$\\overline{X}_m$}}\n\\put(0,153){\\makebox(0,0)[tl]{$\\overline{X}_{m-1}$}}\n\\put(4,105){\\makebox(0,0)[tl]{$\\vdots$}}\n\\put(0,40){\\makebox(0,0)[tl]{$\\overline{X}$}}\n\\put(170,40){\\makebox(0,0)[tl]{$\\p^m$}}\n\\put(80,50){\\makebox(0,0)[tl]{$f$}}\n\\put(100,140){\\makebox(0,0)[tl]{$f'$}}\n\\put(10,70){\\makebox(0,0)[tl]{$\\pi_1$}}\n\\put(10,130){\\makebox(0,0)[tl]{$\\pi_{m-1}$}}\n\\put(10,180){\\makebox(0,0)[tl]{$\\pi_m$}}\n\\put(5,190){\\vector(0,-1){30}} \\put(5,140){\\vector(0,-1){30}}\n\\put(5,80){\\vector(0,-1){33}}\n\\multiput(20,35)(8,0){17}{\\line(1,0){5}}\n\\put(157,35){\\vector(1,0){10}} \\put(20,200){\\vector(1,-1){150}}\n\\end{picture}\n\\end{center}\n\\vspace{5mm}\n \nDenote the proper transform of $\\overline{\\Gamma}\\times\\infty$ by $\\pi^*(\\overline{\\Gamma}\\times\\infty)$. It is an easy observation that $f'(\\pi^*(\\overline{\\Gamma}\\times\\infty))\\subset H_\\infty$, where $H_\\infty$ is the hyperplane at infinity of $\\p^m$. \n\nNote that the divisor $D'=\\pi^*(D)$ is a tree, since each $\\pi_i$ is a blow up. Moreover irreducible components of $D'$ are $\\pi^*(\\overline{\\Gamma}\\times\\infty)$ and $\\p^1$'s. The curve $C=f'^{-1}(H_\\infty)\\subset D'$ is a complement of a semi-affine variety $f'^{-1}(\\K^m)$ hence it is connected (for details see \\cite{je99}, Lemma~$4.5$). \n\nNow $\\s_f=f'(D'\\setminus C)$.\nLet $R\\subset \\s_f$ be an irreducible component. From Proposition \\ref{hiper} we know that $R$ is a curve. So there is an irreducible curve $Z\\subset D'$, such that $R=f'(Z\\setminus C)$. By Proposition \\ref{acykl} we have that $Z$ has at most one common point with $C$. 
Of course it has at least one, since every morphism from an irreducible projective variety to an affine variety is constant. So $R=f'(Z\\setminus C)=f'(\\K)$. This completes the proof due to Proposition \\ref{k-uniruledofdeg} and the fact that $\\s_f$ consists of only a finite number of parametric curves.\n\\end{proof}\n\n\\begin{theorem}[\\cite{je10}, page $3673$, Theorem $4.1$]\\label{Sfkuni}\nLet $X,Y$ be affine varieties, and let $f:X\\rightarrow Y$ be a regular dominant map. If $X$ is additionally $\\K$-uniruled, then the set $\\s_f$ is also $\\K$-uniruled or it is empty.\n\\end{theorem}\n\n\\begin{proof}\nDue to Proposition \\ref{k-uniruledofdeg} there exists a regular dominant map $g:\\K\\times W\\rightarrow X$. From Proposition \\ref{zlozenie} we have \nthat $\\s_f\\subset \\s_{g\\circ f}$, and from Theorem \\ref{hiper} both sets are of the same pure dimension. Therefore it is enough to prove the assertion for $f:\\K\\times W\\rightarrow Y$. \n\nLet $y\\in \\s_f$. Using Lemma \\ref{curve} we get that there is a curve $\\Gamma\\subset\\K\\times W$ such that $f\\vert_{\\Gamma}$ is not proper at $y$. Let $p_2:\\K\\times W\\rightarrow W$ be the projection, and $\\Gamma'=p_2(\\Gamma)$. Of course $\\Gamma'$ is still a curve, because otherwise $\\Gamma=\\K$ and $f\\vert_{\\Gamma}$ would be finite. We have that $\\Gamma\\subset\\K\\times\\Gamma'$, so \n$$y\\in \\s_{f\\vert_{\\Gamma}}\\subset \\s_{f\\vert_{\\K\\times\\Gamma'}}\\subset \\s_f.$$ \nFrom Theorem \\ref{Sfsurface} we get that there exists a parametric curve contained in $\\s_f$ passing through $y$. By Propositions \\ref{k-uniruled} and \\ref{k-uniruledofdeg} this completes the proof for uncountable fields $\\K$. For countable fields we extend $X,Y,f$ to an uncountable field $\\K\\subset\\Le$ and use Proposition \\ref{indep}.\n\\end{proof}\n\nTo motivate one conjecture we will need one more property of the set $\\s_f$.
It describes its degree as a hypersurface.\n\n\\begin{theorem}[\\cite{je93}, page $264$, Theorem $15$]\\label{hiperc}\nLet $f:\\C^n\\rightarrow\\C^n$ be a dominant polynomial map. Then the degree of the hypersurface $\\s_f$ is at most $$\\frac{\\prod_i\\deg f_i-\\mu(f)}{\\min_i\\deg f_i},$$ where $\\mu(f)$ denotes the multiplicity of $f$.\n\\end{theorem}\n\n\n\\section{A complex field case}\\label{c}\n\nFor the whole section we assume that the base field is $\\C$. \n\nThe condition that a map is finite at a point $y$ is equivalent to the fact that it is locally\nproper in the sense of the classical topology (there exists an open neighborhood\n$U$ of $y$ such that $f^{-1}(\\overline{U})$ is compact). This\ncharacterization gives the following:\n\n\\begin{proposition}[\\cite{je93}, page $260$, Proposition $3$]\\label{topo}\nLet $f:X\\rightarrow Y$ be a generically finite mapping between\naffine varieties over $\\C$. Then $y\\in \\s_f$ if and only if there exists a\nsequence $(x_n)_{n\\in\\N}\\subset X$, such that $\\vert x_n\\vert\\rightarrow \\infty$ and\n$f(x_n)\\rightarrow y$.\n\\end{proposition}\n\nWe will need a classical theorem of complex analysis.\n\n\n\n\\begin{theorem}[Rouch\\'{e} \\cite{ru}]\\label{rouche}\nLet $f,g:\\C\\rightarrow\\C$ be holomorphic functions. Suppose that $\\vert g(z)\\vert<\\vert f(z)\\vert$ for every $z\\in\\partial D$, where $D$ is a bounded region. Then $f$ and $f+g$ have the same number of zeros (counted with multiplicities) inside $D$. \n\\end{theorem}\n\nWe are ready to prove the following:\n\n\\begin{theorem}\\label{cn}\nLet $f:\\C^n\\rightarrow \\C^m$ be a generically finite polynomial mapping of\ndegree $d$.
Then the set $\\s_f$ has degree of $\\C$-uniruledness at most $d-1$ or it is empty.\n\\end{theorem}\n\n\\begin{proof} Let $y\\in \\s_f$; by an affine transformation we can assume that \n$$y=O=(0,0,\\dots,0)\\in \\C^m.$$\nFor the same reason we can assume that $O\\notin f^{-1}(\\s_f)$.\n\nDue to Proposition \\ref{topo} there exists a sequence of points $x_k$ with\n$\\lvert x_k\\rvert\\to\\infty$ such that $f(x_k)\\to O$. Let us consider the lines\n$$L_k(t)=tO+(1-t)x_k=(1-t)x_k,\\;\\; t\\in\\C.$$\nDenote $l_k(t)=f(L_k(t))$.\nInfinite fibers cover only a subset of codimension at least $2$ in $\\C^n$, and due to Theorem \\ref{hiper} the codimension of $\\s_f$ equals $1$. Thus by Proposition \\ref{k-uniruledofdeg} we can assume that $\\deg l_k>0$. Note also that $\\deg l_k\\leq d$. Each curve $l_k$ is given by $m$ polynomials of one variable: \n$$l_k(t)=(\\sum_{i=0}^d a^1_i(k)\nt^i,\\dots,\\sum_{i=0}^d a^m_i(k) t^i).$$ \nHence we can associate $l_k$ with a point\n$$(a^1_0(k),\\dots,a^1_d(k);\\ldots;a^m_0(k),\\dots,a^m_d(k))\\in \\C^N,$$\nwhere $N=m(d+1)$.\n\nFor each $i$, when $k\\to \\infty$ we have $a^i_0(k)\\to 0$. So we\ncan change the parametrization of $l_k$ by putting $t\\rightarrow\n\\lambda_k t$, in such a way that $\\Vert l_k\\Vert=1,$ for $k\\gg 0$\n(we consider here $l_k$ as an element of $\\C^N$ with the Euclidean\nnorm). \n\nNow, since the unit sphere is compact, there exists a subsequence $(l_{k_r})_{r\\in\\N}$ of $(l_k)_{k\\in\\N}$, which is convergent to a polynomial mapping \n$$l : \\C\\rightarrow \\C^m,$$ with $l (0) = O$. Moreover, $l$ is non-constant, because $\\Vert l \\Vert\n= 1$, and $l(0)= O$. \n\nChoosing a subsequence we can also assume that the limit\n$$\\lim_{k\\rightarrow \\infty}\\lambda_k=\\lambda$$\nexists in the compactification of the field $\\C$. Consider two cases:\n\\begin{enumerate}\n\\item $\\lambda\\in\\C$ is finite. If $k\\to\\infty$, then $L_k(\\lambda_kt)=(1-\\lambda_kt)x_k\\to\\infty$ for $t\\not=\\lambda^{-1}$.\n\\item $\\lambda=\\infty$.
If $k\\to\\infty$, then $\\Vert L_k(\\lambda_k t)\\Vert \\geq (\\vert\\lambda_kt\\vert-1)\\Vert x_k\\Vert$, and hence $\\Vert L_k(\\lambda_k t)\\Vert \\to\n \\infty$ for every $t\\neq 0$.\n\\end{enumerate}\n\nOn the other hand, if $k\\to\\infty$\n$$f(L_k(\\lambda_k t))=l_k(\\lambda_kt)\\to l(t),$$ \nso using Proposition \\ref{topo} once more we get that the curve $l$ is contained in the set $\\s_f$. As a consequence the set $\\s_f$ has \ndegree of $\\C$-uniruled\\-ness at most $d$.\n\nTo complete the proof we should show that $\\deg l0$. Note also that $\\deg l_k\\leq d$. Each curve $l_k$ is given by $m$ polynomials of one\nvariable: \n$$l_k(t)=(\\sum_{i=0}^d a^1_i(k) t^i,\\dots,\\sum_{i=0}^d\na^m_i(k) t^i).$$\nAs before we can associate $l_k$ with a point\n$$(a^1_0(k),\\dots,a^1_d(k);\\ldots;a^m_0(k),\\dots,a^m_d(k))\\in \\C^N.$$\n\nFor each $i$, when $k\\to \\infty$ we have $a^i_0(k)\\to 0$. Thus we\ncan change the parametrization of $l_k$ by $t\\rightarrow\n\\lambda_k t$, in such a way that $\\Vert l_k\\Vert=1,$ for $k\\gg 0$\n(we consider $l_k$ as an element of $\\C^N$ with the Euclidean\nnorm). \n\nCompactness of the unit sphere implies that there exists a\nsubsequence $(l_{k_r})_{r\\in\\N}$ of $(l_k)_{k\\in\\N}$, which is convergent to a\npolynomial mapping $$l : \\C\\rightarrow \\C^m,$$ with $l(0) = O$.\nMoreover, $l$ is non-constant, because $\\Vert l \\Vert = 1$ and\n$l(0)= O$. \n\nWe can also assume that the limit\n$$\\lim_{k\\rightarrow \\infty}\\lambda_k=\\lambda$$ \nexists in the compactification of the field $\\C$. \nWe consider two cases:\n\\begin{enumerate}\n \\item $\\lambda$ is finite. If $k\\to\\infty$, then $L_k(\\lambda_kt)=((1-\\lambda_kt)a_k,w_k)\\to\\infty$ for $t\\not=\\lambda^{-1}$.\n\\item $\\lambda=\\infty$.
If $k\\to\\infty$, then $\\Vert L_k(\\lambda_k t)\\Vert \\geq \\max((\\vert\\lambda_kt\\vert-1)\\vert a_k\\vert,\\Vert w_k\\Vert)$, and $\\Vert L_k(\\lambda_k t)\\Vert \\to \\infty$ for every $t\\neq 0$.\n\\end{enumerate}\n\nBut on the other hand \n$$f(L_k(\\lambda_k t))=l_k(\\lambda_kt)\\to l(t),$$ so using Proposition \\ref{topo} once more we get that the curve $l$ is contained in the set $\\s_f$. Thus the set $\\s_f$ has degree of $\\C$-uniruledness at most $d$.\n\\end{proof}\n\n\\begin{corollary}\\label{wn}\nLet $f=(f_1,...,f_m) :\\C^n\\rightarrow \\C^m$ be a generically\nfinite polynomial mapping with $d=\\min_j\\max_i\\deg _{x_j}f_i$. Then\nthe set $\\s_f$ has degree of $\\C$-uniruledness at most $d$ or it is empty.\n\\end{corollary}\n\n\\begin{theorem}\\label{multc}\nLet $X$ be an affine variety with degree of $\\C$-uniruledness at\nmost $d_1$ and let $f:X\\rightarrow \\C^m$ be a\ngenerically finite polynomial mapping of degree $d_2$. Then the set $\\s_f$ has degree of $\\C$-uniruledness at\nmost $d_1d_2$ or it is empty.\n\\end{theorem}\n\n\\begin{proof}\nBy Definition \\ref{k-uniruleddef} there exists an affine variety\n$W$ with $\\dim W = \\dim X-1$, and a dominant polynomial map \n$$\\phi:\\C\\times W\\rightarrow X$$\n of degree at most $d_1$ in the first coordinate. The equality $\\dim (\\C\\times W) = \\dim X$ implies that\n$\\phi$ is generically finite, hence the map\n$$f\\circ\\phi:\\C\\times W\\rightarrow \\C^m$$ \nis also generically finite, of degree at most $d_1d_2$ in the first coordinate. Due to Theorem \\ref{cxw} the set $\\s_{f\\circ\\phi}$ \nhas degree of $\\C$-uniruledness at most $d_1d_2$.\nBy Proposition \\ref{zlozenie} there is an inclusion $\\s_f\\subset\\s_{f\\circ\\phi}$. From Theorem\n\\ref{hiper} we know that, if non-empty, both sets are of pure\ndimension $\\dim X-1$, so components of $\\s_f$ are components of\n$\\s_{f\\circ\\phi}$.
This implies the assertion.\n\\end{proof}\n\n\\begin{example}\nConsider the following polynomial mapping \n$$f: \\C^n\\ni (x_1,\\ldots, x_n)\\to (x_1, x_1x_2,\\ldots,x_1x_n)\\in \\C^n.$$ \nWe have $\\deg f=2$ and $\\s_f=\\{x\\in\\C^n:x_1=0\\}$. The set $\\s_f$ has degree of $\\C$-uniruledness equal to one. This shows that in general Theorems \\ref{cn}, \\ref{cxw} and Corollary \\ref{wn} cannot be improved.\n\\end{example}\n\n\\begin{example}\nConsider the following polynomial mapping \n$$f: \\C^2\\ni (x,y)\\to (x+(xy)^d, xy)\\in \\C^2.$$ \nWe have $\\deg f =2d$ and $\\s_f=\\{(s,t)\\in\\C^2:s=t^d\\}$. The set $\\s_f$ has degree of $\\C$-uniruledness equal to $d$. This shows that in general Corollary\n\\ref{wn} cannot be improved.\n\\end{example}\n\n\\begin{example}\nFor $n>2$ let $X=\\{ x\\in \\C^n : x_1x_2=1\\}$, and \n$$f:X\\ni (x_1,\\ldots, x_n)\\to (x_2,\\ldots, x_n)\\in\\C^{n-1}.$$ The variety $X$ has degree of $\\C$-uniruledness equal to one; moreover we have $\\deg f=1$ and $\\s_f=\\{x\\in\\C^{n-1}:x_1=0\\}$. So the set $\\s_f$ has degree of $\\C$-uniruledness one. This shows that in general Theorems \\ref{cxw}, \\ref{multc} cannot be improved.\n\\end{example}\n\n\\begin{remark}\nLet us note that by Proposition \\ref{indep} all results from this\nsection remain true for an arbitrary algebraically closed field of\ncharacteristic zero.\n\\end{remark}\n\n\\section{A real field case}\\label{r}\n\nFor the whole section we assume that the base field is $\\R$. \n\nLet $X\\subset\\R^m$ be a closed semialgebraic set, and let $f: X\\to\\R^n$ be a polynomial mapping. \nAs in the complex case we say that $f$ is not proper at a point $y\\in\\R^n$ if there is no neighborhood $U$ of $y$ such that $f^{-1}(\\overline{U})$ \nis compact. As before, we denote by $\\s_f$ the set of all points $y\\in\\overline{f(X)}$ at which the mapping $f$ is not proper. This set is also closed and semialgebraic \\cite{je02}.
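\n\nAs a simple illustration of these notions (an example analogous to the complex ones above, stated here for orientation), consider the mapping\n$$f:\\R^2\\ni (x,y)\\rightarrow (x,xy)\\in\\R^2,$$\nwhich is generically finite of degree $2$. Taking $x_k\\to 0$ and $y_k=c/x_k$ we see that every point $(0,c)$ lies in $\\s_f$, while $f$ is proper at every point with non-zero first coordinate. Hence $\\s_f=\\{0\\}\\times\\R$ is a parametric line, so it has degree of $\\R$-uniruledness equal to one.\n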
The results of \\cite{je02} can be generalized as follows:\n\n\\begin{theorem}\\label{rn1}\nLet $f:\\R^n\\rightarrow\\R^m$ be a generically finite polynomial mapping of degree $d$. Then the set $\\s_f$ has degree of $\\R$-uniruledness at most $d-1$ or it is empty.\n\\end{theorem}\n\n\\begin{theorem}\\label{rxw1}\nLet $X=\\R\\times W\\subset \\R\\times \\R^n$ be a closed semialgebraic cylinder and let \n$$f:\\R\\times W\\ni (t,w)\\rightarrow (f_1(t,w),\\dots,f_m(t,w))\\in \\R^m$$ \nbe a generically finite polynomial mapping. Assume that for every $i$ there is $\\deg _tf_i\\leq d$. Then the set $\\s_f$ has degree of $\\R$-uniruledness at most $d$ or it is empty.\n\\end{theorem}\n\n\\begin{corollary}\\label{rwn1}\nLet $f=(f_1,...,f_m) :\\R^n\\rightarrow \\R^m$ be a generically finite polynomial mapping with $d=\\min_j\\max_i\\deg _{x_j}f_i$. Then the set $\\s_f$ has degree of $\\R$-uniruledness at most $d$ or it is empty.\n\\end{corollary}\n\nProofs of these facts are exactly the same as in the complex case (see section \\ref{c}). To prove a real analog of Theorem \\ref{multc} we need some new ideas. \n\n\n\n\n\n\n\\begin{theorem}\\label{multc1}\nLet $X$ be a closed semialgebraic set with degree of $\\R$-uni\\-ruled\\-ness at most $d_1$, and let $f:X\\rightarrow\\R^m$ be a generically finite polynomial mapping of degree $d_2$. Then the set $\\s_f$ has degree of $\\R$-uniruledness at most $2d_1d_2$ or it is empty.\n\\end{theorem}\n\n\\begin{proof}\nLet $a\\in \\s_f$ and let $(x_k)_{k\\in\\N}\\subset X$ be a sequence of points such that $f(x_k)\\to a$ and $x_k\\to\\infty$. \nBy Proposition \\ref{real} there exists a semialgebraic curve $W$ and a generically finite polynomial map \n$$\\phi:\\R\\times W \\ni (t,w)\\to \\phi(t,w)\\in X,$$ \nsuch that $\\deg_t\\phi \\leq d_1$, and there exists a sequence $(y_k)_{k\\in\\N}\\subset\\R\\times W$\nsuch that $f(\\phi(y_k))\\to a$ and $\\phi(y_k)\\to\\infty$. In particular $a\\in\\s_{f\\circ\\phi}$. 
Let $\\Gamma$ be a Zariski closure of $W$. We can assume that $\\Gamma$ and its complexification $\\Gamma^c$ are smooth. Denote $Z:=\\R\\times\\Gamma$. We have the induced mapping $\\phi: Z \\to X$. Hence we have also the induced complex mapping $\\phi^c: Z^c:=\\C\\times\\Gamma^c\\to X^c$, where $Z^c,X^c$ denote the complexification of $Z$ and $X$ respectively.\n\nLet $\\overline{\\Gamma^c}$ be a smooth completion of $\\Gamma^c$ and let us denote $\\overline{\\Gamma^c}\\setminus\\Gamma^c=\\{ a_1,..., a_l\\}.$ Now $\\p^1\\C\\times\\overline{\\Gamma^c}$ is a projective closure of $Z^c$. The divisor \n$$D=\\overline{Z^c}\\setminus Z^c=\\infty\\times\\overline{\\Gamma^c}+\\sum_{i=1}^l \\p^1\\C\\times\\{a_i\\}$$ \nis a tree. The mapping $\\phi$ induces a rational mapping $\\phi: \\overline{Z^c}\\dashrightarrow\\overline{X^c}$, where $\\overline{X^c}$ denotes the\nprojective closure of $X^c.$ We can resolve points of indeterminacy of this mapping:\n\n\\vspace{3mm} \n\\begin{center}\\begin{picture}(240,160)(-40,40)\n\\put(-20,117.5){\\makebox(0,0)[l]{$\\pi\\left\\{\\rule{0mm}{2.7cm}\\right.$}}\n\\put(0,205){\\makebox(0,0)[tl]{$\\overline{Z^c}_m$}}\n\\put(0,153){\\makebox(0,0)[tl]{$\\overline{Z^c}_{m-1}$}}\n\\put(4,105){\\makebox(0,0)[tl]{$\\vdots$}}\n\\put(0,40){\\makebox(0,0)[tl]{$\\overline{Z^c}$}}\n\\put(170,40){\\makebox(0,0)[tl]{$\\overline{X^c}$}}\n\\put(80,50){\\makebox(0,0)[tl]{$\\phi$}}\n\\put(100,140){\\makebox(0,0)[tl]{$\\phi '$}}\n\\put(10,70){\\makebox(0,0)[tl]{$\\pi_1$}}\n\\put(10,130){\\makebox(0,0)[tl]{$\\pi_{m-1}$}}\n\\put(10,180){\\makebox(0,0)[tl]{$\\pi_m$}}\n\\put(5,190){\\vector(0,-1){30}} \\put(5,140){\\vector(0,-1){30}}\n\\put(5,80){\\vector(0,-1){33}}\n\\multiput(20,35)(8,0){17}{\\line(1,0){5}}\n\\put(157,35){\\vector(1,0){10}} \\put(20,200){\\vector(1,-1){150}}\n\\end{picture}\n\\end{center}\n\n\\vspace{3mm} \nObserve that $H:=\\pi^{-1}(\\overline{Z})$ has a structure of a real variety and $\\R\\times\\Gamma\\subset H$. 
Denote $Q:=\\overline{Z_m}\\cap \\phi'^{-1}(X^c).$ Then the mapping $\\phi': Q\\to X^c$ is proper. Moreover, $Q=\\overline{Z_m}\\setminus\n\\phi'^{-1}(\\overline{X^c}\\setminus X^c).$ The divisor\n$D_1=\\phi'^{-1}(\\overline{X^c}\\setminus X^c)$ is connected as a\ncomplement of a semi-affine variety $\\phi'^{-1}(X^c)$ (for\ndetails see \\cite{je99}, page $8$, Lemma~$4.5$). Note that the divisor\n$D'=\\pi^*(D)$ is a tree. Hence the divisor $D_1\\subset D'$ is also\na tree.\n\nWe can consider the mapping $f'=f\\circ\\phi': \\R\\times \\Gamma\\to \\R^n$ as the regular mapping $f': Q\\to \\C^n.$ This mapping\ninduces a rational mapping from $H^c=\\overline{Z_m^c}$ to $\\p^n\\C$. As\nbefore we can resolve its points of indeterminacy:\n\\vspace{3mm} \n\n\\begin{center}\\begin{picture}(240,160)(-40,40)\n\\put(-20,117.5){\\makebox(0,0)[l]{$\\psi\\left\\{\\rule{0mm}{2.7cm}\\right.$}}\n\\put(0,205){\\makebox(0,0)[tl]{$\\overline{H^c_k}$}}\n\\put(0,153){\\makebox(0,0)[tl]{$\\overline{H^c_{k-1}}$}}\n\\put(4,105){\\makebox(0,0)[tl]{$\\vdots$}}\n\\put(0,40){\\makebox(0,0)[tl]{$\\overline{H^c}$}}\n\\put(170,40){\\makebox(0,0)[tl]{${\\p^n(\\C)}$}}\n\\put(80,50){\\makebox(0,0)[tl]{$f'$}}\n\\put(100,140){\\makebox(0,0)[tl]{$F$}}\n\\put(10,70){\\makebox(0,0)[tl]{$\\rho_1$}}\n\\put(10,130){\\makebox(0,0)[tl]{$\\rho_{k-1}$}}\n\\put(10,180){\\makebox(0,0)[tl]{$\\rho_k$}}\n\\put(5,190){\\vector(0,-1){30}} \\put(5,140){\\vector(0,-1){30}}\n\\put(5,80){\\vector(0,-1){33}}\n\\multiput(20,35)(8,0){17}{\\line(1,0){5}}\n\\put(157,35){\\vector(1,0){10}} \\put(20,200){\\vector(1,-1){150}}\n\\end{picture}\n\\end{center}\n\n\\vspace{3mm} \nNote that the divisor $D_1'=\\psi^*(D_1)$ is a tree.\nDenote a proper transform of\n$\\infty\\times\\overline{\\Gamma}$ by $\\infty'\\times\\overline{\\Gamma}$. 
\nIt is an easy observation that\n$F(\\infty'\\times\\overline{\\Gamma})\\subset \\pi_\\infty$, where\n$\\pi_\\infty$ denotes the hyperplane at infinity of $\\p^n\\C$.\nNow $\\s_{f'}= F(D_1\\setminus F^{-1}(\\pi_\\infty)).$ The curve\n$L=F^{-1}(\\pi_\\infty)$ is connected (the same argument as above). Now\nby Proposition \\ref{acykl} we have that every irreducible curve\n$l\\subset D_1$ (we have $l\\cong \\p^1\\C$)\nwhich does not belong to $L$ has at most one common point with\n$L$. \n\nLet $R$ be an irreducible component of $\\s_{f'}$. Hence $R$\nis a curve. There is a curve $l\\subset D_1,$ which has exactly one\ncommon point with $L$, such that $R=F(l\\setminus L).$ If $l$ is\ngiven by the blowing up of a real point, then $L$ also has a real\ncommon point with $l$ (since the conjugate point is also a\ncommon point of $l$ and $L$). When we restrict to the real model\n$l^r$ of $l$ we have $l^r\\setminus L\\cong \\R.$ Hence if we\nrestrict our consideration only to the real points and to the set\n$Q^r:=\\overline{\\R\\times W}\\subset Q$ (we consider here the closure in the\nEuclidean topology) we see that the set $\\s$ of\nnon-proper points of the mapping $f'|_{Q^r}$ is a union of\nparametric curves $F(l^r\\setminus L), \\ l\\in D_1,\n\\psi(l)\\in \\overline{Q^r}$, where the last closure is the closure in a real\nprojective space. Of course $a\\in \\s\\subset\\s_f$.\nSimilarly the set $\\s_{f\\circ\\phi}$\n is a union of parametric curves $F(l^r\\setminus L), \\ l\\in \\psi^*(D'), \\pi(\\psi(l))\\subset\n\\overline{\\R\\times W}\\subset\\overline{Z}$. Hence we can say that an \"irreducible\ncomponent\" of the set of non-proper points of $f'|_{Q^r}$ is also an\n\"irreducible\" component of $\\s_{f\\circ\\phi}$. Now we can finish the\nproof by Theorem \\ref{cxw} and the following Lemma \\ref{xx}.\n\\end{proof}\n\n\\begin{lemma}\\label{xx}\nLet $\\psi:\\R\\to\\R^n$ be a parametric curve. 
If there exists a parametric curve $\\phi: \\R\\to \\R^n$ of degree at most $d$ with $\\psi(\\R)\\subset \\phi(\\R)$, then $\\psi(\\R)$ has degree of $\\R$-uniruledness at most $2d$.\n\\end{lemma}\n\n\\begin{proof}\nIndeed, let $\\phi(t)=(\\phi_1(t),\\dots,\\phi_n(t))$ and consider a\nfield \n$$\\Le=\\R(\\phi_1,\\dots,\\phi_n).$$ \nBy the L\\\"{u}roth Theorem \\ref{luroth} there\nexists a rational function $g(t)\\in \\R(t)$ such that $\\Le=\\R(g(t)).$\nIn particular $\\phi_i(t)=f_i(g(t)).$ In fact we have two induced\nmappings \n$$\\overline{f} : \\p^1\\R\\to \\p^n\\R\\text{ and }\n\\overline{g}: \\p^1\\R\\to \\p^1\\R.$$ \nMoreover,\n$\\overline{f}\\circ \\overline{g}=\\overline{\\phi}.$ Let $A_\\infty$\ndenote the unique point at infinity of the Zariski closure of\n$\\psi(\\R)$ and let $\\infty=\\overline{f}^{-1}(A_\\infty).$ This implies\nthat $\\#\\overline{g}^{-1}(\\infty)=1$ and we can assume that\n$\\overline{g}^{-1}(\\infty)=\\infty$, i.e., $g\\in \\R[t].$ Similarly\n$f_i\\in \\R[t].$ Now if $\\deg g=1$ then $f:\\R\\to \\R^n$ covers the whole\n$\\phi(\\R)$, because the image of $f$ is open and closed in\n$\\phi(\\R).$ Otherwise we can compose $f$ with a suitable polynomial\nof degree two to obtain the whole $\\phi(\\R)$ in the image.\n\nIn the same way we can compose $f$ with a suitable polynomial of degree one or two to obtain the whole\n$\\psi(\\R)$ in the image. In any case $\\psi(\\R)$ has a parametrization of degree bounded by\n$2\\deg f\\leq 2d$.\n\\end{proof}\n\nTheorem \\ref{multc1} gives the following:\n\n\\begin{corollary}\\label{multc3}\nLet $X$ be a closed semialgebraic set which is $\\R$-uniruled and let $f:X\\rightarrow \\R^m$ be a generically\nfinite polynomial mapping. Then every connected component of the set $\\s_f$ is unbounded.\n\\end{corollary}\n\n\\section{A field of positive characteristic case}\\label{positive}\n\nIn this section we assume $\\K$ to be an arbitrary algebraically closed field. 
We begin with a useful lemma.\n\n\\begin{lemma}\\label{tool}\nLet $A\\subset\\K^n$ be an affine set, and let $f$ be a regular function on $\\K^n$ not equal to\n$0$ on any component of $A$. Suppose that for each $c\\in \\K^*$ the set \n$$A_c:=A\\cap\\{x\\in\\K^n:f(x)=c\\}$$\nhas degree of $\\K$-uniruledness at most $d$. Then $A_0$ also has degree of $\\K$-uniruledness at most $d$.\n\\end{lemma}\n\\begin{proof}\nSuppose that \n$$A=\\{x\\in\\K^n:g_1(x)=0,\\dots,g_r(x)=0\\}.$$ \nFor $a=(a_1,\\dots,a_n)\\in\\K^n$ and $b=(b_{1,1}:\\dots:b_{n,d})\\in\\p^{dn-1}$, let\n$$\\varphi_{a,b}:\\K\\ni t\\rightarrow (a_1+b_{1,1}t+\\dots+b_{1,d}t^d,\\dots,a_n+b_{n,1}t+\\dots+b_{n,d}t^d)\\in\\K^n$$ \nbe a parametric curve of degree at most $d$. Let us consider a variety and a projection\n$$\\K^n\\times\\p^{dn-1}\\supset V=\\{(a,b)\\in\\K^n\\times\\p^{dn-1}:\\forall_{t,i}\\;g_i(\\varphi_{a,b}(t))=0\\;\\text{and}$$\n$$\\forall_{t_1,t_2}\\;f(\\varphi_{a,b}(t_1))=f(\\varphi_{a,b}(t_2))\\}\\ni(a,b)\\rightarrow a\\in\\K^n.$$\nThe definition of the set $V$ says that parametric curves $\\varphi_{a,b}$ are contained in $A$ and $f$ is constant on them. Hence $V$ is closed and the image of the projection is contained in $A$. Moreover, the image contains all $A_c$ for $c\\in \\K^*$, since they are filled with\nparametric curves of degree at most $d$. But since $\\p^{dn-1}$ is complete and $V$ is closed we have that the image is closed, and hence it is the whole set $A$. In particular $A_0$ is contained in the image, so it is filled with parametric curves of degree at most $d$.\n\\end{proof}\n\nWe are ready to prove an analog of Theorem \\ref{cn} for general fields.\n\n\\begin{theorem}\\label{kn}\nLet $\\K$ be an arbitrary algebraically closed field, and let\n$f:\\K^n\\rightarrow\\K^m$ be a generically finite polynomial mapping of degree\n$d$. 
Then the set $\\s_f$ has degree of $\\K$-uniruledness at most $d$ or it is empty.\n\\end{theorem}\n\\begin{proof}\nIf $n=1$ then the map is proper and $\\s_f$ is empty. \n\nWe consider the case $n\\geq 2$. Due to Proposition \\ref{graph} $$\\s_f=p_2(\\overline{\\graph(f)}\\setminus\\graph(f)),$$\nso it is enough to prove that the set\n$$\\overline{\\graph(f)}\\setminus\\graph(f)$$ \nis filled with parametric curves of degree at most $d$. This is because we take images of curves under the projection $p_2$, which is of degree $1$; \nboth sets are of the same dimension, so images of curves can become points only over a subvariety of codimension $1$. \n\nDenote the coordinates in $\\p^n\\times\\K^m$ by $(x_{0} : \\dots : x_n;x_{n+1},\\dots,x_{n+m})$. Let us take an arbitrary point $$z\\in\\overline{\\graph(f)}\\setminus\\graph(f)=\\overline{\\graph(f)}\\cap\\{x_0=0\\}.$$ \nThere exists $1\\leq i\\leq n$ such that $z_i\\neq 0$. Consider the sets \n$$A:=\\overline{\\graph(f)}\\cap\\{x_i\\neq 0\\}\\text{ and }A_c:=\\overline{\\graph(f)}\\cap\\{x_i\\neq 0\\}\\cap\\{x_0=cx_i\\}\\text{ for }c\\in\\K.$$ \nThe set $A$ is affine and the sets $A_c$ satisfy\nthe assumptions of Lemma \\ref{tool} for $f=\\frac{x_0}{x_i}$. 
The sets $A_c$ for $c\\neq 0$ are filled with parametric\ncurves of degree at most $d$, because we can take $j\\neq 0,i$ and curves\n$$\\K\\ni t\\rightarrow (c:a_1:\\dots:a_{i-1}:1:a_{i+1}:\\dots:a_{j-1}:t:a_{j+1}:\\dots)\\xrightarrow{f} A_c\\subset\\graph(f).$$\nHence the set \n$$A_0=\\overline{\\graph(f)}\\cap\\{x_i\\neq 0\\}\\cap\\{x_0=0\\}$$ \nis also filled with such curves, and we get that through $z$ there passes a parametric curve of degree at most $d$ in $$\\overline{\\graph(f)}\\cap\\{x_0=0\\},$$ \nwhich finishes the proof.\n\\end{proof}\n\nBy looking carefully at the proof of the above theorem we get the following slightly more general result:\n\n\\begin{theorem}\\label{kw}\nLet $\\K$ be an arbitrary algebraically closed field, and let $f:\\K\\times W\\rightarrow\\K^m$ be a generically finite polynomial mapping of degree $d$. Then the set $\\s_f$ is empty or all its components have degree of $\\K$-uniruledness at most $d$, except possibly components of $p_2(\\overline{\\graph(f)}\\cap \\{(0:1:0:\\dots)\\}\\times \\K^m)$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\K\\times W\\subset\\K\\times\\K^n$. As before it is enough to show that \n$$\\overline{\\graph(f)}\\setminus\\graph(f)$$ \nis filled with parametric curves of degree at most $d$. Let us take an arbitrary point $$z\\in\\overline{\\graph(f)}\\setminus\\graph(f)=\\overline{\\graph(f)}\\cap\\{x_0=0\\}.$$ \n\nIf there exists $i\\neq 1$ such that $z_i\\neq 0$ then the idea from the proof of Theorem \\ref{kn} works.\nWe apply Lemma \\ref{tool} to the set \n$$A:=\\overline{\\graph(f)}\\cap\\{x_i\\neq 0\\}\\text{ and }f=\\frac{x_0}{x_i}.$$ \nThe sets $A_c$ for $c\\neq 0$ are filled with parametric\ncurves of degree at most $d$:\n$$\\K\\ni t\\rightarrow (c:t:a_2:\\dots:a_{i-1}:1:a_{i+1}:\\dots:a_n)\\xrightarrow{f} A_c\\subset\\graph(f).$$\n\nOtherwise $z=(0,1,0,\\dots,0)$. 
So for every component $C$ of $\\s_f$ which is different from the components of $p_2(\\overline{\\graph(f)}\\cap \\{(0:1:0:\\dots)\\}\\times \\K^m)$, an open dense subset of $C$ is covered by parametric curves of degree at most $d$; thus by Proposition \\ref{k-uniruledofdeg} the component $C$ has degree of $\\K$-uniruledness at most $d$.\n\\end{proof}\n\nApplying Theorem \\ref{kw} in two different directions we get:\n\n\\begin{corollary}\\label{kkw}\nLet $\\K$ be an arbitrary algebraically closed field, and let $f:\\K^2\\times W\\rightarrow\\K^m$ be a generically finite polynomial mapping of degree $d$. Then the set $\\s_f$ has degree of $\\K$-uniruledness at most $d$ or it is empty.\n\\end{corollary}\n\n\\begin{proof}\nApply Theorem \\ref{kw} to two directions. As before it is enough to show that any component $C$ of $$\\overline{\\graph(f)}\\setminus\\graph(f)$$ \nhas degree of $\\K$-uniruledness at most $d$. From the first direction we get that any component of $$\\overline{\\graph(f)}\\setminus\\graph(f)\\setminus\\overline{\\graph(f)}\\cap \\{(0:1:0:0:\\dots)\\}\\times \\K^m$$ \nhas degree of $\\K$-uniruledness at most $d$,\nwhile from the second direction we get it for any component of \n$$\\overline{\\graph(f)}\\setminus\\graph(f)\\setminus\\overline{\\graph(f)}\\cap \\{(0:0:1:0:\\dots)\\}\\times \\K^m.$$\nSince $C$ is a component of at least one of them the assertion follows.\n\\end{proof}\n\n\\section{Remarks}\n\nWhen we consider all generically finite mappings $\\C^n\\rightarrow\\C^n$ of degree at most $D$, then by Theorem \\ref{Sfkuni} the hypersurfaces $\\s_f$ are all $\\C$-uniruled and by Theorem \\ref{hiperc} their degree is bounded. It happens that they all are also of bounded degree of $\\C$-uniruledness (namely by $D-1$; this follows from Theorem \\ref{cn}). 
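To fix ideas, consider a standard toy example (ours, not taken from the results above): the mapping\n$$f:\\C^2\\ni (x,y)\\rightarrow (x,xy)\\in\\C^2$$\nis generically finite of degree $d=2$. For $c\\neq 0$ the sequence $x_k=(1\/k,kc)$ tends to infinity while $f(x_k)=(1\/k,c)\\rightarrow (0,c)$ (for $c=0$ take $x_k=(1\/k^2,k)$), and $f$ is proper over $\\{x\\neq 0\\}$. Hence $\\s_f=\\{0\\}\\times\\C$ is a line, parametrized by $t\\rightarrow (0,t)$, so its degree of $\\C$-uniruledness equals $1=D-1$ and the bound above is attained. 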
It is reasonable to ask the following question:\n\n\\begin{question}\nDoes there exist, for every $n$ and $d$, a universal constant $D=D(n,d)$ such that all $\\K$-uniruled hypersurfaces in $\\K^n$ of degree at \nmost $d$ have degree of $\\K$-uniruledness at most $D(n,d)$?\n\\end{question}\n\nWe can ask an even stronger question:\n\n\\begin{question}\\label{22}\nDoes there exist, for every $n$ and $d$, a universal constant $D=D(n,d)$ such that if a hypersurface in $\\K^n$ of degree at most $d$ \ncontains a parametric curve passing through $O=(0,\\dots,0)$, then it also contains such a parametric curve of degree at most $D(n,d)$?\n\\end{question}\n\nWe can give the following condition equivalent to an affirmative answer to Question \\ref{22}. Let us define\n$$K_{n,d,D}=\\{a=(a_1,\\dots,a_s)\\in\\K^{\\binom{n+d}{d}}:\\text{ the hypersurface in }\\K^n\\text{ defined}$$ $$\\text{by }f_a=a_1+a_2x_1+\\dots+a_sx_1^d\\cdots x_n^d=0\\text{ contains a parametric curve}$$ \n$$\\text{passing through }O\\text{ of degree at most }D\\},$$\nand let\n$$K_{n,d}=\\{(a_1,\\dots,a_s)\\in\\K^{\\binom{n+d}{d}}:\\text{ the hypersurface in }\\K^n\\text{ defined by}$$ $$f_a=a_1+a_2x_1+\\dots+a_sx_1^d\\cdots x_n^d=0\\text{ contains a parametric curve}$$\n$$\\text{passing through }O\\}.$$\n\n\\begin{proposition}\nThe set $K_{n,d,D}$ is closed. \n\\end{proposition}\n\n\\begin{proof}\nFor $b=(b_{1,1}:\\dots:b_{n,D})\\in\\p^M$, where $M=Dn-1$, let \n$$\\varphi_{b}:\\K\\ni t\\rightarrow (b_{1,1}t+\\dots+b_{1,D}t^D,\\dots,b_{n,1}t+\\dots+b_{n,D}t^D)\\in\\K^n$$\nbe a parametric curve of degree at most $D$ passing through $O$. 
Consider a variety and a projection\n$$\\K^{\\binom{n+d}{d}}\\times\\p^M\\supset V=\\{(a,b):\\forall_{t}\\;f_a(\\varphi_{b}(t))=0\\}\\ni(a,b)\\rightarrow a\\in \\K^{\\binom{n+d}{d}}.$$\nBy definition the set $V$ is closed and it consists of pairs $(a,b)$ corresponding to pairs $(f_a,\\varphi_b)$ of a hypersurface and a parametric curve passing through $O$ contained in it. Hence the image equals $K_{n,d,D}$. But since $\\p^M$ is complete and $V$ is closed the image is also closed. \n\\end{proof}\n\n\\begin{proposition}\nLet $\\K$ be an uncountable field. Then the following conditions are equivalent:\n\\begin{enumerate}\n\\item the set $K_{n,d}$ is closed,\n\\item the answer to Question \\ref{22} is positive. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nObviously $K_{n,d}=\\bigcup_{D\\in\\N}K_{n,d,D}$. \nThe implication $(1)\\Rightarrow(2)$ follows from Baire's Theorem \\ref{baire}; the opposite implication is trivial.\n\\end{proof}\n\nAffirmative answers in the case of $\\K=\\C$ from section \\ref{c}\nand $\\K=\\R$ from section \\ref{r} lead to the following natural conjecture:\n\n\\begin{conjecture}\nLet $\\K$ be an arbitrary algebraically closed field, and let\n$f:\\K^n\\rightarrow \\K^m$ be a generically finite polynomial mapping of degree\n$d$. Then the set $\\s_f$ has degree of $\\K$-uniruledness at most $d-1$ or it is empty.\n\\end{conjecture}\n\nTheorems \\ref{multc} and \\ref{multc1} suggest the following two conjectures:\n\n\\begin{conjecture}\nLet $\\K$ be an arbitrary algebraically closed field, $X$ be an affine variety with degree of $\\K$-uniruledness at most $d_1$ and let $f:X\\rightarrow\\K^m$ be a generically finite polynomial mapping of degree $d_2$. 
Then the set $\\s_f$ has degree of $\\K$-uniruledness at most $d_1d_2$ or it is empty.\n\\end{conjecture}\n\n\\begin{conjecture}\nLet $X$ be a closed semialgebraic set with degree of $\\R$-uni\\-ruled\\-ness at most $d_1$, and let $f:X\\rightarrow\\R^m$ be a generically finite polynomial mapping of degree $d_2$. Then the set $\\s_f$ has degree of $\\R$-uniruledness at most $d_1d_2$ or it is empty.\n\\end{conjecture}\n\n\\pagebreak\n\n\\chapter{The set of fixed points of group actions}\n\nThroughout this chapter, unless stated otherwise, $\\K$ is assumed to be an arbitrary algebraically closed field.\nWe recall the definition and a characterization of unipotent groups. The main goal of this chapter is to show that, under some conditions, the set of fixed points $\\Fix(\\G)$ of an action of an algebraic group $\\G$ is $\\K$-uniruled. \n\nIn particular we prove:\n\n\\begin{theoremm}\nLet $\\G$ be a nontrivial connected unipotent algebraic group which acts effectively on an affine variety $X$. Then the set of fixed points of $\\G$ is a $\\K$-uniruled variety.\n\\end{theoremm}\n\n\\begin{theoremm}\nLet $\\G$ be an infinite connected algebraic group which acts effectively on $\\K^n$, for $n\\geq 2$. Assume that an irreducible hypersurface $H$ is contained in the set of fixed points of $\\G$. Then $H$ is $\\K$-uniruled.\n\\end{theoremm}\n\n\\section{Introduction}\n\nWe will use the following definition of unipotent groups.\n\n\\begin{definition}\\label{series}\nAn algebraic group $\\G$ is \\textup{unipotent} if there exists a series of normal algebraic subgroups\n$$0=\\G_0\\vartriangleleft \\G_1\\vartriangleleft\\dots\\vartriangleleft \\G_r=\\G,$$\nsuch that $\\G_i\\/\\G_{i-1}\\cong \\G_a=(\\K,+,0)$.\n\\end{definition}\n\nLet $\\G$ be a connected unipotent algebraic group which acts effectively on a variety $X$. The set\nof fixed points of this action was intensively studied (see \\cite{bi, caso, gr, ho}). 
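A minimal example (ours, for orientation): the group $\\G_a$ acts effectively and polynomially on $\\K^2$ by $t\\cdot (x,y)=(x,y+tx)$; the set of fixed points of this action is the line $\\{x=0\\}$, which is $\\K$-uniruled and has no isolated points. 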
\nIn particular Bia\\l ynicki-Birula proved that if $X$ is an affine variety,\nthen $\\G$ has no isolated fixed points.\n\nHere we also consider the case when $X$ is an affine variety. We\ngeneralize the result of Bia\\l ynicki-Birula by proving that\nthe set $\\Fix(\\G)$ of fixed points of $\\G$ is in fact a $\\K$-uniruled\nvariety. So through every point $x\\in\\Fix(\\G)$ passes a\nparametric curve.\n\nWe also show that if an arbitrary infinite connected algebraic group $\\G$\nacts effectively on $\\K^n$ and the set of fixed points contains a\nhypersurface $H$ then this hypersurface is $\\K$-uniruled. This was previously known in the case $\\K=\\C$ \\cite{je03}.\n\n\\section{The set of fixed points of a unipotent group}\n\nWe begin with a quite natural observation:\n\n\\begin{lemma}\\label{ginv}\nLet an algebraic group $\\G$ act on affine varieties $X,Y$ and let $f:X\\rightarrow Y$ be a generically finite $\\G$-invariant mapping (for any $x\\in X$, $g\\in\\G$ we have $gf(x)=f(gx)$). Then the set $\\s_f$ of points at which the map $f$ is not proper is also $\\G$-invariant (for any $y\\in \\s_f$, $g\\in\\G$ we have $gy\\in \\s_f$). \n\\end{lemma}\n\n\\begin{proof}\nIt is enough to show that the complement of the set $\\s_f$ is\n$\\G$-invariant.\n\nSuppose that $f$ is proper at $y\\in Y$. This means that\nthere exists an open neighborhood $U$ of $y$ such that the mapping\n$$f\\vert_{f^{-1}(U)}:f^{-1}(U)\\to U$$\nis finite. 
We have the following diagram:\n\n\\begin{center}\n\\begin{picture}(240,160)(-40,40)\n\\put(140,160){\\makebox(0,0)[tl]{$f^{-1}(gU)=gf^{-1}(U)$}}\n\\put(20,160){\\makebox(0,0)[tl]{$f^{-1}(U)$}}\n\\put(30,40){\\makebox(0,0)[tl]{$U$}}\n\\put(180,40){\\makebox(0,0)[tl]{$gU$}}\n\\put(190,100){\\makebox(0,0)[tl]{$f\\vert_{f^{-1}(gU)}$}}\n\\put(40,100){\\makebox(0,0)[tl]{$f\\vert_{f^{-1}(U)}$}}\n\\put(95,50){\\makebox(0,0)[tl]{$g$}}\n\\put(95,170){\\makebox(0,0)[tl]{$g^{-1}$}}\n\\put(33,145){\\vector(0,-1){100}} \\put(45,35){\\vector(1,0){125}}\n\\put(135,155){\\vector(-1,0){75}} \\put(183,145){\\vector(0,-1){100}}\n\\end{picture}\n\\end{center}\n\\vspace{3mm}\n\nHorizontal mappings are isomorphisms, hence by Corollary \\ref{isoo} they are finite. The map $f\\vert_{f^{-1}(gU)}$ is a composition of finite maps, so it is also finite. Hence $gy\\notin \\s_f$.\n\\end{proof}\n\nThe aim of this section is to prove the following:\n\n\\begin{theorem}\\label{glowne}\nLet $\\G$ be a nontrivial connected unipotent algebraic group which acts effectively on an affine variety $X$. Then the set of fixed points of $\\G$ is a $\\K$-uniruled variety.\n\\end{theorem}\n\n\\begin{proof} By the induction on $\\dim\\G$ we can easily reduce the general case to the case of $\\G=\\G_a$. Indeed let \n$$0=\\G_0\\vartriangleleft \\G_1\\vartriangleleft\\dots\\vartriangleleft \\G_r=\\G$$\nbe a normal series like in Definition \\ref{series}. Now $\\Fix(\\G)\\subset\\Fix(\\G_{r-1})$. The set $\\Fix(\\G_{r-1})$ is invariant under $\\G$ action, moreover $\\G_{r-1}$ acts trivially on it. So the group $\\G\/\\G_{r-1}\\cong \\G_a$ acts on $\\Fix(\\G_{r-1})$. If this action is trivial, then $\\Fix(\\G)=\\Fix(\\G_{r-1})$ and the assertion follows from the inductive assumption. Otherwise $\\G_a$ acts effectively on $\\Fix(\\G_{r-1})$ so the set of fixed points $\\Fix(\\G)$ is $\\K$-uniruled.\n\nFirst assume that the field $\\K$ is uncountable. Take a point $a\\in\\Fix(\\G)$. 
By Propositions \\ref{k-uniruled} and \\ref{k-uniruledofdeg} it is enough to prove that\nthere exists a parametric curve $S\\subset\\Fix(\\G)$ passing through $a$.\nLet $L$ be an irreducible curve in $X$ passing through $a$, which is\nnot contained in any orbit of $\\G$ and which is not contained in\n$\\Fix(\\G)$. \n\nConsider a surface $Y=L\\times\\G$. There is a natural $\\G$-action on $Y$. For $h\\in\\G$ and $y=(l,g)\\in Y$ we put\n$h(y)=(l,hg)\\in Y$. Take a mapping\n$$\\Phi : L\\times \\G\\ni (x, g)\\to gx\\in X.$$ \nIt is a generically finite polynomial mapping. \n\nObserve that it is $\\G$-invariant:\n$\\Phi(gy)=g\\Phi(y)$. Due to Lemma \\ref{ginv} this implies that the set $\\s_\\Phi$ of points\nat which the mapping $\\Phi$ is not proper is also $\\G$-invariant.\n\nTheorem \\ref{Sfsurface} gives that the set $\\s_\\Phi$ is $\\K$-uniruled. Let $\\s_\\Phi=S_1\\cup\\dots\\cup S_k$ be the decomposition of $\\s_\\Phi$ \ninto parametric curves. Since the set $\\s_\\Phi$ is $\\G$-invariant, we have that each curve $S_i$ is also\n$\\G$-invariant. \n\nNote that the point $a$ belongs to $\\s_\\Phi$,\nbecause the fiber over $a$ has an infinite number of points. We can\nassume that $a\\in S_1$. We want to show that $S_1\\subset\\Fix(\\G)$. Let $x\\in S_1$ and assume $x\\notin\\Fix(\\G)$; then $\\G x=S_1$ and $a$ would be in the orbit\nof $x$, which contradicts $a\\in\\Fix(\\G)$. Hence $a\\in S_1\\subset\\Fix(\\G)$, which ends the proof in the uncountable field case.\n\nNow assume that the field $\\K$ is countable.\nLet $\\Le$ be an uncountable algebraically closed extension of $\\K$. 
\nThen the group $\\Le\\G$ acts on $\\Le X$ and $\\Fix(\\Le\\G)=\\Le\\Fix(\\G)$.\nBy the first part of our proof the variety\n$\\Fix(\\Le\\G)$ is $\\Le$-uniruled, so due to Proposition \\ref{indep} the set $\\Fix(\\G)$ is $\\K$-uniruled.\n\\end{proof}\n\nWe get the following corollary:\n\n\\begin{corollary}[Bia\\l ynicki-Birula \\cite{bi}]\nLet $\\G$ be a nontrivial connected unipotent group which acts\neffectively on an affine variety $X$. Then $\\G$ has no isolated fixed points.\n\\end{corollary}\n\n\\section{Hypersurface contained in the set\\newline of fixed points}\n\nWe generalize the main result of \\cite{je03} from $\\C$ to an arbitrary algebraically closed field.\n\n\\begin{theorem}\\label{glowne1}\nLet $\\G$ be an infinite connected algebraic group which acts\neffectively on $\\K^n$, for $n\\geq 2$. Assume that an irreducible\nhypersurface $H$ is contained in the set $\\Fix(\\G)$ of fixed points of $\\G$.\nThen $H$ is $\\K$-uniruled.\n\\end{theorem}\n\n\\begin{proof} Since $\\G$ acts effectively on the affine space $\\K^n$, by the Chevalley Theorem (\\cite{sh}, page $190$, Theorem C) we can assume that the group $\\G$ is affine. \nIn particular it contains either the subgroup $\\G_m=(\\K^*,\\cdot,1)$ or the subgroup $\\G_a=(\\K,+,0)$ (see \\cite{ii}). Thus we can assume that the group $\\G$ is either $\\G_m$ or $\\G_a$.\n\nAs before we can assume that the field $\\K$ is uncountable. Take a\npoint $a \\in H$. By Propositions \\ref{k-uniruled} and \\ref{k-uniruledofdeg} it is enough to\nprove that there exists a parametric curve $S\\subset H$ passing\nthrough $a$. Let $L$ be a line in $\\K^n$ going through $a$ such\nthat the set $L\\cap\\Fix(\\G)$ is finite. Denote $L\\cap H=\\{a, a_1,\\dots,a_m\\}$. \n\nConsider a mapping\n$$\\phi:L\\times\\G\\ni (x,g)\\to gx\\in\\K^n.$$ \nObserve that $\\phi(L\\times\\G)$ is a union of disjoint orbits of $\\G$. This\nimplies that $\\phi(L\\times\\G)\\cap H=\\{ a, a_1,\\dots,a_m\\}$. 
Take\n$X=\\overline{\\phi(L\\times\\G)}$. Note that $X\\cap H$ is a union of\ncurves. This means that there exists a curve $S\\subset X\\cap H$, which\ncontains the point $a$. However $S\\subset \\overline{X\\setminus\n\\phi(L\\times\\G)}.$ This implies that $S\\subset\\s_\\phi$ and we\nconclude by Theorem \\ref{Sfsurface}.\n\\end{proof}\n\n\\section{A real field case}\n\nWe give a real counterpart of Theorem \\ref{glowne}.\n\n\\begin{theorem}\\label{glowne2}\nLet $\\G$ be a real nontrivial connected unipotent algebraic group, which acts effectively and polynomially on a closed semialgebraic $\\R$-uniruled set $X$. Then the set $\\Fix(\\G)$ of fixed points of $\\G$ is also $\\R$-uniruled. In particular it does not contain isolated points.\n\\end{theorem}\n\n\\begin{proof} By induction on $\\dim\\G$ we\ncan easily reduce the problem to the case of $\\G=\\G_a$. \n\nAssume that $\\G=\\G_a$. Let $D$ be the degree of $\\R$-uniruledness\nof $X$. Take a point $a\\in\\Fix(\\G)$. Let \n$$\\phi:\\G\\times X\\ni\n(g,x)\\to \\phi(g,x)\\in X$$ \nbe a polynomial action of $\\G$ on $X$.\nThis action also induces a polynomial action of the\ncomplexification $\\G^c=(\\C,+,0)$ of $\\G$ on $X^c$. We will denote\nthis action by $\\overline{\\phi}$. Assume that $\\deg_g\\phi\\le d$.\nBy Definition \\ref{r-uniruleddef} we have to prove that\nthere exists a parametric curve $S\\subset\\Fix(\\G)$ passing through\n$a$ of degree bounded by $\\max(d,D)$. Let $L$ be a parametric\ncurve in $X$ passing through $a$. If it is contained in\n$\\Fix(\\G)$, then the assertion is true. Otherwise consider a closed semialgebraic\nsurface $Y=L\\times\\G$. There is a natural $\\G$-action on $Y$: \n$$\\text{for }h\\in \\G\\text{ and }y=(l,g)\\in Y\\text{ we put }h(y)=(l,hg)\\in Y.$$ \nTake a generically finite polynomial mapping\n$$\\Phi:L\\times\\G\\ni (x, g)\\to \\phi(g,x)\\in X.$$ \n\nObserve that it is $\\G$-invariant, which means\n$\\Phi(gy)=g\\Phi(y)$. 
Lemma \\ref{ginv} implies that the set $\\s_\\Phi$ of points\nat which the mapping $\\Phi$ is not proper is also $\\G$-invariant.\n\nLet $\\s_\\Phi=S_1\\cup\\dots\\cup S_k$ be a decomposition of $\\s_\\Phi$ into irreducible components. Due to Theorem \\ref{multc1} each $S_i$ is a parametric curve. Since\nthe set $\\s_\\Phi$ is $\\G$-invariant, each $S_i$ is also $\\G$-invariant. \n\nNote that the point $a$ belongs to $\\s_\\Phi$, because the fiber over $a$ has an infinite number of points. We can assume that $a\\in S_1$. Let us note that the point $a$ is also a fixed point for $\\G^c$. We want to show that $S_1\\subset\\Fix(\\G)$. Let $x\\in S_1$. The set $S_1^c$ is also $\\G^c$-invariant and if $x\\not\\in\\Fix(\\G)$ then $\\G^cx=S_1^c$ and $a$ would be in the orbit of $x$, which is a contradiction. Hence $S_1\\subset\\Fix(\\G)$, which completes the proof.\n\\end{proof}\n\n\\begin{corollary}\nLet $\\G$ be a real nontrivial connected unipotent group which acts effectively and polynomially on a closed semialgebraic set $X$. If the set $\\Fix(\\G)$ of fixed points of $\\G$ is nowhere dense in $X$, then it is $\\R$-uniruled.\n\\end{corollary}\n\n\\begin{corollary}\nLet $\\G$ be a real nontrivial connected unipotent group which acts effectively and polynomially on a connected Nash submanifold $X\\subset\\R^n$. Then the set $\\Fix(\\G)$ is $\\R$-uniruled.\n\\end{corollary}\n\n\\section{Remarks}\n\nTo finish we state:\n\n\\begin{conjecture} Let $\\G$ be an algebraic group, which\nacts effectively on $\\K^n$. If $R$ is an irreducible component of\nthe set $\\Fix(\\G)$ of fixed points of $\\G$, then $R$ is a $\\K$-uniruled variety or it is a point.\n\\end{conjecture}\n\\pagebreak\n\\clearpage\n\\addcontentsline{toc}{chapter}{Bibliography}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{\\bf Introduction}\n\nThere is an intriguing parallel between the $D=5$ pure Yang-Mills theory\nand the $D=4$ chiral lagrangian theory of mesons. 
We first summarize \nfeatures of the $D=5$ Yang-Mills theory.\n\nThe pure $SU(N)$ Yang-Mills $D=5$ gauge theory supports a \ntopological soliton, unique to $D=5$ \\cite{deser1}. \nThis soliton is simply the instanton, an $SU(2) $ configuration,\nlifted to a time slice in $D=5$, associated\nwith the nontrivial homotopy class $\\Pi_3(SU(2))$, and\nit carries a conserved topological current \\cite{topo1,topo2}.\nThe theory actually has two conserved\ntopological currents, built out of the gauge fields: firstly an \nadjoint representation current\n(only for groups possessing $d$-symbols, e.g., SU(3) and higher), and secondly \na singlet current (present for all groups). The adjoint current controls \ntransitions between the various ways in which the instantonic soliton\ncan be embedded into the gauge group (e.g., a pure ``I-spin'' embedding can\nflip to a ``U-spin'' or ``V-spin'' embedding\nfor $SU(3)$). The singlet current is identically conserved, \nand yields the topological charge of the soliton.\n\nEach of these currents is topological, and cannot be derived by Noetherian variation\nof the gauge kinetic term action. The theory \nmust therefore be supplemented with an additional \nChern-Simons term. The Chern-Simons term that\ngenerates the adjoint current is known as\nthe ``second Chern character'',\n(the $D=5$ generalization of the Deser-Jackiw-Schonfeld-Siegel-Templeton\nmass term of $D=3$ \\cite{schonfeld,deser}; see also \\cite{niemi}). \nUnder variation \nof the gauge fields, the second Chern character generates\nthe adjoint current as a source term in the equation\nof motion of the gauge field. \nWhile not manifestly gauge invariant, under small gauge\ntransformations (those continuously connected\nto the identity), the action containing\nthe second Chern character is invariant. 
By contrast, \nfor topologically nontrivial transformations,\nthe action shifts by an additive numerical factor, and\nthe coefficient of the Chern character is necessarily quantized so \nthe path integral is invariant (\\ie, with the proper \ncoefficient, this shift in the action is then $2\\pi N$) \\cite{deser,Wu}.\n\nThe singlet current has no associated Chern-Simons term\nbuilt out of the gauge fields alone. We presently propose to \nintroduce a ``dual variable,'' a vector\npotential associated with the instantonic soliton. This\nallows us to write a new $U(1)$-gauge\ninvariant topological term which is analogous to the second Chern character\nand which generates the singlet current. \n\nOn the other hand, chiral theories of mesons in $D=4$ based on flavor \n$SU(N)_L\\times SU(N)_R$ also possess remarkable, and quite similar, \ntopological properties. The theories support \nthe skyrmion solution, which is an $SU(2) $ configuration\nand a stable topological object \n(whose core is stabilized when the ``Skyrme'' term is added). \nThe skyrmion also reflects the nontrivial $\\Pi_3(SU(2))$, and \nit carries a conserved\n(modulo anomalies) singlet current, the Goldstone-Wilczek current\n\\cite{goldstone}, which is interpreted as baryon number.\nThe chiral theory also contains adjoint representation \ntopological currents, conserved modulo anomalies. \nThese latter currents again exist only for groups with $d$-symbols, and \ngovern transitions\nof the embeddings of the skyrmion in the diagonal subgroup $SU(N)$. \nA connection between the instantonic soliton of $D=5$ and the skyrmion \nof $D=4$, through compactification, and a matching \nof their $U(1)$ currents was discussed in \nsome detail in ref.(\\cite{topo2}). In fact, the full form of the \nGoldstone-Wilczek current can be easily inferred from this matching.\n\nThe adjoint topological currents in $D=4$ chiral\ntheories derive from \nthe Wess-Zumino-Witten term. 
\nRemarkably, the WZW term is neither \nmanifestly chirally nor gauge invariant; yet it possesses both symmetries \nfor small transformations---those\nthat are continuously connected to the identity. The overall invariance \nof the path integral \nunder large topological chiral and gauge transformations leads again to \nquantization conditions on the WZW term coefficient \\cite{Witten}.\nThe singlet Goldstone-Wilczek current has no corresponding WZW term, but\nas will be discussed below, and elaborated\nin a companion paper \\cite{hill}, \nsuch a term can be written, provided the $\\sigma$ and $\\eta'$\nmesons are incorporated into the theory. \nThis, in turn, leads to a new singlet axial\nvector topological current. \n\nThese topologically interesting aspects of $D=4$ chiral lagrangians\nhave long been known to\nfollow from the structure of the theory in one higher dimension. \nIn the case of a $D=4$, $SU(N)_L\\times SU(N)_R$ nonlinear $\\sigma$-model, \ndescribed by an\n$N\\times N$ unitary matrix field, Witten has shown that the WZW term\ncan be obtained by promoting the global theory\nof mesons to $D=5$, where a certain manifestly chirally invariant\nChern-Simons term occurs, built out of the mesons. One\nthen compactifies the fifth dimension \nback to $D=4$. This results in the global \nWess-Zumino term of $D=4$ \\cite{Witten}. \nAt this stage, by performing\ngauge transformations upon the global object, one can \ninfer how to introduce the gauge fields to compensate\nthe local changes in the WZ term.\nThis leads to the full Wess-Zumino-Witten term\nfor the gauged chiral lagrangian, which contains the \nfull anomaly structure of the theory. \nThe WZW term plays another crucial role, that of locking\nthe parity of the pion to the parity of space \\cite{Witten}.\nCertain allowed transitions, \nsuch as $K+\\bar{K}\\rightarrow 3 \\pi$, which would be\nabsent without the WZW term, now occur\nwith a topologically quantized amplitude.
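The topological quantization at work here can be illustrated in the simplest setting: the winding number (baryon number) carried by a hedgehog skyrmion. For $U=\exp(iF(r)\hat{x}\cdot\vec{\tau})$ the Goldstone-Wilczek charge reduces to a radial integral, a standard Skyrme-model result rather than anything derived in this paper; the profile below is a hypothetical one chosen only for its boundary values:

```python
import numpy as np

# Hedgehog ansatz U = exp(i F(r) rhat.tau): the singlet topological charge
# reduces to the radial integral  B = -(2/pi) * Int_0^inf sin^2(F) F'(r) dr.
# Any profile with F(0) = pi and F(inf) = 0 must give integer winding B = 1.
r = np.linspace(0.0, 40.0, 400001)
F = np.pi * np.exp(-r)          # hypothetical profile with the correct boundary values
dF = np.gradient(F, r)
integrand = np.sin(F) ** 2 * dF
# trapezoid rule for the integral
B = -(2.0 / np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(B)  # ≈ 1.0: the quantized topological (baryon) charge
```

The result is independent of the detailed shape of $F(r)$; only the boundary values enter, which is the hallmark of a topological charge.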
\n\nA conceptual drawback of \nthis procedure is that local gauge invariance\nis induced into a non-gauge invariant object, {\em a posteriori}. Local\ngauge invariance, however, is a more fundamental symmetry than the\nchiral symmetry which it breaks, and it is thus preferable to rely on \na procedure in which local gauge invariance\nis present at the outset. \nThen, upon compactification, one would\nrequire that the meson fields appear with their proper kinetic terms \nand gauging. \nImplementing such a procedure for compactification, we would expect that\nthe second Chern character of the $D=5$ pure Yang-Mills theory\nmorphs into the $D=4$ WZW term.\nThis approach, at least,\nmay shed some light on the interrelationship\nof the various symmetries and topology. \n\nIndeed, there exists in principle such a procedure, \nthe latticization of extra dimensions \\cite{wang},\nor ``dimensional deconstruction'' \\cite{harvard}. \nThe approach latticizes only the extra dimensions, yielding\nthe effective kinetic and interaction terms,\nwhile keeping the $D=4$ subspace\nin the continuum. This is related to the earlier\n``transverse lattice'' of Bardeen, Pearson and Rabinovici \\cite{bardeen}.\nFor extra dimensional theories, this is a powerful tool, leading to a \ncontinuum $D=4$ effective description of a theory\nthat originated as a pure Yang-Mills theory \nin higher dimensions, with emphasis\non maximal manifest gauge invariance. \nIn this approach, one derives\nthe gauge invariant effective lagrangian for a theory in $D=4$\nthat is defined by compactification of a theory in higher dimensions.\nStarting with pure Yang-Mills in $D=5$, one can thus engineer a $D=4$ gauged\nchiral lagrangian.
The mesons then appear in the compactified\ntheory, packaged into exponential chiral fields,\nwhich are the Wilson links associated with the latticization.\n\nIn the present work, we study the deconstruction of\nthe $D=5$ Yang-Mills theory, supplemented with the \nsecond Chern character, and a new singlet auxiliary term.\nWe begin with a discussion of\nthe physical basis of orbifold boundary conditions, and\nconsideration of the topological\naspects of a gauged chiral lagrangian in $D=4$\nthrough the pure Yang-Mills theory in $D=5$ without mesons. \nPresently, we will show how to derive the WZW\nterm for a gauged chiral \nlagrangian in $D=4$, by \nmatching of the vector potentials and the field strengths of\nthe $D=5$ Yang-Mills theory onto the relevant operators\nin the deconstructed effective lagrangian. \nWe begin in the next section, after a review of\n$D=5$ Yang-Mills and the Chern-Simons terms, with a simple\nheuristic discussion that readily yields the WZW \nterm on orbifold compactification to $D=4$. \nWe will also\nanticipate how a new WZW-like term arises from\nthe singlet auxiliary term in the parent $D=5$ \nYang-Mills theory. This new WZW term generates the\nsinglet Goldstone-Wilczek current, and a new $U(1)$ axial current\nfor the skyrmion.\n\nWe then study the general issue of\ntopological deformation of $D=5$ Yang-Mills into \n$D=4$ chiral theories in more detail. \nFirst, we note that there is\na key element we must address that\nis missing in a naive deconstruction, and\nwhich is essential to the propagation of topology\nfrom one geometrical dimension to another. This\nis the consistency of {\\em the Bianchi \nidentities}. The ordinary $D=4$ Bianchi \nidentities are always automatically \nsatisfied by the\nspecification that the field strength tensor is a commutator \nof covariant derivatives,\nsince the $D=4$ relations are simply the \nJacobi identity for antisymmetrized nested\ncommutators. 
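The Jacobi-identity origin of the $D=4$ Bianchi identities can be checked directly. For constant ($x$-independent) gauge fields the derivative pieces drop, $G_{\mu\nu}=-i[A_\mu,A_\nu]$, and $[D_\mu,G_{\nu\rho}]+\mathrm{cyclic}=0$ becomes literally the Jacobi identity for the matrices $A_\mu$; a minimal numerical sketch with random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    # Random Hermitian matrix, standing in for a constant A_mu = A_mu^a Q^a.
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def comm(x, y):
    return x @ y - y @ x

# Three constant gauge fields; with partial_mu A = 0, G_{mu nu} = -i[A_mu, A_nu].
A = [rand_herm(3) for _ in range(3)]
G = lambda m, n: -1j * comm(A[m], A[n])

# [A_0, G_12] + [A_1, G_20] + [A_2, G_01] is proportional to the Jacobi identity
# for nested commutators, so it must vanish identically.
bianchi = comm(A[0], G(1, 2)) + comm(A[1], G(2, 0)) + comm(A[2], G(0, 1))
print(np.abs(bianchi).max())  # ~ 1e-15: zero up to rounding
```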
However, we'll find that \nthere is an additional nontrivial Bianchi \nconstraint involving the ``lattice hopping derivative.'' \nThis is seen to fail for the first plausible definition\nof the hopping derivative in $D=5$.\n\nWe thus formulate the Bianchi identities on the \ndeconstruction lattice, and we find that\nwe are able to satisfy them \nwith a modified definition\nof the $D=4$ covariant derivative. The ordinary\nderivative is modified by the addition\nof a vector combination of chiral currents\nwith a special coefficient of $1\/2$. The\nformalism automatically\nimplements the ``magnetic superconductivity,''\nor confinement phase on the orbifold branes, $G_{4\\mu} =0$.\n\nThe Bianchi-consistent modification implies that the effective \nfield strength, $G_{\\mu\\nu}$, is modified by the addition \nof terms involving the commutators of chiral currents of the mesons.\nThis term occurs with a fixed coefficient.\nThe gauge action is therefore modified as well, and there now appear \nin the classical action two Skyrme terms. The usual Skyrme term\nis generated by the current commutator terms in the field strength\ntensor and now has a fixed coefficient. \nMoreover, a new Skyrme term that involves the gauge\nfield, is also present. We thus conjecture\nthat this modified theory may tighten the link between the \ninstantonic soliton in $D=5$ and\nthe skyrmion in the deconstructed theory in $D=4$. \nWith these terms,\nthere may exist a ``self-dual,'' and even an analytic\nskyrmion solution, matching the instantonic soliton \nat large distances.\n\nGiven the new Bianchi-consistent action and field \nstrength, the pure Yang-Mills \nsecond Chern character (CS2) term again goes into the \nWZW term. The resulting WZW term is consistent with\nWitten's minimal coefficient, but is larger by\na factor of $5$. 
Thus, we infer that the dimensionality\nof the space-time, $D=5$, appears as the\nindex of the WZW term in the Bianchi-compliant theory, \nwhere we normally would install\n$N_c=3$, the number of colors of QCD.\n\n\n\n\\section{The D=5 Pure Yang-Mills Theory and Heuristic Derivation of the\n$D=4$\nWess-Zumino-Witten Term}\n\n\\vskip 0.15in\n\\noindent{\\bf (i) Preliminaries}\n\\vskip 0.15in\n\n\nWe start with an $SU(N)$ Yang-Mills gauge theory\nin $D=5$. The theory relies on vector potentials, $A_A^a(x)$, \nand coordinates $x^A$, where $(A=0,1,2,3,4)$,\nand where $x^\\mu$, with $(\\mu = 0,1,2,3)$, refers to the\nusual space-time dimensions. When we say ``fifth component\nof a vector, $x^A$'' we mean, of course, $x^4$.\n\nThe covariant derivative is \n\\beq\nD_A = \\partial_A - iA_A~, \\qquad \\qquad A_A = A_A^aQ^a ,\n\\eeq\nwhere $Q^a$ is an abstract operator that takes on\nthe values of $Q^a=\\lambda^a\/2$ in the fundamental representation.\nThe field strength then is \n\\beq\nG_{AB} = i[D_A,D_B]\n=\\partial_A A_B-\\partial_B A_A - i[A_A,A_B] .\n\\eeq\nThis theory has the standard kinetic term:\n\\beq\n\\label{start}\n{\\cal{L}} = -\\frac{1}{2\\tilde{g}^2}\\Tr(G_{AB}G^{AB}) ,\n\\eeq\nwhere $1\/\\tilde{g}^2$ is the coupling with dimensions of mass. With this\nnormalization, gauge fields have the canonical dimensionality \nwith respect to $D=4$, \\ie, $[A_A]= M^{1}$, and $[G_{AB}]= M^2$. \n\nThe theory possesses two identically conserved Chern-Simons \ncurrents of the form:\n\\bea\n\\label{cur1}\nJ_A & = & \\epsilon_{ABCDE}\\Tr(G^{BC}G^{DE}),\n\\eea\n\\bea\n\\label{current2}\nJ^a_A & = & \\epsilon_{ABCDE}\n\\Tr \\Bigl ( \\frac{\\lambda^a}{2}\\{G^{BC},G^{DE}\\}\\Bigr ).\n\\eea\nThe second current requires that $SU(N)$ possess a $d$-symbol, \nhence $N\\geq 3$; and it is further {\\em covariantly}\nconserved, \n\\beq\n[D^A,~J^a_A~ Q^a]=0. \n\\eeq\nThese topological \ncurrents do not arise from eq. (\\ref{start}) \nunder local Noetherian variation of the fields.
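The $d$-symbol condition on the adjoint current is easy to check explicitly: with $d^{abc}=\frac{1}{4}{\rm Tr}(\{\lambda^a,\lambda^b\}\lambda^c)$, every component vanishes for the $SU(2)$ Pauli matrices, while the $SU(3)$ Gell-Mann matrices already give, e.g., $d^{118}=1/\sqrt{3}$. A quick numerical sketch (the matrix conventions are the standard ones, not fixed by the text):

```python
import numpy as np
from itertools import product

# SU(2): Pauli matrices. d^{abc} = (1/4) Tr({l^a, l^b} l^c) vanishes identically,
# since {sigma^a, sigma^b} = 2 delta^{ab} I and the sigma^c are traceless.
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
d_su2 = max(abs(np.trace((a @ b + b @ a) @ c).real) / 4
            for a, b, c in product(pauli, repeat=3))
print(d_su2)  # 0.0: SU(2) has no d-symbol, hence no adjoint topological current

# SU(3): Gell-Mann lambda^1 and lambda^8 already give a nonzero component.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
d118 = np.trace((l1 @ l1 + l1 @ l1) @ l8).real / 4
print(d118)  # 0.577... = 1/sqrt(3)
```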
\n\nThe adjoint currents can be derived from an\naction containing the ``second Chern character.''\nThe second Chern character, which we'll abbreviate as CS2, \nis derived by ascending to \n$D=6$ and considering the generalization of the Pontryagin index\n(a $D=6$ generalization of the $\\theta$-term),\n\\beq\n{\\cal{L}}_0 =\\epsilon_{ABCDEF}\\Tr G^{AB}G^{CD}G^{EF} .\n\\eeq\nThis can be written as a total\ndivergence, \n\\beq\n\\frac{1}{8}{\\cal{L}}_0 \n=\n-\\partial^F\\epsilon_{ABCDEF}\n\\Tr\\Bigl (A_A \\partial_B A_C \\partial_D A_E \n- \\frac{3i}{2}A_A A_BA_C \\partial_D A_E \n- \\frac{3}{5}A_A A_B A_C A_D A_E\\Bigr ). \n\\eeq\nFormally, compactifying the sixth\ndimension and integrating ${\\cal{L}}_0$ over \nthe boundary in $x^5$ leads to ${\\cal{L}}_1$,\nthe second Chern character as an element\nof the $D=5$ Lagrangian, \n\\bea\n\\label{CSterm0}\n{\\cal{L}}_1 & = & c\\epsilon^{ABCDE}\n\\Tr \\Bigl (A_A \\partial_B A_C \\partial_D A_E \n - \\frac{3i}{2}A_A A_BA_C \\partial_D A_E \n- \\frac{3}{5}A_A A_B A_C A_D A_E \\Bigr ) .\n\\eea\nThis can be rewritten in a convenient\nform involving gauge covariant field\nstrengths,\n\\bea\n\\label{CS2}\n{\\cal{L}}_1 \n& = &\n\\frac{c}{4}\\epsilon^{ABCDE}\n\\Tr \\Bigl (A_A G_{BC}G_{DE} \n + i A_A A_B A_C G_{DE}\n- \\frac{2}{5}A_A A_B A_C A_D A_E \\Bigr ) ,\n\\eea\nhence, for pure gauge configurations all but the last term vanish. \nThe second Chern characters can be constructed in any odd\ndimension from a general algorithm \\cite{Wu}.\n\nWhile not manifestly gauge invariant, it is\nstraightforward to verify that CS2 is indeed gauge\ninvariant for gauge transformations continuously\nconnected to the identity. By contrast, for topologically\nnontrivial gauge transformations, the action shifts by a constant. \nHence, the coefficient $c$ must be chosen\nfor effective invariance so that the action shifts by $2\\pi N$: \nthe path integral is then invariant. 
\nIt can be shown that this factor is:\n\\beq\nc = \\frac{1}{48\\pi^2} ~.\n\\eeq\nThe\nvariation of the action with respect to the gauge field $A^a$ indeed \ngenerates the current of eq.(\\ref{current2}) as a source for the\nequation of motion.\n\n\\newpage\n\\vskip 0.15in\n\\noindent{\\bf (ii) Heuristic Derivation of $D=4$\nWess-Zumino-Witten Term}\n\\vskip 0.15in\n\nConsider orbifold compactification of the $D=5$ Yang-Mills theory\nto a $D=4$ theory.\nOrbifold compactification is usually specified mathematically\nfollowing Horava and Witten \\cite{horava}, such as\n``compactification on $S_1\/Z_2$.'' One thus\nconsiders an interval $0\\leq x^4\\leq 2a$, \nclassifies basis functions as even\n$P=(+)$ or odd $P=(-)$ under reflection about $x^4=a$, and under\ncompactification, demands that the $P=(+)$ basis functions \nare assigned to the \n$D=4$ vector potentials, $A_\\mu^a$, and the $P=(-)$ basis functions\nto the $A_4^a$ vector potentials. \nOrbifolding is the basis of many\nmodels of low energy extra dimensions, but we prefer a more\nphysical statement of orbifold compactification. \n\nAlternatively, we can consider two branes to be located at\n$x^4=0$ and $x^4=a$. Each brane $i$ has a normal vector \n$\\eta_i^A$; \\eg, for brane ``L'' we have $\\eta_L=(0,0,0,0,1)$\nand for brane ``R'' we have $\\eta_{R} = (0,0,0,0,-1)$. The orbifold\nboundary conditions can be viewed as a special gauge choice\nfor the boundary condition applied on each brane:\n\\beq\n \\left. \\eta_{A} ~G^{AB} \\right|_{L,R} = 0.\n\\eeq\nThis boundary condition is manifestly gauge invariant. \nFor the $\\eta_i$ defined above, we see that $G_{04}=0$,\nhence the normal component\nof the chromoelectric field strength is zero. Moreover,\nthe ``parallel'' magnetic field $G_{\\mu 4} $ where $\\mu \\neq 0$\nis also zero. These boundary conditions on the\nbranes are dual to those of an electric superconductor, and\nthey thus correspond to a magnetic superconductor.
A magnetic\nsuperconductor would form electric flux tubes between electric charges\n(quarks) in the medium, hence a magnetic superconducting phase is\na confinement phase.\n\nWe thus consider the orbifold compactification as a kind of parallel\nplate magnetic superconducting capacitor (it can likewise be \nviewed as a magnetic superconducting\nJosephson junction). Spanning the gap\nbetween the plates is a Wilson line:\n\\beq\nU = P\\exp\\left( -i\\int_0^{a} dx^A A_A \\right) = \\exp(i\\tilde{\\pi}),\n\\eeq\nwhere, upon compactification, we view the Wilson line as a\nchiral field of mesons, as indicated, with $\\tilde{\\pi} =\n\\pi^a\\lambda^a\/f_\\pi $, where $f_\\pi = 95$ MeV. \n\nIn the superconducting boundary brane, or\ncapacitor plate regions (we'll refer to these generically\nas the ``end-zones''), we \ncan perform local gauge transformations. If the gauge group is $SU(N)$,\nthen there exist gauge transformations $V_L$ ($V_R$) that are\nconstant over the entire left-hand (right-hand) end-zone. These\ncan be identified as global $SU(N)_L$ ($SU(N)_R$) transformations. Under\nthese transformations we see that $U$ transforms as \n\\beq\nU \\rightarrow V_L U V^\\dagger_R ~, \n\\eeq\nand the theory under compactification becomes a gauged $SU(N)_L\\times\nSU(N)_R$ chiral lagrangian. The gauge fields should be viewed\nas left- and right-handed combinations of\nthe normal vector and axial vector mesons of QCD, and\nthey should be supplemented with additional Higgs fields\nto acquire masses. We thus do not pass to a unitary gauge\nin which the $A_4$ modes are eaten by gauge fields to acquire masses.\n\n\nIn the end-zones, we have\nthe magnetic\nsuperconducting phase.
Here we hypothesize that the vector potentials\nare determined by ``London currents,'' the chiral currents\nbuilt out of the Wilson line:\n\\beq\n\\label{london}\nA_{A}(\\makebox{L end-zone}) = iU[\\partial_A, U^\\dagger] \\equiv i\\alpha_A ~,\n\\qquad\nA_{A}(\\makebox{R end-zone}) = iU^\\dagger[\\partial_A, U] \\equiv i\\beta_A ~.\n\\eeq\nLondon currents are generated by the magnetic condensate kinetic term (\\eg,\nanalogous to a Higgs field), that locks the vector\npotential to the Nambu-Goldstone boson, \\eg, in our\npresent case $A_A(L)= \\partial_A\\tilde{\\pi} + ...$ \nand $A_A(R)= -\\partial_A\\tilde{\\pi} + ...$ in the endzones.\nThe particular definitions given in eq.\\ (\\ref{london})\nare pure gauges, and thus the gauge field\nstrength vanishes (\\eg, using form notation,\n$d\\alpha = -\\alpha^2$, and $(1\/2)G = dA-iA^2 = 0$ when $A=i\\alpha$).\n\nWe now seek the low energy effective \ntheory. We substitute the London current vector\npotentials into the Chern-Simons term of eq.(\\ref{CS2})\nto obtain the $D=4$ effective topological\n lagrangian:\n\\beq\n \\left(\\frac{1}{2\\times 5}\\right) \\frac{i}{48\\pi^2}\\epsilon_{ABCDE}\\left(\n \\Tr\\alpha^A\\alpha^B\\alpha^C\\alpha^D\\alpha^E + \n \\Tr\\beta^A\\beta^B\\beta^C\\beta^D\\beta^E\\right).\n\\eeq\nwhere the $\\alpha$ ($\\beta$) terms reside on the left (right)\nend-zone.\nTo leading order in the expansion\nin pions, we can write, \n\\bea\n\\epsilon_{ABCDE}\\Tr\\alpha^A\\alpha^B\\alpha^C\\alpha^D\\alpha^E & = &\ni\\epsilon_{ABCDE}\\partial_A\\Tr\\tilde{\\pi}\\alpha^B\\alpha^C\\alpha^D\\alpha^E + ...,\n\\nonumber \\\\\n\\epsilon_{ABCDE}\\Tr\\beta^A\\beta^B\\beta^C\\beta^D\\beta^E & = &\n-i\\epsilon_{ABCDE}\\partial_A\\Tr\\tilde{\\pi}\\beta^B\\beta^C\\beta^D\\beta^E + ...,\n\\eea\nThus, when we integrate \n$x^4$ over\nthe gap between the end-zones, $\\int_0^a dx^4 $,\nwe arrive at the effective lagrangian, \n\\beq\n\\label{WZ}\n \\frac{1}{240\\pi^2}\\epsilon_{\\mu\\nu\\rho\\sigma}\\left(\n 
\\Tr\\tilde{\\pi}\\alpha^\\mu\\alpha^\\nu\\alpha^\\rho\\alpha^\\sigma \\right).\n\\eeq\nEq.(\\ref{WZ}) is the precise structure of the leading\npiece of the Wess-Zumino term in an expansion \nin pions with\nWitten's normalization. \n\nA few comments are in order. Note that the expression\nis hermitian---and it can be written as either\n$\\Tr(\\pi \\alpha^4)$ or $\\Tr(\\pi \\beta^4)$. Witten's derivation \ninvolves compactification on a disk, and the WZW term\nresides on the periodic boundary of the disk, while\nthe present approach has used the orbifold configuration. \nWitten writes in an expansion in pions $(2\/15\\pi^2F_\\pi^5)\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr(A\\partial_\\mu A\n\\partial_\\nu A\\partial_\\rho A\\partial_\\sigma A) + ...$\nwith $A=\\pi^a\\lambda^a$ and $F_\\pi = 2f_\\pi$, which is consistent\nwith eq.(\\ref{WZ}). We note that the $\\alpha$ terms \nin the above derivation received a minus sign upon integrating\nfrom $0$ to $a$\n(which canceled against $i^2$) since the left end-zone\nresides at the lower limit of the integral; the $\\beta$\nterms received a positive sign. In\nthe $D=4$ theory the currents $\\alpha(x_\\mu)$ and $\\beta(x_\\mu)$ \nare viewed as residing\nat a common point in $D=4$ space-time, and we then have\nidentities such as: \n\\beq\n\\epsilon_{ABCDE}\\Tr\\tilde{\\pi}\\beta^B\\beta^C\\beta^D\\beta^E\n=\\epsilon_{ABCDE}\\Tr U\\tilde{\\pi}U^\\dagger\\alpha^B\\alpha^C\\alpha^D\\alpha^E,\n\\eeq\nand, since $U\\tilde{\\pi}U^\\dagger =\\tilde{\\pi}$ and \n$U\\beta U^\\dagger = -\\alpha$,\nthe two terms can be brought into the\ncommon form.\n\n\nWith covariant London currents, \\eg,\n$\\alpha_A \\rightarrow U[D_A, U^\\dagger]$, the expression\n becomes fully gauge invariant. The field strength is then\n nonzero, and other\noperators like $\\Tr(\\pi \\alpha^2 G)$, $\\Tr(\\pi \\alpha G \\alpha)$, \\etc,\nnow appear.
This expression can be integrated by parts\ninto the full Wess-Zumino-Witten term \nwhich will be developed\nin greater detail elsewhere \\cite{hill}.\n\nThe present ``derivation'' is only meant to be heuristic, and\nis not well-defined (\\eg, operators like $ U[D_A, U^\\dagger]$\nhave path dependence). Nonetheless, there\nare many alternative ways to proceed to formalize the deformation\ntheory from $D=5$ pure Yang-Mills into $D=4$ chiral lagrangians. \nIn the subsequent sections we'll\nbe led to a particular and well-defined deformation \nof the $D=5$ Yang-Mills theory\ninto a $D=4$ chiral lagrangian in which, \\eg, $A_\\mu(L)\\rightarrow\nA_\\mu(L)+i\\half U[D_\\mu, U^\\dagger]$. \n\n\\vskip 0.15in\n\\noindent{\\bf (iii) Singlet Auxiliary Chern-Simons Term\nand a New Singlet WZW Term}\n\\vskip 0.15in\n\nWe presently turn to the singlet topological current,\nand we'll merely anticipate some results that follow for\nthe compactification and deconstruction, using the\ntechniques of the next section. \n \nThe singlet Chern-Simons current can be\ngenerated by an additional modification \nof the Lagrangian of the form\n(CS1):\n\\beq\n\\label{CS1}\n{\\cal{L}}_2 = {c'}\\epsilon_{ABCDE}V^A\\Tr(G^{BC}G^{DE}),\n\\eeq\nwhere $V^A$ is a singlet auxiliary vector field. Since it is identically \nconserved, the CS singlet current couples to this vector field \nin ${\\cal{L}}_2$ compatibly with a simple abelian gauge-invariance, \n$\\delta V^A= \\partial_A \\sigma$. \nIf the vector field $V$ is endowed with kinetic terms, the singlet current \nis also generated as a source in the corresponding Maxwell equations\nof motion for $V$. \nNote that the singlet current cannot be derived\nfrom CS2, as the Chern-Simons term of eq.(\\ref{CSterm0}) only exists\nin $SU(N)$ for $N\\geq 3$ and does not occur, e.g., in $SU(2)$ Yang-Mills, while\nthe singlet current is always present. 
One might argue\nthat in $SU(2)$ the form of the current can be inferred, {\\em a posteriori},\ne.g., by considering the $\\lambda^8$\ncomponent in $SU(3)$ of the adjoint current, and setting coset fields\nto zero to descend to $SU(2)$. The singlet current cannot\narise from direct variation of CS2, eq.(\\ref{CSterm0}), and \neq.(\\ref{CS1}) (CS1) is required to generate it {\\em a priori}. \n\nThe appearance of $V_A$ is linked to the instantonic soliton \n\\cite{topo1,topo2}, the 't Hooft \ninstanton lifted to a static ``monopole'' configuration in $D=5$.\nThis object has a mass of $8\\pi^2\/\\tilde{g}^2$, and it \ndescends to the \nskyrmion, characterized by the Goldstone-Wilczek current \\cite{goldstone},\nin $D=4$\n\\cite{topo2}. \n$V_A$ can be interpreted as an effective field associated with the \ninstantonic soliton. \nThe choice of $V_A$ is dictated by the degrees of freedom in the theory. \nWe must generate a conserved current, hence the\nvariation $\\delta V^A= \\partial_A \\sigma $, \\ie, we have\nno complex fields to draw upon. However, the instantonic\nsoliton must be described as a massive excitation, hence we cannot use\na Nambu-Goldstone field $\\sigma$ by itself. We may thus infer that the\ninstantonic soliton is associated with a massive $U(1)$ gauge field. \n\n\nMaking use of the chiral deconstruction techniques\ndiscussed in the present paper, we can deconstruct\neq.(\\ref{CS1}) to obtain a new auxiliary WZW term that generates\nthe Goldstone-Wilczek current. 
The field $V^A$ is\ndecomposed into $V^A_L+V^A_R$ where $V_L$ ($V_R$) has support\nin the $L$ ($R$) end-zone.\nThe $x^4$ integrated zero modes of $V^A$\nare then defined in terms of $\\sigma$ and $\\eta'$ fields\nof a chiral theory of mesons:\n\\bea\n\\int dx^4 V_{4R} & = & a(\\sigma + \\eta') ,\n\\quad\n\\int dx^4 V_{4L} = a(\\sigma - \\eta') , \\nonumber \\\\\n\\int dx^4 V_{\\mu R} & = & af^{-1}\\partial_\\mu(\\sigma - \\eta' ),\n\\qquad\n\\int dx^4 V_{\\mu L} = af^{-1}\\partial_\\mu(\\sigma + \\eta'),\n\\eea\nwhere the choices are consistent with\nparities, and the Noether variations\nthat we would make for the original $V^A$\nto generate the currents (here we have set\nthe decay constant of $\\sigma$ and $\\eta'$\nto unity). Note that $\\sigma$ and $\\eta'$\ncan be viewed as glueballs, physical objects\nin the end-zone phases, even though the\ntheory is quarkless.\nWe find that CS1, using methods\ndeveloped in the next section, deconstructs \nto terms containing the following form:\n\\bea\n\\label{CS2GW04}\n{\\cal{L}}_2 \n& \\rightarrow & -\\frac{ac'}{2} \\eta'\\epsilon_{\\mu\\nu\\rho\\sigma}\n\\Tr\\left(G_L^{\\mu\\nu}G_L^{\\rho\\sigma} \n+ G_R^{\\mu\\nu}G_R^{\\rho\\sigma}\n\\right) \n+i\\frac{ac'}{2f} \\partial^\\mu (\\eta') \\epsilon_{\\mu\\nu\\rho\\sigma}\n\\Tr\\left(\\alpha^\\nu G_L^{\\rho\\sigma}\n-\\alpha^\\nu \\overline{G}_R^{\\rho\\sigma}\\right) \\cr\n&& \\cr\n&&\n-\\frac{2ac'}{f}\\partial^\\mu \\sigma \\epsilon_{\\mu\\nu\\rho\\sigma}\n\\Tr\\left(\\frac{3 i}{2} \\alpha^\\nu G_L^{\\rho\\sigma}\n+\\frac{3i}{2} \\alpha^\\nu \\overline{G}_R^{\\rho\\sigma}\n+ \\alpha^\\nu \\alpha^\\rho \\alpha^\\sigma \\right)\n\\cr\n&&\n\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\n+\\;\\;\\frac{3a}{2}c'\\sigma \\epsilon_{\\mu\\nu\\rho\\sigma}\\Tr\\left(\nG_{L\\mu\\nu}G_{L\\rho\\sigma} - G_{R\\mu\\nu} {G}_{R\\rho\\sigma} \n\\right), \\cr\n&&\n\\eea\nwhere $\\alpha = U[D, U^\\dagger]$ and $\\beta=U^\\dagger[D, U]$,\nand $\\overline{G}_R^{\\rho\\sigma} = 
U{G}_R^{\\rho\\sigma}U^\\dagger.\nThis is a new WZW-like term\nthat correctly generates the full Goldstone-Wilczek current\n\\cite{topo2,goldstone}, \nwith the correct normalization of a unit of baryon number for the\nskyrmion, provided $c' = 1\/48\\pi^2$ (identifying $c=c'$\nof the second Chern character), \n\\be\n\\label{eqb}\n{Q}^\\mu = \\frac{1}{24\\pi^2}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr \\left( \n\\alpha_\\nu \\alpha_\\rho \\alpha_\\sigma \n+\\frac{3i}{2}(\n G_{L\\nu\\rho} \\alpha_\\sigma + \\overline{G}_{R\\nu\\rho} \\alpha_\\sigma)\\right).\n\\ee\nThus, by constructing the Noether equation\nof motion of the $\\sigma$ meson,\nwe generate\nthe full conservation equation of the GW current, including its anomaly,\n\\beq\n\\partial_\\mu {Q}^\\mu = \n-\\frac{1}{32\\pi^2}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr \\left( \nG_{L\\mu\\nu}G_{L\\rho\\sigma} - {G}_{R\\mu\\nu}{G}_{R\\rho\\sigma} \\right).\n\\eeq\nThis shows that the singlet topological Chern-Simons current matches\nthe full GW current under compactification. \n\nMoreover, by Noether variation of\nthe $\\eta'$, we obtain a ``$U(1)$ axial current,'' \n\\be\n\\label{cur3}\nQ_5^\\mu = \\frac{Z}{32\\pi^2}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr \\left( \nG_{L\\nu\\rho} \\alpha_\\sigma - \\overline{G}_{R\\nu\\rho} \\alpha_\\sigma \n\\right).\n\\ee\nThis actually has an indeterminate normalization\n$Z$. Its divergence equation likewise follows from\nthe $\\eta'$ equation of motion:\n\\bea\n\\label{eqb2}\n\\partial_\\mu {Q}_5^\\mu & = & \\frac{Z}{32\\pi^2}\n\\epsilon^{\\mu\\nu\\rho\\sigma}\n\\Tr\\left[\niG_{L\\mu\\nu }\\alpha_\\rho\\alpha_\\sigma\n+i\\overline{G}_{R\\mu\\nu }\\alpha_\\rho\\alpha_\\sigma\n+G_{L\\mu\\nu }G_{L\\rho\\sigma} + {G}_{R\\mu\\nu }{G}_{R\\rho\\sigma}\n\\right],\n\\eea\nand $Z=1$ is thus favored by matching to the $U(1)$ axial anomaly.\nThe last two terms are the correct form of an\naxial current anomaly, while the first terms on\nthe {\\em rhs} are \nanalogous to the Skyrme terms.
The first two terms\nform a pseudoscalar and can be\ninterpreted as $2im\\bar{\\psi}\\gamma^5\\psi$\nin the axial current divergence of a massive nucleon.\nThis current, to our knowledge, has not been\npreviously discussed in the literature.\nThe details of this derivation will be \npresented elsewhere \\cite{hill}.\n\n\\vskip 0.15in\n\\noindent\n\\section{Deconstruction and Bianchi Identities}\n\\vskip 0.15in\n\nThe heuristic argument presented in section II\nsuggests that a direct morphing of \nthe Chern-Simons terms of $D=5$ Yang-Mills into $D=4$ chiral\nlagrangians is possible and meaningful. We expect that\nthere are many possible deformations of the parent theory\nin $D=5$, through deconstruction, that can yield various\nchiral theories in $D=4$. These deformations may or may not\nexploit the full geometrical and topological matching. We \nwould expect that\nan integral multiple of\nthe minimal coefficient of Witten, $1\/240\\pi^2$, will always obtain in\na consistent theory. \n\nThe heuristic argument indeed gave the ``minimal\ncoefficient'' of the WZ term of Witten. We will now turn to a more\nliteral interpretation of dimensional deconstruction of\npure $D=5$ Yang-Mills which pays\ncloser attention to the details of topological mapping---in particular,\nto the definition of motion (``hopping'') in the fifth dimension\nand to the Bianchi identities. Remarkably, the present construction\nyields the WZW term with a coefficient of the form $N\/240\\pi^2$,\nwhere the index $N=D=5$ is the dimensionality of the parent space-time. \n\n\n\\vskip 0.15in\n\\noindent{\\bf (i) Preliminaries}\n\\vskip 0.15in\n\nWe presently consider the \ncompactification of the $SU(N)$ Yang-Mills theory in $D=5$\non the interval $0\\leq x^4 \\leq a$. First, construct\na coarse-grained lattice of the $x^4$ dimension\nwith $2$ slices. \nOn each slice lies a copy of the gauge group with hermitian generators \n$Q_i^a$.
The covariant derivative is a sum over all slices with\nthe appropriate abstract charge assigned to each gauge field,\n\\beq\nD_\\mu = \\partial_\\mu -i A_{L\\mu}^a Q_L^a - i A_{R\\mu}^a Q_R^a,\n\\eeq\nwhere we use the notation ``left,'' $L$ (``right,'' $R$) \nfor brane 1 (2).\nThe generators $Q_L$ and $Q_R$ act on\nthe given slice, and the \nslices are connected from $L$ to $R$ \nby a unitary Wilson link $U$,\nconnecting the 1st to the 2nd slice (while the link $U^\\dagger$ \nconnects slice 2 to slice 1). Thus,\n\\beq\n[Q_L, Q_R ] = 0 .\n\\eeq\nThe hermitian field\nstrength tensor is,\n\\beq\nG_{\\mu\\nu} = i[D_\\mu, D_\\nu] = G_{L\\mu\\nu}^a Q^a_L + G_{R\\mu\\nu}^a Q^a_R ,\n\\eeq\nand also resolves into $L$ and $R$ operator components. \n\n\n\\vskip 0.15in\n\\noindent{\\bf (ii) Matrix Formalism}\n\\vskip 0.15in\n\nWe choose to define a ``left-handed derivative,'' \n$D_{L\\mu} = \\partial_\\mu -i A_{L\\mu}^a Q_L^a$ , \nso that $G_{L\\mu\\nu}^a Q_L^a = i[D_{L\\mu}, D_{L\\nu}]$; \nand, respectively, a ``right-handed derivative,'' \n$D_{R\\mu} = \\partial_\\mu -i A_{R\\mu}^a Q_R^a$,\nso that $G_{R\\mu\\nu}^a Q_R^a = i[D_{R\\mu}, D_{R\\nu}]$\nfor the right-handed fields. $D_L$ applies to fields\non the left-hand lattice slice, while $D_R$\napplies on the right-hand slice.
We further \nrequire $[D_{L\\mu}, D_{R\\nu }] =0$, which\ndoes not hold, naively; however, we can still implement\nthis construction as a $2\\times 2$ matrix representation.\n\nOperators are defined as left-handed and right-handed \nin the chirality matrix format,\n\\beq\n{\\cal{O}} \n= \\left( \\begin{array}{cc} \nO^L & 0 \\\\\n0 & O^R \\\\\n\\end{array} \\right).\n\\eeq\nHence, the matrix covariant derivative can be defined\nas\n\\beq\n{\\cal{D}}_\\mu \n= \\left( \\begin{array}{cc} \nD_\\mu^L & 0 \\\\\n0 & D_\\mu^R \\\\\n\\end{array} \\right).\n\\eeq\nThe commutator, then, yields the field strengths residing\non their respective lattice slices:\n\\beq\n{\\cal{G}}_{\\mu\\nu} = i[{\\cal{D}}_\\mu, {\\cal{D}}_\\nu] \n= \\left( \\begin{array}{cc} \nG^L_{\\mu\\nu} & 0 \\\\\n0 & G^R_{\\mu\\nu} \\\\\n\\end{array} \\right).\n\\eeq\nThe gauge transformations in this space are thus,\n\\beq\n{\\cal O} \\rightarrow {\\cal V O V}^\\dagger ,\n\\qquad\n{\\cal V} = \\left( \\begin{array}{cc} \nV_L & 0 \\\\\n0 & V_R \\\\\n\\end{array} \\right).\n\\eeq\nLattice link fields are off-diagonal matrices,\n\\beq\n{\\cal{U}} = \\left( \\begin{array}{cc} \n0 & U \\\\\n0 & 0 \\\\\n\\end{array} \\right), \n\\qquad\n{\\cal{U}}^\\dagger = \\left( \\begin{array}{cc} \n0 & 0 \\\\\nU^\\dagger & 0 \\\\\n\\end{array} \\right).\n\\eeq\nNote, then, \n\\beq\n{\\cal{U}}^\\dagger {\\cal{U}} = \\left( \\begin{array}{cc} \n0 & 0 \\\\\n0 & 1 \\\\\n\\end{array} \\right),\n\\qquad\n{\\cal{U}} {\\cal{U}}^\\dagger = \\left( \\begin{array}{cc} \n1 & 0 \\\\\n0 & 0 \\\\\n\\end{array} \\right),\n\\eeq\nso that \n\\beq\n{\\cal{U}} {\\cal{U}}^\\dagger + {\\cal{U}}^\\dagger {\\cal{U}} = \\openone ~,\n\\qquad\n{\\cal{U}} {\\cal{U}}^\\dagger - {\\cal{U}}^\\dagger {\\cal{U}} = \\sigma_z .\n\\eeq\n\nThe lattice Wilson links transform as bifundamentals,\n\\beq\n{\\cal{U}} \\rightarrow {\\cal V U V}^\\dagger = \\left( \\begin{array}{cc} \n0 & V_L U V_R^\\dagger \\\\\n0 & 0 \\\\\n\\end{array} 
\\right),\n\\qquad\n{\\cal{U}}^\\dagger \\rightarrow {\\cal V {U}}^\\dagger {\\cal V}^\\dagger \n= \\left( \\begin{array}{cc} \n0 & 0 \\\\\nV_R U^\\dagger V_L^\\dagger & 0 \\\\\n\\end{array} \\right).\n\\eeq\nThe commutators of operators with link fields are:\n\\bea\n\\label{comm}\n[{\\cal{O}}, {\\cal{U}}] & = & \\left( \\begin{array}{cc} \n0 & O_LU-UO_R \\\\\n0 & 0 \\\\\n\\end{array} \\right),\n\\qquad\n[{\\cal{O}}, {\\cal{U}}^\\dagger] = \\left( \\begin{array}{cc} \n0 & 0 \\\\\nU^\\dagger O_L - O_R U^\\dagger & 0 \\\\\n\\end{array} \\right).\n\\eea\nThe abstract charge is defined as\n\\beq\n{\\cal Q}^a = \\left( \\begin{array}{cc} \nQ^a_L & 0 \\\\\n0 & Q^a_R \\\\\n\\end{array} \\right).\n\\eeq\n\nThus, we define the $Q^a$ by their commutators with the $U$'s:\n\\beq\nT^a \\equiv \\frac{\\lambda^a}{2} ,\n\\qquad\n[Q_L, U] = T^aU ,\\qquad \n[Q_R, U] = -UT^a .\n\\eeq\nWe often encounter these charges sandwiched between $U$ and\n$U^\\dagger$ matrices. We thus see that, e.g.,\n\\bea\nU^\\dagger Q_L^a U & = & U^\\dagger T^a U + Q_L^a .\n\\eea\n\nThe structure of eq. 
(\\ref{comm}) allows \ncovariant differentiation to be written as a commutation\nrelation, and takes the following form on $U$, \n\\beq\n[D_\\mu , U] = \\partial_\\mu U -iA_{L\\mu}^a\\frac{\\lambda^a}{2}U\n+ iA_{R\\mu}^aU\\frac{\\lambda^a}{2} .\n\\eeq\nThis corresponds to the chirality matrix commutator, \n\\beq\n[{\\cal{D}}_\\mu, {\\cal {U}}]\n= \\left( \\begin{array}{cc} \n0 & D_{L\\mu}U-UD_{R\\mu} \\\\\n0 & 0 \\\\\n\\end{array} \\right).\n\\eeq\n\nFrom the link field $U$, we may thus form left-handed (right-invariant), \nand right-handed \n(left-invariant) chiral currents, respectively (non-matrix),\n\\beq\n\\label{alpha}\n\\alpha_{\\mu} \\equiv U [D_\\mu , U^\\dagger], \n\\qquad \\qquad \n\\beta_{\\mu} \\equiv U^\\dagger [D_\\mu , U] .\n\\eeq\nMore explicitly, \n\\bea\n\\alpha_\\mu & = & \n U(\\partial_\\mu - iA^a_{R\\mu}\\frac{\\lambda^a}{2} )U^\\dagger \n + iA^a_{L\\mu}\\frac{\\lambda^a}{2} \n = U(D_{R\\mu}U^\\dagger -U^\\dagger D_{L\\mu}),\n\\cr\n\\beta_\\mu & = & U^\\dagger(\\partial_\\mu - iA^a_{L\\mu}\\frac{\\lambda^a}{2} )U \n + iA^a_{R\\mu}\\frac{\\lambda^a}{2} \n = U^\\dagger(D_{L\\mu}U -U D_{R\\mu}),\n\\eea\nwhere the action of the derivatives follows Leibniz's rule, \n$~D_L U = [D_L,U] + UD_L$; likewise, \n$D_R U^\\dagger = [D_R,U^\\dagger] + U^\\dagger D_R$.\n\nIn the chiral matrix representation, these amount to\n\\beq\n{\\hat{\\alpha}}_{\\mu} = {\\cal{U}}[{\\cal{D}}_\\mu, {\\cal{U}}^\\dagger] \n= \\left( \\begin{array}{cc} \n\\alpha_{\\mu} & 0 \\\\\n0 & 0 \\\\\n\\end{array} \\right),\n\\qquad\n{\\hat{\\beta}}_{\\mu} = {\\cal{U}}^\\dagger[{\\cal{D}}_\\mu, {\\cal{U}}] \n= \\left( \\begin{array}{cc} \n0 & 0 \\\\\n0 & \\beta_{\\mu} \\\\\n\\end{array} \\right).\n\\eeq\n\n\nFinally, it is useful to define the hermitian link chiral matrix:\n\\beq\n{\\cal{U}}_+ \\equiv {\\cal{U}} + {\\cal{U}}^\\dagger \n= \\left( \\begin{array}{cc} \n0 & U \\\\\nU^\\dagger & 0 \\\\\n\\end{array} \\right).\n\\eeq\nThus,\n\\beq\n{\\cal{U}}_+ {\\cal{U}}_+ = \\openone ,\n 
\\eeq\nand one sees that \n\\beq\n\\label{central}\n{\\cal{A}}_\\mu \\equiv {\\hat{\\alpha}}_{\\mu}+ {\\hat{\\beta}}_{\\mu} = \n{\\cal {U}}[{\\cal {D}}_\\mu, {\\cal {U}}^\\dagger] + {\\cal {U}}^\\dagger\n[{\\cal{D}}_\\mu, {\\cal {U}}]\n= {\\cal{U}}_+[{\\cal D}_\\mu, {\\cal{U}}_+ ]\n=\n\\left( \\begin{array}{cc} \n\\alpha_{\\mu} & 0 \\\\\n0 & \\beta_\\mu \\\\\n\\end{array} \\right).\n\\eeq\nA useful set of relationships that recur throughout,\nespecially in computing current divergences,\nis \n\\bea\n\\label{alpha1}\n[ D_\\mu, \\alpha_\\nu ] - [D_\\nu, \\alpha_\\mu]\n& = &\n-[\\alpha_\\mu, \\alpha_\\nu] -iU[G_{\\mu\\nu}, U^\\dagger], \\cr\n[D_\\mu, \\beta_\\nu] - [D_\\nu, \\beta_\\mu]\n& = &\n-[\\beta_\\mu, \\beta_\\nu] -iU^\\dagger[G_{\\mu\\nu}, U] ,\n\\eea\nwith the correspondence in the chirality matrix representation:\n\\bea\n\\label{curvature}\n[{\\cal{D}}_\\mu, {\\cal{A}}_\\nu]\n-[{\\cal{D}}_\\nu, {\\cal{A}}_\\mu]\n& = & - [{\\cal{A}}_\\mu, {\\cal{A}}_\\nu] -\ni {\\cal{U}}_+ [{\\cal{G}}_{\\mu\\nu}, {\\cal{U}}_+ ]\n\\cr &&\\cr\n& = & \\left( \\begin{array}{cc} \niG^L_{\\mu\\nu} -iU G^R_{\\mu\\nu}U^\\dagger -[\\alpha_{\\mu},\\alpha_\\nu] & 0 \\\\\n0 & \niG^R_{\\mu\\nu} -iU^\\dagger G^L_{\\mu\\nu}U-[\\beta_{\\mu},\\beta_\\nu] \\\\\n\\end{array} \\right).\n\\eea\n\n\nHow should ${\\cal D}_4$ be defined in $D=4$? \n${\\cal D}_4$ is a kind of lattice \nderivative, or ``brane-hop'', in the $x^4$ direction. 
In general, hopping \non an $N$-slice lattice works through the Wilson link fields, $U_i$,\nwhich are identified with the continuum $A^4$ through:\n\\beq\nU_i = P \\exp \\left( -i\\int_{x^4_i}^{x^4_{i+1}} A_4 dx^4 \\right ) .\n\\eeq \nConsider a field $\\psi_i(x)$ on the $i$th slice, where $\\psi_i(x)\n\\rightarrow V_i(x) \\psi_i (x)$ under local gauge transformations\n$V_i(x)$ of the local gauge symmetry group on the $i$th slice.\nTo define a covariant derivative in the $x^4$ direction, \none seeks a difference like $(\\psi_{i+1}(x) - \\psi_i(x))\/a$, for lattice \nspacing $a$. But since this has a mixed gauge symmetry, \none is led to define the covariant difference \n$(U_{i}\\psi_{i+1}(x) - \\psi_i(x))\/a$.\nThe link now pulls the first term back from the slice\n$i+1$ to $i$, where the $i$-covariant difference can be computed,\ninvariant under $i+1$ transformations. A vanishing covariant difference \nthus amounts to a link-gauge transformation. For adjoint quantities, both sides \nof the corresponding operator need such adjustment.\n\nThe hopping derivative in deconstruction must handle left and right in \na manner consistent with parity. One possibility would be to define an \n{\\em off diagonal} (antihermitian) hopping derivative as a commutator,\nand thus traceless,\n\\beq\n[{\\cal D}^4, {\\cal O}] \\equiv -\\frac{1}{a} [{\\cal{U}}_+,{\\cal{O}}]=\n-\\frac{1}{a} \\left( \\begin{array}{cc} \n0 & -UO_R + O_L U \\\\\nO_RU^\\dagger - U^\\dagger O_L & \n0\n\\end{array} \\right).\n\\eeq\nWith this definition, the hopping derivative obeys Leibniz's chain rule \nof differentiation, as a commutator, and so the Bianchi identities in $D=4$ \nare automatically satisfied; there is no need for modification of the \ntheory. Thus, an orbifold\ncompactification solves the Bianchi identity with the usual spectrum.\nIn addition, by Leibniz's rule, ${\\cal O}\\psi$ hop-transforms \nexactly like $\\psi$. This may be at the root of a deficiency,\nhowever. 
Being off diagonal, this ${\\cal D}_4$ maps operators from one\nrepresentation into another. For example,\nit maps an adjoint representation under $SU(N)_L$, \\ie, $(N^2-1, 0)$,\ninto a bifundamental under $SU(N)_L\\times SU(N)_R$, \\ie, $(N,\\bar{N})$.\nA covariant derivative which does not faithfully map a given\nrepresentation into itself is then unsatisfactory. \n\nMoreover, if it \nis applied to fermions, one immediately encounters the fermion doubling \nproblem. The remedy\nto this is the addition of a Wilson term, which is a \ncontinuum second derivative.\nIf we generalize the Wilson term to the case of\nhigher representations, such as adjoint\noperators, we are led to the diagonal definition, below \\cite{leib}.\nThe Wilson term projects out the unwanted fermionic doublers, and \npermits the appearance of anomalies consistently with topology. \nMultiplication of the above ${\\cal D}_4$ by $-\\sigma_z {\\cal{U}}_+ $ \non the left, however, leads to a different, {\\em diagonal hopping derivative} \ndefined below.\n\nA preferred definition, and the one we will be using presently, is a \n{\\em diagonal hopping derivative},\n\\beq\n{\\cal D}^4 ({\\cal O}) \\equiv \n\\frac{1}{a}\\left({\\cal{U}}[{\\cal{O}},{\\cal{U}}^\\dagger]\n-{\\cal{U}}^\\dagger[{\\cal{O}},{\\cal{U}}]\\right)\n=\n\\frac{1}{a}\\left( \\begin{array}{cc} \nUO_RU^\\dagger - O_L & 0 \\\\\n0 & O_R - U^\\dagger O_L U\n\\end{array} \\right),\n\\eeq\nwhere $a$ is the spacing between neighboring slices.\n\nNote that the second term pushes the previous slice fields forward,\nas the first term pulls the subsequent slice fields back, hence\na relative sign difference, which is commensurate \nwith parity:\n Under parity, $L\\leftrightarrow R$,\n$U\\leftrightarrow U^\\dagger$,\nand ${\\cal D}^4 \\rightarrow -{\\cal D}^4$, and hence the definitions are\nparity invariant. 
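\nAs a quick consistency check (ours, not part of the original construction), assume\na slowly varying zero-mode $A_4$, so that $U \\simeq 1 - iaA_4$. The left-slice\nentry of the diagonal hopping derivative then reduces to a covariant difference\nin the $x^4$ direction,\n\\beq\nUO_RU^\\dagger - O_L \\simeq (O_R - O_L) - ia[A_4, O_R] + O(a^2),\n\\eeq\nso that $\\frac{1}{a}(UO_RU^\\dagger - O_L) \\rightarrow \\partial_4 O - i[A_4, O]$\nin the continuum limit, as expected of a covariant $x^4$ derivative acting on\nadjoint operators.\n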
It is important to note, however, that, like lattice \nderivatives, this derivative does not obey the Leibniz rule of differentiation,\nand so {\\em cannot} be written as a commutator \n$[{\\cal D}^4 , {\\cal A} {\\cal B}]= \n[{\\cal D}^4 , {\\cal A}] {\\cal B}+ {\\cal A} [{\\cal D}^4 , {\\cal B}]$.\n\nWe thus define the coset field strength as a transform, not a commutator,\nthrough the diagonal hopping derivative:\n\\bea\n\\label{trivium}\n{\\cal G}_{4\\mu} & =& -{\\cal G}_{\\mu 4} \\equiv i{\\cal D}_4 ( {\\cal D}_\\mu) \n\\nonumber \\\\ & = & \n \\frac{i}{a}\\left({\\cal{U}} [{\\cal D}_\\mu, {\\cal {U}}^\\dagger]\n- {\\cal{U}}^\\dagger[{\\cal D}_\\mu, {\\cal{U}}] \\right)\n=\\frac{i}{a}(\\hat\\alpha_\\mu - \\hat\\beta_\\mu) \n= \\frac{i}{a}\\left( \\begin{array}{cc} \n\\alpha_\\mu & 0 \\\\\n0 & -\\beta_\\mu \\\\\n\\end{array} \\right).\n\\eea\n\nThe conventional deconstructed\nlagrangian in the chirality matrix formalism can then be written,\n\\bea\n\\label{lag2}\n{\\cal{L}} & = & -\\frac{1}{2{g}^2}\\left( \n\\Tr {\\cal{G}}_{\\mu\\nu}{\\cal{G}}^{\\mu\\nu}\n-\\Tr {\\cal{G}}_{4\\nu}{\\cal{G}}^{4\\nu}\n\\right)\n\\cr\n& = & -\\frac{1}{2{g}^2} \\Tr G_{L\\mu\\nu}G^{L\\mu\\nu}\n-\\frac{1}{2{g}^2} \\Tr G_{R\\mu\\nu}G^{R\\mu\\nu}\n- \\frac{1}{8} f_\\pi^2~ \\Bigl (\\Tr (\\alpha_\\mu)^2 + \\Tr (\\beta_\\mu)^2 \\Bigr ), \n\\eea\nwhere we identify $1\/g^2 = a\/\\tilde{g}^2$, and \n$f^2_\\pi = 4\/a\\tilde{g}^2 = 4\/a^2{g}^2$.\n\nThis lagrangian could be interpreted as a gauged chiral lagrangian\nwith external vector fields, $A^\\mu _L$ and $A^\\mu _R$. We may wish to\nassign the octet of vector mesons, including the\n$\\rho$, to a vector combination of the fields, and the axial\nvector mesons to the axial vector combination. To do\nthis in detail would require additional Higgs fields to give\nmasses to the vector ($\\rho$) and axial vector ($A_1$)\ncombinations. 
Once these combinations have acquired longitudinal degrees\nof freedom, one cannot eliminate the mesons by gauge transformations. \n\n\nAs an effective fundamental theory, this represents a massless zero mode \ntogether with a massive KK mode. To see this, pass to unitary gauge to remove \nthe spinless mesons altogether, i.e., note that \n$\\Tr (\\alpha_\\mu)^2 = \\Tr (\\beta_\\mu)^2$, and introduce a ``St\\\"{u}ckelberg'' \nfield, \n\\beq\nV_\\mu \\equiv -i\\alpha_\\mu\/g .\n\\eeq\nThe corresponding field strength, by eq. (\\ref{alpha1}), is:\n\\bea\nF^V_{\\mu\\nu} & = & [D_\\mu, V_\\nu] - [D_\\nu, V_\\mu] - i[V_\\mu, V_\\nu]\n\\cr && \\cr\n& = & -\\frac{1}{g}U[G_{\\mu\\nu}, U^\\dagger] = \n\\frac{1}{g}G^a_{L\\mu\\nu}\\frac{\\lambda^a}{2}\n-\\frac{1}{g}UG^a_{R\\mu\\nu}\\frac{\\lambda^a}{2}U^\\dagger . \n\\eea\nFurther, the orthogonal zero-mode field strength is likewise right-invariant, \n\\bea\nF^0_{\\mu\\nu} & = & \\frac{1}{g}\n\\left(UG^a_{R\\mu\\nu}\\frac{\\lambda^a}{2}U^\\dagger +\nG^a_{L\\mu\\nu}\\frac{\\lambda^a}{2}\\right). \n\\eea \nThus, the effective lagrangian takes the form,\n\\beq\n\\label{lag3}\n{\\cal{L}} = -\\frac{1}{2} \\Tr F^0_{\\mu\\nu}F^{0\\mu\\nu}\n-\\frac{1}{2} \\Tr F^V_{\\mu\\nu}F^{V\\mu\\nu}\n- \\frac{1}{4}g^2f_\\pi^2~ \\Tr V_{\\mu}V^{\\mu}, \n\\eeq\ndescribing a massless zero mode and a massive KK mode\nof mass $gf_\\pi\/\\sqrt{2}$. (The spinless mesons have been\nabsorbed into the longitudinal components of $V_\\mu$.) \n\nNote that one can always perform\na left gauge transformation on these fields, \n$D_R \\rightarrow U D_R U^\\dagger = D_R^\\prime$\nleading to $G^a_{R\\mu\\nu}{}^\\prime =\nUG^a_{R\\mu\\nu}\\frac{\\lambda^a}{2}U^\\dagger$,\nhence $gF^V_{\\mu\\nu} = G^a_{L\\mu\\nu}\\frac{\\lambda^a}{2}\n-G^a_{R\\mu\\nu}{}^\\prime\\frac{\\lambda^a}{2} $; thus \n$gF^0_{\\mu\\nu} = G^a_{R\\mu\\nu}{}^\\prime\\frac{\\lambda^a}{2} +\nG^a_{L\\mu\\nu}\\frac{\\lambda^a}{2}$. 
With these field\nredefinitions, evidently only one linearly\nrealized symmetry transforms\nall fields, the vectorial symmetry,\n$O \\rightarrow VOV^\\dagger $, where $V= V_L$.\n\n\\vskip 0.15in\n\\noindent{\\bf (iii) Bianchi Identities}\n\\vskip 0.15in\n\nThe {\\em Bianchi identities} in $D=5$ are just the Jacobi identities for \ncovariant derivatives, \n\\beq\n\\epsilon_{ABCDE}[D^C, G^{DE}] = i \\epsilon_{ABCDE}[D^C, [D^D , D^E]] =0.\n\\eeq\nConsistency in $D=4$ requires:\n\\beq\n\\label{one}\n\\epsilon_{\\mu\\nu\\rho\\sigma} [{\\cal D}^\\nu, {\\cal G}^{\\rho\\sigma}]= \ni\\epsilon_{\\mu\\nu\\rho\\sigma} [{\\cal D}^\\nu, [{\\cal D}^{\\rho}, \n{\\cal D}^{\\sigma}]] \n= 0, \n\\eeq\nas well as,\n\\beq \n\\label{two}\n\\epsilon_{\\mu\\nu\\rho\\sigma}{\\cal D}^4 ( {\\cal G}^{\\mu\\nu} ) = \n\\epsilon_{\\mu\\nu\\rho\\sigma}\\left([{\\cal D}^{\\mu}, \n{\\cal G}^{4\\nu}]-[{\\cal D}^{\\nu}, {\\cal G}^{4\\mu}]\\right). \n\\eeq\nEq. (\\ref{one}) holds automatically in the $D=4$ theory,\nas ${\\cal G}_{\\mu\\nu}$ is defined as a commutator of \ncovariant derivatives, for {\\em any choice of ${\\cal D}_\\mu$}. \n\nThe off-diagonal hopping derivative\nsatisfies the coset identity eq. (\\ref{two}), \nwhile the diagonal hopping derivative, not being a commutator, does not,\nin general: the Bianchi relation implies\na nontrivial constraint. 
Consider the \ndiagonal hopping on the {\\em lhs} of eq.(\\ref{two}),\n\\beq\n\\label{bianchilhs}\n{\\cal{D}}^4 ({\\cal {G}}^{\\mu\\nu} )= \\frac{1}{a}\\left(\n{\\cal{U}}[ {\\cal{G}}_{\\mu\\nu}, {\\cal{U}}^\\dagger ]- \n{\\cal{U}}^\\dagger [{\\cal{G}}_{\\mu\\nu}, {\\cal{U}}]\\right),\n\\eeq\nand compare to the {\\em rhs} of eq.(\\ref{two}),\n\\bea\n\\label{bianchirhs10}\ni[{\\cal D}_\\mu , {\\cal D}_4 ( {\\cal{D}}_\\nu) ] \n - i[{\\cal{D}}_\\nu , {\\cal{D}}_4 ( {\\cal{D}}_\\mu) ] & = & \n\\frac{i}{a}\\left( [{\\cal{D}}_\\mu, {\\cal{U}} \n[{\\cal{D}}_\\nu, {\\cal{U}}^\\dagger]]\n-[{\\cal{D}}_\\mu, {\\cal{U}}^\\dagger [{\\cal{D}}_\\nu, {\\cal{U}}]] - \n(\\mu\\leftrightarrow \\nu) \\right)\n\\nonumber \\\\\n& = & \\; \\frac{i}{a}\\left( {\\cal{U}}[[{\\cal{D}}_{\\mu}, {\\cal{D}}_{\\nu}], \n{\\cal{U}}^\\dagger ]- {\\cal{U}}^\\dagger[[{\\cal{D}}_{\\mu}, \n{\\cal{D}}_{\\nu}], {\\cal{U}} ]\\right. \\nonumber \\\\\n& & \\left.\n+[{\\cal{D}}_{[\\mu}, {\\cal{U}}][{\\cal{D}}_{\\nu]}, {\\cal{U}}^\\dagger] \n- [{\\cal{D}}_{[\\mu}, {\\cal{U}}^\\dagger][{\\cal{D}}_{\\nu]}, {\\cal{U}}] \\right)\n\\nonumber \\\\\n& = &\n{\\cal{D}}_4 ({\\cal{G}}^{\\mu\\nu}) \n+\\frac{i}{a} \\left(- [\\hat{\\alpha}_\\mu, \\hat{\\alpha}_\\nu] \n+ [\\hat{\\beta}_\\mu, \\hat {\\beta}_\\nu]\\right).\n\\eea\n\nThe first term of the {\\em rhs} is consistent,\nbut the last term is an unwanted, nonvanishing current commutator:\nit is the current algebra of the chiral theory. 
Thus, the Bianchi identity fails in\nthe presence of this term.\n\nNonetheless, the constraint, eq.(\\ref{two}), can be satisfied if we\nconsider a {\\em modified covariant derivative}.\nWe find that the desired modification takes the form,\n\\beq\n\\label{newderiv}\n\\fbox{ ${\\cal{D}}'_\\mu \\equiv {\\cal{D}}_\\mu + \\frac{1}{2} {\\cal{A}}_\\mu$} ~.\n\\eeq\nThe Bianchi identities of eq.(\\ref{one}) thus remain automatic in the \n$D=4$ subspace, since the gauge field strengths \nare defined, as usual, by commutators of ${\\cal D}'_\\mu$.\nThe Bianchi constraint, eq. (\\ref{two}), \nnow\nrequires the vanishing of the following expression,\nwith the modified derivative: \n\\beq\n\\epsilon^{\\mu\\nu\\rho\\sigma} \n\\left( [{\\cal{D}}'_\\mu, U][{\\cal{D}}'_\\nu, U^\\dagger ] \n- [{\\cal{D}}'_\\mu, U^\\dagger] [{\\cal{D}}'_\\nu , U] \\right)\n = 0.\n\\eeq\nTo see the vanishing of this constraint, we first note:\n\\beq\n{\\cal{U}}_+ [{\\cal{A}}_{\\mu},{\\cal{U}}_+] = -2{\\cal{A}}_{\\mu},\n\\eeq\nhence, \n\\beq\n\\{ {\\cal{U}}_+ , {\\cal{A}}_{\\mu} \\}=0 ,\n\\eeq\nand so, by eq. (\\ref{central}),\n\\beq\n[ {\\cal{D}}'_{\\mu}, {\\cal{U}}_+] = 0.\n\\eeq\nIt is evident that this, in fact, resolves into the two components,\n\\beq\n[ {\\cal{D}}'_{\\mu}, {\\cal{U}}] = 0, \\qquad \\qquad \n[ {\\cal{D}}'_{\\mu}, {\\cal{U}}^\\dagger ] = 0,\n\\eeq\nand the Bianchi constraint is therefore satisfied.\nWe remark that one can derive the same result without recourse to the\nmatrix representation by careful analysis: allowing\nan arbitrary factor $w$ in the current part\nof eq.(\\ref{newderiv}), one obtains the unwanted current commutators\nof eq.(\\ref{bianchirhs10}), multiplied by a factor of $(1-4w+4w^2)$.\nThe Bianchi constraint is thus satisfied by the new \ncovariant derivative\nof eq.(\\ref{newderiv}), with the special coefficient $w=1\/2$.\nThe matrix formulation both streamlines and automates this derivation. 
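\nThe special value $w=1\/2$ may also be seen directly from the quoted quadratic factor, since\n\\beq\n1-4w+4w^2 = (1-2w)^2 ,\n\\eeq\nwhich vanishes only at $w=1\/2$: the unwanted current commutators of\neq.(\\ref{bianchirhs10}) are eliminated for precisely this coefficient, and for no\nother real value.\n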
\n\n \nObserve that the field strength ${\\cal G}'_{4\\mu}$ of (\\ref{trivium}) \nmanifestly vanishes for this hopping-flat modified derivative,\n\\beq\n{\\cal G}'_{4\\mu} = i{\\cal D}_4 ( {\\cal D}'_\\mu)=0. \n\\eeq\n(Actually, by ${\\cal{D}}_4 ({\\cal{G}}'_{\\mu\\nu})=0$, each of the three terms\nin the respective coset identity eq. (\\ref{two}) vanishes separately \nfor modified covariant derivatives.) \n\nThe rest of the field strength tensor, by (\\ref{curvature}),\nreduces to \n\\beq\n{\\cal{G}}'_{\\mu\\nu} \n= i[{\\cal{D}}'_\\mu, {\\cal{D}}'_\\nu] \n=\\half ({\\cal{G}}_{\\mu\\nu}+ {\\cal{U}}_+ {\\cal{G}}_{\\mu\\nu}{\\cal{U}}_+ \n-\\frac{i}{2}[{\\cal{A}}_\\mu, {\\cal{A}}_\\nu] ),\n\\eeq\nso that \n\\beq\n{\\cal{G}}'_{\\mu\\nu} =\n\\half \\left( \\begin{array}{cc} \nG^L_{\\mu\\nu}+ U G^R_{\\mu\\nu} U^\\dagger \n- \\frac{i}{2}[\\alpha_\\mu,\\alpha_\\nu] & 0 \\\\\n0 & G^R_{\\mu\\nu} + U^\\dagger G^L_{\\mu\\nu} U\n-\\frac{i}{2}[\\beta_\\mu,\\beta_\\nu]\\\\\n\\end{array} \\right).\n\\eeq\nEvidently, the right-slice amounts to the gauge-transformed image \ntheory of the left-slice, \n\\beq\n{\\cal{G}}'_{\\mu\\nu} \n= \\half \\left( \\begin{array}{cc} \nG^L_{\\mu\\nu}+ U G^R_{\\mu\\nu} U^\\dagger \n- \\frac{i}{2}[\\alpha_\\mu,\\alpha_\\nu] & 0 \\\\\n0 & U^\\dagger (U G^R_{\\mu\\nu} U^\\dagger \n + G^L_{\\mu\\nu} \n-\\frac{i}{2}[\\alpha_\\mu,\\alpha_\\nu])U\\\\\n\\end{array} \\right).\n\\eeq\nFor gauge-invariant combinations, this ``hop-invariant'' setup effectively \ndoubles up the theory. The effective field strength appearing here \nis simply the (hop-symmetric) \nzero-mode combination encountered previously,\n\\beq\nF_{\\mu\\nu}^0 = G^L_{\\mu\\nu}+ U G^R_{\\mu\\nu} U^\\dagger ,\n\\eeq\nwhereas the orthogonal KK-mode combination is absent. \n\nIn effect, the Bianchi-compatible diagonal-hopping theory\nof the two-slice orbifold\ncontains only one propagating gauge field, together\nwith the spinless mesons. 
This does not mean that there is no\nKK mode, but that the simplest hop-symmetric deconstruction truncates\nthe spectrum on the propagating zero mode. To obtain the second KK mode \nwould require that we start with $N=3$ branes, and\nwe would expect that, for any $N$, the Bianchi-improved theory would \ndescribe the zero mode and $N-2$ KK modes. \n\nNote that the chiral feature of an orbifold is still present, \\ie, we may treat\n$F_{\\mu\\nu}^0$ as any combination of left-hand or right-hand\ngauging. For example, we may gauge only the left-hand side\nof the meson fields, thereby setting $G^{R}_{\\mu\\nu} =0$, so that \n$F^0_{\\mu\\nu}= G^L_{\\mu\\nu}$; or else we may choose to gauge\nisospin, $G^L_{\\mu\\nu} = G^{R}_{\\mu\\nu} $, so\n$F^0_{\\mu\\nu}= 2G^L_{\\mu\\nu}$ (which rescales the coupling constant). \n\nIn the simplifying case that we set the right-hand \nYang-Mills fields to zero (\\ie, we retain only a single\n$SU(N)_L$ gauge group), we end up with a pure left-handed chiral theory:\n\\beq\n{\\cal{G}}'_{\\mu\\nu} = \\half \\left( \\begin{array}{cc} \nG^L_{\\mu\\nu} \n- \\frac{i}{2}[\\alpha_\\mu,\\alpha_\\nu] & 0 \\\\\n0 & U^\\dagger \\left( G^L_{\\mu\\nu} \n-\\frac{i}{2}[\\alpha_\\mu,\\alpha_\\nu]\\right)U\\\\\n\\end{array} \\right).\n\\eeq\nThe resulting gauge action is then, \n\\bea\n\\label{skyrmeterm}\n-\\frac{1}{2\\tilde{g}^2}\\Tr {\\cal{G}}'_{\\mu\\nu}{\\cal{G}}'^{\\mu\\nu}\n& = & -\\frac{1}{4\\tilde{g}^2} \n\\left(\n\\Tr G^L_{\\mu\\nu} G^{L\\mu\\nu} \n-i\\Tr (G^{L\\mu\\nu} \n[\\alpha_\\mu, \\alpha_\\nu]) \n- \\frac{1}{4}\\Tr[\\alpha_\\mu, \\alpha_\\nu][\\alpha^{\\mu}, \\alpha^{\\nu}]\n \\right) . \n\\eea\n\nThe resulting theory has several interesting properties \nevident at this point. The last term, $\\Tr([\\alpha,\\alpha]^2)$, \nis the Skyrme term required for\nthe stability of the core of the Skyrmion solution. 
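\nFor orientation, this fixed coefficient may be translated into the standard Skyrme\nparameter $e$ (the identification is ours, assuming the conventional normalization\n$\\frac{1}{32e^2}\\Tr [L_\\mu, L_\\nu][L^\\mu, L^\\nu]$ with $L_\\mu = U^\\dagger \\partial_\\mu U$,\nthe gauge fields switched off, and\n$\\Tr [\\alpha_\\mu,\\alpha_\\nu][\\alpha^\\mu,\\alpha^\\nu] = \\Tr [L_\\mu,L_\\nu][L^\\mu,L^\\nu]$\nby conjugation): reading off the $[\\alpha,\\alpha]^2$ coefficient of eq.(\\ref{skyrmeterm}),\n\\beq\n\\frac{1}{32e^2} = \\frac{1}{16\\tilde{g}^2} ,\n\\qquad \\mbox{i.e.,} \\qquad\ne^2 = \\frac{\\tilde{g}^2}{2} ,\n\\eeq\nso the deconstruction would fix the Skyrme coupling in terms of the gauge coupling\nalone.\n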
\nIt is normally a puzzle to understand how these terms are\ngenerated in a deconstructed theory; they are needed classically,\nbecause the skyrmion core is not an entirely short-distance structure.\nThe Bianchi identities have fixed the coefficient of the \nSkyrme terms to definite values.\nWhile one could always add other contributions to the\nSkyrme terms by hand, their appearance here reflects self-consistency with\nthe parent $D=5$ theory, which admits stable large instantonic solitons,\nwhich, in turn, carry the current that matches to the Skyrmionic current. \n\nWe note the new cross-term, of the form $G^L [\\alpha,\\alpha]$, \nwhich is allowed by the presence of the\ngauge field. This term has significant effects\nupon the mass of the skyrmion, and bounds related to those\nof magnetic monopoles arise \\cite{briyahe}.\n\nWe are thus led to speculate that this Bianchi-consistent theory, \nwith these fixed Skyrme terms, points to a more intricate relationship\nbetween the instantonic soliton and the skyrmion. Perhaps we could \nnow find a skyrmion solution that is ``self-dual,'' matching\nthe self-duality of the instantonic soliton in $D=5$, which, in turn, is\na consequence of the self-duality of the instanton. \n\nIn non-matrix notation, the modified derivative reads,\n\\beq\nD'_\\mu = \\partial_\\mu -i (A_{L\\mu}+ \\frac{i}{2} \\alpha_\\mu )\\cdot \nQ_L - i (A_{R\\mu}+ \\frac{i}{2} \\beta_\\mu )\\cdot Q_R ,\n\\eeq\nand hence,\n\\beq\n\\label{nonmatrixprimeder} \nD'_{L\\mu} = \\half ( D_{L\\mu}+ U D_{R\\mu} U^\\dagger)\n= \\partial_\\mu - i A_{\\mu L} +\\half \\alpha_\\mu,\\quad \nD'_{R\\mu} = \\half ( D_{R\\mu}+ U^\\dagger D_{L\\mu} U) \n= \\partial_\\mu - i A_{\\mu R} +\\half \\beta_\\mu .\n\\eeq\nEffectively, the gauge fields are augmented by the meson currents \n$\\alpha_\\mu$ and $\\beta_\\mu$. 
In the limit of vanishing gauge fields, \nthe effective primed gauge fields are still non-trivial, \n\\beq\n\\fbox{ $A'_{\\mu L} \\rightarrow \\frac{i}{2} U \\partial_\\mu U^\\dagger, \\qquad \n\\qquad \nA'_{\\mu R} \\rightarrow \\frac{i}{2} U^\\dagger \\partial_\\mu U $} ~,\n\\label{trans}\n\\eeq\nreminiscent of the London equation inside a superconducting medium.\nSince they are not pure gauges, because of the coefficient of $1\/2$, they \nyield nonvanishing primed field strengths, and hence the Skyrme term \nexhibited above. \n\n\nTo summarize, the deconstruction prescription we have been led to \nis based on the diagonal hopping derivative \n${\\cal D}_4$; the Bianchi-consistent hopping-flat \nmodified covariant derivatives, ${\\cal D}'_\\mu$;\nand the corresponding field strengths, ${\\cal{G}}'_{\\mu \\nu}$.\nHaving rejected the nonvanishing ${\\cal{G}}_{\\mu 4}$, in favor \nof its vanishing primed counterpart, we have forfeited the meson currents' \nkinetic term in the naive chiral lagrangian above. To recover them, we might, \nfor instance, supplement the lagrangian with a term of the form:\n\\beq\n\\sim \\frac{f_\\pi^2 }{8}\\Tr {\\cal{A}}_\\mu {\\cal{A}}^\\mu,\n\\eeq\nor somehow match ${\\cal{A}}_\\mu \\mapsto {\\cal G}_{4\\mu}$. \nThis is equivalent to\ndefining ${\\cal G}_{4\\mu} $ as an off-diagonal operator using\nthe off-diagonal hopping derivative. Another possibility,\nmore consistent with Wilson fermions, is a hybrid\nhopping derivative that is a combination of the off-diagonal Leibniz form\nand the diagonal form discussed above (see \\cite{leib}; this\nhappens automatically with supersymmetric \ndeconstruction in which hopping terms \nare defined as superpotentials).\nWe will never\nneed this operator in the derivation of the usual\nWZW term in the subsequent section, so these ambiguities\nare irrelevant. 
We will need the fact, however, \nthat the diagonal ${\\cal G}'_{4\\mu}=0$.\n\nUltimately, such prescriptions codify a number of implicit choices \nof brane configurations and phenomenological outcomes. Unlike the off-diagonal \nantihermitian hopping derivative, the hermitian diagonal one \npreserves topological structures \nassociated with chirality (\\eg, anomalies). \n\n\\section{Derivation of the WZW Term in the Bianchi Theory}\n\n\nThe CS2 lagrangian may be written in a form more suitable for \nsubsequent considerations. Specifically, we start by separating the $A_4$ \ncomponent,\n\\bea\n\\label{CStermr}\n{\\cal{L}}_1 & = & {\\cal{L}}_{1a} + {\\cal{L}}_{1b}, \\cr\n{\\cal{L}}_{1a} & = & \\frac{c}{4}\\epsilon^{\\mu\\nu\\rho\\sigma}\n\\Tr(A_4 G_{\\mu\\nu}G_{\\rho\\sigma}\n + i A_4 A_\\mu A_\\nu G_{\\rho\\sigma}\n + i A_4 A_\\mu G_{\\nu\\rho}A_\\sigma + i A_4 G_{\\mu\\nu}A_\\rho A_\\sigma\n \\cr\n& & \\;\\;\\;\\;\\; \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\n- {2}A_4 A_\\mu A_\\nu A_\\rho A_\\sigma )~, \n\\cr\n{\\cal{L}}_{1b} & = &\\frac{c}{2}\\epsilon^{\\mu\\nu\\rho\\sigma}\n\\Tr(A_\\mu G_{\\nu\\rho}G_{\\sigma4} \n + A_\\mu G_{\\nu 4 }G_{\\rho\\sigma }\n + i A_\\mu A_\\nu A_{\\rho}G_{\\sigma4} ) .\n\\eea\nThis helps re-express ${\\cal{L}}_{1a}$ as a lower CS covariant current \ndivergence plus an anomaly term,\n\\bea\n\\label{CSterm1a}\n{\\cal{L}}_{1a} & = & -\\frac{c}{2} \\Tr(A_4 ~[D_\\mu, K^{\\mu}]) +\n\\frac{3c}{4}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr(A_4 \nG_{\\mu\\nu}G_{\\rho\\sigma}), \n\\eea\nwhere \n\\bea\nK^{\\mu} \\equiv \\epsilon^{\\mu\\nu\\rho\\sigma}\n\\left(iA_\\nu A_\\rho A_\\sigma + G_{\\nu\\rho}A_\\sigma\n+ A_\\nu G_{\\rho\\sigma}\\right).\n \\eea\nLikewise, since $G_{\\mu 4} = [D_\\mu , A_4] -\\partial_4 A_\\mu$, \nthe second term can be written as\n\\beq\n\\label{CSterm01b}\n{\\cal{L}}_{1b} = -\\frac{c}{2}\\Tr(([D_\\mu , A_4] -\\partial_4 A_\\mu) K^{\\mu}). 
\n\\eeq\nThe combined CS2 is then \n\\bea\n\\label{CS11}\n{\\cal{L}}_{1} = \n \\frac{c}{2}\\Tr ((\\partial_4A_\\mu)K^\\mu)\n+\\frac{3c}{4}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr(A_4 \nG_{\\mu\\nu}G_{\\rho\\sigma}),\n\\eea\nwhere some total divergences have been discarded.\n \n\nOur problem is the interpretation of\nthe first term above. This problem is obviated when\n$G_{\\mu 4}=0$, whence we use eq.(\\ref{CSterm1a})\nfor the full lagrangian. We then need to\ninterpret $[D_\\mu, A_4]$.\nConsider the definition\nof the Wilson line, which we identify with\nthe chiral field of mesons:\n\\beq\nU = \\exp (-i\\int A_4 dx^4) = \\exp (i\\tilde{\\pi}) ,\n\\eeq\nwhere, for a zero-mode $A_4$, we can neglect path-ordering.\nWe can then write, upon\nexpanding the $U$'s to second order (this is\nthe order necessary for consistent WZW terms---see below):\n\\beq\n\\alpha_\\mu = U [D_\\mu, U^\\dagger]\n= -i [D_\\mu, \\tilde{\\pi} ] - \\half \n(\\tilde{\\pi} [D_\\mu, \\tilde{\\pi} ] - [D_\\mu, \\tilde{\\pi} ]\\tilde{\\pi} ) \n+ O(\\tilde{\\pi}^3), \n\\eeq\n\\beq\n\\alpha_\\mu = U [D_\\mu, U^\\dagger]\n= -i [D_\\mu, \\int dx^4 A_4] - \\half \n(\\tilde{\\pi} [D_\\mu, \\int dx^4 A_4] - [D_\\mu, \\int dx^4 A_4]\\tilde{\\pi} ) \n+ ...\n\\eeq\nWe invert this to make the identification \n\\beq\n\\label{res1}\n[D_\\mu, \\int dx^4 A_4] = i\\alpha_\\mu \n- \\frac{1}{2}\n(\\tilde{\\pi} \\alpha_\\mu - \\alpha_\\mu \\tilde{\\pi} ) \n+ ... .\n\\eeq\n\nWe now impose the condition that, from our Bianchi-improved theory,\n$G_{4\\mu} = 0$, equivalently, $\\partial_4A_\\mu = [D_\\mu, A_4]$, and we\nsubstitute eq.(\\ref{res1}) into the expression eq.(\\ref{CS11}).\nThe full lagrangian upon integrating over\n$x^4$ becomes, \n\\bea\n{\\cal{L}}_{1} & = & i\\frac{c}{2}\\Tr(\\alpha_\\mu K^{\\mu}) -\n\\frac{c}{4}\\Tr(\\tilde{\\pi}\\alpha_\\mu K^{\\mu}-\\tilde{\\pi}K^{\\mu}\\alpha_\\mu ) \n+\\frac{3c}{4}\\epsilon^{\\mu\\nu\\rho\\sigma}\\Tr(\\tilde{\\pi} \nG_{\\mu\\nu}G_{\\rho\\sigma}) +... 
,\n\\eea \n\nWe can now check that we recover the Wess-Zumino term.\nTurn off the gauge fields, but make the deconstructive\nreplacement, using the modified vector potential\nand field strength summarized in (\\ref{trans}), with the primes omitted, \n\\beq\nA_\\mu \\rightarrow i\\frac{\\alpha_\\mu}{2},\n\\qquad \\makebox{hence,}\n\\qquad G_{\\mu\\nu}\\rightarrow -\\frac{i}{4}[\\alpha_\\mu,\\alpha_\\nu],\n\\qquad K_\\mu \\rightarrow\n\\frac{5}{8}\\epsilon_{\\mu\\nu\\rho\\sigma}\\alpha^\\nu\\alpha^\\rho\\alpha^\\sigma .\n\\eeq\n\nBecause the vector potential \nis no longer a pure gauge (due to the factor of $1\/2$), \nthe $G_{\\mu\\nu}$ terms\nare now non-negligible and active in our expression for the \nsecond Chern character,\nand this modifies the WZW term's overall coefficient relative to\nthe heuristic-argument result, in which $G_{\\mu\\nu}=0$.\nOn the left end-zone (the $(11)$\nmatrix element contribution to the trace), the CS term thus becomes:\n\\beq\n{\\cal{L}}_{1L} = \n -\\frac{c}{2}\\epsilon_{\\mu\\nu\\rho\\sigma}\\Tr(\\tilde{\\pi} \n \\alpha^\\mu \\alpha^\\nu \\alpha^\\rho \\alpha^\\sigma ) +... .\n\\eeq\nFrom the right end-zone, we likewise get the result:\n\\beq\n{\\cal{L}}_{1R} = \n -\\frac{c}{2}\\epsilon_{\\mu\\nu\\rho\\sigma}\\Tr(U^\\dagger\\tilde{\\pi} U \n \\beta^\\mu \\beta^\\nu \\beta^\\rho \\beta^\\sigma ) +... , \n\\eeq\nwhich is equivalent, since\n$\\tilde{\\pi} = U\\tilde{\\pi}U^\\dagger$.\n\nThus, combining, we obtain the Wess-Zumino term\nfor the Bianchi-consistent theory:\n\\beq\n{\\cal{L}}_{1} = \n -\\frac{N}{240\\pi^2}\\epsilon_{\\mu\\nu\\rho\\sigma}\\Tr(\\tilde{\\pi}\n \\alpha^\\mu \\alpha^\\nu \\alpha^\\rho \\alpha^\\sigma ) +... ,\n\\eeq\nwhere the ``index'' $N$ is given by the dimensionality\nof the parent theory space-time:\n\\beq\nN = D = 5 . \n\\eeq\nEvidently, the ``'t Hooft matching'' of our Bianchi-improved\ntheory intrinsically identifies\n$D=5$, reflected in the value of\nthis index. 
We have no deeper interpretation for\nthis result at present.\n\nParenthetically, we may suggest care in manipulating the \nWZ term. For example, we could write (using forms,\nand $d\\alpha = -\\alpha^2$ when the vector\npotential is ignored):\n\\beq\n\\tr(\\tilde{\\pi} \\alpha^4) = \\tr(\\tilde{\\pi} d\\alpha d\\alpha)\n= \\tr(d\\tilde{\\pi} \\alpha^3 ).\n\\eeq\nBy naively replacing $d\\tilde{\\pi} \\rightarrow i\\alpha$, we get \nzero for the {\\em rhs} by cyclicity of the\n$\\epsilon$-symbol, $\\Tr (\\alpha^4) =0$; so we would only access the vanishing, leading \npart of the WZ term: zero! Of course, at the next order in the \nexpansion in pions, we recover the properly modified covariant derivative,\n\\beq\n\\label{picur}\nd\\tilde{\\pi} \\rightarrow i\\alpha\n-(1\/2)[\\tilde{\\pi} , \\alpha] ,\n\\eeq\nand hence consistency for the WZ term, to leading non-trivial order. \n\nThe higher orders for the WZ term have been discussed mathematically in, \n\\eg, \\cite{BCZ}. \nBeyond leading order, however, the WZ term is not universal in form, as \nan expansion in pions. Indeed, the expansion of unitary chiral fields, such as \n$U=\\exp(i\\tilde{\\pi})$, as a power series\nin $\\tilde{\\pi}$ is non-universal beyond the second\norder. (This owes to the fact that pion fields\nare ``coordinates'', which parameterize the unitary manifold\nsatisfying $U^\\dagger U =1$. We could equally well have\nchosen, \\eg, $U= (1+i\\tilde{\\pi} )\/\\sqrt{1 + \\pi_a\\pi^a\/f_\\pi^2}$. Upon\ncomparing expansions of both $U$s, it is evident that universality \nis lost at $O(\\tilde{\\pi}^3)$.) Physically, there is no general way to lock\nthe coefficients of higher order terms to lower order\nones without additional constraints. Imposing\nthe equations of motion, however, does lock\nthe higher order terms to the universal lower order ones\n(one must use an expansion in pions in the kinetic term\nas well as in the WZ term when the equations of motion\nare implemented). 
The actual on-mass-shell matrix\nelements are thus universal.\nConsequently, the form of the WZ term is\nuniversal only at the fifth order in $\\Tr(\\tilde{\\pi}\\alpha^4)$, since\nat the next order we pick up nonuniversal terms from expansions\nof $\\alpha$. Moreover, there is no way to ensure\nthe self-consistency beyond this order off mass shell.\n\n\\vskip .1in\n\\noindent\n\\section{Conclusions}\n\\vskip .1in\n\nWe have initiated the discussion as to how the Chern-Simons terms\nof a $D=5$ pure Yang-Mills theory can be deformed into\nthe Wess-Zumino-Witten terms of gauged chiral\nlagrangians of $D=4$. \n\nAdjoint currents in $D=5$ are controlled by the second Chern character. \nThis in\nturn becomes the WZW term in $D=4$. The minimal coefficient of Witten for the\nWess-Zumino term follows from the simplest case of pure gauge\nvector potentials generated by London currents in the orbifold \nmagnetic superconducting end-zones, as shown in our heuristic argument. \n\nSinglet currents follow from introduction of a singlet\n$U(1)$ vector potential in $D=5$ Yang-Mills,\nwhich is a dual variable describing the\ninstantonic soliton that uniquely\noccurs there. We summarize how this morphs into a new\nWZW term in $D=4$, involving the $\\sigma$ and $\\eta'$ fields,\nwhich generates the corresponding chiral current equations of\nmotion. A new $U(1)$ axial current, associated with the $\\eta'$, \nhas also been identified.\nThese results are a consequence of the present\napproach, and may have application to skyrmion physics.\n\nWe then embark upon a formal discussion of the latticization\nof the extra (fifth) dimension, and study hopping\nderivatives and the Bianchi identities. The coset Bianchi\nidentity is shown to fail in the case of the diagonal \nhopping derivative in the fifth dimension, \nthe most natural definition for a lattice gauge\ntheory. 
We find, however, that the coset Bianchi identity can\nbe rescued if the basic $D=4$ covariant derivative is modified by\nthe addition of a chiral vector current with the special\ncoefficient of $1\/2$. \n\nThis result has intriguing implications. For one, it converts the\norbifold compactification into an effective periodic compactification. It also\nprovides a Skyrme term in the effective action that must match the topology of\nthe instantonic soliton to the skyrmion. We conjecture that with\nthe fixed coefficient of the Skyrme term provided by the theory,\nthe matching may be quite powerful, leading perhaps to an analytic skyrmion\nsolution and some form of ``self-duality.'' \n\nWe finally examine the WZW term implied by the Bianchi-consistent theory. Again\nwe obtain the WZW term, but now with a coefficient that has an index\nof $N=D=5$. Many other issues are raised and future lines to explore \nare suggested by the\npresent work.\n\n\n\\vskip .2in\n\\noindent\n{\\bf Acknowledgments}\n\\vskip .1in\nWe thank Nima Arkani-Hamed,\nBill Bardeen, and Lisa Randall for some helpful discussions.\nOne of us (CTH) thanks the University\nof Minnesota's Frontiers II Workshop, and Harvard University\nfor its hospitality during the final\ncompletion of the present work.\nThis work is supported in part by\nthe US Department of Energy, High Energy Physics Division,\nContract W-31-109-ENG-38, and \ngrant DE-AC02-76CHO3000.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe fundamental paradigm of solar system astronomy prior to the time of \nCopernicus was that the Earth was at the center of the solar system. Also, \ncelestial bodies were assumed to move along perfect circles. This led to the \nsystem of deferents and epicycles. One prime motivation for the use of epicycles\nwas to account for retrograde motion. 
Copernicus' great book {\\em On the \nRevolutions of the Heavenly Spheres} (1543) asserted that the Sun was \nphysically and truly at the center of the solar system, and that this provided \na much simpler explanation for the retrograde motion of the planets. However, \nCopernicus retained circular motion. Also, he retained the notion of epicycles \nbecause they were needed to account for variations of distance of the planets \nfrom the Sun \\citep{Ging93}.\n\nIn ancient Greek astronomy, the {\\em direction} towards the Moon, Sun, \nor a planet was more important than the implied distance to it. \nPtolemy's model of the motion of the Moon implied that its distance from \nthe Earth varied nearly a factor of two. Naked eye observations by this \nauthor have demonstrated that without a telescope one can show that the \nangular size of the Moon varies in a regular fashion, implying that \nMoon's distance varies in a regular fashion \\citep{Kri10}. The implied \neccentricity of the Moon's orbit was $\\approx$0.04. (The true eccentricity\nof the Moon's orbit is 0.055, but its orbit is anything but a simple ellipse,\nowing to the gravitational force of the Sun.\\footnote[2]{\\citet{Lah12} derived\na value of the Earth's orbital eccentricity of 0.017 $\\pm$ 0.001, which compares\nextremely well with the official modern value of 0.0167. This was \naccomplished by determining the variation of the\nequation of time (difference of apparent solar time and mean solar time)\nover the course of the year using observations of the length of the\nshadow of a {\\em gnomon}. It was also necessary to know the obliquity of the\necliptic, which is directly obtained from such observations on the first\nday of summer and the first day of winter.}) The point here is \nthat epicycles in Ptolemaic astronomy were a geometrical device to \nexplain retrograde motion (in the case of the planets) or to determine \nthe direction towards the Moon. 
In Copernicus' model, the use of an \nepicycle implies a realistic, physical variation of distance.\n\nIn 1609 Johannes Kepler published the original versions of his first two \nlaws of planetary motion: 1) the orbit of a planet is an ellipse, with \nthe Sun at one focus; and 2) what we now call the law of areas, that the \nradius vector of a planet sweeps out equal areas in equal times. The \nSecond Law can be stated as follows: \n\n\\begin{equation}\nr^2 \\, \\frac{\\rm{d}\\theta}{\\rm{d}t} \\; = \\; h \\; , \n\\end{equation}\n\n\n\\noindent\nwhere $r$ is the \ndistance between a planet and the Sun, d$\\theta$ is the angular increment \nin radians during a time interval d$t$, and $h$ is a constant unique to each planet.\n\nNewton's breakthroughs in mathematics and mechanics led to the \nrealization that Kepler's First Law needed correction. The very center \nof the Sun is not at the focus of a planetary orbit. A planet orbits the \n{\\em center of mass} of the planet-Sun system, and the Sun orbits that \ncenter of mass too \\citep[][chapter 2]{Car_Ost07}. This idea, of course, \nhas led to the discovery of many extra-solar planets via the radial \nvelocity method.\n\nIn the autumn of 2015 Mars was nicely situated in the constellation Leo before sunrise.\nWe began a sequence of observations of Mars\nusing a simple cross staff (Fig. \\ref{f1}).\\footnote[3]{A pattern for \nmaking the cross staff can be obtained from this link: \nhttps:\/\/sites.google.com\/a\/uw.edu\/introductory-astronomy-clearinghouse\/labs-exercises\/measuring-angular-sizes-and-distances. \nThe reader should note that when printed out the scale may look like inches, but \nthe scale is, in fact, somewhat different.}\n\nSay the full width of the cross staff is $d$, and suppose at a linear distance \n$D$ down the ruler the angular separation of two celestial objects exactly matches \nthe width of the cross staff. 
Then the angular separation of the two objects will be \n\n\\begin{equation}\n\\theta \\; = \\; 2 ~\\rm{tan}^{-1} \\left(\\frac{d}{2D}\\right) \\; .\n\\end{equation}\n\nIf an observer can measure the angular separation of a planet and two \nstars of known celestial coordinates, there are two solutions for the \nposition of the planet, one on each side of the great circle arc joining \nthe two stars. If the planet is close to being on the great circle arc \nbetween the two stars, perhaps no solution results, given errors of \nmeasurement. If the positions of a planet and the two stars form a \nspherical triangle with reasonably equal sides, this\nis ideal, and the planetary position can be \ndetermined as accurately as possible. Because we are using a handheld \nnaked eye instrument, it is advised to use three to five reference \nstars. We assume a system of accurate stellar coordinates of bright \nstars along the zodiac. We adopt the J2000.0 coordinates of such stars \nfrom the SIMBAD database.\n\nUnderstanding Johannes Kepler's efforts to discover the elliptical \nnature of Mars' orbit requires serious effort. A good place to start is \nan article by \\citet{Ging89}. To make a long story shorter, Kepler \nbelieved that an ovoid would fit the (then) most accurate data available \n(obtained by Tycho and his assistants). Kepler found anomalies \namounting to 8\\arcmin ~between the measured ecliptic longitudes and his model. \nSince he believed that Tycho's data were good to $\\pm$2\\arcmin ~or \nbetter, he concluded that there was a problem with the {\\em model}. \nThis led him to conclude that his {\\em approximation} to the ovoid \n(namely, an ellipse) was the true orbital shape.\n\nWe wondered if it were possible to demonstrate from simple naked eye \nobservations that the orbit of Mars is indeed an ellipse. Or, requiring \nless rigor, are the positions of Mars consistent with an elliptical \norbit? 
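Equation (2) is straightforward to evaluate numerically; the sketch below (the function name and input values are ours, chosen only for illustration) returns the angle in degrees for a cross staff of full width $d$ matched at distance $D$ down the ruler:

```python
import math

def cross_staff_angle(d, D):
    """Equation (2): angular separation (degrees) subtended by a cross
    staff of full width d when the match occurs at distance D down the
    ruler (d and D in the same length units)."""
    return 2.0 * math.degrees(math.atan(d / (2.0 * D)))

# A 10 cm wide cross staff matched at 57.3 cm subtends roughly 10 degrees.
print(round(cross_staff_angle(10.0, 57.3), 2))
```

For small angles this reduces to the familiar $d\/D$ in radians.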
Here we present results based on nearly a year of observations.\nA full blown orbital determination for the planet Mars is beyond the\nscope of the present paper. That would involve \nsimultaneously solving for all six orbital elements. We are only trying \nto show that a data set obtained with simple equipment can be fit with \nan ellipse of eccentricity $\\approx$ 0.093. Other values of the \neccentricity can be shown to give ecliptic longitudes that differ from \nthe observational data by 1.5 deg, far larger systematic \ndifferences than the internal random errors of the observations.\n\n\\section {Data Acquisition}\n\nIn Table \\ref{data} we give various data relating to Mars. For each \nJulian Date we give the ``true'' right ascension ($\\alpha$) and declination \n($\\delta$) of the planet, obtained using an algorithm by \n\\citet{vanF_Pul79}. These coordinates are accurate to $\\pm$1\\arcmin.\nNote that these coordinates will correspond to the equinox of date in the years\n2015 or 2016. To convert these coordinates to ecliptic latitude ($\\beta$) and\nlongitude ($\\lambda$) we need the following formulas from spherical\ntrigonometry \\citep[][p. 40]{Smart77}:\n\n\\begin{equation}\n\\rm{sin}(\\beta) \\; = \\; \\rm{sin}(\\delta) ~\\rm{cos}(\\epsilon) - \n\\rm{cos}(\\delta) ~\\rm{sin}(\\alpha) ~\\rm{sin}(\\epsilon) \\; ;\n\\end{equation}\n\n\\begin{equation}\n\\rm{sin}(\\lambda) \\; = \\; \\frac{\\rm{cos}(\\delta) ~\\rm{sin}(\\alpha) ~\\rm{cos}(\\epsilon) + \n\\rm{sin}(\\delta) ~\\rm{sin}(\\epsilon)}{\\rm{cos}(\\beta)} \\; ;\n\\end{equation}\n\n\\begin{equation}\n\\rm{cos}(\\lambda) \\; = \\; \\frac{\\rm{cos}(\\delta) ~\\rm{cos}(\\alpha)}{\\rm{cos}(\\beta)} \\; .\n\\end{equation}\n\n\\noindent\n$\\epsilon$ is the obliquity of the ecliptic, 23\\arcdeg ~26\\arcmin ~21\\arcsec.406\nfor the year 2000. 
Using the {\\sc atan2} function in FORTRAN or Python \nwith arguments sin($\\lambda$) and cos($\\lambda$), we obtain the ecliptic \nlongitude in the correct quadrant.\n\nTable \\ref{data} also gives the observed right ascension, declination, \necliptic longitude, and ecliptic latitude of Mars derived from the cross \nstaff measurements, along with the number of reference stars used and the \nvalue for each date of the Sun's ecliptic longitude. The values of the \nSun's longitude were calculated using the second method of \\citet[][p. \n80]{Meeus88}. This method uses the Sun's mean anomaly, but apparently does \n{\\em not} use the Earth's orbital eccentricity. On occasion we desired \none more sufficiently bright reference star and used the {\\em derived} \nposition of Saturn as that reference position.\n\nConsider two celestial objects with equatorial coordinates ($\\alpha _1$, $\\delta _1$) and\n($\\alpha _2$, $\\delta _2$). The angular separation ($\\theta$) between two objects is:\n\n\\begin{equation}\n\\rm{cos} (\\theta) \\; = \\; \\rm{sin}(\\delta _1) ~\\rm{sin}(\\delta _2) +\n\\rm{cos}(\\delta _1) ~\\rm{cos}(\\delta _2) ~\\rm{cos}(\\alpha _1 - \\alpha _2) \\; .\n\\end{equation}\n\nNext consider a spherical quadrilateral that is bounded by starting and ending \nright ascensions, and starting and ending declinations. The quadrilateral is \ndivided into a grid, given a nominal increment in each coordinate of 0.01 deg. \nWe used a computer program of our devising that uses the coordinates of two \nreference stars and the measured angular distance of a planet from each of these \nstars to determine the coordinates of the planet. If the coordinates of the \nstars are J2000.0 coordinates, then the derived right ascension and declination \nof the planet also have J2000.0 coordinates.\n\nThe derived ecliptic coordinates of Mars are shown in Figure \\ref{f2}. The \nsolid line in the plot shows the locus of ``true'' positions from \n\\citet{vanF_Pul79}. 
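Equations (3) through (6) can be scripted directly; a sketch (function names are ours; the atan2 call fixes the quadrant of $\lambda$ as described above):

```python
import math

EPS_J2000 = 23.0 + 26.0/60.0 + 21.406/3600.0  # obliquity of the ecliptic (deg)

def equatorial_to_ecliptic(alpha, delta, eps=EPS_J2000):
    """Equations (3)-(5): right ascension and declination (deg) to
    ecliptic longitude and latitude (deg)."""
    a, d, e = map(math.radians, (alpha, delta, eps))
    beta = math.asin(math.sin(d)*math.cos(e) - math.cos(d)*math.sin(a)*math.sin(e))
    sin_lam = (math.cos(d)*math.sin(a)*math.cos(e) + math.sin(d)*math.sin(e)) / math.cos(beta)
    cos_lam = math.cos(d)*math.cos(a) / math.cos(beta)
    lam = math.degrees(math.atan2(sin_lam, cos_lam)) % 360.0
    return lam, math.degrees(beta)

def angular_separation(a1, d1, a2, d2):
    """Equation (6): angular separation (deg) between two objects."""
    a1, d1, a2, d2 = map(math.radians, (a1, d1, a2, d2))
    c = math.sin(d1)*math.sin(d2) + math.cos(d1)*math.cos(d2)*math.cos(a1 - a2)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))
```

As a check, a point at $\alpha = 90$ deg, $\delta = \epsilon$ lies on the ecliptic and maps to $\lambda = 90$ deg, $\beta = 0$.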
The earliest observations are of lesser quality. On \nthese occasions we only used two reference stars, and were getting used to a new \nobserving procedure. On almost all subsequent occasions we used more than two \nreference stars. Two of these first three observations are easily shown to be \noutliers. Our first three observations will thus be excluded from subsequent \nanalysis.\n\nAnother thing to note about Figure \\ref{f2} is that Mars was north of \nthe ecliptic until JD $\\approx$ 2,457,504 (April 25, 2016) and then was \nlocated south of the ecliptic. In other words, the orbit of Mars is \ninclined to the ecliptic. The modern value of the orbital inclination \nis 1.850 deg \\citep[][on p. 295]{Tho_etal00}. For our purposes here we will\nassume that Mars' orbit is coplanar with the ecliptic.\n\nGiven that most positions of Mars listed in Table \\ref{data} were \nderived from angular separations with respect to three to five reference \nstars, almost all derived right ascensions and declinations have \neasy-to-calculate internal random errors. These are sometimes as small \nas $\\pm$0.01 deg (which we do not really believe). On one occasion (JD \n2457399.9840) the internal random error for right ascension was \n$\\pm$0.23 deg and the internal random error of declination was $\\pm$0.47 \ndeg. Typical internal random errors for right ascension and declination \nare $\\sigma _{\\alpha} \\approx \\sigma _{\\delta} \\approx \\pm~0.10$ deg.\n\nSince we have ``true'' positions of Mars from \\citet{vanF_Pul79} we can \nmake a direct estimate of the accuracy of our observations. The easiest \nway to do this is to precess the ecliptic longitudes from column 4 in \nTable \\ref{data} to equinox J2000 by subtracting 50.25 arc seconds per year \ntimes the number of years from JD 2,451,543.5 (January 0.0, 2000) to the \ndate of observation. The ecliptic latitudes require no precession \ncorrection. 
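The precession correction just described amounts to a one-line function (a sketch; note that January 0.0, 2000 corresponds to JD 2,451,543.5):

```python
def precess_to_j2000(lam_deg, jd):
    """Shift an ecliptic longitude of date (deg) to equinox J2000 by
    removing 50.25 arcsec per year of accumulated precession since
    January 0.0, 2000 (JD 2451543.5)."""
    years = (jd - 2451543.5) / 365.25
    return lam_deg - (50.25 / 3600.0) * years
```

Sixteen years of precession amounts to about 0.22 deg, comparable to the random errors quoted above.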
The subsequent differences of ecliptic longitude and \nlatitude (``observed'' {\\em minus} ``true'') give mean differences of $-0.062 \n\\pm 0.016$ deg in ecliptic longitude and $-0.037 \\pm0.027$ deg in \necliptic latitude. The standard deviations of the distributions of \ndifferences are: $\\sigma _{\\lambda} = \\pm$0.10 deg and $\\sigma _{\\beta} \n= \\pm$0.17 deg. The square root of the sum of squares of those errors \nis $\\pm$0.20 deg, or $\\pm$12\\arcmin. This is the accuracy of the \nposition of a bright object (such as a planet) obtainable with our \nsimple cross staff.\n\n\\section{Fitting the Data}\n\nIn Book 5, Chapter 19, of {\\em On the Revolutions of the Heavenly Spheres} \n\\citet{Cop1543} derived the perigee, mean, and apogee distances of Mars. \nHe obtained the values 1.374, 1.520, and 1.649 AU, respectively.\nThus, Copernicus knew the amount\nby which Mars' distance from the Sun varies, and his mean distance\nis very close to the modern value of the semi-major axis size of\nMars' orbit (1.5237 AU). Further discussion of Copernicus' model\ncan be found in Appendix \\ref{sec:comparison}.\n\nLet us consider the elliptical orbit of Mars. The equation of an\nellipse is:\n\n\\begin{equation}\nr \\; = \\; \\frac{a(1 \\; - \\; e^{2})}{1 \\; + \\; e \\; \\rm{cos} \\; \\theta} \\; ,\n\\end{equation}\n\n\\noindent\nwhere $r$ is the Mars-Sun distance, $a$ is the semi-major axis of\nthe ellipse, $e$ is the eccentricity, and angle $\\theta$ = 0 when Mars is \nat perihelion.\n\nThe velocity along the orbit \\citep[][Eq. 2.36]{Car_Ost07} is\n\n\\begin{equation}\nv^2 \\; = \\; G (M_{\\odot} + M_{Mars}) \\left(\\frac{2}{r} \\; - \\; \\frac{1}{a} \\right) \\; .\n\\end{equation}\n\n\\noindent\nSince the mass of Mars is $\\approx$ 3.23 $\\times$ 10$^{-7}$ M$_{\\odot}$\n\\citep[][p. 295]{Tho_etal00}, for our purposes here we shall ignore it.\n\nWe wish to calculate the position of Mars at increments of one Earth day\nstarting at the moment of its perihelion. 
At perihelion r = r$_{min}$ = $a (1 - e)$.\nUsing the known semi-major axis size and eccentricity of Mars' orbit,\nwe can calculate the maximum velocity at perihelion with Equation 8.\nOn perihelion day traveling at velocity\n$v_{max}$ Mars moves 0.6349 degrees along its orbit as viewed from the Sun.\nThis allows us to calculate the constant $h$ for Mars using Equation 1.\nThen, by alternating use of Equations 7 and 1 we can calculate $r$ and $\\theta$\nfor Mars each day along its orbit. The X-Y coordinates are obtained simply:\n$X = r$ cos($\\theta$) and $Y = r$ sin($\\theta$).\n\nFor the Earth let us adopt a circular orbit. (The eccentricity of the Earth's\norbit is actually about 0.0167.) On average, the Earth moves 360.0\/365.2422 = \n0.985647 degrees per day with respect to the Sun. This gives us another \nset of X-Y coordinates in the same coordinate system, with\nthe Sun at the origin and the +X-axis in the direction of Mars' perihelion.\n\nConsider Figure \\ref{f3}. According to the Solar System Dynamics Group, Horizons \nOn-Line Ephemeris System at Jet Propulsion Laboratory, the previous perihelion of \nMars occurred on Julian Date 2,457,003.8524 (December 12, 2014, at 08:27:29 UT). \nThe next perihelion occurs on Julian Date 2,457,691.0507 (October 29, 2016, at \n13:13:04 UT). For the moment we will take the perihelion dates as given. They \nare not directly observable, but the date of opposition of Mars essentially is. \nMars' ecliptic longitude differs from that of the Sun by 180 deg near the \nmid-time of its retrograde motion. From our data we find opposition to have \noccurred on May 22, 2016, at 2 hours UT. According to the {\\em Astronomical \nAlmanac} for 2016 opposition occurred on May 22nd at 11 hours UT. The agreement \nis reasonably good.\n\nMars' latest opposition occurred 526.9 days after the perihelion of December 12, \n2014. Call it 527 days. 
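A minimal sketch of this day-by-day scheme follows (our function name; standard values of $GM_{\odot}$ and the au are assumed, and the ellipse is evaluated in the standard form with numerator $a(1-e^{2})$, which gives $r = a(1-e)$ at $\theta = 0$):

```python
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter (m^3 s^-2)
AU = 1.495978707e11         # astronomical unit (m)
DAY = 86400.0               # seconds per day

def mars_track(a=1.5237, e=0.0934, n_days=687):
    """Alternate Equations (7) and (1) day by day: r follows from the
    ellipse at the current theta, then theta advances by h / r^2.
    Returns daily heliocentric (X, Y) positions in au, with +X toward
    perihelion."""
    a_m = a * AU
    r = a_m * (1.0 - e)                            # perihelion distance
    v = math.sqrt(GM_SUN * (2.0 / r - 1.0 / a_m))  # Eq. (8): v_max
    h = r * v * DAY                                # areal constant, per day
    theta, xy = 0.0, []
    for _ in range(n_days):
        r = a_m * (1.0 - e * e) / (1.0 + e * math.cos(theta))  # Eq. (7)
        xy.append((r / AU * math.cos(theta), r / AU * math.sin(theta)))
        theta += h / r**2                          # Eq. (1): dtheta = h / r^2
    return xy
```

The first daily step at perihelion comes out near 0.635 deg, matching the figure quoted above, and the track reaches the aphelion distance $a(1+e)$ about half a Martian year later.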
The X-Y coordinates of the day-by-day position of Mars \nin our coordinate system give $\\theta$ = 265.545 deg on the day of opposition. \nSince the Earth moves 0.985647 deg per day along its orbit, the 269th pair of \nEarth coordinates gives an angle $\\theta$ most closely matching that of Mars \n(265.139 vs. 265.545 deg, in fact). Given the index $i$ of the Earth's \ncoordinates, the corresponding index of Mars' coordinates for the same day is \nequal to $i$ + (527 $-$ 269).\n\nWe wrote a simple program in Python that calculates the X-Y coordinates of Mars\nfor each day starting at perihelion, and X-Y coordinates of the Earth. With \nan appropriate offset of the indices of the two sets of coordinates we can\nobtain the direction toward Mars {\\em from the Earth} for any given date. \nThen we can find an arithmetic offset between days-since-Mars-perihelion and Julian \nDate, and we can find another arithmetic offset to transform the angles obtained\nto ecliptic longitude. This is just an example of shifting a template in\ntwo coordinates to minimize the sum of squares of deviations from the\ndata to the offset template. This produces a goodness of fit parameter.\nThe square root of the goodness of fit parameter divided by the number of\ndata points minus 2 gives the RMS scatter of the fit of the template\nto the data. Varying the model parameters allows us to search for an\neven better fit to the data.\n\nOur program lets us generate a template of ecliptic longitude values\nvs. time. Using the official values of Mars' perihelion date, orbital\neccentricity, and orbit size given in Table \\ref{summary}, we can adjust\nthe X and Y axis values to minimize a goodness of fit parameter and \nmake a plot that fits the data well enough to the eye. 
The goodness of\nfit parameter is the sum of squares of differences between a template\nand the data, in other words, like $\\chi^2$ minimization, but with\nequal weights for all the points.\n\nWe wondered if a better fit to our data might be obtained with different fit \nparameters. In other words, we start by deriving the date of perihelion, using \nour data. In the top panel of Figure \\ref{f4} we show the goodness of fit \nparameter for a range of dates with respect to October 29.5. On the basis of our \ndata, we find the best fit is obtained for a perihelion date of October 25.4. The \ngoodness of fit parameter doubles, compared to the value at the minimum, for \nperihelion dates October 21.5 and October 29.3. Our perihelion date is therefore \nOctober 25.4 $\\pm$ 3.9, 2016 (UT). This is the equivalent of December 8.5, 2014 \n(the date of the previous perihelion). In what follows that is our effective\nperihelion date, allowing us to work {\\em forward} in time toward the \nsubsequent perihelion on October 25.4, 2016.\n\nHow robust is the eccentricity? In the middle panel of Figure \\ref{f4} we show \nthe goodness of fit parameter for a range of eccentricities ranging from 0.043 to \n0.133. The minimum of the sum of squares of residuals occurs for $e$ = 0.086,\nwith an uncertainty of $\\pm$0.010.\n\nFinally, we tried a number of values of the orbital semi-major axis size,\nranging from 1.50 to 1.55. The goodness of fit parameter is minimized for\n$a$ = 1.526 AU. Using our determination of the time of perihelion, the eccentricity, and a slight\nadjustment to the mean distance from the Sun, we obtain a\nsum of squares of residuals of 1.111. There were 42 data points\nused for the analysis, and 2 constraints (adjusting time to truncated Julian \nDate and adjusting angle to ecliptic longitude). The best fit to our data\ngives an RMS residual of $\\pm 10^{\\prime}.0$ for the fit of the ecliptic longitudes\nto the best model. 
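The quoted RMS follows directly from the goodness-of-fit numbers above:

```python
import math

# Sum of squared residuals 1.111 deg^2 over 42 points with 2 fitted offsets.
rms_deg = math.sqrt(1.111 / (42 - 2))
print(round(rms_deg * 60.0, 1))  # RMS in arcminutes, about 10
```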
This is only slightly better than the average difference\nbetween the data and the ``true'' values ($\\pm$12\\arcmin). \nIgnoring the eccentricity of the Earth's orbit was not a serious mistake.\n\nIn Figure \\ref{f5} we plot the observed ecliptic longitudes of Mars (for equinox\nJ2000) vs. the Julian Date. Low order polynomial fits to the two stationary\npoints indicate that retrograde motion lasted for 74.1 days. The range of\necliptic longitude between the stationary points was\n15.23 $\\pm$ 0.42 deg. We also show three fits of the data\nfor values of the orbital eccentricity of 0.053, 0.086, and 0.123. Even if this\nfigure is displayed in color, it is difficult to see that the middle locus is\nbetter than the other two. In Figure \\ref{f6} we show the differentials\n(data {\\em minus} model). Figure \\ref{f6} clearly shows that the\neccentricity cannot be as small as 0.053 or as large as 0.123. In both\ncases the model differs from the data by 1.0 to 1.5 degrees 190 days before\nopposition and 70 days after opposition. Given that the individual positions\nof Mars are good to $\\pm$0.2 deg on average, values of\nthe eccentricity of 0.053 or 0.123 are strongly rejected by the data. \n\n\\section{Conclusions}\n\nIn Table \\ref{summary} we summarize the values derived from our simple dataset\nbased on naked eye observations using a handheld cross staff. For\ncomparison we also give the official (``true'') values based on a much\nmore sophisticated orbital determination using the most modern methods. \nAdmittedly, in science we almost never have the ``true'' values for comparison. \nTypically, we can only calculate our internal random errors and estimate sources \nof systematic error.\n\nOur value of the orbital eccentricity of Mars (0.086 $\\pm$ 0.010) compares \nreasonably well with the official modern value (0.0934). 
While our data are not \naccurate enough to {\\em prove} that Mars' orbit is an ellipse, if we fit the data \nwith an ellipse, the eccentricity must be near 0.09. Thus, a dataset based on \nnaked eye observations can be shown to be in strong agreement with Kepler's First \nLaw.\n\n\\acknowledgments\n\nWe made use of the SIMBAD database, operated at CDS, Strasbourg, France.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzswp b/data_all_eng_slimpj/shuffled/split2/finalzswp new file mode 100644 index 0000000000000000000000000000000000000000..fc0ce3cb557d29be365b48e0c3313afbad8e0b4c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzswp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nPropulsion of lightweight spacecraft to sub-relativistic speeds that greatly exceed $30 ~\\rm km\\,{\\rm s}^{-1}$ (or $10^{-4}c$) attainable by chemical rockets is a promising frontier for space exploration (\\citealt{Turyshev:2020to}; \\citealt{2020arXiv200703530F}). \nRecently, \\cite{2017ApJ...834L..20C} proposed a novel method to measure the mass of planets via interferometry by an array of relativistic spacecraft, envisioned by the Breakthrough Starshot initiative\\footnote{https:\/\/breakthroughinitiatives.org\/Initiative\/3}. \\cite{2018AcAau.152..370P} suggested a precursor that will launch slower spacecraft at $v\\sim 0.01c$ to explore the Solar system. \\cite{Witten:2020tc} proposed to use a sub-relativistic spacecraft of speeds $v\\gtrsim 0.001c$ to indirectly probe Planet 9 through its gravitational influence on the spacecraft trajectory. \\cite{2020ApJ...895L..35H} calculated the turbulent noise associated with drag and magnetic forces from the interstellar medium (ISM) at these high speeds. \n\nAs a relativistic spacecraft moves through the solar system, it is heated by its interaction with gas particles as well as solar wind and radiation (\\citealt{2017ApJ...837....5H}). 
Here, we study the detectability of the hot surface of a relativistic spacecraft from its long-wavelength thermal emission. In particular, we aim to derive the minimum size and speed of the spacecraft that could be detected with a high-sensitivity telescope.\n\nThe structure of the paper is as follows. In Section \\ref{sec:model}, we describe the model of calculations, including spacecraft heating, cooling, and its thermal emission. In Section \\ref{sec:results}, we present our numerical results of the surface temperature and emission flux, and discuss the minimum sizes of objects detectable by telescopes. In Section \\ref{sec:discuss}, we discuss implications of our findings.\n\n\n\n\\section{Model}\\label{sec:model}\nLet us first evaluate the equilibrium temperature of the spacecraft due to collisional and radiative heating by absorption of interstellar starlight and solar radiation. Collisional heating is dominated by H and He which carry most of the gas mass. The rate of collisional heating to the spacecraft surface moving at a speed $v$ can be written as,\n\\begin{eqnarray}\n\\frac{dE_{\\rm coll}}{dt} = \\frac{1.4n_{\\rm H}m_{\\rm H}v^{3}\\pi R^{2}}{2}\n\\simeq 9.86\\times 10^{4}\\left(\\frac{n_{\\rm H}}{1\\,{\\rm {cm}}^{-3}}\\right) \\left(\\frac{R}{1\\,{\\rm {cm}}}\\right)^{2}\\left(\\frac{v}{0.1c}\\right)^{3} \\,{\\rm {erg}}\\,{\\rm s}^{-1},\\label{eq:dEdt_coll}\n\\ena\nwhere $\\pi R^{2}$ is the frontal surface area (for a radius $R$) of the spacecraft, $n_{\\rm H}$ is the hydrogen density, and a factor of $1.4$ accounts for the $10\\%$ abundance of He relative to H. \n\nBeyond the heliopause, at a distance $D>D_{\\rm hp}=100\\,{\\rm {au}}$, from the Sun, the hydrogen density is $n_{\\rm H}=n_{\\rm ISM}$ with $n_{\\rm ISM}$ the ISM density. Inside the heliosphere, $n_{\\rm H}=n_{\\rm sw}+n_{\\rm neu}$ with $n_{\\rm neu}$ being the neutral hydrogen density and $n_{\\rm sw}$ being the proton density of the solar wind. 
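As a quick numerical check of Equation (1), a sketch (our function name; proton mass and $c$ in cgs units):

```python
import math

M_H = 1.6726e-24   # proton mass (g)
C_CM = 2.9979e10   # speed of light (cm/s)

def collisional_heating(n_H, R_cm, v_over_c):
    """Equation (1): kinetic heating rate (erg/s) of the frontal surface
    pi R^2 moving at v through hydrogen of density n_H (cm^-3); the
    factor 1.4 accounts for the 10% helium abundance."""
    v = v_over_c * C_CM
    return 1.4 * n_H * M_H * v**3 * math.pi * R_cm**2 / 2.0
```

For $n_{\rm H} = 1$ cm$^{-3}$, $R = 1$ cm, and $v = 0.1c$ this returns close to the $9.86\times 10^{4}$ erg s$^{-1}$ quoted above (the small difference traces to the adopted constants).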
We take $n_{\\rm neu}= 0.1\\,{\\rm {cm}}^{-3}$ (\\citealt{2009SSRv..146..235F}). The mean proton density of the solar wind at a distance $D$ is described by $n_{\\rm sw}=n_{0}(D\/1\\,{\\rm {au}})^{-2}\\,{\\rm {cm}}^{-3}$ with $n_{0}\\approx 6\\,{\\rm {cm}}^{-3}$ being the density at $D=1\\,{\\rm {au}}$ (\\citealt{Venzmer:2018gy}). Note that heating by heavy elements and dust is negligible on average because these carry only a small fraction of the gas mass.\n\nThe heating by cosmic rays (CRs) of particle energy $E$ and flux $F_{\\rm CR}$ is given by,\n\\begin{eqnarray}\n\\frac{dE_{\\rm CR}}{dt} = F_{\\rm CR}E\\pi R^{2}\\simeq 0.02\\left(\\frac{E}{1\\rm GeV}\\right)\\left(\\frac{F_{\\rm CR}}{1\\,{\\rm {cm}}^{-2}\\,{\\rm s}^{-1}}\\right)\\left(\\frac{R}{1\\,{\\rm {cm}}}\\right)^{2}\\,{\\rm {erg}}\\,{\\rm s}^{-1}.\\label{eq:dECR}\n\\ena\nThe CRs measured near the Earth have $F_{\\rm CR}\\approx 1\\,{\\rm {cm}}^{-2}\\,{\\rm s}^{-1}$ (\\citealt{1985A&A...144..147L}) and $E=1$ GeV, yielding subdominance of CR heating relative to collisional heating for $v\\gtrsim 10^{-3}$c.\n \nWe assume that the local radiation field has the same spectrum as the interstellar radiation field (ISRF) in the solar neighborhood \\citep{1983A&A...128..212M}, with a total radiation energy density of $u_{\\rm MMP}\\approx 8.64\\times 10^{-13} \\,{\\rm {erg}}\\,{\\rm {cm}}^{-3}$, where the local radiation field is expressed as a factor, $U$, times the energy density of the ISRF, $u_{\\rm MMP}$. The heating rate of the frontal surface by the background radiation is given by,\n\\begin{eqnarray}\n\\frac{dE_{\\rm abs}}{dt}=\\pi R^{2}cUu_{\\rm MMP}\\epsilon_{\\star},\\label{eq:dErad}\n\\ena\nwhere $\\epsilon_{\\star}$ is the surface absorption efficiency averaged over the starlight radiation spectrum. 
\n\nThe local radiation field includes the averaged ISRF and the solar radiation, with\n\\begin{eqnarray}\nU=1+\\frac{L_{\\star}}{4\\pi D^{2}cu_{\\rm MMP}}=1+0.5\\times 10^{8}\\left(\\frac{L_{\\star}}{L_{\\odot}}\\right)\\left(\\frac{1\\,{\\rm {au}}}{D}\\right)^{2},\n\\ena\nimplying an enhancement by the Sun within $D=10^{4}\\,{\\rm {au}}$. \n\nThe surface of the spacecraft at a temperature $T_{s}$ also emits thermal radiation, which results in radiative cooling at a rate:\n\\begin{eqnarray}\n\\frac{dE_{\\rm emiss}}{dt}=\\int d\\nu \\pi R^{2}Q_{\\rm abs, \\nu}\\pi B_{\\nu}(T_{s})\n=\\pi R^{2}\\epsilon_{T} \\sigma T_{s}^{4},~~~~\n\\label{eq:dEcdt}\n\\ena\nwhere,\n\\begin{eqnarray}\n\\epsilon_{T}=\\frac{\\int d\\nu Q_{\\rm abs,\\nu}B_{\\nu}(T_{s})}{\\int d\\nu B_{\\nu}(T_{s})}, \\label{eq:Qabsavg}\n\\ena\nis the Planck-spectrum averaged absorption efficiency, and $Q_{\\rm abs}$ is the absorption efficiency of the spacecraft material.\n \nThe energy balance between radiative heating and cooling yields the surface equilibrium temperature,\n\\begin{eqnarray}\nT_{\\rm s}=\\left(\\frac{cUu_{\\rm MMP}+1.4 n_{\\rm H}m_{\\rm H}v^{3}\/2+F_{\\rm CR}E_{\\rm CR}}{\\sigma}\\right)^{1\/4}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{1\/4}.\\label{eq:Tsp}\n\\ena\n\nFor the ISM and $v\\gtrsim 10^{-3}c$, collisional heating dominates. 
Equation (\\ref{eq:Tsp}) yields the surface temperature,\n\\begin{eqnarray}\nT_{\\rm s}\\simeq \\left(\\frac{1.4 n_{\\rm H}m_{\\rm H}v^{3}}{2\\sigma} \\right)^{1\/4}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{1\/4}\\simeq 272.6\\left(\\frac{n_{\\rm H}}{10\\,{\\rm {cm}}^{-3}}\\right)^{1\/4}\\left(\\frac{v}{0.1c}\\right)^{3\/4}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{1\/4}\\rm K,\\label{eq:Tsp_coll}\n\\ena\nimplying that the emission spectrum peaks at a wavelength,\n\\begin{eqnarray}\n\\lambda_{\\rm max}=\\frac{b}{T_{\\rm s}}\\simeq 10.6\\left(\\frac{n_{\\rm H}}{10\\,{\\rm {cm}}^{-3}}\\right)^{-1\/4}\\left(\\frac{v}{0.1c}\\right)^{-3\/4}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{-1\/4}\\mum,\n\\ena \nwhere $b$ is Wien's constant.\n\nWhen the spacecraft enters the Solar system, radiative heating dominates. Equation (\\ref{eq:Tsp}) yields the equilibrium temperature\n\\begin{eqnarray}\nT_{\\rm s}\\simeq 388.7\\left(\\frac{D}{1\\,{\\rm {au}}}\\right)^{-1\/2}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{1\/4}\\,{\\rm K},\\label{eq:Tsp_rad}\n\\ena\nimplying a peak wavelength\n\\begin{eqnarray}\n\\lambda_{\\rm max}=\\frac{b}{T_{\\rm s}}\\simeq 7.45\\left(\\frac{D}{1\\,{\\rm {au}}}\\right)^{1\/2}\\left(\\frac{\\epsilon_{\\star}}{\\epsilon_{T}}\\right)^{-1\/4}\\mum.\n\\ena \n\nEquations (\\ref{eq:Tsp_coll}) and (\\ref{eq:Tsp_rad}) imply that radiative heating by the Sun dominates over collisional heating at $D<10\\,{\\rm {au}}$ for a speed $v=0.1c$. \n\nThe bolometric radiation flux observed on Earth from the spacecraft is,\n\\begin{eqnarray}\nF=\\frac{1}{4}\\theta^{2}\\sigma T_{\\rm s}^4=\\frac{1}{4}\\left(\\frac{R}{D}\\right)^{2}\\sigma T_{\\rm s}^{4},\n\\ena\nwhere $\\theta=(R\/D)$ is the angular size of the spacecraft on the sky, namely its radius, $R$, divided by its distance, $D$. 
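The closed-form temperatures above are easy to tabulate; a sketch (our function names; cgs constants) that reproduces the quoted 272.6 K and 388.7 K scales for $\epsilon_{\star}\/\epsilon_{T}=1$:

```python
import math

M_H = 1.6726e-24     # proton mass (g)
C_CM = 2.9979e10     # speed of light (cm/s)
SIGMA = 5.6704e-5    # Stefan-Boltzmann constant (cgs)
U_MMP = 8.64e-13     # ISRF energy density (erg cm^-3)
B_WIEN = 0.28978     # Wien's constant (cm K)

def radiation_factor(D_au):
    """U = 1 + 0.5e8 (1 au / D)^2 for a solar-luminosity star."""
    return 1.0 + 0.5e8 / D_au**2

def T_collisional(n_H, v_over_c, eps_ratio=1.0):
    """Equation (9): collision-dominated surface temperature (K);
    eps_ratio = epsilon_star / epsilon_T."""
    v = v_over_c * C_CM
    return (1.4 * n_H * M_H * v**3 / (2.0 * SIGMA) * eps_ratio)**0.25

def T_radiative(D_au, eps_ratio=1.0):
    """Equation (11): radiation-dominated surface temperature (K)."""
    return (C_CM * radiation_factor(D_au) * U_MMP / SIGMA * eps_ratio)**0.25

def peak_wavelength_um(T):
    """Wien displacement law, peak wavelength in microns."""
    return B_WIEN / T * 1.0e4
```

For $n_{\rm H}=10$ cm$^{-3}$ and $v=0.1c$ the collisional temperature comes out near 273 K with a peak near 10.6 micron, and at $D=1$ au the radiative temperature comes out near 389 K with a peak near 7.45 micron, consistent with the expressions above.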
For a temperature of $T_{s}=100\\,{\\rm K}$, the bolometric flux is,\n\\begin{eqnarray}\nF\\simeq 6.3\\times 10^{-24} \\left(\\frac{R}{1\\rm m}\\frac{100\\,{\\rm {au}}}{D}\\right)^{2}\\left(\\frac{T_{\\rm s}}{100\\rm K}\\right)^{4}~\\frac{\\rm erg}{\\,{\\rm {cm}}^{2}\\,{\\rm s}}.\n\\ena\n\nThe spectral flux density is given by \n\\begin{eqnarray}\nF_{\\nu}=\\frac{1}{4}\\theta^{2}\\pi B_{\\nu}(T_{\\rm s})=\\frac{1}{4}\\left(\\frac{R}{D}\\right)^{2}\\frac{2\\pi h\\nu^{3}}{c^{2}}\\frac{1}{\\,{\\rm {exp}}(h\\nu\/kT_{\\rm s})-1}.\\label{eq:Sflux}\n\\ena\n\n\n\n\n\\section{Results}\\label{sec:results}\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{f1.pdf}\n\\includegraphics[width=0.5\\textwidth]{f2.pdf}\n\\caption{Left panel: Surface temperature of the spacecraft as a function of its speed relative to the speed of light, $v\/c$, at different heliocentric distances, $D$. Right panel: Spectral flux density for different spacecraft speeds, assuming the object radius $R=10$ m and $D=1,10, 100$ au. Horizontal black lines mark the sensitivity of various telescopes.}\n\\label{fig:Teq}\n\\end{figure}\n\nThe left panel of Figure \\ref{fig:Teq} shows the surface temperature as a function of spacecraft speed in units of the speed of light, ($v\/c$), for different heliocentric distances, and assuming $\\epsilon_{\\star}\/\\epsilon_{T}=1$. The temperature is dictated by radiative heating at low speeds. When the speed is sufficiently large, collisional heating becomes important and $T_{\\rm sp}$ increases with $v\/c$. For example, at $D=10$ au, the temperature increases with speed for $v\\gtrsim 0.02c$. At $D=1000$ au, the temperature at $v>0.005c$ is higher than that for $D=10, 100$ au due to the stronger collisional heating.\n\nThe right panel of Figure \\ref{fig:Teq} shows the spectral flux density from Equation (\\ref{eq:Sflux}) as a function of frequency (wavelength) for different spacecraft speeds, $v\/c$, assuming a spacecraft radius $R=10$ m. 
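Equations (12)-(14) are likewise straightforward to evaluate; a sketch (our function names; cgs constants; `math.expm1` keeps the Planck factor accurate at low frequency):

```python
import math

SIGMA = 5.6704e-5    # erg cm^-2 s^-1 K^-4
H_PL = 6.6261e-27    # erg s
K_B = 1.3807e-16     # erg / K
C_CM = 2.9979e10     # cm / s
AU_CM = 1.4960e13    # cm
JY = 1.0e-23         # erg cm^-2 s^-1 Hz^-1

def bolometric_flux(R_m, D_au, T):
    """Equations (12)-(13): bolometric flux (erg cm^-2 s^-1) of a disk
    of radius R_m (meters) at temperature T (K) seen from D_au (au)."""
    theta2 = (R_m * 100.0 / (D_au * AU_CM))**2
    return 0.25 * theta2 * SIGMA * T**4

def flux_density_uJy(R_m, D_au, T, wavelength_um):
    """Equation (14): spectral flux density in microjansky."""
    nu = C_CM / (wavelength_um * 1.0e-4)
    theta2 = (R_m * 100.0 / (D_au * AU_CM))**2
    planck = 2.0 * H_PL * nu**3 / C_CM**2 / math.expm1(H_PL * nu / (K_B * T))
    return 0.25 * theta2 * math.pi * planck / JY * 1.0e6
```

For $R=1$ m, $D=100$ au, and $T_{s}=100$ K the bolometric flux evaluates to about $6.3\times 10^{-24}$ erg cm$^{-2}$ s$^{-1}$, matching Equation (13).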
The frequency bands of several telescopes are shown, including ALMA (Atacama Large Millimeter\/submillimeter Array), JWST (James Webb Space Telescope), and LSST (Legacy Survey of Space and Time). The sensitivity is $S=8.7~\\mu$Jy at $\\lambda\\approx 870\\mum$ for ALMA\\footnote{https:\/\/almascience.eso.org\/proposing\/sensitivity-calculator}, $0.01~\\mu$Jy at $\\lambda\\sim 0.6-28.3\\mum$ with integration time of $10^{4}$ s for JWST\\footnote{https:\/\/www.stsci.edu\/files\/live\/sites\/www\/files\/home\/news\/newsletters\/documents\/2013-volume030-issue02.pdf} (\\citealt{Anonymous:vWxVySok}; \\citealt{2015PASP..127..686G}). LSST can reach a limiting magnitude of $m=25$ with two 15 s exposures (\\citealt{2019ApJ...873..111I}), yielding a sensitivity of $S=3631\\times 10^{-m\/2.5}~{\\rm Jy}\\sim 0.36~\\mu$Jy for the $\\sim 0.3-1\\mum$ wavelength range.\n\nWe find that it is feasible to detect 10 meter-sized objects with JWST only when they enter the Solar system. At a distance of 100 au, JWST can detect an object of size $\\sim 400$ m moving at $v\\gtrsim 0.1c$. For travel through the Solar wind near the Earth's orbit, i.e., at a distance of $D\\sim$ 1 au, objects larger than $50$ cm are detectable. At $D=10\\,{\\rm {au}}$, objects larger than $100$ m can be detected by JWST.\n\n\\section{Discussion}\\label{sec:discuss}\nMost sky surveys dismiss objects that move too fast on the sky. A sub-relativistic nearby object would move at an angular speed of $(v\/D)=2.48(v\/10^{-3}c)(10\\,{\\rm {au}}\/D)~{\\rm arcsec\/min}$, and this would limit the integration time within the field of view of telescopes. Hence the sensitivity will be lower than for steady point sources, unless a special tracking program is applied.\n\nThe situation improves for objects closer to the Sun, since the solar wind density scales inversely with the square of the distance from the Sun. 
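Two of the quoted numbers, the angular speed normalization and the LSST flux limit implied by the AB zero point, can be reproduced in a few lines (unit conversions assumed, not taken from the paper):

```python
c = 2.998e10     # speed of light [cm s^-1]
au = 1.496e13    # astronomical unit [cm]

# Angular speed of a nearby sub-relativistic object: v = 1e-3 c at D = 10 au
v, D = 1e-3 * c, 10 * au
omega = v / D * 206265.0 * 60.0    # rad/s -> arcsec/min
print(omega)     # ~2.48 arcsec/min

# LSST limiting magnitude m = 25 converted to a flux density via the
# AB zero point of 3631 Jy
m = 25.0
S = 3631.0 * 10 ** (-m / 2.5)
print(S * 1e6)   # ~0.36 microJy
```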
The planned size of the transmitter or sails of Starshot spacecraft is a few meters and the goal of Starshot is to probe the habitable zone around a star, similar to the region around the Earth's orbit. Thus, the mid- and far-infrared spectral regimes are optimal for detecting analogous spacecraft in our vicinity.\n\n\n\\acknowledgments \nThis work is supported in part by a grant from the Breakthrough Prize Foundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{intro}\n\nKendall's tau is a measure of association which evaluates every copula $ C : [0,1]^d\\to[0,1] $ by a real number $ \\kappa[C] $ \nsatisfying $ - 1\/(2^{d-1}\\!-\\!1) \\leq \\kappa[C] \\leq 1 $. \n\n\\bigskip\nIn the present paper we study Kendall's tau for a class of copulas related to the order statistic $ \\XXX_{:d} = (X_{1:d},\\dots,X_{d:d}) $ \nof a random vector $ \\XXX = (X_1,\\dots,X_d) $ with identical univariate marginal distributions. \nIn this case, \nevery copula $ C $ for $ \\XXX $ determines a copula $ C_{:d} $ for $ \\XXX_{:d} $; \nsee \nNavarro and Spizzichino [2010] for the special case where the distribution functions of the coordinates of $ \\XXX $ are continuous and \nDietz et al.\\ [2016] for the general case. \nSince the construction of the order transform $ C_{:d} $ of a copula $ C $ \nis determined by a map transforming every random vector into its order statistic, \nwe shall study Kendall's tau for the order transform $ C_{:d} $ of an arbitrary copula $ C $. \n\n\\bigskip\nAs a general result, \nwe show that the order transform $ C_{:d} $ of a copula $ C $ satisfies $ \\kappa[C] \\leq \\kappa[C_{:d}] $ \n(Theorem \n\\ref{t.Kendall}). \nWe also show that the inequality is strict when $ C $ is the product copula \n(Theorem \n\\ref{t.product}), \nwhich corresponds to the case where the coordinates of $ \\XXX $ are also independent. 
\nBy contrast, \nthe inequality becomes an equality when $ C $ is the upper Fr{\\'e}chet--Hoeffding bound (and hence maximizes Kendall's tau) \nor a copula which is symmetric and minimizes Kendall's tau \n(Corollary \n\\ref{c.bounds} and Theorem \n\\ref{t.Kendall}). \nThis result is of interest \nsince the computation of $ C_{:d} $ or $ \\kappa[C_{:d}] $ may be tedious and is not needed in the case where $ \\kappa[C] = \\kappa[C_{:d}] $ and \nsince there exist many symmetric copulas minimizing Kendall's tau; \nsee \nFuchs et al.\\ [2018]. \n\n\\bigskip\nThe major part of this paper is devoted to the order transform $ \\Pi_{:d} $ of the product copula $ \\Pi $. \nWe first determine $ \\kappa[\\Pi_{:d}] $ \n(Theorem \n\\ref{t.product}) and then present some identities for the computation of $ \\kappa[\\varrho_K(\\Pi_{:d})] $, \nwhere $ \\varrho_K(\\Pi_{:d}) $ denotes the multivariate margin of $ \\Pi_{:d} $ with respect to the coordinates \nin $ K\\subseteq\\{1,\\dots,d\\} $ with $ |K|\\geq2 $ \n(Theorem \n\\ref{t.product-K}). \nIn particular, \nwe obtain an explicit formula for Kendall's tau of $ \\varrho_{\\{1,\\dots,k\\}}(\\Pi_{:d}) $ \n(Corollary \n\\ref{c.product-lowertail}), \nwhich is a copula for the lower $ k $ coordinates of $ \\XXX_{:d} $, \nand we show that, \ndue to a general reflection principle \n(Theorem \n\\ref{t.product-reflection}), \nthis formula is also valid for Kendall's tau of $ \\varrho_{\\{d-k+1,\\dots,d\\}}(\\Pi_{:d}) $, \nwhich is a copula for the upper $ k $ coordinates of $ \\XXX_{:d} $. \nWe thus extend certain results for the case $ |K|=2 $; \nsee \nAv{\\'e}rous et al.\\ [2005] and \nNavarro and Balakrishnan [2010]. \n\n\\bigskip\nThis paper is organized as follows: \nSection \\ref{preliminaries} collects some definitions and results on copulas and related topics which will be needed in this paper. \nSection \\ref{kendall} provides a brief discussion of Kendall's tau and Kendall's distribution function. 
\nSection \\ref{ot-copula} starts with the definition of the order transform of the Euclidean space, \nwhich turns every random vector into its order statistic, \nand proceeds with the construction of the order transform of a copula. \nIn Section \\ref{ot-kendall} we present some results on Kendall's tau for the order transform of an arbitrary copula and \nin Section \\ref{ot-product} we study Kendall's tau for the order transform of the product copula and its multivariate margins. \nSome auxiliary results needed in Section \\ref{ot-product} are established in the Appendix. \n\n\\bigskip\nThroughout this paper we shall use the following notation: \nLet $ \\I := [0,1] $ and \nlet $ \\leb $ denote the Lebesgue measure on $ \\BB(\\R) $. \nFurthermore, \nlet $ d\\geq2 $ be an integer, \nwhich will be kept fixed, \nand \nlet $ \\leb^d $ denote the Lebesgue measure on $ \\BB(\\R^d) $. \nWe denote \nby $ \\eee_1,\\dots,\\eee_d $ the standard basic unit vectors in $ \\R^d $, \nby $ \\zero $ the vector in $ \\R^d $ with all coordinates being equal to $ 0 $ and \nby $ \\eins $ the vector in $ \\R^d $ with all coordinates being equal to $ 1 $.\nFor $ \\xxx,\\yyy\\in\\R^d $, \nwe write $ \\xxx\\leq\\yyy $ if $ x_k \\leq y_k $ holds for every $ k\\in\\{1,\\dots,d\\} $. \nThen we have $ \\I^d = [\\zero,\\eins] $. \nOn the collection of all real--valued maps on a set $ S $, \nthe pointwise order $ \\leq $ is defined by letting $ g \\leq h $ if $ g(s) \\leq h(s) $ holds for every $ s \\in S $. \n\n\\bigskip\nDue to the central role of the order transform $ T $ of the Euclidean space in the construction of the order transform of a copula $ C $, \nthe symbol $ C_{:d} $ used in the Abstract and in this Introduction will henceforth be replaced by $ C_T $. \n\n\n\n\n\\section{Preliminaries}\n\\label{preliminaries}\n\nIn this section, \nwe recall some definitions and results on \ncopulas, \ncopula measures, \ngroups of transformations of copulas and \na biconvex form for copulas. 
\nFor further details we refer to \nFuchs [2014; 2016]. \n\n\n\\subsection*{Copulas}\n\nFor $ K\\subseteq\\{1,...,d\\} $, \nwe consider the map $ \\eeta_K : \\I^d\\times\\I^d \\to \\I^d $ given coordinatewise by\n\\begin{eqnarray*}\n (\\eeta_K(\\uuu,\\vvv))_k \n& := & \\begin{cases} \n u_k & k \\in \\{1,...,d\\} \\setminus K \\\\\n v_k & k \\in K \n \\end{cases} \n\\end{eqnarray*}\nand for $ k\\in\\{1,\\dots,d\\} $ we put $ \\eeta_k := \\eeta_{\\{k\\}} $. \n\n\\bigskip\nA \n\\emph{copula} is a function $ C: \\I^{d} \\to \\I $ satisfying the following conditions: \n\\begin{romlist}\n\\item The inequality $ \\sum_{K \\subseteq \\{1,...,d\\}} (-1)^{d-|K|}\\,C(\\eeta_K(\\uuu,\\vvv)) \\geq 0 $ \nholds for all $ \\uuu,\\vvv\\in\\I^d $ such that $ \\uuu\\leq\\vvv $. \n\\item The identity $ C(\\eeta_k(\\uuu,\\zero)) = 0 $ holds for every $ k\\in\\{1,...,d\\} $ and every $ \\uuu\\in\\I^d $. \n\\item The identity $ C(\\eeta_k(\\eins,\\uuu)) = u_k $ holds for every $ k\\in\\{1,...,d\\} $ and every $ \\uuu\\in\\I^d $. \n\\end{romlist}\nThis definition of a copula is in accordance with the literature;\nsee, \ne.\\,g., \nDurante and Sempi [2016] and \nNelsen [2006]. \nThe collection $ \\CC $ of all copulas is convex.\n\n\\bigskip\nThe following copulas are of particular interest: \n\\begin{hylist}\n\\item The \n\\emph{upper Fr{\\'e}chet--Hoeffding bound} $ M $ given by \n$ M(\\uuu) := \\min\\{u_1,\\dots,u_d\\} $ \nis a copula and every copula $ C $ satisfies $ C \\leq M $. \n\\item The \n\\emph{product copula} $ \\Pi $ given by \n$ \\Pi(\\uuu) := \\prod_{k=1}^d u_k $ \nis a copula. \n\\item In the case $ d=2 $, \nthe \n\\emph{lower Fr{\\'e}chet--Hoeffding bound} $ W $ given by \n$ W(\\uuu) := \\max\\{ \\sum_{k=1}^d u_k + 1 - d, 0 \\} $ \nis a copula and every copula $ C $ satisfies $ W \\leq C $. 
\n\\end{hylist}\n\n\n\\subsection*{A Group of Transformations of Copulas}\n\nLet $ \\Phi $ denote the collection of all transformations $ \\CC\\to\\CC $ \nand consider the composition $ \\circ: \\Phi\\times\\Phi \\to \\Phi $ given by $ (\\varphi_2\\circ\\varphi_1)(C) := \\varphi_2(\\varphi_1(C)) $ \nand the map $ \\iota\\in\\Phi $ given by $ \\iota(C) := C $. \nThen $ (\\Phi,\\circ) $ is a semigroup with neutral element $ \\iota $. \nFor $ i,j,k\\in\\{1,...,d\\} $ with $ i \\neq j $, \nwe define the maps $ \\pi_{i,j},\\nu_k : \\CC\\to\\CC $ by letting\n\\begin{eqnarray*}\n (\\pi_{i,j}(C))(\\uuu) \n& := & C(\\eeta_{\\{i,j\\}}(\\uuu,u_j\\,\\eee_i+u_i\\,\\eee_j)) \\\\*\n (\\nu_k (C))(\\uuu) \n& := & C(\\eeta_k(\\uuu,\\eins)) - C(\\eeta_k(\\uuu,\\eins\\!-\\!\\uuu)) \n\\end{eqnarray*}\nEach of these maps is an involution and there exists \n\\begin{hylist}\n\\item a smallest subgroup $ \\Gamma^\\pi $ of $ \\Phi $ containing every $ \\pi_{i,j} $, \n\\item a smallest subgroup $ \\Gamma^\\nu $ of $ \\Phi $ containing every $ \\nu_k $ and \n\\item a smallest subgroup $ \\Gamma $ of $ \\Phi $ containing $ \\Gamma^\\pi\\cup\\Gamma^\\nu $. \n\\end{hylist}\nThe group $ \\Gamma^\\nu $ is commutative. \nMoreover, \nthe \n\\emph{total reflection} \n\\begin{eqnarray*}\n \\tau \n& := & \\bcirc_{k\\in\\{1,...,d\\}} \\nu_{k} \n\\end{eqnarray*}\ntransforms every copula into its survival copula and satisfies $ \\tau(M)=M $, \nand we put $ \\Gamma^\\tau := \\{\\iota,\\tau\\} $. \nThe total reflection is used in the definition of the \n\\emph{concordance order} $ \\leq_c $ on $ \\CC $ \nwhich is defined by letting $ C \\leq_c D $ if and only if \n$ C \\leq D $ and $ \\tau(C) \\leq \\tau(D) $. 
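Condition (i) of the copula definition above can be exercised numerically; a minimal sketch, assuming the standard encoding of the $C$-volume of a box as the inclusion--exclusion sum over coordinate subsets $K$:

```python
import math
import random
from itertools import product as iproduct

def volume(C, u, v):
    """C-volume of the box [u, v]: sum over all subsets K of coordinates of
    (-1)^(d-|K|) C(eta_K(u, v)), i.e. condition (i) of the definition."""
    d = len(u)
    total = 0.0
    for mask in iproduct((0, 1), repeat=d):   # mask[k] = 1  <=>  k in K
        w = [v[k] if mask[k] else u[k] for k in range(d)]
        total += (-1) ** (d - sum(mask)) * C(w)
    return total

M = lambda w: min(w)                            # upper Frechet-Hoeffding bound
Pi = lambda w: math.prod(w)                     # product copula
W = lambda w: max(sum(w) + 1 - len(w), 0.0)     # lower bound (a copula for d = 2)

random.seed(0)
for C in (M, Pi, W):
    for _ in range(1000):
        x = [random.random() for _ in range(2)]
        y = [random.random() for _ in range(2)]
        u = [min(a, b) for a, b in zip(x, y)]
        v = [max(a, b) for a, b in zip(x, y)]
        assert volume(C, u, v) >= -1e-12        # every box carries nonnegative mass
```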
\n\n\\bigskip\nThe group $ \\Gamma $ is a representation of the hyperoctahedral group, \nwhich also has a well--known geometric representation: \n\n\\bigskip\nLet $ \\tilde\\Phi $ denote the collection of all transformations $ \\I^d\\to\\I^d $ \nand consider the composition $ \\diamond: \\tilde\\Phi\\times\\tilde\\Phi\\to\\tilde\\Phi $ \ngiven by $ (\\tilde\\varphi_2\\diamond\\tilde\\varphi_1)(\\uuu) := \\tilde\\varphi_2(\\tilde\\varphi_1(\\uuu)) $ \nand the map $ \\tilde\\iota\\in\\tilde\\Phi $ \ngiven by $ \\tilde\\iota(\\uuu) := \\uuu $. \nThen $ (\\tilde\\Phi,\\diamond) $ is a semigroup with neutral element $ \\tilde\\iota $. \nFor $ i,j,k\\in\\{1,...,d\\} $ with $ i \\neq j $, \nwe define the maps $ \\tilde\\pi_{i,j},\\tilde\\nu_k : \\I^d\\to\\I^d $ by letting\n\\begin{eqnarray*}\n \\tilde\\pi_{i,j}(\\uuu) \n& := & \\eeta_{\\{i,j\\}}(\\uuu,u_j\\,\\eee_i+u_i\\,\\eee_j) \\\\*\n \\tilde\\nu_k (\\uuu) \n& := & \\eeta_k(\\uuu,\\eins\\!-\\!\\uuu) \n\\end{eqnarray*}\nEach of these maps is an involution and there exists \n\\begin{hylist}\n\\item a smallest subgroup $ \\tilde\\Gamma^\\pi $ of $ \\tilde\\Phi $ containing every $ \\tilde\\pi_{i,j} $, \n\\item a smallest subgroup $ \\tilde\\Gamma^\\nu $ of $ \\tilde\\Phi $ containing every $ \\tilde\\nu_k $ and \n\\item a smallest subgroup $ \\tilde\\Gamma $ of $ \\tilde\\Phi $ containing $ \\tilde\\Gamma^\\pi\\cup\\tilde\\Gamma^\\nu $. \n\\end{hylist}\nThe groups $ \\Gamma $ and $ \\tilde\\Gamma $ are related by an isomorphism $ J : (\\Gamma,\\circ) \\to (\\tilde\\Gamma,\\diamond) $ \nsatisfying $ J(\\pi_{i,j}) = \\tilde\\pi_{i,j} $ and $ J(\\nu_k) = \\tilde\\nu_k $ for all $ i,j,k\\in\\{1,...,d\\} $. \nFor $ \\gamma\\in\\Gamma $ we put $ \\tilde\\gamma := J(\\gamma) $, \nand for $ \\tilde\\gamma\\in\\tilde\\Gamma $ we put $ \\gamma := J^{-1}(\\tilde\\gamma) $. \nThen $ \\pi(C) = C\\circ\\tilde\\pi $ holds for every $ \\pi\\in\\Gamma^\\pi $ and every $ C\\in\\CC $. 
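The hyperoctahedral group has order $2^d\,d!$; this count can be reproduced by a short closure computation in which each element of $\tilde\Gamma$ is encoded as a signed permutation (a sketch for illustration only; the encoding is an assumption, not the paper's notation):

```python
def compose(g2, g1):
    """Compose signed permutations g2 o g1; each g = (perm, flips) acts on u by
    (g u)_k = u_{perm[k]} if flips[k] == 0, else 1 - u_{perm[k]}."""
    p2, f2 = g2
    p1, f1 = g1
    p = tuple(p1[p2[k]] for k in range(len(p2)))
    f = tuple(f2[k] ^ f1[p2[k]] for k in range(len(p2)))
    return (p, f)

def group_order(d):
    """Size of the group generated by the reflections nu_k and swaps pi_{i,j}."""
    ident = (tuple(range(d)), (0,) * d)
    gens = [(tuple(range(d)), tuple(1 if i == k else 0 for i in range(d)))
            for k in range(d)]                      # reflections nu_k
    for i in range(d):                              # transpositions pi_{i,j}
        for j in range(i + 1, d):
            p = list(range(d))
            p[i], p[j] = p[j], p[i]
            gens.append((tuple(p), (0,) * d))
    group, frontier = {ident}, [ident]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(s, g)
                if h not in group:
                    group.add(h)
                    nxt.append(h)
        frontier = nxt
    return len(group)

print(group_order(2), group_order(3))   # 8 and 48, i.e. 2^d * d!
```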
\n\n\n\\subsection*{Copula Measures}\n\nSince every copula $ C\\in\\CC $ has a unique extension to a distribution function $ \\R^d\\to\\I $, \nthere exists a unique probability measure $ Q^C : \\BB(\\I^d)\\to\\I $ satisfying \n\\begin{eqnarray*}\n Q^C[[\\zero,\\uuu]] \n& = & C(\\uuu) \n\\end{eqnarray*}\nfor every $ \\uuu\\in\\I^d $. \nThe probability measure $ Q^C $ is said to be the \n\\emph{copula measure} with respect to $ C $. \nIt satisfies $ Q^C[(\\uuu,\\vvv)] = Q^C[[\\uuu,\\vvv]] $ for all $ \\uuu,\\vvv\\in\\I^d $ such that $ \\uuu\\leq\\vvv $, \nand $ Q^{\\gamma(C)} = (Q^C)_{\\tilde\\gamma} $ holds for every $ \\gamma\\in\\Gamma $. \n\n\n\\subsection*{A Biconvex Form for Copulas}\n\nConsider the map $ [.\\,,.]: \\CC\\times\\CC \\to \\R $ given by \n\\begin{eqnarray*}\n [C,D] \n& := & \\int_{\\I^d} C(\\uuu)\\,dQ^D(\\uuu) \n\\end{eqnarray*}\nThe map $ [.\\,,.] $ is in either argument linear with respect to convex combinations and is therefore called a \n\\emph{biconvex form}. \nMoreover, \nthe map $ [.\\,,.] $ is in either argument monotone with respect to the concordance order. \nIt satisfies $ 0 \\leq [C,D] \\leq 1\/2 $ for all $ C,D\\in\\CC $, \nand the bounds are attained since $ [M,M] = 1\/2 $ and since $ [\\nu(M),\\nu(M)] = 0 $ holds for every $ \\nu\\in\\Gamma^\\nu\\setminus\\Gamma^\\tau $. \nWe also note that $ [\\tau(C),\\tau(D)] = [D,C] $ holds for all $ C,D\\in\\CC $. \n\n\n\n\n\\section{Kendall's Tau and Kendall's Distribution Function}\n\\label{kendall}\n\nThe map $ \\kappa : \\CC\\to\\R $ given by \n\\begin{eqnarray*}\n \\kappa[C] \n& := & \\frac{2^{d}}{2^{d-1}-1}\\,\\biggl( [C,C] - \\frac{1}{2^d} \\biggr) \n\\;\\,=\\,\\; \\frac{2^d\\,[C,C]-1}{2^{d-1}-1}\n\\end{eqnarray*}\nis called \n\\emph{Kendall's tau}; \nsee \nNelsen [2002]. \nKendall's tau satisfies \n\\begin{eqnarray*}\n -\\,\\frac{1}{2^{d-1}-1} \n&\\leq& \\kappa[C] \n\\;\\,\\leq\\,\\; 1 \n\\end{eqnarray*}\nand the bounds are attained. 
\nThe following result is obvious from the properties of the biconvex form and will be tacitly used throughout this paper: \n\n\\bigskip\n\\begin{lemma}{}\n\\label{l.maxmin}\n\\begin{onelist}\n\\item Kendall's tau is monotone with respect to the concordance order. \n\\item A copula $ C $ maximizes Kendall's tau if and only if $ [C,C] = 1\/2 $. \n\\item A copula $ C $ minimizes Kendall's tau if and only if $ [C,C] = 0 $. \n\\end{onelist}\n\\end{lemma}\n\n\\bigskip\nFor a copula $ C $, \nthe function $ K_C: \\R\\to\\I $ given by \n\\begin{eqnarray*}\n K_C(t) \n& := & Q^C[\\{ \\uuu\\in\\I^d \\colon C(\\uuu) \\leq t \\}] \n\\;\\,=\\,\\; (Q^C)_C[[0,t]] \n\\end{eqnarray*}\nis said to be \n\\emph{Kendall's distribution function} with respect to $ C $, \nwhich is well--known from the literature. \nIt is easy to see that $ K_C $ is indeed a distribution function \nwhich satisfies $ K_C(0) = 0 $ and $ K_C(1) = 1 $ as well as $ t \\leq K_C(t) $ for every $ t\\in\\I $. \nThe following result implies that Kendall's tau can be expressed in terms of Kendall's distribution function: \n\n\\bigskip\n\\begin{lemma}{}\n\\label{distribution.l-Kendall-2}\nThe identity \n$$ [C,C] = \\int_{\\I} \\Bigl( 1 - K_C(t) \\Bigr) d\\leb(t) $$\nholds for every copula $ C $. \nIn particular, \n$ \\kappa[C] \\leq \\kappa[D] $ holds for any two copulas $ C $ and $ D $ satisfying $ K_D \\leq K_C $. \n\\end{lemma}\n\n\\bigskip\nWe refer to Fuchs et al.\\ [2018] for further details on the results of this section. \nThe comparison of copulas via Kendall's distribution function was considered by \nCap{\\'e}ra{\\`a} et al.\\ [1997]. 
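A worked instance of the lemma: for $d=2$ and $C=\Pi$, Kendall's distribution function is the classical $K_{\Pi}(t)=t-t\ln t$ (a standard fact imported from the literature, not derived here), and integrating $1-K_{\Pi}$ should give $[\Pi,\Pi]=1\/4$, hence $\kappa[\Pi]=0$.

```python
import math

# Midpoint-rule evaluation of [Pi,Pi] = int_0^1 (1 - K(t)) dt
# with K(t) = t - t*ln(t) for the bivariate product copula.
n = 200000
s = 0.0
for i in range(n):
    t = (i + 0.5) / n          # midpoint of the i-th cell of (0, 1)
    K = t - t * math.log(t)
    assert K >= t               # K_C(t) >= t, as stated above
    s += 1.0 - K
s /= n
print(s)   # ~0.25
```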
\n\n\n\n\n\\section{The Order Transform of a Copula} \n\\label{ot-copula}\n\nThroughout this section, \nconsider the map $ T : \\overline\\R^d\\to\\overline\\R^d $ which is defined coordinatewise by letting \n\\begin{eqnarray*}\n (T(\\xxx))_k \n& := & \\min_{J\\subseteq\\{1,\\dots,d\\},\\;|J|=k} \\; \\max_{l \\in J}\\; x_l \n\\end{eqnarray*}\nThen the coordinates of $ T(\\xxx) $ satisfy $ (T(\\xxx))_1 \\leq \\dots \\leq (T(\\xxx))_d $, \nand $ \\xxx\\leq\\yyy $ implies $ T(\\xxx) \\leq T(\\yyy) $. \nMoreover, \nthe map $ T $ is measurable and satisfies $ T^{-1}(\\I^d) = \\I^d $. \nThe map $ T $ is called the \n\\emph{order transform} and associates with every random vector its \n\\emph{order statistic}. \n\n\\bigskip\nFor a distribution function $ F : \\overline\\R^d\\to\\I $, \nlet $ F_1,\\dots,F_d $ denote the univariate marginal distribution functions of $ F $ and \nlet $ F_1^\\leftarrow,\\dots,F_d^\\leftarrow $ denote the corresponding lower quantile functions. \nConsider also the maps $ \\FFF : \\overline\\R^d\\to\\I^d $ and $ \\FFF^\\leftarrow : \\I^d\\to\\overline\\R^d $ which are defined coordinatewise \nby letting \n$$ (\\FFF(\\xxx))_k \n := F_k(x_k) \n\\qquad\\text{and}\\qquad \n (\\FFF^\\leftarrow(\\uuu))_k \n := F_k^\\leftarrow(u_k) \n$$\nFurthermore, \nlet $ Q^F $ denote the distribution $ \\BB(\\overline\\R^d) \\to \\I $ corresponding to $ F $ and \nlet $ F_T $ denote the distribution function corresponding to $ (Q^F)_T $. \nThen we have $ Q^{F_T} = (Q^F)_T $. \n\n\\bigskip\nFor the remainder of this section, \nconsider a fixed copula $ C $ and let $ H^C $ denote the distribution function extending $ C $. \nThen we have \n\\begin{eqnarray*}\n H^C \n& = & C\\circ\\HHH^C \n\\end{eqnarray*}\nSince the univariate marginal distribution functions of $ H^C $ are continuous, \nthose of $ H^C_T := (H^C)_T $ are continuous as well. 
\nTherefore, \nthere exists a unique copula $ C_T $ satisfying \n\\begin{eqnarray*}\n H^C_T \n& = & C_T\\circ\\HHH^C_T \n\\end{eqnarray*}\nand hence $ C_T = H^C_T \\circ (\\HHH^C_T)^\\leftarrow $. \nThe copula $ C_T $ was introduced and studied by \nDietz et al.\\ [2016] and is called the \n\\emph{order transform} of the copula $ C $. \n\n\\bigskip\nThe following theorem is due to \nDietz et al.\\ [2016; Theorem 5.2]: \n\n\\bigskip\n\\begin{theorem}{}\n\\label{equal.t}\nLet $ F $ be a distribution function satisfying $ F_1=\\ldots=F_d $. \nIf $ C $ is a copula for $ F $, \nthen $ C_T $ is a copula for $ F_T $. \n\\end{theorem}\n\n\\bigskip\nWe complete this section with two technical results which will be needed later. \n\n\\bigskip\n\\begin{lemma}{}\n\\label{l-image}\n$ H^C_T $ satisfies \n$ (Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow})_{(\\HHH^C_T)^\\leftarrow} = Q^{H^C_T} $ and \n$ (Q^{C_T})_{C_T} = (Q^{H^C_T})_{H^C_T} $. \n\\end{lemma}\n\n\\bigskip\n\\begin{proof}\nFor every $ \\xxx\\in\\overline\\R^d $ we have \n\\begin{eqnarray*}\n (Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow})_{(\\HHH^C_T)^\\leftarrow}[[-\\iinfty,\\xxx]] \n& = & Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow} \\Bigl[ \\Bigl\\{ \\uuu\\in\\I^d \\Bigm| (\\HHH^C_T)^{\\leftarrow}(\\uuu)\\leq \\xxx \\Bigr\\} \\Bigr] \\\\\n& = & Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow} \\Bigl[ \\Bigl\\{ \\uuu\\in\\I^d \\Bigm| \\uuu \\leq\\HHH^C_T(\\xxx) \\Bigr\\} \\Bigr] \\\\[.5ex]\n& = & ( H^C_T\\circ(\\HHH^C_T)^\\leftarrow \\circ \\HHH^C_T )(\\xxx) \\\\[1ex]\n& = & H^C_T (\\xxx) \\\\*[1ex]\n& = & Q^{H^C_T} [[-\\iinfty,\\xxx]] \n\\end{eqnarray*}\nThis yields the first identity. \nSince $ C_T = H^C_T\\circ(\\HHH^C_T)^\\leftarrow $, \nthe second identity follows from the first. 
\n\\end{proof}\n\n\\bigskip\n\\begin{lemma}{}\n\\label{l.commute}\nLet $ S : \\overline\\R^d\\to\\overline\\R^d $ be a measurable map satisfying $ S(\\I^d)\\subseteq\\I^d $ and \n$$ ((\\HHH^C_T)^\\leftarrow \\circ S)(\\uuu) = (S \\circ (\\HHH^C_T)^\\leftarrow)(\\uuu) $$\nfor every $ \\uuu\\in\\I^d $. \nThen \n$$ \\int_{\\I^d} ( C_T \\circ S)(\\uuu)\\,dQ^{ C_T}(\\uuu) \n = \\int_{\\overline\\R^d} (H^C_T \\circ S)(\\xxx)\\,dQ^{H^C_T}(\\xxx) \n$$\n\\end{lemma}\n\n\\bigskip\n\\begin{proof}\nLemma \n\\ref{l-image} yields \n\\begin{eqnarray*}\n \\int_{\\I^d} ( C_T \\circ S)(\\uuu)\\,d Q^{ C_T} (\\uuu) \n& = & \\int_{\\I^d} (H^C_T \\circ (\\HHH^C_T)^\\leftarrow \\circ S)(\\uuu)\\,d Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow}(\\uuu) \\\\\n& = & \\int_{\\I^d} (H^C_T \\circ S \\circ (\\HHH^C_T)^\\leftarrow)(\\uuu)\\,d Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow}(\\uuu) \\\\\n& = & \\int_{\\overline\\R^d} (H^C_T \\circ S)(\\xxx) \\,d(Q^{H^C_T\\circ(\\HHH^C_T)^\\leftarrow})_{(\\HHH^C_T)^\\leftarrow}(\\xxx) \\\\*\n& = & \\int_{\\overline\\R^d} (H^C_T \\circ S)(\\xxx) \\,d Q^{H^C_T} (\\xxx) \n\\end{eqnarray*}\nas was to be shown. \n\\end{proof}\n\n\\bigskip\nThe previous results are remarkable since $ C_T $ and the restriction of $ H^C_T $ to $ \\I^d $ are usually distinct. \n\n\n\n\n\\section{The Order Transform and Kendall's Tau}\n\\label{ot-kendall}\n\nThroughout this section, \nwe consider a fixed copula $ C $. \nThe discussion of Kendall's tau for the order transform $ C_T $ of $ C $ basically relies on the discussion of $ [C_T,C_T] $. \n\n\\bigskip\nThe following result provides a comparison of the copulas $ C $ and $ C_T $ in terms of Kendall's distribution function: \n\n\\bigskip\n\\begin{theorem}{}\n\\label{t.order}\n$ C_T $ satisfies \n$ K_{C_T} \\leq K_C $. 
\nIn particular, \n$$ [C,C] \\leq [C_T,C_T] $$\n\\end{theorem}\n\n\\bigskip\n\\begin{proof}\nLemma \n\\ref{l-image} yields \n$$ (Q^{C_T})_{C_T} \n = ( Q^{H^C_T} )_{H^C_T} \n = ((Q^{H^C})_T)_{H^C_T} \n = (Q^{H^C})_{H^C_T \\circ T} \n$$\nand for every $ \\uuu\\in\\I^d $ we have \n$$ C(\\uuu)\n = Q^{H^C} \\Bigl[ \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| \\vvv \\leq \\uuu \\Bigr\\} \\Bigr]\n\\leq Q^{H^C} \\Bigl[ \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| T(\\vvv) \\leq T(\\uuu) \\Bigr\\} \\Bigr]\n = (H^C_T \\circ T)(\\uuu)\n$$\nFor every $ t\\in\\I $, \nwe thus obtain \n\\begin{eqnarray*}\n K_{C_T}(t) \n& = & (Q^{C_T})_{C_T}[[0,t]] \t \\\\[1ex]\n& = & (Q^{H^C})_{H^C_T \\circ T}[[0,t]] \\\\[.5ex]\n& = & Q^C \\Bigl[ \\Bigl\\{ \\uuu\\in\\I^d \\Bigm| (H^C_T \\circ T)(\\uuu) \\leq t \\Bigr\\} \\Bigr] \\\\\n&\\leq& Q^C \\Bigl[ \\Bigl\\{ \\uuu\\in\\I^d \\Bigm| C (\\uuu) \\leq t \\Bigr\\} \\Bigr] \\\\*[1ex]\n& = & K_C(t) \n\\end{eqnarray*}\nwhich proves the first inequality. \nThe second inequality then follows from Lemma \n\\ref{distribution.l-Kendall-2}. 
\n\\end{proof}\n\n\\bigskip\nThe next result provides several useful representations of $ [C_T,C_T] $: \n\n\\bigskip\n\\begin{theorem}{} \n\\label{t.biconvex-1}\n$ C_T $ satisfies \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & \\int_{\\I^d} Q^C \\Bigl[ \n \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| T(\\vvv) \\leq T(\\uuu) \\Bigr\\} \\Bigr] \\,dQ^C(\\uuu) \\\\\n& = & \\int_{\\I^d} Q^C\\biggl[ \\bigcup\\nolimits_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \n \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| \\vvv \\leq\\tilde\\pi(\\uuu) \\Bigr\\} \\biggr]\\,dQ^C(\\uuu) \\\\*\n& = & \\int_{\\I^d} H^C_T(T(\\uuu)) \\,dQ^C(\\uuu) \n\\end{eqnarray*}\n\\end{theorem}\n\n\\bigskip\n\\begin{proof}\nLemma \n\\ref{l-image} yields \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & \\int_{\\I^d} C_T(\\uuu) \\,dQ^{ C_T}(\\uuu) \\\\\n& = & \\int_{\\overline\\R^d} H^C_T(\\xxx) \\,dQ^{H^C_T}(\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^{H^C_T}\\Bigl[ \\Bigl\\{ \\sss\\in\\overline\\R^d \\Bigm| \\sss \\leq \\xxx \\Bigr\\} \\Bigr] \\,dQ^{H^C_T}(\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^{H^C} \\Bigl[ \\Bigl\\{ \\yyy\\in\\overline\\R^d \\Bigm| T(\\yyy) \\leq \\xxx \\Bigr\\} \\Bigr] \\,dQ^{H^C_T}(\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^{H^C} \\Bigl[ \\Bigl\\{ \\yyy\\in\\overline\\R^d \\Bigm| T(\\yyy) \\leq T(\\zzz) \\Bigr\\} \\Bigr] \\,dQ^{H^C} (\\zzz) \\\\*\n& = & \\int_{\\I^d} Q^C \\Bigl[ \\Bigl\\{ \\vvv\\in \\I^d \\Bigm| T(\\vvv) \\leq T(\\uuu) \\Bigr\\} \\Bigr] \\,dQ^C (\\uuu) \n\\end{eqnarray*}\nMoreover, \nfor any $ \\uuu,\\vvv\\in\\I^d $ there exist some $ \\tilde\\pi_\\uuu, \\tilde\\pi_\\vvv \\in \\tilde\\Gamma^\\pi $ such that \n$ T(\\uuu) = \\tilde\\pi_\\uuu(\\uuu) $ and \n$ T(\\vvv) = \\tilde\\pi_\\vvv(\\vvv) $. \nSince $ \\tilde\\Gamma^\\pi $ is a group, \nthis yields $ T(\\vvv) \\leq T(\\uuu) $ if and only if there exists some $ \\tilde\\pi\\in\\tilde\\Gamma^\\pi $ such that $ \\vvv\\leq\\tilde\\pi(\\uuu) $. 
\nThis yields \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & \\int_{\\I^d} Q^C\\Bigl[ \n \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| T(\\vvv) \\leq T(\\uuu) \\Bigr\\} \\Bigr] \\,dQ^C(\\uuu) \\\\*\n& = & \\int_{\\I^d} Q^C\\biggl[ \\bigcup\\nolimits_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \n \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| \\vvv \\leq \\tilde\\pi(\\uuu) \\Bigr\\} \\biggr]\\,dQ^C(\\uuu) \n\\end{eqnarray*}\nFinally, \nwe have \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & \\int_{\\I^d} Q^C \\Bigl[ \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| T(\\vvv) \\leq T(\\uuu) \\Bigr\\} \\Bigr] \\,dQ^C(\\uuu) \\\\\n& = & \\int_{\\I^d} Q^{H^C_T}\\Bigl[ \\Bigl\\{ \\www\\in\\I^d \\Bigm| \\www \\leq T(\\uuu) \\Bigr\\} \\Bigr] \\,dQ^C(\\uuu) \\\\*\n& = & \\int_{\\I^d} H^C_T (T(\\uuu)) \\,dQ^C(\\uuu) \n\\end{eqnarray*}\nwhich completes the proof. \n\\end{proof}\n\n\\bigskip\n\\begin{corollary}{}\n\\label{c.biconvex}\n\\begin{onelist}\n\\item If $ [C_T,C_T] = 0 $, \nthen $ [C,C] = 0 $.\n\\item If $ C $ is symmetric with $ [C,C] = 0 $, \nthen $ [C_T,C_T] = 0 $. \n\\end{onelist}\n\\end{corollary}\n\n\\bigskip\n\\begin{proof}\nBecause of Theorem \n\\ref{t.order} we have $ [C,C] \\leq [C_T,C_T] $, \nwhich yields (1). 
\nFrom Theorem \n\\ref{t.biconvex-1} we obtain \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & \\int_{\\I^d} Q^C\\biggl[ \\bigcup\\nolimits_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| \\vvv\\leq\\tilde\\pi(\\uuu) \\Bigr\\} \\biggr]\\,dQ^C(\\uuu) \\\\\n&\\leq& \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\int_{\\I^d} Q^C\\Bigl[ \\Bigl\\{ \\vvv\\in\\I^d \\Bigm| \\vvv\\leq\\tilde\\pi(\\uuu) \\Bigr\\} \\Bigr ]\\,dQ^C(\\uuu) \\\\\n& = & \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\int_{\\I^d} (C \\circ \\tilde\\pi)(\\uuu) \\,dQ^C(\\uuu) \\\\\n& = & \\sum_{ \\pi\\in \\Gamma^\\pi} \\int_{\\I^d} (\\pi(C))(\\uuu) \\,dQ^C(\\uuu) \\\\*\n& = & \\sum_{ \\pi\\in \\Gamma^\\pi} [\\pi(C),C] \n\\end{eqnarray*}\nIf $ C $ is symmetric, \nthen the previous inequality becomes $ [C_T,C_T] \\leq d!\\,[C,C] $ since $ |\\Gamma^\\pi| = d! $\\,. \nThis yields (2). \n\\end{proof}\n\n\\bigskip\nThe following result summarizes Theorem \n\\ref{t.order} and Corollary \n\\ref{c.biconvex} in terms of Kendall's tau: \n\n\\bigskip\n\\begin{theorem}{}\n\\label{t.Kendall}\nKendall's tau satisfies \n$$ \\kappa[C] \\leq \\kappa[C_T] $$\nIn particular: \n\\begin{onelist}\n\\item If $ C_T $ minimizes Kendall's tau, \nthen $ C $ minimizes Kendall's tau as well. \n\\item If $ C $ is symmetric and minimizes Kendall's tau, \nthen $ C_T $ minimizes Kendall's tau as well. \n\\end{onelist}\n\\end{theorem}\n\\pagebreak\n\n\\bigskip\nWe note in passing that Theorem \n\\ref{t.Kendall} yields two results due to \nDietz et al.\\ [2016; Examples 4.2 and 4.3]: \n\n\\bigskip\n\\begin{corollary}{}\n\\label{c.bounds}\nThe upper Fr{\\'e}chet--Hoeffding bound satisfies $ M_T=M $, \nand in the bivariate case \nthe lower Fr{\\'e}chet--Hoeffding bound satisfies $ W_T=W $. 
\n\\end{corollary}\n\n\\bigskip\n\\begin{proof}\nIt has been shown by \nFuchs et al.\\ [2018; Theorems 3.2 and 3.3] \nthat the upper Fr{\\'e}chet--Hoeffding bound is the only copula maximizing Kendall's tau and \nthat in the bivariate case the lower Fr{\\'e}chet--Hoeffding bound is the only copula minimizing Kendall's tau. \nThe assertion follows. \n\\end{proof}\n\n\\bigskip\nTheorem \n\\ref{t.Kendall} provides conditions under which $ \\kappa[C] = \\kappa[C_T] $. \nBelow we shall show that the product copula satisfies $ \\kappa[\\Pi] < \\kappa[\\Pi_T] $; \nsee Theorem \n\\ref{t.product}. \n\n\\bigskip\nThe following example provides a copula $ D $ for which $ D \\not\\leq_c D_T $; \nit thus shows that Theorem \n\\ref{t.Kendall} cannot be obtained from the fact that Kendall's tau is monotone with respect to the concordance order $ \\leq_c $: \n\n\\bigskip\n\\begin{example}{}\n\\label{e.bounds}\nAssume that $ d=2 $ and consider the copula \n\\begin{eqnarray*}\n D \n& := & \\frac{M+W}{2} \n\\end{eqnarray*}\nLet $ F_{(0,1)} $ denote the distribution function corresponding to the uniform distribution on $ (0,1) $. 
\nAccording to \nDietz et al.\\ [2016; Examples 4.2 and 4.3], \nwe have \n\\begin{eqnarray*}\n H^D_T(\\xxx) \n& = & \\frac{1}{2}\\,H^M_T(\\xxx) + \\frac{1}{2}\\,H^W_T(\\xxx) \\\\*\n& = & \\frac{1}{2} \\, F_{(0,1)}(\\min\\{x_1,x_2\\}) \n + \\frac{1}{2} \\, \\Bigl( F_{(0,1)}(2x_1) + \\Bigl( 2\\,F_{(0,1)}(x_2)-1 \\Bigr)^+ -1 \\Bigr)^+ \n\\end{eqnarray*}\nand hence \n\\begin{eqnarray*}\n \\HHH^D_T(\\xxx) \n& = & \\leftmatrix{c} \n\t\t \\dps \\frac{1}{2}\\,F_{(0,1)} (x_1) + \\frac{1}{2}\\,F_{(0,1)}(2x_1) \\\\[2ex]\n\t\t \\dps \\frac{1}{2}\\,F_{(0,1)} (x_2) + \\frac{1}{2}\\,\\Bigl( 2\\,F_{(0,1)}(x_2)-1 \\Bigr)^+ \n\t\t\\rightmatrix \n\\end{eqnarray*}\nSince the univariate marginal distribution functions of $ H^D_T $ are continuous and strictly increasing on $ (0,1) $, \nwe obtain \n$$ D \n \\leftmatrix{c} \n 3\/8 \\\\\n 4\/8 \n \\rightmatrix \n = \\frac{3}{16} \n > \\frac{2}{16} \n = (H^D_T\\circ(\\HHH^D_T )^{-1}) \n \\leftmatrix{c} \n 3\/8 \\\\\n 4\/8 \n \\rightmatrix \n = D_T \n \\leftmatrix{c} \n 3\/8 \\\\\n 4\/8 \n \\rightmatrix \n$$\nand hence $ D \\not\\leq D_T $. \nSince $ d=2 $, \nthe concordance order agrees with the pointwise order and we obtain $ D \\not\\leq_c D_T $. \n\\end{example}\n\n\\bigskip\nThe following example shows that the map $ C \\mapsto C_T $ is not order preserving with respect to the concordance order: \n\n\\bigskip\n\\begin{example}{}\n\\label{e.order}\nAssume that $ d=2 $. \nFor every symmetric copula $ C : \\I^2\\to\\I $, \nTheorem \n\\ref{t.biconvex-1} yields \n\\begin{eqnarray*}\n [C_T,C_T] \n& = & 2\\,[C,C] - \\int_{\\I^2} C(\\min\\{u_1,u_2\\},\\min\\{u_1,u_2\\})\\,dQ^C(u_1,u_2) \n\\end{eqnarray*}\nand for the computation of $ [C,C] $ in the case where $ C $ is a shuffle we refer to \nFuchs et al.\\ [2018; Theorem 5.1]. 
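Two numerical claims from this section can be confirmed directly: the value $D(3\/8,4\/8)=3\/16$ used in Example \ref{e.bounds}, and the strict inequality $\kappa[\Pi]<\kappa[\Pi_T]$ for $d=2$, estimated here by the sample version of Kendall's tau over pairs $(U,V)$ and their order statistics $(\min,\max)$ (a Monte Carlo sketch, not a proof; the ordered estimate lands near $1\/3$):

```python
import random

def M(u, v): return min(u, v)
def W(u, v): return max(u + v - 1, 0.0)
def D(u, v): return 0.5 * (M(u, v) + W(u, v))

assert D(3/8, 4/8) == 3/16      # the value used in Example e.bounds

def kendall_sample(pairs):
    """Empirical Kendall's tau (d = 2): average sign of concordance
    over all pairs of observations (ties have probability zero here)."""
    n = len(pairs)
    conc = sum(1 if (a1 - a2) * (b1 - b2) > 0 else -1
               for i, (a1, b1) in enumerate(pairs)
               for (a2, b2) in pairs[i + 1:])
    return conc / (n * (n - 1) / 2)

random.seed(3)
samples = [(random.random(), random.random()) for _ in range(1500)]
ordered = [(min(u, v), max(u, v)) for (u, v) in samples]
tau_indep = kendall_sample(samples)   # ~0, i.e. kappa[Pi] = 0
tau_order = kendall_sample(ordered)   # clearly positive (near 1/3)
print(tau_indep, tau_order)
```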
\nThe copula $ A: \\I^2\\to\\I $ \n\\begin{center}\n\\begin{tikzpicture}[xscale=0.8,yscale=0.8] \n\\draw [thin] (0,0) -- (0,4);\n\\draw [thin] (1,0) -- (1,4);\n\\draw [thin] (2,0) -- (2,4);\n\\draw [thin] (3,0) -- (3,4);\n\\draw [thin] (4,0) -- (4,4);\n\\draw [thin] (0,0) -- (4,0);\n\\draw [thin] (0,1) -- (4,1);\n\\draw [thin] (0,2) -- (4,2);\n\\draw [thin] (0,3) -- (4,3);\n\\draw [thin] (0,4) -- (4,4);\n\\draw [thick] (0,2) -- (2,4);\n\\draw [thick] (2,0) -- (4,2);\n\\end{tikzpicture}\n\\end{center}\ndefined as the shuffle of $ M $ with respect to the shuffling structure $ \\{[\\aaa_i,\\bbb_i]\\}_{i\\in\\{1,\\dots,4\\}} $ with \n$$\\begin{array}{ccccccc}\n \\aaa_1 &=& (\\phantom{00}0,2\/4) && \\bbb_1 &=& (1\/4, 3\/4) \\\\\n \\aaa_2 &=& (1\/4 ,3\/4) && \\bbb_2 &=& (2\/4,\\phantom{00}1) \\\\\n \\aaa_3 &=& (2\/4,\\phantom{00}0) && \\bbb_3 &=& (3\/4 ,1\/4) \\\\\n \\aaa_4 &=& (3\/4, 1\/4) && \\bbb_4 &=& (\\phantom{00}1,2\/4) \n\\end{array}$$\nis symmetric and satisfies $ [A_T,A_T] = 1\/2 $, \nand the copula $ B : \\I^2\\to\\I $ \n\\begin{center}\n\\begin{tikzpicture}[xscale=0.8,yscale=0.8] \n\\draw [thin] (0,0) -- (0,4);\n\\draw [thin] (1,0) -- (1,4);\n\\draw [thin] (2,0) -- (2,4);\n\\draw [thin] (3,0) -- (3,4);\n\\draw [thin] (4,0) -- (4,4);\n\\draw [thin] (0,0) -- (4,0);\n\\draw [thin] (0,1) -- (4,1);\n\\draw [thin] (0,2) -- (4,2);\n\\draw [thin] (0,3) -- (4,3);\n\\draw [thin] (0,4) -- (4,4);\n\\draw [thick] (0,0) -- (1,1);\n\\draw [thick] (1,3) -- (2,4);\n\\draw [thick] (2,2) -- (3,3);\n\\draw [thick] (3,1) -- (4,2);\n\\end{tikzpicture}\n\\end{center}\ndefined as the shuffle of $ M $ with \n$$\\begin{array}{ccccccc}\n \\aaa_1 &=& (\\phantom{00}0,\\phantom{00}0) && \\bbb_1 &=& (1\/4, 1\/4) \\\\\n \\aaa_2 &=& (1\/4 , 3\/4) && \\bbb_2 &=& (2\/4,\\phantom{00}1) \\\\\n \\aaa_3 &=& (2\/4 , 2\/4) && \\bbb_3 &=& (3\/4 ,3\/4) \\\\\n \\aaa_4 &=& (3\/4 , 1\/4) && \\bbb_4 &=& (\\phantom{00}1,2\/4) \n\\end{array}$$\nis symmetric as well and satisfies $ [B_T,B_T] = 3\/8 $. 
\n\\\\\nIt is obvious that $ A \\leq B $. \nOn the other hand, \nwe have $ \\kappa[A_T] = 1 $, \nand it then follows from \nFuchs et al.\\ [2018; Theorem 3.2] \nthat $ A_T = M $.\nThis yields $ B_T \\leq M = A_T $, \nand from $ [B_T,B_T] < [A_T,A_T] $ we obtain $ A_T \\neq B_T $ and hence $ A_T \\not\\leq B_T $. \nSince $ d=2 $, \nthis yields $ A \\leq_c B $ and $ A_T \\not\\leq_c B_T $. \n\\end{example}\n\n\\bigskip\nReturning to the case of arbitrary dimension $ d $, \nwe conclude this section with another representation of $ [C_T,C_T] $ for a class of copulas \nwhich includes every copula whose copula measure is absolutely continuous with respect to Lebesgue measure. \nThis class includes the product copula, \nwhich will be studied in the following section. \n\n\\bigskip\nFor $ \\tilde\\pi\\in\\tilde\\Gamma^\\pi $, \ndefine\n\\begin{eqnarray*}\n A_{\\tilde\\pi} \n& := & \\Bigl\\{ \\uuu \\in \\I^d \\Bigm| (\\tilde\\pi(\\uuu))_i < (\\tilde\\pi(\\uuu))_{i+1} \\;\\text{\\rm for all}\\; i\\in\\{1,\\dots,d\\!-\\!1\\} \\Bigr\\} \n\\end{eqnarray*}\nThen every $ \\uuu \\in A_{\\tilde\\pi} $ satisfies $ T(\\uuu) = \\tilde\\pi(\\uuu) $ \nand the family $ \\fa{A}{{\\tilde\\pi}}{{\\tilde\\Gamma^\\pi}} $ is disjoint. \n\n\\bigskip\n\\begin{lemma}{} \n\\label{l-biconvex-2}\nAssume that $ C $ satisfies $ Q^C[ \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} A_{\\tilde\\pi} ] = 1 $.\nThen \n$$ [C_T,C_T] = \\sum_{\\pi\\in\\Gamma^\\pi} \\int_{T(\\I^d)} H^C_T (\\uuu) \\,dQ^{\\pi(C)}(\\uuu) $$\nIn particular, \nif $ C $ is also symmetric, \nthen \n$$ [C_T,C_T] = d!
\\int_{T(\\I^d)} H^C_T(\\uuu) \\,dQ^C(\\uuu) $\n\\end{lemma\n\n\\bigskip\n\\begin{proof}{}\nSince $ Q^C[ \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} A_{\\tilde\\pi} ] = 1 $, \nTheorem \n\\ref{t.biconvex-1} yields \n\\begin{eqnarray*} \n [C_T,C_T] \n& = & \\int_{\\I^d} H^C_T( T(\\uuu)) \\,dQ^C (\\uuu) \\\\\n& = & \\int_{\\I^d} \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\chi_{A_{\\tilde\\pi}}(\\uuu) \\, H^C_T( T(\\uuu)) \\,dQ^C (\\uuu) \\\\\n& = & \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\int_{\\I^d} \\chi_{A_{\\tilde\\pi}}(\\uuu) \\, H^C_T(\\tilde\\pi(\\uuu)) \\,dQ^C (\\uuu) \\\\*\n& = & \\sum_{\\pi\\in\\Gamma^\\pi} \\int_{T(\\I^d)} H^C_T( \\vvv ) \\,dQ^{\\pi(C)}(\\vvv) \n\\end{eqnarray*}\nThis proves the assertion.\n\\end{proof}\n\n\n\n\n\\section{The Order Transform of the Product Copula}\n\\label{ot-product}\n\nIn the present section we determine Kendall's tau for the order transform $ \\Pi_T $ of the product copula $ \\Pi $ \nand for certain marginals of the order transform. \nTo this end, \nwe first recall a formula for the distribution function $ H^\\Pi_T $; \nsee Dietz et al.\\ [2016; Example 5.3]: \n\n\\bigskip\n\\begin{lemma}{}\n\\label{l.product-identity}\nThe product copula $ \\Pi $ satisfies \n$$ H^\\Pi_T(\\uuu) = d!\\,\\det\\Bigl[ (a_{i,j}(\\uuu))_{i,j\\in\\{1,\\dots,d\\}} \\Bigr] $$\nfor every $ \\uuu \\in T(\\I^d) $, \nwhere \n$$ a_{i,j}(\\uuu) \n:= \\begin{cases} \n \\dps \\frac{u_i^{j-i+1}}{(j\\!-\\!i\\!+\\!1)!} & \\text{if $ i \\leq j+1 $} \\\\[2ex]\n 0 & \\text{else} \n \\end{cases} \n$$\nfor all $ i,j\\in\\{1,\\dots,d\\} $. \n\\end{lemma\n\n\\bigskip\nWe also recall that the copula measure of the product copula is the Lebesgue measure, \nwhich means that Lemma \n\\ref{l-biconvex-2} applies to the product copula. 
\n\n\\bigskip\n\\begin{theorem}{}\n\\label{t.product}\nThe product copula satisfies \n$$ [\\Pi,\\Pi] \n = \\frac{1}{2^d} \n < \\frac{1}{d+1} \n = [\\Pi_T,\\Pi_T] \n$$\nand hence \n$$ \\kappa[\\Pi] \n = 0 \n < \\frac{2^d - (d\\!+\\!1)}{(2^{d-1}-1)(d+1)} \n = \\kappa[\\Pi_T] \n$$\nIn particular, \n$ \\lim_{d\\to\\infty} \\kappa[\\Pi_T] = 0 $. \n\\end{theorem}\n\n\\bigskip\n\\begin{proof}{}\nSince the product copula is absolutely continuous and symmetric, \nLemma \n\\ref{l-biconvex-2} together with Lemma \n\\ref{l.product-identity} and Lemma \n\\ref{l.app-1} yields \n\\begin{eqnarray*} \n [\\Pi_T,\\Pi_T] \n& = & d! \\int_{T(\\I^d)} H^\\Pi_T(\\uuu) \\,dQ^\\Pi (\\uuu) \\\\\n& = & d! \\int_{T(\\I^d)} d! \\, \\det\\Bigl[ (a_{i,j}(\\uuu))_{i,j\\in\\{1,\\dots,d\\}} \\Bigr] \\,d\\leb^d(\\uuu) \\\\\n& = & d!\\,d!\\,\\frac{1}{d!\\,(d+1)!} \\\\*\n& = & \\frac{1}{d+1} \n\\end{eqnarray*}\nThe identity for $ [\\Pi,\\Pi] $ is well known. \n\\end{proof}\n\n\\bigskip\nSimilar but more complicated identities can be obtained for certain margins of the order transform of the product copula. \nIn the sequel, \nwe use the canonical extensions \nof transformations of $ \\I^d $ \nto transformations of $ \\overline\\R^d $ without changing the notation. \nWe also need the following definitions: \n\n\\bigskip\nConsider $ K\\subseteq\\{1,\\dots,d\\} $ such that $ |K|\\geq2 $. \nThen there exists a unique strictly increasing sequence $ \\fa{k}{j}{\\{1,\\dots,|K|\\}} \\subseteq \\{1,\\dots,d\\} $ \nsuch that $ K = \\fa{k}{j}{\\{1,\\dots,|K|\\}} $ and we denote \n\\begin{hylist}\n\\item by $ \\overline\\R^K $ a copy of $ \\overline\\R^{|K|} $ with coordinates $ k_1,\\dots,k_{|K|} $ instead of $ 1,\\dots,|K| $, \n\\item by $ \\I^K $ the unit cube of $ \\overline\\R^K $, and \n\\item by $ \\CC^K $ the collection of all copulas $ \\I^K\\to\\I $. 
\n\\end{hylist}\nWe also define a map $ \\tilde\\varrho_K : \\I^K\\to\\I^d $ by letting \n\\begin{eqnarray*}\n \\tilde\\varrho_K(\\vvv) \n& := & \\sum_{j \\in \\{1,\\dots,|K|\\}} v_j\\,\\eee_{k_j} + \\sum_{k \\in \\{1,\\dots,d\\} \\setminus K} \\eee_k \n\\end{eqnarray*}\nand the map $ \\varrho_K : \\CC^d\\to\\CC^K $ given by \n\\begin{eqnarray*}\n \\varrho_K(C) \n& := & C \\circ \\tilde\\varrho_K \n\\end{eqnarray*}\nThen $ \\varrho_K $ indeed associates with every copula in $ \\CC^d $ a copula in $ \\CC^K $, \nwhich is called the \n\\emph{margin} of $ C $ with respect to $ K $. \nWe have the following result: \n\n\\bigskip\n\\begin{theorem}{}\n\\label{t.product-K}\n$ \\varrho_K(\\Pi_T) $ satisfies \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T) \\Bigr] \n& = & \\int_{\\I^d} \\Pi_T(\\eeta_K(\\eins,\\uuu)) \\,dQ^{ \\Pi_T}(\\uuu) \\\\*\n& = & \\int_{\\overline\\R^d} H^\\Pi_T(\\eeta_K(\\eins,\\xxx)) \\,dQ^{H^\\Pi_T}(\\xxx) \\\\*\n& = & d! \\int_{T(\\I^d)} H^\\Pi_T(\\eeta_K(\\eins,\\uuu)) \\,dQ^{ \\Pi }(\\uuu) \n\\end{eqnarray*}\n\\end{theorem}\n\n\\bigskip\n\\begin{proof}\nConsider the map $ \\eeta_{K,\\eins} : \\I^d\\to\\I^d $ given by $ \\eeta_{K,\\eins}(\\uuu) := \\eeta_K(\\eins,\\uuu) $. 
\nThen we obtain \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T) \\Bigr] \n& = & \\Bigl[ \\Pi_T\\circ\\tilde\\varrho_K,\\Pi_T\\circ\\tilde\\varrho_K \\Bigr] \\\\[.5ex]\n& = & \\int_{\\I^K} \\Pi_T(\\tilde\\varrho_K(\\vvv)) \\,d Q^{\\Pi_T \\circ \\tilde\\varrho_K} (\\vvv) \\\\\n& = & \\int_{\\tilde\\varrho_K^{-1}(\\eeta_{K,\\eins}(\\I^d))} \\Pi_T(\\tilde\\varrho_K(\\vvv)) \\,d Q^{\\Pi_T \\circ \\tilde\\varrho_K} (\\vvv) \\\\\n& = & \\int_{\\eeta_{K,\\eins}(\\I^d)} \\Pi_T(\\uuu) \\,d(Q^{\\Pi_T \\circ \\tilde\\varrho_K})_{\\tilde\\varrho_K}(\\uuu) \\\\\n& = & \\int_{\\eeta_{K,\\eins}(\\I^d)} \\Pi_T(\\uuu) \\,d Q^{\\Pi_T} (\\uuu) \\\\\n& = & \\int_{\\eeta_{K,\\eins}(\\I^d)} \\Pi_T(\\uuu) \\,d(Q^{\\Pi_T})_{\\eeta_{K,\\eins}} (\\uuu) \\\\\n& = & \\int_{\\I^d} \\Pi_T(\\eeta_{K,\\eins}(\\uuu)) \\,d Q^{\\Pi_T} (\\uuu) \\\\*\n& = & \\int_{\\I^d} \\Pi_T(\\eeta_K (\\eins, \\uuu)) \\,d Q^{\\Pi_T} (\\uuu) \n\\end{eqnarray*}\nwhich gives the first identity. \nThe second identity then follows from the first identity and Lemma \n\\ref{l.commute}. \nFurthermore, \nwe have \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T) \\Bigr] \n& = & \\int_{\\overline\\R^d} H^\\Pi_T(\\eeta_K(\\eins, \\xxx )) \\,dQ^{H^\\Pi_T}(\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T(\\eeta_K(\\eins,T(\\xxx))) \\,dQ^{H^\\Pi} (\\xxx) \\\\\n& = & \\int_{\\I^d} H^\\Pi_T(\\eeta_K(\\eins,T(\\uuu))) \n \\Biggl( \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\chi_{A_{\\tilde\\pi}}(\\uuu) \\Biggr) \\,dQ^\\Pi (\\uuu) \\\\\n& = & \\sum_{\\tilde\\pi\\in\\tilde\\Gamma^\\pi} \\int_{A_{\\tilde\\pi}} H^\\Pi_T(\\eeta_K(\\eins,\\tilde\\pi(\\uuu))) \\,dQ^\\Pi (\\uuu) \\\\*\n& = & d! \\int_{T(\\I^d)} H^\\Pi_T(\\eeta_K(\\eins, \\uuu )) \\,dQ^\\Pi (\\uuu) \n\\end{eqnarray*}\nwhich gives the last identity. 
\n\\end{proof}\n\n\\bigskip\nFrom the previous result, \ncombined with Lemma \n\\ref{l.product-identity}, \nwe obtain explicit formulas for $ [\\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T)] $, \nand hence for $ \\kappa[\\varrho_K(\\Pi_T)] $, \nin the case $ K=\\{1,\\dots,k\\} $: \n\n\\bigskip\n\\begin{corollary}{}\n\\label{c.product-lowertail}\n$ \\varrho_{\\{1,\\dots,k\\}}(\\Pi_T) $ satisfies \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_{\\{1,\\dots,k\\}}(\\Pi_T),\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T) \\Bigr] \n& = & \\frac{1}{2} - \\frac{1}{4} \\sum_{h=2}^k \\frac{1}{2h-1}\\, \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\bigg\/ \\binom{2d}{d} \\\\*\n& = & \\frac{1}{d+1} + \\frac{1}{2d} \\sum_{l=1}^{d-k} \\binom{d}{l-1} \\binom{d}{l} \\bigg\/ \\binom{2d-1}{2l-1} \n\\end{eqnarray*}\nfor every $ k\\in\\{2,\\dots,d\\} $. \nIn particular, \n\\begin{eqnarray*}\n \\kappa[\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T)] \n& = & 1 - \\frac{2^{k-2}}{2^{k-1}-1} \\sum_{h=2}^k \\frac{1}{2h-1}\\, \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\bigg\/ \\binom{2d}{d} \n\\end{eqnarray*}\nholds for every $ k\\in\\{2,\\dots,d\\} $, \nthe sequence $ \\{ \\kappa[\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T)] \\}_{k\\in\\{2,\\dots,d\\}} $ is decreasing with \n\\begin{eqnarray*}\n \\kappa[\\varrho_{\\{1,\\dots,d\\}}(\\Pi_T)] \n& = & \\kappa[\\Pi_T] \n\\;=\\; \\frac{2^d - (d\\!+\\!1)}{(2^{d-1}-1)(d+1)} \n\\end{eqnarray*} \nand for every $ k\\in\\{2,3,\\dots\\} $ the sequence $ \\{ \\kappa[\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T)] \\}_{d\\in\\{k,k+1,\\dots\\}} $ is increasing with \n\\begin{eqnarray*}\n \\lim_{d\\to\\infty} \\kappa[\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T)] \n& = & 1 - \\frac{2^{k-2}}{2^{k-1}-1} \\sum_{h=2}^k \\frac{1}{2h-1}\\, \\binom{2h}{h} \\,\\frac{1}{2^{2h-2}} \n\\end{eqnarray*}\n\\end{corollary}\n\n\\bigskip\n\\begin{proof}\nSince $ \\eeta_{\\{1,\\dots,k\\}}(\\eins,\\uuu) \\in T(\\I^d) $ holds for every $ \\uuu \\in T(\\I^d) $, \nTheorem \n\\ref{t.product-K} together with Lemma \n\\ref{l.product-identity} and Corollary \n\\ref{c.app-1} yields \n\\begin{eqnarray*}\n \\Bigl[ 
\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T),\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T) \\Bigr] \n& = & d! \\int_{T(\\I^d)} H^\\Pi_T(\\eeta_{\\{1,\\dots,k\\}}(\\eins,\\uuu)) \\,dQ^\\Pi (\\uuu) \\\\\n& = & d! \\int_{T(\\I^d)} d! \\,\\det\\Bigl[ (a_{i,j}(\\eeta_{\\{1,\\dots,k\\}}(\\eins,\\uuu)))_{i,j\\in\\{1,...,d\\}} \\Bigr] \\,d\\leb^d(\\uuu) \\\\\n& = & (d!)^2 \\biggl( \\frac{1}{d!\\,(d\\!+\\!1)!} + \\frac{1}{(2d)!} \\sum_{l=1}^{d-k} \\frac{(2d\\!-\\!2l)!}{(d\\!-\\!l)!\\,(d\\!+\\!1\\!-\\!l)!} \\binom{2l-1}{l} \\biggr) \\\\*\n& = & \\frac{1}{d+1} + \\frac{1}{2d} \\sum_{l=1}^{d-k} \\binom{d}{l-1} \\binom{d}{l} \\bigg\/ \\binom{2d-1}{2l-1} \n\\end{eqnarray*}\nwhich gives the second identity. \nOn the other hand, \nLemma \n\\ref{l.combi} yields \n\\begin{eqnarray*} \n\t \\sum_{l=1}^{d-k} \\frac{(2d-2l)!}{(d-l)!\\,(d+1-l)!}\\,\\frac{(2l-1)!}{l! (l-1)!} \n& = & \\frac{1}{4} \\sum_{l=1}^{d-k} \\binom{2d+2-2l}{d+1-l} \\binom{2l}{l} \\frac{1}{2d-2l+1} \\\\\n& = & \\frac{1}{4} \\sum_{h=k+1}^d \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\frac{1}{2h-1} \t \\\\\n& = & \\frac{1}{4} \\biggl( - \\binom{2d+2}{d+1} \\frac{1}{2d+1} \n - \\sum_{h=0}^k \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\frac{1}{2h-1} \\biggr) \\\\*\n& = & \\frac{1}{4} \\biggl( \\binom{2d}{d} \\frac{2(d-1)}{d+1} \n\t\t\t\t - \\sum_{h=2}^k \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\frac{1}{2h-1} \\biggr) \n\\end{eqnarray*}\nand hence \n\\begin{eqnarray*} \n \\Bigl[ \\varrho_{\\{1,\\dots,k\\}}(\\Pi_T),\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T) \\Bigr] \n& = & \\frac{1}{d+1} \n + \\frac{(d!)^2}{(2d)!} \\sum_{l=1}^{d-k} \\frac{(2d\\!-\\!2l)!}{(d\\!-\\!l)!\\,(d\\!+\\!1\\!-\\!l)!} \\binom{2l-1}{l} \\\\\n& = & \\frac{1}{d+1} \n + \\frac{1}{2}\\,\\frac{d-1}{d+1} \n - \\frac{1}{4}\\,\\frac{(d!)^2}{(2d)!} \\sum_{h=2}^k \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\frac{1}{2h-1} \\\\*\n& = & \\frac{1}{2} \n - \\frac{1}{4} \\sum_{h=2}^k \\frac{1}{2h-1}\\, \\binom{2h}{h} \\binom{2d+2-2h}{d+1-h} \\bigg\/ \\binom{2d}{d} \n\\end{eqnarray*}\nwhich gives the first 
identity. \n\\end{proof}\n\n\\bigskip\nThe first identity of Corollary \n\\ref{c.product-lowertail} is suitable for small $ k $ while the second is suitable for large $ k $. \nFor $ k=2 $, \nCorollary \n\\ref{c.product-lowertail} yields \n\\begin{eqnarray*}\n \\kappa[\\varrho_{\\{1,2\\}}(\\Pi_T)] \n& = & \\frac{d-1}{2d-1} \n\\end{eqnarray*}\nwhich is in accordance with the literature; \nsee \nAv{\\'e}rous et al.\\ [2005] and \nNavarro and Balakrishnan [2010]. \nIn particular, \nthe sequence $ \\{ \\kappa[\\varrho_{\\{1,2\\}}(\\Pi_T)] \\}_{d\\in\\{2,3,\\dots\\}} $ is increasing \nwith $ \\lim_{d\\to\\infty}\\kappa[\\varrho_{\\{1,2\\}}(\\Pi_T)] = 1\/2 $. \nWe illustrate Corollary \n\\ref{c.product-lowertail} by numerical examples for certain values of $ d $ and $ k\\in\\{2,\\dots,d\\} $: \n\n\\bigskip\n\\begin{examples}{}\n\\label{e.product-lowertail}\nFor $ d\\in\\{2,\\dots,5\\} $ and $ k\\in\\{2,\\dots,d\\} $, \nthe table \n$$\\begin{array}{|c|@{\\;}cccc|}\n\\hline\\rule{0ex}{2.5ex}\n & \\multicolumn{4}{c|}{d} \\\\\n k & 2 & 3 & 4 & 5 \\\\\n\\hline\\rule{0ex}{2.5ex}\n 2 & 315\/945 & 378\/945 & 405\/945 & 420\/945 \\\\\n 3 & & 315\/945 & 369\/945 & 395\/945 \\\\\n 4 & & & 297\/945 & 345\/945 \\\\\n 5 & & & & 273\/945 \\\\\n\\hline\n\\end{array}$$\npresents the values of $ \\kappa[\\varrho_{\\{1,\\dots,k\\}}(\\Pi_T)] $. \n\\end{examples}\n\n\\bigskip\nWe now introduce a general reflection principle which, \nwhen combined with Corollary \n\\ref{c.product-lowertail}, \nyields explicit formulas for $ [\\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T)] $, \nand hence for $ \\kappa[\\varrho_K(\\Pi_T)] $, \nin the case $ K=\\{d-k+1,\\dots,d\\} $. \nWe shall need the following lemma: \n\n\\bigskip\n\\begin{lemma}{}\n\\label{l.product-taurho}\nThe identity \n$$ \\rho_K(\\tau(\\Pi_T)) = \\tau(\\rho_K(\\Pi_T)) $$\nholds for every $ K\\subseteq\\{1,\\dots,d\\} $ such that $ |K|\\geq2 $. 
\n\\end{lemma}\n\n\\bigskip\nIn the identity of Lemma \n\\ref{l.product-taurho}, \nthe transformation $ \\tau $ acts on $ \\CC^d $ on the left hand side and on $ \\CC^K $ on the right hand side. \n\\pagebreak\n\n\\bigskip\n\\begin{proof}\nAccording to \nFuchs [2014; Theorem 4.1], \nevery copula $ D $ satisfies \n\\begin{eqnarray*}\n (\\tau(D))(\\vvv) \n& = & \\sum_{L \\subseteq \\{1,...,d\\}} (-1)^{d-|L|} \\, D(\\eeta_L(\\eins\\!-\\!\\vvv,\\eins)) \n\\end{eqnarray*}\nFor every $ \\uuu\\in\\I^K $ we obtain \n\\begin{eqnarray*}\n (\\rho_K(\\tau(\\Pi_T)))(\\uuu) \n& = & (\\tau(\\Pi_T))(\\tilde\\rho_K(\\uuu)) \\\\[1ex]\n& = & \\sum_{L \\subseteq \\{1,...,d\\}} (-1)^{d-|L|} \\, \\Pi_T(\\eeta_L (\\eins\\!-\\!\\tilde\\rho_K(\\uuu),\\eins)) \\\\\n& = & \\sum_{L \\subseteq \\{1,...,d\\}} (-1)^{d-|L|} \\, \\Pi_T(\\eeta_{\\{1,\\dots,d\\} \\setminus L} (\\eins,\\eins\\!-\\!\\tilde\\rho_K(\\uuu))) \\\\\n& = & \\sum_{J \\subseteq \\{1,...,d\\}} (-1)^{|J|} \\, \\Pi_T(\\eeta_J (\\eins,\\eins\\!-\\!\\tilde\\rho_K(\\uuu))) \\\\\n& = & \\sum_{J \\subseteq K} (-1)^{|J|} \\, \\Pi_T(\\eeta_J (\\eins,\\eins\\!-\\!\\tilde\\rho_K(\\uuu))) \\\\\n& = & \\sum_{J \\subseteq K} (-1)^{|J|} \\, \\Pi_T(\\eeta_{\\{1,\\dots,d\\} \\setminus J} (\\eins\\!-\\!\\tilde\\rho_K(\\uuu),\\eins)) \\\\\n& = & \\sum_{L \\subseteq K} (-1)^{|K|-|L|}\\, \\Pi_T(\\eeta_L (\\tilde\\rho_K(\\eins\\!-\\!\\uuu),\\eins)) \\\\\n& = & \\sum_{L \\subseteq K} (-1)^{|K|-|L|}\\, (\\Pi_T \\circ \\tilde\\rho_K) (\\eeta_L (\\eins\\!-\\!\\uuu ,\\eins)) \\\\\n& = & \\sum_{L \\subseteq K} (-1)^{|K|-|L|}\\, (\\rho_K(\\Pi_T)) (\\eeta_L (\\eins\\!-\\!\\uuu ,\\eins)) \\\\*[1ex]\n& = & (\\tau(\\rho_K(\\Pi_T)))(\\uuu) \n\\end{eqnarray*}\nNote that in the last three lines the transformation $ \\eeta_L $ acts on $ \\I^K $. 
\n\\end{proof}\n\n\\bigskip\nConsider now the map $ b : \\{1,\\dots,d\\}\\to\\{1,\\dots,d\\} $ given by \n\\begin{eqnarray*}\n b(i) \n& := & d-i+1 \n\\end{eqnarray*}\nand the linear map $ B : \\overline\\R^d\\to\\overline\\R^d $ given by the $ (d \\times d) $--matrix with entries \n\\begin{eqnarray*}\n b_{i,j} \n& := & \\begin{cases} \n 1 & \\text{if $ i+j=d+1 $} \\\\\n 0 & \\text{else} \n \\end{cases} \n\\end{eqnarray*}\nWe can now state the announced reflection principle: \n\n\\bigskip\n\\begin{theorem}{}\n\\label{t.product-reflection}\nThe identity \n$$ \\Bigl[ \\varrho_{b(K)}(\\Pi_T) , \\varrho_{b(K)}(\\Pi_T) \\Bigr] \n = \\Bigl[ \\varrho_K(\\Pi_T) , \\varrho_K(\\Pi_T) \\Bigr] \n$$\nholds for every $ K\\subseteq\\{1,\\dots,d\\} $ such that $ |K|\\geq2 $. \nIn particular, \n$$ \\kappa[\\varrho_{b(K)}(\\Pi_T)] = \\kappa[\\varrho_K(\\Pi_T)] $$\nholds for every $ K\\subseteq\\{1,\\dots,d\\} $ such that $ |K|\\geq2 $. \n\\end{theorem}\n\\pagebreak\n\n\\bigskip\n\\begin{proof}\nUsing Lemma \n\\ref{l.product-taurho} and Theorem \n\\ref{t.product-K} we obtain \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_{b(K)}(\\Pi_T) , \\varrho_{b(K)}(\\Pi_T) \\Bigr] \n& = & \\Bigl[ \\tau(\\varrho_{b(K)}(\\Pi_T)) , \\tau(\\varrho_{b(K)}(\\Pi_T)) \\Bigr] \\\\\n& = & \\Bigl[ \\varrho_{b(K)}(\\tau(\\Pi_T)) , \\varrho_{b(K)}(\\tau(\\Pi_T)) \\Bigr] \\\\*\n& = & \\int_{\\I^d} (\\tau(\\Pi_T))(\\eeta_{b(K)}(\\eins,\\uuu)) \\,dQ^{\\tau(\\Pi_T)}(\\uuu) \n\\end{eqnarray*}\nand from Theorem \n\\ref{t.product-K} we also obtain \n\\begin{eqnarray*}\n \\Bigl[ \\varrho_K(\\Pi_T) , \\varrho_K(\\Pi_T) \\Bigr] \n& = & \\int_{\\I^d} \\Pi_T(\\eeta_K(\\eins,\\uuu)) \\,dQ^{ \\Pi_T}(\\uuu) \\\\*\n& = & \\int_{\\overline\\R^d} H^\\Pi_T(\\eeta_K(\\eins,\\xxx)) \\,dQ^{H^\\Pi_T}(\\xxx) \n\\end{eqnarray*}\nIt remains to show that the two integrals are identical. 
\nWe have $ B \\circ T = \\tilde\\tau \\circ T \\circ \\tilde\\tau $ and hence $ \\tilde\\tau \\circ T = B \\circ T \\circ \\tilde\\tau $, \nand we also have \n$ Q^{H^\\Pi_{T \\circ \\tilde\\tau}} = (Q^{H^{\\Pi}})_{T \\circ \\tilde\\tau} = ((Q^{H^\\Pi})_{\\tilde\\tau})_T = (Q^{H^\\Pi})_T = Q^{H^\\Pi_T} $. \nFrom these identities and Lemma \n\\ref{l.commute} we obtain \n\\begin{eqnarray*}\n\\lefteqn{ \n \t\t\\int_{\\I^d} \\Pi_T (\\eeta_L(\\eeta_{b(K)}(\\zero, \\uuu ),\\eins)) \n \\,dQ^{\\Pi_T} (\\uuu) }\\\\*\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eeta_{b(K)}(\\zero, \\xxx ),\\eins)) \n \\,dQ^{H^\\Pi_T}(\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eeta_{b(K)}(\\zero,T(\\xxx)),\\eins)) \n \\,dQ^{H^\\Pi} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eins - \\eeta_{b(K)}(\\eins,( \\tilde\\tau \\circ T)(\\xxx)),\\eins)) \n \\,dQ^{H^\\Pi} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eins - \\eeta_{b(K)}(\\eins,(B \\circ T \\circ \\tilde\\tau)(\\xxx)),\\eins)) \n \\,dQ^{H^\\Pi} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eins - \\eeta_{b(K)}(\\eins,B(\\zzz)) ,\\eins)) \n \\,dQ^{H^\\Pi_{T \\circ \\tilde\\tau}}(\\zzz) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T (\\eeta_L(\\eins - B \\circ \\eeta_K(\\eins,\\zzz),\\eins)) \n \\,dQ^{H^\\Pi_T} (\\zzz) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T ((B \\circ \\eeta_L)(\\eins-\\eeta_K(\\eins,\\zzz),\\eins)) \n \\,dQ^{H^\\Pi_T} (\\zzz) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_{B \\circ T} (\\eeta_L (\\eins-\\eeta_K(\\eins,\\zzz),\\eins)) \n \\,dQ^{H^\\Pi_T} (\\zzz) \\\\*\n& = & \\int_{\\overline\\R^d} H^\\Pi_{\\tilde\\tau \\circ T \\circ \\tilde\\tau} (\\eeta_L (\\eeta_K(\\zero,\\eins-\\zzz),\\eins)) \n \\,dQ^{H^\\Pi_T} (\\zzz) \n\\end{eqnarray*}\nUsing the first identity in the proof of Lemma \n\\ref{l.product-taurho} we thus obtain \n\\begin{eqnarray*}\n\\lefteqn{\n \\Bigl[ \\varrho_{b(K)}(\\Pi_T),\\varrho_{b(K)}(\\Pi_T) \\Bigr] 
}\\\\*\n& = & \\int_{\\I^d} (\\tau(\\Pi_T))(\\eeta_{b(K)}(\\eins,\\uuu)) \\,dQ^{\\tau(\\Pi_T)}(\\uuu) \\\\\n& = & \\int_{\\I^d} \\sum_{L\\subseteq\\{1,\\dots,d\\}} (-1)^{d-|L|}\\, \n \\Pi_T (\\eeta_L(\\eins-\\eeta_{b(K)}(\\eins,\\uuu),\\eins)) \\,dQ^{\\tau(\\Pi_T)}(\\uuu) \\\\\n& = & \\sum_{L\\subseteq\\{1,\\dots,d\\}} (-1)^{d-|L|}\\,\\int_{\\I^d} \n \\Pi_T (\\eeta_L(\\eeta_{b(K)}(\\zero,\\tilde\\tau(\\uuu)),\\eins)) \\,dQ^{\\tau(\\Pi_T)}(\\uuu) \\\\\n& = & \\sum_{L\\subseteq\\{1,\\dots,d\\}} (-1)^{d-|L|}\\,\\int_{\\I^d} \n \\Pi_T (\\eeta_L(\\eeta_{b(K)}(\\zero,\\uuu),\\eins)) \\,dQ^{\\Pi_T} (\\uuu) \\\\\n& = & \\sum_{L\\subseteq\\{1,\\dots,d\\}} (-1)^{d-|L|}\\,\\int_{\\overline\\R^d} \n H^\\Pi_{\\tilde\\tau \\circ T \\circ \\tilde\\tau} (\\eeta_L(\\eeta_K(\\zero,\\eins\\!-\\!\\xxx),\\eins)) \\,dQ^{H^\\Pi_T} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} \\sum_{L\\subseteq\\{1,\\dots,d\\}} (-1)^{d-|L|}\\, \n Q^\\Pi_{\\tilde\\tau \\circ T \\circ \\tilde\\tau} \n \\Bigl[ \\Bigl[ \\zero, \\eeta_L(\\eeta_K(\\zero,\\eins\\!-\\!\\xxx),\\eins) \\Bigr] \\Bigr] \\,dQ^{H^\\Pi_T} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^\\Pi_{\\tilde\\tau \\circ T \\circ \\tilde\\tau} \n \\Bigl[ \\Bigl[ \\eeta_K(\\zero,\\eins\\!-\\!\\xxx) , \\eins \\Bigr] \\Bigr] \\,dQ^{H^\\Pi_T} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^\\Pi_{ T \\circ \\tilde\\tau} \n \\Bigl[ \\Bigl[ \\zero , \\eins - \\eeta_K(\\zero,\\eins\\!-\\!\\xxx) \\Bigr] \\Bigr] \\,dQ^{H^\\Pi_T} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} Q^\\Pi_T \n \\Bigl[ \\Bigl[ \\zero , \\eeta_K(\\eins, \\xxx) \\Bigr] \\Bigr] \\,dQ^{H^\\Pi_T} (\\xxx) \\\\\n& = & \\int_{\\overline\\R^d} H^\\Pi_T( \\eeta_K(\\eins, \\xxx)) \\,dQ^{H^\\Pi_T} (\\xxx) \\\\*\n& = & \\Bigl[ \\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T) \\Bigr] \n\\end{eqnarray*}\nas was to be shown. \n\\end{proof}\n\n\\bigskip\nTheorem \n\\ref{t.product-reflection} extends a result of \nAv{\\'e}rous et al.\\ [2005; Proposition 10] for $ |K|=2 $. 
\nAs noted before, \ncombining Theorem \n\\ref{t.product-reflection} and Corollary \n\\ref{c.product-lowertail} yields explicit formulas for $ [\\varrho_K(\\Pi_T),\\varrho_K(\\Pi_T)] $, \nand hence for $ \\kappa[\\varrho_K(\\Pi_T)] $, \nin the case $ K=\\{d-k+1,\\dots,d\\} $. \n\n\\bigskip\nAs a final remark, \nwe note that Theorem \n\\ref{t.product-K} combined with Lemma \n\\ref{l.product-identity} provides a general tool to compute Kendall's tau for the margins of $ \\Pi_T $ with respect to any $ K\\subseteq\\{1,\\dots,d\\} $. \nWe illustrate this by the following example which is not covered by Corollary \n\\ref{c.product-lowertail}: \n\n\\bigskip\n\\begin{example}{}\nAssume that $ d=5 $ and consider $ K=\\{1,2,3,5\\} $. \nSince \n\\linebreak\n $ Q^\\Pi_T[ \\I^d \\setminus T(\\I^d) ] = 0 $, \nTheorem \n\\ref{t.product-K} together with Lemma \n\\ref{l.product-identity} yields \n\\begin{eqnarray*}\n \\Bigl[ \\rho_{\\{1,2,3,5\\}}(\\Pi_T), \\rho_{\\{1,2,3,5\\}}(\\Pi_T) \\Bigr] \n& = & 5! \\int_{T(\\I^5)} H^\\Pi_T((u_1,u_2,u_3, 1,u_5)) \\,dQ^\\Pi(\\uuu) \\\\\n& = & 5! \\int_{T(\\I^5)} H^\\Pi_T((u_1,u_2,u_3,u_5,u_5)) \\,dQ^\\Pi(\\uuu) \\\\\n& = & (5!)^2 \\int_{T(\\I^5)} \\det \n\t\t\\left( \\begin{matrix} \n\t\t u_1 & u_1^2\/2! & u_1^3\/3! & u_1^4\/4! & u_1^5\/5! \\\\\n\t\t 1 & u_2 & u_2^2\/2! & u_2^3\/3! & u_2^4\/4! \\\\\n\t\t\t0 & 1 & u_3 & u_3^2\/2! & u_3^3\/3! \\\\\n\t\t\t0 & 0 & 1 & u_5 & u_5^2\/2! \\\\\n\t\t\t0 & 0 & 0 & 1 & u_5 \n\t\t\\end{matrix} \\right) dQ^\\Pi(\\uuu) \\\\\n& = & (5!)^2 \\frac{47}{3\\,628\\,800} \\\\*\n& = & \\frac{47}{252} \n\\end{eqnarray*}\nWe thus obtain \n\\begin{eqnarray*}\n \\kappa[\\rho_{\\{1,2,3,5\\}}(\\Pi_T)] \n& = & \\frac{2^4\\,[\\rho_{\\{1,2,3,5\\}}(\\Pi_T),\\rho_{\\{1,2,3,5\\}}(\\Pi_T)]-1}{2^{4-1}-1} \n\\,\\;=\\;\\, \\frac{125}{441} \n\\end{eqnarray*}\nand Theorem \n\\ref{t.product-reflection} then yields $ \\kappa[\\rho_{\\{1,3,4,5\\}}(\\Pi_T)] = \\kappa[\\rho_{\\{1,2,3,5\\}}(\\Pi_T)] = 125\/441 $. 
\n\\end{example}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe suppression of $J\/\\psi$ production in relativistic heavy ion\ncollisions is considered as an important signature to identify the quark-gluon plasma\n\\cite{Matsui86}. The dissociation of the $J\/\\psi$ in the quark-gluon\nplasma due to color screening can result in a reduction of its\nproduction. The interpretation of the suppression requires detailed knowledge of the\nexpected suppression due to the $J\/\\psi$ dissociation in the hadronic environment.\nThe in-medium hadron properties\n can affect the production of the open-charm mesons and the $J\/\\psi$ in\nrelativistic heavy ion collisions; the higher charmonium states are\nconsidered as the major source of the $J\/\\psi$ \\cite{Jpsi-Source}. For example,\n the higher charmonium states can decay to the $D\\bar{D}$, $D^*\\bar{D}^*$ pairs instead\nof decaying to the lowest state $J\/\\psi$ if the mass reductions\n of the $D$, $D^*$, $\\bar{D}$, $\\bar{D}^*$ mesons are large enough.\n We have to disentangle the color screening versus the recombination of off-diagonal $\\bar{c}c$ (or $\\bar{b}b$) pairs in the hot dense medium versus cold nuclear matter effects, such as nuclear absorption,\nshadowing and anti-shadowing, so as to draw a definite conclusion on the appearance of the quark-gluon plasma \\cite{Recombine-cc,CNM}. The upcoming FAIR\n(Facility for Antiproton and Ion Research) project at GSI (Institute for Heavy Ion Research) in\nDarmstadt (Germany)\n provides the opportunity to study the in-medium properties of the charmonium states or charmed hadrons for the first time.\n The CBM (Compressed Baryonic Matter) collaboration intends to study the properties of the\nhadrons in the nuclear matter \\cite{CBM}, while the $\\rm \\bar{P}ANDA$ (anti-Proton Annihilation at Darmstadt) collaboration will focus on charm spectroscopy, and the mass and width modifications of the charmed hadrons in the 
nuclear matter \\cite{PANDA}.\n However,\n the in-medium mass modifications are not easy to access experimentally despite the interesting physics involved, and they\nrequire more detailed theoretical studies.\nOn the other hand, the bottomonium states are also sensitive to the color screening, the\n $\\Upsilon$ suppression in high energy heavy ion collisions can also be taken as a\nsignature to identify the quark-gluon plasma \\cite{QGP-rev}.\nThe suppressions on the\n $\\Upsilon$ production in ultra-relativistic heavy ion collisions will be studied in details at the Relativistic Heavy Ion Collider (RHIC) and\n Large Hadron Collider (LHC).\n\n\n\n Extensive theoretical and experimental studies are required to explore the hadron properties in nuclear matter.\n The connection between the condensates and the nuclear density dependence of the in-medium hadron masses is\nnot straightforward.\nThe QCD sum rules provides a powerful theoretical tool in\n studying the in-medium hadronic properties \\cite{SVZ79,PRT85}, and has been applied extensively\n to study the light-flavor hadrons and charmonium states in the nuclear matter \\cite{C-parameter,Drukarev1991,Jpsi-etac}.\n The\n works on the heavy mesons and heavy baryons are few, only the $D$, $B$, $D_0$, $B_0$, $D^*$, $B^*$, $D_1$, $B_1$, $\\Lambda_Q$, $\\Sigma_Q$, $\\Xi_{QQ}$ and $\\Omega_{QQ}$ are studied with the QCD sum rules \\cite{Hay,WangHuang,Azi,Hil,Hilger2,WangH}.\nThe heavy mesons (heavy baryons) contain a heavy quark and a light quark (two light quarks),\nthe existence of a light quark (two light quarks) in the heavy mesons (heavy baryons) leads to large\ndifference between the mass-shifts of\n the heavy mesons (heavy baryons) and heavy quarkonia in the nuclear matter.\nThe former have large contributions from the light-quark\ncondensates in the nuclear matter and the modifications of the masses originate mainly from the modifications of the quark condensates, while the latter\n are dominated by the gluon 
condensates, and the mass modifications are mild \\cite{Jpsi-etac,Hay,WangHuang,Azi,Hil,Hilger2,WangH}.\n\n In previous works \\cite{Hay,WangHuang,Azi,Hil,Hilger2},\n the properties of the heavy mesons in the nuclear matter are studied with the QCD sum rules by taking the leading order approximation for the contributions of the quark condensates. In this article, I take into account the next-to-leading order contributions of the quark condensates, and study the properties of the scalar, pseudoscalar, vector and axialvector heavy mesons in the nuclear matter with the QCD sum rules in a systematic way, and make predictions for the modifications of the masses and decay constants of the heavy mesons in the nuclear matter. Measuring those effects is a long term physics\ngoal based on further theoretical studies on the reaction\ndynamics and on the exploration of the experimental\nability to identify more complicated\nprocesses \\cite{CBM,PANDA}. Furthermore, I study the heavy-meson-nucleon scattering lengths as a byproduct. From the negative or positive sign of the scattering lengths, I can obtain the conclusion qualitatively that the interactions are attractive or repulsive, which favor or disfavor the\nformations of the heavy-meson-nucleon bound states. 
For example, the $\\Sigma_c(2800)$ and $\\Lambda_c(2940)$ can be assigned to be the $S$-wave $DN$ state with $J^P ={\\frac{1}{2}}^-$ and the $S$-wave $D^*N$ state with $J^P ={\\frac{3}{2}}^-$ respectively based on the QCD sum rules \\cite{ZhangDN}.\n\n\n\nThe article is arranged as follows: I study in-medium properties of the heavy mesons\n with the QCD sum rules in Sec.2; in Sec.3, I present the numerical results and discussions; and Sec.4 is reserved for my\nconclusions.\n\n\\section{The properties of the heavy mesons in the nuclear matter with QCD sum rules}\n\nI study the scalar, pseudoscalar, vector and axialvector heavy mesons in the nuclear matter with\nthe two-point correlation functions $\\Pi(q)$ and $\\Pi_{\\mu\\nu}(q)$, respectively. In the Fermi gas\napproximation for the nuclear matter, I divide the $\\Pi(q)$ and $\\Pi_{\\mu\\nu}(q)$\ninto the vacuum part $\\Pi^0(q)$ and $\\Pi^0_{\\mu\\nu}(q)$ and the static one-nucleon part\n$\\Pi_N(q)$ and $\\Pi^N_{\\mu\\nu}(q)$, and expand the\n$\\Pi_N(q)$ and $\\Pi^N_{\\mu\\nu}(q)$ up to the order ${\\mathcal{O}}(\\rho_N)$ at relatively low nuclear density \\cite{Drukarev1991,Hay},\n\\begin{eqnarray}\n\\Pi(q) &=& i\\int d^{4}x\\ e^{iq \\cdot x} \\langle\nT\\left\\{J(x)J^{\\dag}(0)\\right\\} \\rangle_{\\rho_N}\\nonumber \\\\\n& =& \\Pi_{0}(q)+ \\frac{\\rho_N}{2m_N}T_{N}(q)\\, , \\nonumber \\\\\n \\Pi_{\\mu\\nu}(q) &=& i\\int d^{4}x\\ e^{iq \\cdot x} \\langle T\\left\\{J_\\mu(x)J_\\nu^{\\dag}(0)\\right\\} \\rangle_{\\rho_N} \\nonumber\\\\\n &=&\\Pi^{0}_{\\mu\\nu}(q)+ \\frac{\\rho_N}{2m_N}T^{N}_{\\mu\\nu}(q)\\, ,\n \\end{eqnarray}\nwhere the $\\rho_N$ is the density of the nuclear matter, and the forward scattering amplitudes $T_{N}(q)$ and $T^{N}_{\\mu\\nu}(q)$ are defined as\n\\begin{eqnarray}\nT_{N}(\\omega,\\mbox{\\boldmath $q$}\\,) &=&i\\int d^{4}x e^{iq\\cdot x}\\langle N(p)|\nT\\left\\{J(x)J^{\\dag}(0)\\right\\} |N(p) \\rangle\\, , \\nonumber\\\\\nT^{N}_{\\mu\\nu}(\\omega,\\mbox{\\boldmath $q$}\\,) &=&i\\int 
d^{4}x e^{iq\\cdot x}\\langle N(p)|\nT\\left\\{J_\\mu(x)J_\\nu^{\\dag}(0)\\right\\} |N(p) \\rangle\\, ,\n\\end{eqnarray}\n where the $J(x)$ and $J_\\mu(x)$ denote the isospin averaged currents $\\eta(x)$, $\\eta_5(x)$, $\\eta_\\mu(x)$ and $\\eta_{5\\mu}(x)$, respectively,\n\\begin{eqnarray}\n \\eta(x) &=&\\eta^\\dag(x) =\\frac{\\bar{c}(x) q(x)+\\bar{q}(x) c(x)}{2}\\, , \\nonumber\\\\\n \\eta_{5}(x) &=&\\eta_{5}^\\dag(x) =\\frac{\\bar{c}(x)i \\gamma_5q(x)+\\bar{q}(x)i\\gamma_5 c(x)}{2}\\,,\\nonumber\\\\\n \\eta_\\mu(x) &=&\\eta_\\mu^\\dag(x) =\\frac{\\bar{c}(x)\\gamma_\\mu q(x)+\\bar{q}(x)\\gamma_\\mu c(x)}{2}\\, , \\nonumber\\\\\n \\eta_{5\\mu}(x) &=&\\eta_{5\\mu}^\\dag(x) =\\frac{\\bar{c}(x)\\gamma_\\mu \\gamma_5q(x)+\\bar{q}(x)\\gamma_\\mu\\gamma_5 c(x)}{2}\\,,\n\\end{eqnarray}\n which interpolate the scalar, pseudoscalar, vector and axialvector mesons $D_0$, $D$, $D^*$ and $D_1$, respectively.\n I choose the isospin averaged currents since the $D_0$, $D$, $D^*$ and $D_1$ mesons are produced in pairs in the antiproton-nucleon\nannihilation processes.\nThe $q$ denotes the $u$ or $d$ quark, the $q^{\\mu}=(\\omega,\\mbox{\\boldmath $q$}\\,)$ is the four-momentum carried by\nthe currents $J(x)$ and $J_\\mu(x)$, the $|N(p)\\rangle$ denotes the isospin and\nspin averaged static nucleon state with the four-momentum $p =\n(m_N,0)$, and $\\langle N(\\mbox{\\boldmath $p$})|N(\\mbox{\\boldmath\n$p$}')\\rangle = (2\\pi)^{3} 2p_{0}\\delta^{3}(\\mbox{\\boldmath\n$p$}-\\mbox{\\boldmath $p$}')$ \\cite{Hay}.\n\nI can decompose the correlation functions $T^{N}_{\\mu\\nu}(\\omega,\\mbox{\\boldmath $q$}\\,)$ as\n\\begin{eqnarray}\nT^{N}_{\\mu\\nu}(\\omega,\\mbox{\\boldmath $q$}\\,) &=&T_{N}(\\omega,\\mbox{\\boldmath $q$}\\,)\\left(-g_{\\mu\\nu}+\\frac{q_\\mu q_\\nu}{q^2}\\right)+T^0_N(\\omega,\\mbox{\\boldmath $q$}\\,) q_\\mu q_\\nu\\nonumber\\\\\n&&+T^1_N(\\omega,\\mbox{\\boldmath $q$}\\,) \\left(q_\\mu u_\\nu+q_\\nu u_\\mu \\right)+ T^2_N(\\omega,\\mbox{\\boldmath $q$}\\,) u_\\mu 
u_\\nu\\, ,\n\\end{eqnarray}\naccording to Lorentz covariance, where the $T_{N}(\\omega,\\mbox{\\boldmath $q$}\\,)$ denotes the contributions of the vector and axialvector charmed mesons,\n and the $T^{0\/1\/2}_{N}(\\omega,\\mbox{\\boldmath $q$}\\,)$ are irrelevant in the present analysis.\n\n\nIn the limit $\\mbox{\\boldmath $q$}\\rightarrow {\\bf 0}$, the forward scattering amplitude\n$T_{N}(\\omega,\\mbox{\\boldmath $q$}\\,)$ can be related to the $DN$ ($D_0N$, $D^*N$ and $D_1N$)\nscattering $T$-matrix,\n\\begin{eqnarray}\n{{\\cal T}_{D\/D_0\/D^*\/D_1\\,N}}(m_{D\/D_0\/D^*\/D_1},0) =\n8\\pi(m_N+m_{D\/D_0\/D^*\/D_1})a_{D\/D_0\/D^*\/D_1} \\, ,\n\\end{eqnarray}\n where the $a_{D\/D_0\/D^*\/D_1}$ are the $D\/D_0\/D^*\/D_1\\,N$\nscattering lengths. I can parameterize the\nphenomenological spectral densities $\\rho(\\omega,0)$ with three unknown parameters $a,\\,b$ and $c$ near the pole positions of the charmed mesons $D$, $D_0$, $D^*$ and $D_1$ according to Ref.\\cite{Hay},\n\\begin{eqnarray}\n\\rho(\\omega,0) &=& -\\frac{1}{\\pi} \\mbox{Im} \\left[\\frac{{{\\cal T}_{D\/D_0N}}(\\omega,{\\bf 0})}{\\left(\\omega^{2}-\nm_{D\/D_0}^2+i\\varepsilon\\right)^{2}} \\right]\\frac{f_{D\/D_0}^2m_{D\/D_0}^4}{m_c^2}+ \\cdots \\,, \\nonumber \\\\\n&=& a\\,\\frac{d}{d\\omega^2}\\delta\\left(\\omega^{2}-m_{D\/D_0}^2\\right) +\nb\\,\\delta\\left(\\omega^{2}-m_{D\/D_0}^2\\right) + c\\,\\delta\\left(\\omega^{2}-s_{0}\\right)\\, ,\n\\end{eqnarray}\nfor the pseudoscalar and scalar currents $\\eta_5(x)$ and $\\eta(x)$,\n\\begin{eqnarray}\n\\rho(\\omega,0) &=& -\\frac{1}{\\pi} \\mbox{Im} \\left[\\frac{{{\\cal T}_{D^*\/D_1N}}(\\omega,{\\bf 0})}{\\left(\\omega^{2}-\nm_{D^*\/D_1}^2+i\\varepsilon\\right)^{2}} \\right]f_{D^*\/D_1}^2m_{D^*\/D_1}^2+ \\cdots \\,, \\nonumber\\\\\n&=& a\\,\\frac{d}{d\\omega^2}\\delta\\left(\\omega^{2}-m_{D^*\/D_1}^2\\right) +\nb\\,\\delta\\left(\\omega^{2}-m_{D^*\/D_1}^2\\right) + c\\,\\delta\\left(\\omega^{2}-s_{0}\\right)\\, ,\n\\end{eqnarray}\nfor the vector and 
axialvector currents $\\eta_\\mu(x)$ and $\\eta_{5\\mu}(x)$.\n\nNow the hadronic correlation functions $\\Pi(\\omega,0)$ and $\\Pi_{\\mu\\nu}(\\omega,0)$ at the phenomenological side can be written as\n\\begin{eqnarray}\n\\Pi(\\omega,0)&=&\\frac{\\left( f_{D\/D_0}+\\delta f_{D\/D_0}\\right)^2\\left( m_{D\/D_0}+\\delta m_{D\/D_0}\\right)^4}{m_c^2}\\frac{1}{\\left(m_{D\/D_0}+\\delta m_{D\/D_0}\\right)^2-\\omega^2}+\\cdots \\nonumber\\\\\n&=&\\frac{ f_{D\/D_0}^2m_{D\/D_0}^4}{m_c^2}\\frac{1}{m_{D\/D_0}^2-\\omega^2}+\\cdots\\nonumber\\\\\n&&+\\frac{\\rho_N}{2m_N}\\left[\\frac{a}{\\left(m_{D\/D_0}^2-\\omega^2\\right)^2}+\\frac{b}{m_{D\/D_0}^2-\\omega^2} +\\cdots \\right]\\, ,\n\\end{eqnarray}\n\\begin{eqnarray}\n\\Pi_{\\mu\\nu}(\\omega,0)&=&\\left( f_{D^*\/D_1}+\\delta f_{D^*\/D_1}\\right)^2\\left( m_{D^*\/D_1}+\\delta m_{D^*\/D_1}\\right)^2\\frac{1}{\\left(m_{D^*\/D_1}+\\delta m_{D^*\/D_1}\\right)^2-\\omega^2}\\nonumber\\\\\n&&\\left( -g_{\\mu\\nu}+\\frac{q_\\mu q_\\nu}{q^2}\\right) +\\cdots \\, ,\\nonumber\\\\\n&=&f_{D^*\/D_1}^2m_{D^*\/D_1}^2\\frac{1}{m_{D^*\/D_1}^2-\\omega^2}\\left( -g_{\\mu\\nu}+\\frac{q_\\mu q_\\nu}{q^2}\\right)+\\cdots\\nonumber\\\\\n&&+\\frac{\\rho_N}{2m_N}\\left[\\left(\\frac{a}{\\left(m_{D^*\/D_1}^2-\\omega^2\\right)^2}+\\frac{b}{m_{D^*\/D_1}^2-\\omega^2} +\\cdots\\right)\\left( -g_{\\mu\\nu}+\\frac{q_\\mu q_\\nu}{q^2}\\right)+\\cdots \\right] \\, .\\nonumber\\\\\n\\end{eqnarray}\n\n\n\n\n\n\nIn Eqs.(6-7), the first\nterm denotes the double-pole term, and corresponds to the on-shell\neffect of the $T$-matrix,\n\\begin{eqnarray}\na&=&-8\\pi(m_N+m_{D\/D_0})a_{D\/D_0}\\frac{f_{D\/D_0}^2m_{D\/D_0}^4}{m_c^2}\\, ,\n\\end{eqnarray}\nfor the currents $\\eta_5(x)$ and $\\eta(x)$ and\n\\begin{eqnarray}\na&=&-8\\pi(m_N+m_{D^*\/D_1})a_{D^*\/D_1}f_{D^*\/D_1}^2m_{D^*\/D_1}^2\\, ,\n\\end{eqnarray}\nfor the currents $\\eta_\\mu(x)$ and $\\eta_{5\\mu}(x)$;\nthe second term denotes the single-pole term,\nand corresponds to the off-shell effect of the 
$T$-matrix;\n the\nthird term denotes the continuum term or the remaining effects,\nwhere the $s_{0}$ is the continuum threshold parameter.\nIn general, the continuum contributions are approximated by $\\rho_{QCD}(\\omega,0)\\theta(\\omega^2-s_0)$, where the $\\rho_{QCD}(\\omega,0)$ are the\nperturbative QCD spectral densities, and $\\theta(x)=1$ for $x\\geq 0$, else $\\theta(x)=0$. In this article, the QCD spectral densities are of the type $\\delta(\\omega^2-m_Q^2)$, which include both the ground state and the continuum state contributions. I have attributed the excited state contributions to the continuum state contributions, so the collective continuum state contributions can be approximated as $c\\,\\delta(\\omega^2-s_0)$; I then obtain\n the result $c\/\\left(s_0-\\omega^2\\right)$ in the hadronic representation, see Eq.(15). The doublet $\\left(D(2550), D(2600)\\right)$ or $\\left(D_J(2580), D^*_J (2650)\\right)$ is assigned to be\n the first radially excited state of the doublet $(D,D^*)$ \\cite{WangHQET}. 
The single-pole contributions coming from the doublet $\\left(D(2550), D(2600)\\right)$ or $\\left(D_J(2580), D^*_J (2650)\\right)$ are of the form $1\/\\left(m_{D(2550)\/D(2600)}^2-\\omega^2\\right)$, so the approximation $c\/\\left(s_0-\\omega^2\\right)$ is reasonable.\n\nThen the shifts of the masses and decay constants of\nthe charmed mesons can be approximated as\n\\begin{eqnarray}\n\\delta m_{D\/D_0\/D^*\/D_1} &=&2\\pi\\frac{m_{N}+m_{D\/D_0\/D^*\/D_1}}{m_Nm_{D\/D_0\/D^*\/D_1}}\\rho_N a_{D\/D_0\/D^*\/D_1}\\, ,\n\\end{eqnarray}\n\\begin{eqnarray}\n\\delta f_{D\/D_0}&=&\\frac{m_c^2}{2f_{D\/D_0}m_{D\/D_0}^4}\\left(\\frac{b\\rho_N}{2m_N}-\\frac{4f_{D\/D_0}^2m_{D\/D_0}^3\\delta m_{D\/D_0}}{m_c^2} \\right) \\, , \\nonumber\\\\\n\\delta f_{D^*\/D_1}&=&\\frac{1}{2f_{D^*\/D_1}m_{D^*\/D_1}^2}\\left(\\frac{b\\rho_N}{2m_N}-2f_{D^*\/D_1}^2m_{D^*\/D_1}\\delta m_{D^*\/D_1} \\right) \\, .\n\\end{eqnarray}\n\n\n\n\nIn calculations, I have used the following definitions for the decay constants of the heavy mesons,\n\\begin{eqnarray}\n\\langle 0|\\eta(0)|D_0+\\bar{D}_0\\rangle &=&\\frac{f_{D_0}m^2_{D_0}}{m_c} \\,,\\nonumber\\\\\n\\langle 0|\\eta_5(0)|D+\\bar{D}\\rangle &=&\\frac{f_{D}m^2_{D}}{m_c} \\,,\\nonumber\\\\\n\\langle 0|\\eta_\\mu(0)|D^*+\\bar{D}^*\\rangle &=&f_{D^*}m_{D^*}\\epsilon_\\mu\\,,\\nonumber\\\\\n\\langle 0|\\eta_{5\\mu}(0)|D_1+\\bar{D}_1\\rangle &=&f_{D_1}m_{D_1}\\epsilon_{\\mu}\\,,\n\\end{eqnarray}\nwith the summation over the polarization vectors $\\sum_\\lambda \\epsilon_\\mu(\\lambda,q)\\epsilon^*_\\nu(\\lambda,q)=-g_{\\mu\\nu}+\\frac{q_\\mu q_\\nu}{q^2}$.\n\n\nIn the low energy limit $\\omega\\rightarrow 0$, the\n$T_{N}(\\omega,{\\bf 0})$ is equivalent to the Born term $T_{N}^{\\rm\nBorn}(\\omega,{\\bf 0})$. 
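The linear relation above between the in-medium mass shift and the scattering length, $\delta m = 2\pi\frac{m_N+m_M}{m_N m_M}\rho_N a_M$, is straightforward to evaluate numerically. The following is a minimal Python sketch; the scattering-length value used below is a hypothetical placeholder for illustration, not a result of this work:

```python
import math

def mass_shift(m_N, m_M, rho_N, a_M):
    """Linear-density mass shift of a heavy meson:
    delta m = 2*pi*(m_N + m_M)/(m_N*m_M) * rho_N * a_M.
    All quantities in GeV units (rho_N in GeV^3, a_M in GeV^-1)."""
    return 2.0 * math.pi * (m_N + m_M) / (m_N * m_M) * rho_N * a_M

# Illustrative numbers only: rho_N = (0.11 GeV)^3 as quoted in the text;
# the D-N scattering length is a hypothetical placeholder of -1 fm,
# converted to GeV^-1 via hbar*c = 0.1973 GeV*fm.
rho_N = 0.11**3
a_D = -1.0 / 0.1973
# a negative scattering length gives a negative (attractive) mass shift
print(mass_shift(0.94, 1.87, rho_N, a_D))
```

The shift is linear in both the nuclear density and the scattering length, which is the content of the linear-density approximation used throughout this section.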
Now I take\ninto account the Born terms at the phenomenological side,\n\\begin{eqnarray}\nT_{N}(\\omega^2)&=&T_{N}^{\\rm\nBorn}(\\omega^2)+\\frac{a}{\\left(m_{D\/D_0\/D^*\/D_1}^2-\\omega^2\\right)^2}+\\frac{b}{m_{D\/D_0\/D^*\/D_1}^2-\\omega^2}+\\frac{c}{s_0-\\omega^2}\n\\, ,\n\\end{eqnarray}\nwith the constraint\n\\begin{eqnarray}\n\\frac{a}{m_{D\/D_0\/D^*\/D_1}^4}+\\frac{b}{m_{D\/D_0\/D^*\/D_1}^2}+\\frac{c}{s_0}&=&0 \\, .\n\\end{eqnarray}\n\n\nThe contributions from the intermediate spin-$\\frac{3}{2}$ charmed\nbaryon states are zero in the soft-limit $q_\\mu \\to 0$\n\\cite{Wangzg}, and I only take into account the intermediate\nspin-$\\frac{1}{2}$ charmed baryon states in calculating the Born terms,\n\\begin{eqnarray}\n\\left(D\/D_0\/D^*\/D_1\\right)^0(c\\bar{u})+p(uud)\\ \\mbox{or}\\ n(udd)&\\longrightarrow&\n\\Lambda_c^+,\\Sigma_c^+(cud)\\ \\mbox{or}\\ \\Sigma_c^0(cdd) \\, ,\\nonumber\\\\\n \\left(D\/D_0\/D^*\/D_1\\right)^+(c\\bar{d})+p(uud)\\ \\mbox{or}\\ n(udd)\n&\\longrightarrow& \\Sigma_c^{++}(cuu)\\ \\mbox{or}\\\n\\Lambda_c^+,\\Sigma_c^+(cud)\\, ,\n\\end{eqnarray}\nwhere $M_{\\Lambda_c}=2.286\\,\\rm{GeV}$ and\n$M_{\\Sigma_c}=2.454\\,\\rm{GeV}$ \\cite{PDG}. I can take $M_H\\approx\n2.4\\,\\rm{GeV}$ as the average value, where the $H$ means either\n$\\Lambda_c^+$, $\\Sigma_c^+$, $\\Sigma_c^{++}$ or $\\Sigma_c^0$. In the case of the bottom baryons, I take the approximation $M_H=\\frac{M_{\\Sigma_b}+M_{\\Lambda_b}}{2}\\approx 5.7\\,\\rm{GeV}$ \\cite{PDG}. 
I write down the Feynman diagrams and calculate the Born terms directly,\nand obtain the results,\n\\begin{eqnarray}\nT_{N}^{\\rm\nBorn}(\\omega,{\\bf0})&=&\\frac{2m_N(m_H+m_N)}\n{\\left[\\omega^2-(m_H+m_N)^2\\right]\\left[\\omega^2-m_{D}^2\\right]^2}\\left(\\frac{f_{D}m_{D}^2g_{DNH}}{m_c}\\right)^2\\,,\n\\end{eqnarray}\nfor the current $\\eta_5(x)$,\n\\begin{eqnarray}\nT_{N}^{\\rm\nBorn}(\\omega,{\\bf0})&\\to&T_{N}^{\\rm\nBorn}(\\omega,{\\bf0})\\,\\left( {\\rm with}\\,\\,\\, m_N \\to -m_N\\, , \\,\\,\\, D\\to D_0 \\right)\\,,\n\\end{eqnarray}\nfor the current $\\eta(x)$,\n\n\\begin{eqnarray}\nT_{N}^{\\rm\nBorn}(\\omega,{\\bf0})&=&\\frac{2m_N(m_H+m_N)}\n{\\left[\\omega^2-(m_H+m_N)^2\\right]\\left[\\omega^2-m_{D^*}^2\\right]^2}\\left(f_{D^*}m_{D^*}g_{D^*NH}\\right)^2\\,,\n\\end{eqnarray}\nfor the current $\\eta_\\mu(x)$,\n\\begin{eqnarray}\nT_{N}^{\\rm\nBorn}(\\omega,{\\bf0})&\\to&T_{N}^{\\rm\nBorn}(\\omega,{\\bf0})\\,\\left( {\\rm with}\\,\\,\\, m_N \\to -m_N\\, , \\,\\,\\, D^*\\to D_1 \\right)\\,,\n\\end{eqnarray}\nfor the current $\\eta_{5\\mu}(x)$, where the $g_{D\/D_0\/D^*\/D_1NH}$ denote the strong coupling constants\n$g_{D\/D_0\/D^*\/D_1N\\Lambda_c}$\n and $g_{D\/D_0\/D^*\/D_1N\\Sigma_c}$.\nOn the other hand, there are no inelastic channels for the\n$(\\bar{D}\/\\bar{D}_0\/\\bar{D}^*\/\\bar{D}_1)^0 N$ and $(\\bar{D}\/\\bar{D}_0\/\\bar{D}^*\/\\bar{D}_1)^- N$ interactions, and $T_{N}^{\\rm\nBorn}(0)=0$.\nIn calculations, I have used the following definitions for the hadronic coupling constants,\n \\begin{eqnarray}\n \\langle\\Lambda_c\/\\Sigma_c(p-q)|D(-q)N(p)\\rangle &=&g_{\\Lambda_c\/\\Sigma_cDN}\\overline{U}_{\\Lambda_c\/\\Sigma_c}(p-q) i\\gamma_5 U_N(p)\\,,\\nonumber\\\\\n \\langle\\Lambda_c\/\\Sigma_c(p-q)|D_0(-q)N(p)\\rangle &=& g_{\\Lambda_c\/\\Sigma_cD_0N}\\overline{U}_{\\Lambda_c\/\\Sigma_c}(p-q) U_N(p)\\,,\\nonumber\\\\\n \\langle\\Lambda_c\/\\Sigma_c(p-q)|D^*(-q)N(p)\\rangle &=&\\overline{U}_{\\Lambda_c\/\\Sigma_c}(p-q)\\left( 
g_{\\Lambda_c\/\\Sigma_cD^*N}\\!\\not\\!{\\epsilon}+i\\frac{g^T_{\\Lambda_c\/\\Sigma_cD^*N}}{M_N+M_{\\Lambda_c\/\\Sigma_c}}\\sigma^{\\alpha\\beta}\\epsilon_\\alpha q_\\beta\\right)U_N(p)\\,,\\nonumber\\\\\n \\langle\\Lambda_c\/\\Sigma_c(p-q)|D_1(-q)N(p)\\rangle &=&\\overline{U}_{\\Lambda_c\/\\Sigma_c}(p-q)\\left( g_{\\Lambda_c\/\\Sigma_cD_1N}\\!\\not\\!{\\epsilon}+i\\frac{g^T_{\\Lambda_c\/\\Sigma_cD_1N}}{M_N+M_{\\Lambda_c\/\\Sigma_c}}\\sigma^{\\alpha\\beta}\\epsilon_\\alpha q_\\beta\\right)\\gamma_5U_N(p)\\,,\\nonumber\\\\\n \\end{eqnarray}\n where the $U_N$ and $\\overline{U}_{\\Lambda_c\/\\Sigma_c}$ are the Dirac spinors of the nucleon and the charmed baryons $\\Lambda_c\/\\Sigma_c$, respectively. In the limit $q_\\mu \\to 0$, the strong coupling constants $g_{\\Lambda_c\/\\Sigma_cD^*N}^T$ and $g_{\\Lambda_c\/\\Sigma_cD_1N}^T$ have no contributions.\n\n\n For example, near the thresholds, the $D^*N$ state can scatter into the $DN$, $D^*N$, $\\pi\\Sigma_c$, $\\eta\\Lambda_c$, etc. I can take into account the intermediate baryon-meson loops or the re-scattering effects with the Bethe-Salpeter equation to obtain the full $D^*N \\to D^*N$ scattering amplitude, and generate higher baryon states dynamically \\cite{DN-BSE-Negative}. I can saturate the full $D^*N\\to D^*N$ scattering amplitude with the tree-level Feynman diagrams describing the exchanges of the higher resonances $\\Lambda_c(2595)$, $\\Sigma_c({\\frac{1}{2}}^-)$, etc. In other coupled-channel analyses, however, the $\\Lambda_c(2595)$ emerges as a $DN$ quasi-bound state rather than a $D^*N$ quasi-bound state \\cite{DN-BSE-Negative}. The transitions of $D^*N$ into the ground states $\\Lambda_c$ and $\\Sigma_c$ are favored by phase space, as the $\\Lambda_c(2595)$ and $\\Sigma_c({\\frac{1}{2}}^-)$ with $J^P={\\frac{1}{2}}^-$ have the average mass $m_{H'}\\approx 2.7\\,\\rm{GeV}$ \\cite{PDG,Wang-Negative}. 
In fact, $m_{H'}^2> s_0$, so I can absorb the higher resonances into the continuum states, provided that they do not dominate the\n QCD sum rules.\n In calculations, I observe that the mass-shift $\\delta m_{D^*}$ is not sensitive to the contributions of the ground states $\\Lambda_c$ and $\\Sigma_c$;\n the contributions from the spin-$\\frac{1}{2}$ higher resonances may be even smaller.\nIn this article, I neglect the intermediate baryon-meson loops; their effects are absorbed into the continuum contributions.\n\n\nAt low nuclear density, the condensates $\\langle{\\cal {O}}\\rangle_{\\rho_N}$ in the nuclear matter can be approximated as\n\\begin{eqnarray}\n\\langle{\\cal{O}}\\rangle_{\\rho_N} &=&\\langle{\\cal{O}}\\rangle+\\frac{\\rho_N}{2m_N}\\langle {\\cal{O}}\\rangle_N \\, ,\n\\end{eqnarray}\nbased on the Fermi gas model, where the $\\langle{\\cal{O}}\\rangle$ and $\\langle\n{\\cal{O}}\\rangle_N$ denote the vacuum condensates and nuclear matter induced condensates, respectively \\cite{Drukarev1991}. I neglect the terms\nproportional to $p_F^4$, $p_F^5$, $p_F^6$, $\\cdots$\nat the normal nuclear matter saturation density\n$\\rho_N=\\rho_0=\\frac{2p_F^3}{3\\pi^2}$, as the Fermi momentum\n$p_F=0.27\\,\\rm{GeV}$ is a small quantity \\cite{Drukarev1991}.\n\nI carry out the operator product expansion for the nuclear matter induced condensates $\\frac{\\rho_N}{2m_N}\\langle\n{\\cal{O}}\\rangle_N$ up to dimension-5\n at the large space-like region in the nuclear matter,\nand take into account the one-loop corrections to the quark condensate $\\langle \\bar{q}q\\rangle_N$. 
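The Fermi-gas linear-density approximation just described can be sketched numerically. In the following minimal Python example, the saturation density is rebuilt from the Fermi momentum quoted above; the vacuum chiral condensate value $-(0.24\,\rm{GeV})^3$ is an assumed standard number, not taken from this article:

```python
import math

# Linear-density approximation <O>_rho = <O> + rho_N/(2 m_N) <O>_N
# at saturation density rho_0 = 2 p_F^3 / (3 pi^2), with p_F = 0.27 GeV.
p_F = 0.27   # GeV, Fermi momentum from the text
m_N = 0.94   # GeV, nucleon mass
rho_0 = 2.0 * p_F**3 / (3.0 * math.pi**2)   # close to (0.11 GeV)^3

def in_medium(vac_value, nucleon_matrix_element, rho_N=rho_0):
    """First-order (Fermi-gas) estimate of an in-medium condensate."""
    return vac_value + rho_N / (2.0 * m_N) * nucleon_matrix_element

# Example: chiral condensate with an assumed vacuum value -(0.24 GeV)^3 and
# <qbar q>_N = 2 m_N sigma_N/(m_u+m_d), sigma_N = 45 MeV, m_u+m_d = 12 MeV
# (the sigma-term inputs used later in this article).
qq_vac = -0.24**3
qq_N = 2.0 * m_N * 0.045 / 0.012
print(rho_0, in_medium(qq_vac, qq_N))
```

The magnitude of the chiral condensate decreases at finite density in this estimate, which is the familiar partial restoration of chiral symmetry in the medium.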
I insert the following term\n\\begin{eqnarray}\n\\frac{1}{2!}\\,\\, ig_s \\int d^D y \\bar{\\psi}(y)\\gamma^\\mu \\psi(y)\\frac{\\lambda^a}{2}G^a_\\mu(y)\\,\\, ig_s \\int d^D z \\bar{\\psi}(z)\\gamma^\\nu \\psi(z)\\frac{\\lambda^b}{2}G^b_\\nu(z) \\, ,\n\\end{eqnarray}\n with the dimension $D=4-2\\epsilon$, into the correlation functions $T_N(q)$ and $T^N_{\\mu\\nu}(q)$ first, where the $\\psi$ denotes the quark fields, the $G^a_\\mu$ denotes the gluon field, and the $\\lambda^a$ denotes the Gell-Mann matrices; then I contract the quark fields with the Wick theorem, and extract the quark condensate $\\langle\\bar{q}{q}\\rangle_N$ according to the formula $\\langle N|q_\\alpha^i \\bar{q}_\\beta^j|N\\rangle=-\\frac{1}{12}\\langle \\bar{q}q\\rangle_N\\delta_{ij}\\delta_{\\alpha\\beta}$ to obtain the perturbative corrections $\\alpha_s\\langle\\bar{q}{q}\\rangle_N$, where the $i$ and $j$ are color indices and the $\\alpha$ and $\\beta$ are Dirac spinor indices. There are six Feynman diagrams that make contributions, see Fig.1. Now I calculate the first diagram in Fig.1 explicitly for the current $\\eta_5(x)$,\n \\begin{eqnarray}\n 2T^{(\\alpha_s,1)}_N(q^2) &=&-\\frac{{\\rm Tr}(\\frac{\\lambda^a}{2} \\frac{\\lambda^b}{2})\\langle \\bar{q}q\\rangle_Ng_s^2 \\mu^{2\\epsilon}}{12} \\frac{i}{(2\\pi)^D}\\int d^Dk{\\rm Tr}\\left\\{ i\\gamma_5 \\frac{i}{\\!\\not\\! {k}}\\gamma^\\alpha \\gamma^\\beta \\frac{i}{\\!\\not\\! {k}} i \\gamma_5 \\frac{i}{\\!\\not\\! {q}+\\!\\not\\! 
{k}-m_c} \\frac{-i\\delta_{ab}g_{\\alpha\\beta}}{k^2}\\right\\} \\nonumber\\\\\n &=&-\\frac{4Dm_c\\langle \\bar{q}q\\rangle_Ng_s^2 \\mu^{2\\epsilon}}{3(2\\pi)^D}i\\frac{\\partial}{\\partial t}\\frac{(-2\\pi i)^2}{2\\pi i} \\int_{m_c^2}^{\\infty} ds \\frac{\\int d^Dk \\delta \\left( k^2-t\\right)\\delta\\left( (k+q)^2-m_c^2\\right)}{s-q^2}\\mid_{t=0} \\nonumber\\\\\n &=& -\\frac{Dm_c\\langle \\bar{q}q\\rangle_Ng_s^2 \\mu^{2\\epsilon}\\left[1+\\epsilon(\\log4\\pi-\\gamma_E) \\right]}{12\\pi^2} \\int_{m_c^2}^{\\infty} ds \\frac{1}{s-q^2} \\frac{s+m_c^2}{s^{1-\\epsilon}(s-m_c^2)^{1+2\\epsilon}} \\, ,\n \\end{eqnarray}\n where I have used Cutkosky's rule to obtain the QCD spectral density. There exists an infrared divergence at the end point $s=m_c^2$.\n It is difficult to carry out the integral over $s$ directly, so I perform the Borel transform $B_{M^2}$ first and then carry out the integral over $s$,\n \\begin{eqnarray}\n B_{M^2}2T^{(\\alpha_s,1)}_N(q^2)&=& -\\frac{Dm_c\\langle \\bar{q}q\\rangle_Ng_s^2 \\mu^{2\\epsilon}\\left[1+\\epsilon(\\log4\\pi-\\gamma_E) \\right]}{12\\pi^2M^2} \\int_{m_c^2}^{\\infty} ds \\frac{s+m_c^2}{s^{1-\\epsilon}(s-m_c^2)^{1+2\\epsilon}}\\exp\\left( -\\frac{s}{M^2}\\right) \\nonumber\\\\\n &=&\\frac{m_c\\langle \\bar{q}q\\rangle_Ng_s^2}{3\\pi^2M^2}\\exp\\left(-\\frac{m_c^2}{M^2} \\right)\\left( \\frac{1}{\\epsilon}-\\log4\\pi+\\gamma_E\\right)+\\frac{m_c\\langle \\bar{q}q\\rangle_Ng_s^2}{3\\pi^2M^2}\\Gamma\\left(0,\\frac{m_c^2}{M^2} \\right) \\nonumber\\\\\n &&-\\frac{m_c\\langle \\bar{q}q\\rangle_Ng_s^2}{6\\pi^2M^2}\\exp\\left(-\\frac{m_c^2}{M^2} \\right)+\\frac{m_c\\langle \\bar{q}q\\rangle_Ng_s^2}{3\\pi^2M^2}\\exp\\left(-\\frac{m_c^2}{M^2} \\right)\\log\\frac{m_c^2\\mu^2}{M^4} \\, ,\n \\end{eqnarray}\n where\n \\begin{eqnarray}\n\\Gamma(0,x)&=&e^{-x}\\int_0^\\infty dt \\frac{1}{t+x}e^{-t} \\, .\n\\end{eqnarray}\n The other diagrams are calculated analogously; I regularize the divergences in $D=4-2\\epsilon$ dimensions, then remove the ultraviolet 
divergences through renormalization and absorb the\n infrared divergences into the quark condensate $\\langle \\bar{q}q\\rangle_N$.\n \\begin{figure}\n \\centering\n \\includegraphics[totalheight=6cm,width=14cm]{qq-aphs.eps}\n \\caption{The perturbative $\\mathcal{O}(\\alpha_s)$ corrections to the quark condensate $\\langle\\bar{q}q\\rangle_N$. }\n\\end{figure}\n\n\nI calculate the contributions of the other condensates at the tree level; the calculations are straightforward but cumbersome. In the calculations, I use the following formulas,\n\\begin{eqnarray}\n\\langle\nq_{\\alpha}(x)\\bar{q}_{\\beta}(0)\\rangle_N&=&-\\frac{1}{4}\\left[\\left(\\langle\\bar{q}q\\rangle_N+x^{\\mu}\\langle\\bar{q}D_{\\mu}q\\rangle_N\n+\\frac{1}{2}x^{\\mu}x^{\\nu}\\langle\\bar{q}D_{\\mu}D_{\\nu}q\\rangle_N\n+\\cdots\\right)\\delta_{\\alpha\\beta}\\right.\\nonumber \\\\\n&&\\left.+\\left(\\langle\\bar{q}\\gamma_{\\lambda}q\\rangle_N+x^{\\mu}\\langle\\bar{q}\n\\gamma_{\\lambda}D_{\\mu} q\\rangle_N\n+\\frac{1}{2}x^{\\mu}x^{\\nu}\\langle\\bar{q}\\gamma_{\\lambda}D_{\\mu}D_{\\nu}\nq\\rangle_N\n+\\cdots\\right)\\gamma^{\\lambda}_{\\alpha\\beta} \\right] \\, ,\n\\end{eqnarray}\n and\n\\begin{eqnarray}\n\\langle\ng_{s}q^i_{\\alpha}\\bar{q}^j_{\\beta}G_{\\mu\\nu}^{a}\\rangle_N&=&-\\frac{1}{96} \\frac{\\lambda^a_{ij}}{2}\\left\\{\\langle g_{s}\\bar{q}\\sigma Gq\\rangle_N\\left[\\sigma_{\\mu\\nu}+i(u_{\\mu}\\gamma_{\\nu}-u_{\\nu}\\gamma_{\\mu\n})\n\\!\\not\\! {u}\\right]_{\\alpha\\beta} +\\langle g_{s}\\bar{q}\\!\\not\\! {u}\\sigma Gq\\rangle_N\\right.\\nonumber\\\\\n&&\\left.\\left[\\sigma_{\\mu\\nu}\\!\\not\\!\n{u}+i(u_{\\mu}\\gamma_{\\nu}-u_{\\nu}\\gamma_{\\mu}\n)\\right]_{\\alpha\\beta}\n-4\\langle\\bar{q}u D u D q\\rangle_N\\left[\\sigma_{\\mu\\nu}+2i(u_{\\mu}\\gamma_{\\nu}-u_{\\nu}\\gamma_{\\mu}\n)\\!\\not\\! 
{u}\\right]_{\\alpha\\beta}\\right\\} \\, , \\nonumber\\\\\n\\end{eqnarray}\nwhere $D_\\mu=\\partial_\\mu-ig_s\\frac{\\lambda^a}{2}G^a_\\mu$,\n\\begin{eqnarray} \\label{ }\n\\langle\\bar{q}\\gamma_{\\mu}q\\rangle_N&=&\\langle\\bar{q}\\!\\not\\!{u}q\\rangle_N\nu_{\\mu} \\, , \\nonumber \\\\\n\\langle\\bar{q}D_{\\mu}q\\rangle_N&=&\\langle\\bar{q}u D\nq\\rangle_N\nu_{\\mu}=0\\, , \\nonumber \\\\\n\\langle\\bar{q}\\gamma_{\\mu}D_{\\nu}q\\rangle_N&=&\\frac{4}{3}\\langle\\bar{q}\n\\!\\not\\! {u}u D q\\rangle_N\\left(u_{\\mu}u_{\\nu}-\\frac{1}{4}g_{\\mu\\nu}\\right) \\, , \\nonumber \\\\\n\\langle\\bar{q}D_{\\mu}D_{\\nu}q\\rangle_N&=&\\frac{4}{3}\\langle\\bar{q}\nu D u D q\\rangle_N\\left(u_{\\mu}u_{\\nu}-\\frac{1}{4}g_{\\mu\\nu}\\right)\n-\\frac{1}{6} \\langle\ng_{s}\\bar{q}\\sigma Gq\\rangle_N\\left(u_{\\mu}u_{\\nu}-g_{\\mu\\nu}\\right) \\, , \\nonumber \\\\\n\\langle\\bar{q}\\gamma_{\\lambda}D_{\\mu}D_{\\nu}q\\rangle_N&=&2\\langle\\bar{q}\n\\!\\not\\! {u}u D u D\nq\\rangle_N\\left[u_{\\lambda}u_{\\mu}u_{\\nu} -\\frac{1}{6}\n\\left(u_{\\lambda}g_{\\mu\\nu}+u_{\\mu}g_{\\lambda\\nu}+u_{\\nu}g_{\\lambda\\mu}\\right)\\right]\n\\nonumber\\\\\n&&-\\frac{1}{6} \\langle\ng_{s}\\bar{q}\\!\\not\\! 
{u}\\sigma Gq\\rangle_N(u_{\\lambda}u_{\\mu}u_{\\nu}-u_{\\lambda}g_{\\mu\\nu}) \\, ,\n\\end{eqnarray}\nand\n\\begin{equation}\n \\langle\nG_{\\alpha\\beta}^{a}G_{\\mu\\nu}^{b}\\rangle_N=\\frac{\\delta^{ab}}{96}\n\\langle\nGG\\rangle_N\\left(g_{\\alpha\\mu}g_{\\beta\\nu}-g_{\\alpha\\nu}g_{\\beta\\mu}\\right)+O\\left(\\langle\n\\textbf{E}^{2}+\\textbf{B}^{2}\\rangle_N\\right).\n\\end{equation}\n\n\n\nOnce the analytical results at the level of quark-gluon degrees of freedom are obtained,\n I set $\\omega^2=q^2$, invoke the\nquark-hadron duality below the continuum threshold $s_0$, and perform the Borel transform with respect\nto the variable $Q^2=-\\omega^2$, to finally obtain the following QCD sum\nrules:\n\\begin{eqnarray}\n a\\, C_a+b\\, C_b &=&C_f \\, ,\n\\end{eqnarray}\n\\begin{eqnarray}\nC_a &=&\\frac{1}{M^2}\\exp\\left(-\\frac{m_{D}^2}{M^2}\\right)-\\frac{s_0}{m_{D}^4}\\exp\\left(-\\frac{s_0}{M^2}\\right) \\, ,\\nonumber\\\\\nC_b&=&\\exp\\left(-\\frac{m_{D}^2}{M^2}\\right)-\\frac{s_0}{m_{D}^2}\\exp\\left(-\\frac{s_0}{M^2}\\right) \\, ,\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\nC_f&=& \\frac{2m_N(m_H+m_N)}{(m_H+m_N)^2-m_{D}^2}\\left(\\frac{f_{D}m_{D}^2g_{DNH}}{m_c}\\right)^2\\left\\{ \\left[\\frac{1}{M^2}-\\frac{1}{m_{D}^2-(m_H+m_N)^2}\\right] \\exp\\left(-\\frac{m_{D}^2}{M^2}\\right)\\right.\\nonumber\\\\\n&&\\left.+\\frac{1}{(m_H+m_N)^2-m_{D}^2}\\exp\\left(-\\frac{(m_H+m_N)^2}{M^2}\\right)\\right\\}-\\frac{m_c\\langle\\bar{q}q\\rangle_N}{2}\\left\\{1+\\frac{\\alpha_s}{\\pi} \\left[ 6-\\frac{4m_c^2}{3M^2} \\right.\\right.\\nonumber\\\\\n&&\\left.\\left.-\\frac{2}{3}\\left( 1-\\frac{m_c^2}{M^2}\\right)\\log\\frac{m_c^2}{\\mu^2}-2\\Gamma\\left(0,\\frac{m_c^2}{M^2}\\right)\\exp\\left( \\frac{m_c^2}{M^2}\\right) \\right]\\right\\}\\exp\\left(- \\frac{m_c^2}{M^2}\\right) \\nonumber\\\\\n&&+\\frac{1}{2}\\left\\{-2\\left(1-\\frac{m_c^2}{M^2}\\right)\\langle q^\\dag i D_0q\\rangle_N +\\frac{4m_c\n}{M^2}\\left(1-\\frac{m_c^2}{2M^2}\\right)\\langle \\bar{q} i D_0 i 
D_0q\\rangle_N+\\frac{1}{12}\\langle\\frac{\\alpha_sGG}{\\pi}\\rangle_N\\right\\} \\nonumber\\\\\n&&\\exp\\left(- \\frac{m_c^2}{M^2}\\right)\\, ,\n\\end{eqnarray}\n for the current $\\eta_5(x)$,\n \\begin{eqnarray}\n C_i &\\to& C_i\\left( {\\rm with}\\,\\,\\, m_N \\to -m_N\\, , \\,\\,\\,m_c \\to -m_c\\, , \\,\\,\\,D \\to D_0\\right)\\, ,\n \\end{eqnarray}\nfor the current $\\eta(x)$,\n\\begin{eqnarray}\nC_a &=&\\frac{1}{M^2}\\exp\\left(-\\frac{m_{D^*}^2}{M^2}\\right)-\\frac{s_0}{m_{D^*}^4}\\exp\\left(-\\frac{s_0}{M^2}\\right) \\, ,\\nonumber\\\\\nC_b&=&\\exp\\left(-\\frac{m_{D^*}^2}{M^2}\\right)-\\frac{s_0}{m_{D^*}^2}\\exp\\left(-\\frac{s_0}{M^2}\\right) \\, ,\n\\end{eqnarray}\n\\begin{eqnarray}\nC_f&=& \\frac{2m_N(m_H+m_N)}{(m_H+m_N)^2-m_{D^*}^2}\\left(f_{D^*}m_{D^*}g_{D^*NH}\\right)^2\\left\\{ \\left[\\frac{1}{M^2}-\\frac{1}{m_{D^*}^2-(m_H+m_N)^2}\\right] \\exp\\left(-\\frac{m_{D^*}^2}{M^2}\\right)\\right.\\nonumber\\\\\n&&\\left.+\\frac{1}{(m_H+m_N)^2-m_{D^*}^2}\\exp\\left(-\\frac{(m_H+m_N)^2}{M^2}\\right)\\right\\}-\\frac{m_c\\langle\\bar{q}q\\rangle_N}{2}\\left\\{1+\\frac{\\alpha_s}{\\pi} \\left[ \\frac{8}{3}-\\frac{4m_c^2}{3M^2} \\right.\\right.\\nonumber\\\\\n&&\\left.\\left.+\\frac{2}{3}\\left( 2+\\frac{m_c^2}{M^2}\\right)\\log\\frac{m_c^2}{\\mu^2}-\\frac{2m_c^2}{3M^2}\\Gamma\\left(0,\\frac{m_c^2}{M^2}\\right)\\exp\\left( \\frac{m_c^2}{M^2}\\right) \\right]\\right\\}\\exp\\left(- \\frac{m_c^2}{M^2}\\right) \\nonumber\\\\\n&&+\\frac{1}{2}\\left\\{-\\frac{4\\langle q^\\dag i D_0q\\rangle_N}{3} +\\frac{2m_c^2\\langle q^\\dag i D_0q\\rangle_N}{M^2}+\\frac{2m_c\\langle\\bar{q}g_s\\sigma Gq\\rangle_N}{3M^2}+\\frac{16m_c\\langle \\bar{q} i D_0 i D_0q\\rangle_N}{3M^2}\\right.\\nonumber\\\\\n&&\\left.-\\frac{2m_c^3\\langle \\bar{q} i D_0 i D_0q\\rangle_N}{M^4}-\\frac{1}{12}\\langle\\frac{\\alpha_sGG}{\\pi}\\rangle_N\\right\\}\\exp\\left(- \\frac{m_c^2}{M^2}\\right)\\, ,\n\\end{eqnarray}\n for the current $\\eta_\\mu(x)$,\n\\begin{eqnarray}\n C_i &\\to& C_i\\left( 
{\\rm with}\\,\\,\\, m_N \\to -m_N\\, , \\,\\,\\,m_c \\to -m_c\\, , \\,\\,\\,D^* \\to D_1\\right)\\, ,\n \\end{eqnarray}\nfor the current $\\eta_{5\\mu}(x)$, where $i=a,b,f$.\nIn this article, I neglect the contributions from the heavy quark condensates $\\langle \\bar{Q}Q\\rangle$, $\\langle \\bar{Q}Q\\rangle=-\\frac{1}{12\\pi m_Q}\\langle \\frac{\\alpha_s GG}{\\pi} \\rangle$ up to the order $\\mathcal{O}(\\alpha_s)$ (here I count the condensate $ \\langle \\frac{\\alpha_s GG}{\\pi} \\rangle$ as of the order $\\mathcal{O}(\\alpha_s)$); the heavy quark condensates have practically no effect on the polarization\nfunctions. For detailed discussions of this subject, one can consult Ref.\\cite{PRT85}.\nIn Ref.\\cite{QQcond}, Buchheim, Hilger and Kampfer study the contributions of the condensates involving the heavy quarks in detail; the results indicate that\nthose condensates are either suppressed by the heavy quark mass $m_Q$ or by the additional factor $\\frac{\\alpha_s}{4\\pi}$ (or $g_s^2\/(4\\pi)^2$).\nNeglecting the in-medium effects on the heavy quark condensates does not affect the predictions remarkably, as the main contributions come from the terms $\\langle \\bar{q}q\\rangle_N$.\n\n\nDifferentiating the above equation with respect to $\\tau=\\frac{1}{M^2}$, then\neliminating the\n parameter $b$ ($a$), I can obtain the QCD sum rules for\n the parameter $a$ ($b$),\n \\begin{eqnarray}\n a&=&\\frac{C_f\\left(-\\frac{d}{d\\tau}\\right)C_b-C_b\\left(-\\frac{d}{d\\tau}\\right)C_f}{C_a\\left(-\\frac{d}{d\\tau}\\right)C_b-C_b\\left(-\\frac{d}{d\\tau}\\right)C_a}\\, , \\nonumber\\\\\n b&=&\\frac{C_f\\left(-\\frac{d}{d\\tau}\\right)C_a-C_a\\left(-\\frac{d}{d\\tau}\\right)C_f}{C_b\\left(-\\frac{d}{d\\tau}\\right)C_a-C_a\\left(-\\frac{d}{d\\tau}\\right)C_b}\\, .\n \\end{eqnarray}\n With the simple replacements $m_c \\to m_b$, $D\/D_0\/D^*\/D_1 \\to B\/B_0\/B^*\/B_1$, $\\Lambda_c \\to \\Lambda_b$ and $\\Sigma_c \\to \\Sigma_b$,\n I can obtain the corresponding QCD sum rules 
for the bottom mesons in the nuclear matter.\n\n\n\n\\section{Numerical results and discussions}\nAt the normal nuclear matter with the saturation density\n$\\rho_N=\\rho_0=\\frac{2p_F^3}{3\\pi^2}$, where the Fermi momentum\n$p_F=0.27\\,\\rm{GeV}$ is a small quantity,\n the condensates $\\langle{\\cal {O}}\\rangle_{\\rho_N}$ in the nuclear matter can be approximated as\n$\\langle{\\cal{O}}\\rangle_{\\rho_N} =\\langle{\\cal{O}}\\rangle+\\frac{\\rho_N}{2m_N}\\langle {\\cal{O}}\\rangle_N $, the terms\nproportional to $p_F^4$, $p_F^5$, $p_F^6$, $\\cdots$ can be neglected safely,\nwhere the $\\langle{\\cal{O}}\\rangle=\\langle0|{\\cal{O}}|0\\rangle$ and $\\langle\n{\\cal{O}}\\rangle_N=\\langle N|\n{\\cal{O}}|N\\rangle$ denote the vacuum condensates and nuclear matter induced condensates, respectively \\cite{Drukarev1991}.\n\n\nThe input parameters at the QCD side are taken as $\\rho_N=(0.11\\,\\rm{GeV})^3$,\n $\\langle\\bar{q} q\\rangle_N={\\sigma_N \\over m_u+m_d } (2m_N)$,\n $\\langle\\frac{\\alpha_sGG}{\\pi}\\rangle_N= - 0.65 \\,{\\rm {GeV}} (2m_N)$, $\\sigma_N=45\\,\\rm{MeV}$,\n $m_u+m_d=12\\,\\rm{MeV}$,\n$\\langle q^\\dagger iD_0 q\\rangle_N=0.18 \\,{\\rm{GeV}}(2m_N)$,\n$\\langle\\bar{q}g_s\\sigma G q\\rangle_N=3.0\\,{\\rm GeV}^2(2m_N) $,\n$\\langle \\bar{q} iD_0iD_0\nq\\rangle_N+{1\\over8}\\langle\\bar{q}g_s\\sigma G\nq\\rangle_N=0.3\\,{\\rm{GeV}}^2(2m_N)$,\n $m_N=0.94\\,\\rm{GeV}$ \\cite{C-parameter}, $m_c=(1.3\\pm0.1)\\,\\rm{GeV}$, $m_b=(4.7\\pm0.1)\\,\\rm{GeV}$, $\\alpha_s=0.45$ and $\\mu=1\\,\\rm{GeV}$.\n If we take the normalization $\\langle N(\\mbox{\\boldmath $p$})|N(\\mbox{\\boldmath\n$p$}')\\rangle = (2\\pi)^{3} \\delta^{3}(\\mbox{\\boldmath\n$p$}-\\mbox{\\boldmath $p$}')$, then $\\langle{\\cal{O}}\\rangle_{\\rho_N} =\\langle{\\cal{O}}\\rangle+\\rho_N\\langle {\\cal{O}}\\rangle_N $, the unit $2m_N$ in the brackets in the values of the condensates $\\langle\\bar{q} q\\rangle_N$, $\\langle\\frac{\\alpha_sGG}{\\pi}\\rangle_N$, $\\cdots$ disappears.\nI 
choose the values of the nuclear matter induced condensates determined in Ref.\\cite{C-parameter}, which are still widely used in the literature.\nAlthough the values of some condensates have been updated, those condensates are irrelevant to the present work. The updates focus on\nthe four-quark condensate \\cite{update-4q}. In this article, I take into account the condensates up to dimension-5, so the four-quark condensates have no contributions; the dominant contributions come from the nuclear matter induced condensate $\\langle\\bar{q} q\\rangle_N$, $\\langle\\bar{q} q\\rangle_N={\\sigma_N \\over m_u+m_d } (2m_N)$. The value $m_u+m_d=12\\,\\rm{MeV}$ is obtained from the famous Gell-Mann-Oakes-Renner relation at the energy scale $\\mu=1\\,\\rm{GeV}$, while the value $\\sigma_N=45\\,\\rm{MeV}$ is still widely used \\cite{update-4q}.\n\n\nThe parameters at the hadronic side are taken as\n$m_D=1.870\\,\\rm{GeV}$, $m_B=5.280\\,\\rm{GeV}$, $m_{D_0}=2.355\\,\\rm{GeV}$, $m_{B_0}=5.740\\,\\rm{GeV}$,\n $m_{D^*}=2.010\\,\\rm{GeV}$, $m_{B^*}=5.325\\,\\rm{GeV}$, $m_{D_1}=2.420\\,\\rm{GeV}$, $m_{B_1}=5.750\\,\\rm{GeV}$,\n$f_{D}=0.210\\,\\rm{GeV}$, $f_B=0.190\\,\\rm{GeV}$,\n $f_{D_0}=0.334 \\frac{m_c}{m_{D_0}}\\,\\rm{GeV}$,\n$f_{B_0}=0.280\\frac{m_b}{m_{B_0}}\\,\\rm{GeV}$,\n$f_{D^*}=0.270\\,\\rm{GeV}$, $f_{B^*}=0.195\\,\\rm{GeV}$,\n $f_{D_1}=0.305\\,\\rm{GeV}$,\n$f_{B_1}=0.255\\,\\rm{GeV}$,\n $s^0_{D}=(6.2\\pm0.5)\\,\\rm{GeV}^2$, $s^0_{B}=(33.5\\pm1.0)\\,\\rm{GeV}^2$,\n$s^0_{D^*}=(6.5\\pm0.5)\\,\\rm{GeV}^2$, $s^0_{B^*}=(35.0\\pm1.0)\\,\\rm{GeV}^2$,\n$s^0_{D_0}=(8.0\\pm0.5)\\,\\rm{GeV}^2$, $s^0_{B_0}=(39.0\\pm1.0)\\,\\rm{GeV}^2$,\n$s^0_{D_1}=(8.5\\pm0.5)\\,\\rm{GeV}^2$ and $s^0_{B_1}=(39.0\\pm1.0)\\,\\rm{GeV}^2$, which are determined from the conventional two-point correlation functions\n using the QCD sum rules \\cite{WangHuang,WangJHEP}. 
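As an aside on the next-to-leading-order Wilson coefficients quoted earlier, the function $\Gamma(0,x)$ defined there via $\Gamma(0,x)=e^{-x}\int_0^\infty dt\, e^{-t}/(t+x)$ coincides with the exponential integral $E_1(x)=\int_x^\infty du\, e^{-u}/u$ (substitute $u=t+x$). A stdlib-only numerical cross-check of the two representations, with quadrature parameters chosen for illustration:

```python
import math

def gamma0_text(x, n=200000, tmax=40.0):
    """Gamma(0,x) via the representation used in the text:
    e^{-x} * Integral_0^inf e^{-t}/(t+x) dt  (midpoint rule, truncated)."""
    h = tmax / n
    s = sum(math.exp(-(i + 0.5) * h) / ((i + 0.5) * h + x) for i in range(n))
    return math.exp(-x) * s * h

def gamma0_e1(x, n=200000, umax=40.0):
    """Gamma(0,x) as the exponential integral E_1(x) = Integral_x^inf e^{-u}/u du."""
    h = (umax - x) / n
    s = sum(math.exp(-(x + (i + 0.5) * h)) / (x + (i + 0.5) * h) for i in range(n))
    return s * h

# typical argument in the sum rules: x = m_c^2 / M^2, e.g. m_c = 1.3 GeV, M^2 = 4 GeV^2
x = 1.3**2 / 4.0
print(gamma0_text(x), gamma0_e1(x))
```

Both quadratures agree to high accuracy, confirming that the two integral representations describe the same function.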
I neglect the uncertainties of the decay constants to avoid double counting, as the main uncertainties of the decay constants originate from the uncertainties of the continuum threshold parameters $s_0$.\n\n\nThe value of the strong coupling constant $g_{DN\\Lambda_c}$ is $g_{\\Lambda_c DN}=6.74$ from\nthe QCD sum rules \\cite{Nielsen98}, while the average value of the strong coupling constants $g_{\\Lambda_cDN}$ and $g_{\\Sigma_cDN}$ from the light-cone QCD sum rules is $\\frac{g_{\\Lambda_cDN}+g_{\\Sigma_cDN}}{2}=6.775$ \\cite{Khodjamirian1108}; those values are consistent with each other.\nThe average value of the strong coupling constants $g_{\\Lambda_c D^*N}$ and $g_{\\Sigma_c D^*N}$ from the light-cone QCD sum rules is $\\frac{g_{\\Lambda_c D^*N}+g_{\\Sigma_cD^*N}}{2}=3.86$ \\cite{Khodjamirian1108}. In this\narticle, I take the approximation\n$g_{DN\\Lambda_c}\\approx\ng_{DN\\Sigma_c}\\approx g_{BN\\Lambda_b}\\approx\ng_{BN\\Sigma_b}\\approx g_{D_0N\\Lambda_c}\\approx\ng_{D_0N\\Sigma_c}\\approx g_{B_0N\\Lambda_b}\\approx\ng_{B_0N\\Sigma_b}\\approx6.74$ and\n$g_{\\Lambda_cD^*N}\\approx g_{\\Sigma_cD^*N}\n\\approx g_{\\Lambda_cD_1N}\\approx g_{\\Sigma_cD_1N}\\approx g_{\\Lambda_bB^*N}\\approx g_{\\Sigma_bB^*N}\n\\approx g_{\\Lambda_bB_1N}\\approx g_{\\Sigma_bB_1N}\\approx3.86$.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[totalheight=5cm,width=7cm]{massPD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massVD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massSD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massAD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massPB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massVB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massSB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{massAB.EPS}\n \\caption{(Color online) The shifts of the masses of the heavy mesons in the nuclear matter with variations of the Borel parameter $M^2$; the labels I (II) denote that contributions up to 
next-to-leading order (leading order) are included. }\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[totalheight=5cm,width=7cm]{decayPD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decayVD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decaySD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decayAD.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decayPB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decayVB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decaySB.EPS}\n \\includegraphics[totalheight=5cm,width=7cm]{decayAB.EPS}\n \\caption{(Color online) The shifts of the decay constants of the heavy mesons in the nuclear matter with variations of the Borel parameter $M^2$, where I (II) denotes that contributions up to the next-to-leading order (leading order) are included. }\n\\end{figure}\n\nIn Figs.2-3, I plot the shifts of the masses and decay constants of the heavy mesons in the nuclear matter with variations of the Borel\nparameter $M^2$, respectively. From the figures, I can see that platforms appear. In this article, I choose the Borel parameters $M^2$ according to the criterion that the uncertainties originating from the Borel parameters $M^2$ are negligible. The values of the Borel parameters $M^2$ are shown explicitly in Table 1.\nFrom Figs.2-3 and Table 1, I can see that the Borel parameters $M^2$ in the QCD sum rules for the mass-shift $\\delta m$ and decay-constant-shift $\\delta f$ of the same meson are different. 
This is acceptable, as the mass-shift $\\delta m$ and decay-constant-shift $\\delta f$ come from different QCD sum rules rather than coupled QCD sum rules, see Eq.(39), so the platforms may appear in different places in different QCD sum rules.\n\nI obtain the shifts of the masses and decay constants of the heavy mesons in the nuclear matter in the Borel windows, which are shown explicitly in Table 2.\nFrom Table 2, I obtain the fractions of the shifts $\\frac{\\delta m_{D\/D^*\/D_0\/D_1}}{m_{D\/D^*\/D_0\/D_1}}\\leq 5\\%$, $\\frac{\\delta f_{D\/D^*\/D_0\/D_1}}{f_{D\/D^*\/D_0\/D_1}}\\leq 10\\%$, $\\frac{\\delta m_{B\/B^*\/B_0\/B_1}}{m_{B\/B^*\/B_0\/B_1}}= (5-15)\\%$ and $\\frac{\\delta f_{B\/B^*\/B_0\/B_1}}{f_{B\/B^*\/B_0\/B_1}}= (25-55)\\%$, which are shown explicitly in Table 3.\nIn the calculations, I observe that the main contributions come from the terms $m_c\\langle\\bar{q}q\\rangle_N$ and $m_b\\langle\\bar{q}q\\rangle_N$.\n From Table 3, I can see that the next-to-leading order corrections $\\alpha_s \\langle\\bar{q}q\\rangle_N$ are important. In the case of the shifts $\\frac{\\delta m_{B^*\/B_1}}{m_{B^*\/B_1}}$ and $\\frac{\\delta f_{B^*\/B_1}}{f_{B^*\/B_1}}$, the next-to-leading order contributions $\\alpha_s \\langle\\bar{q}q\\rangle_N$ and the leading order contributions $\\langle\\bar{q}q\\rangle_N$ are almost equal. In this article, I choose the special energy scale $\\mu=1\\,\\rm{GeV}$. The logarithm $\\log \\frac{m_b^2}{\\mu^2}$ in the next-to-leading order contributions is very large and enhances them greatly. Although the nuclear matter induced condensates evolve with the renormalization group equation, their evolution with the energy scale is not well known, as this subject has not yet been studied in detail. A larger energy scale $\\mu$ leads to a smaller logarithm $\\log \\frac{m_b^2}{\\mu^2}$ and therefore more reasonable predictions. 
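The fractions quoted from Table 3 are simply the Table 2 shifts divided by the vacuum masses; a quick numerical sketch with the NLO values for a few representative channels:

```python
# Cross-check of the NLO mass-shift fractions (Table 3) against the shifts
# in Table 2 and the vacuum masses quoted in the text; all values in GeV.
masses = {"D": 1.870, "D*": 2.010, "B": 5.280, "B*": 5.325}
shifts_nlo = {"D": -0.072, "D*": -0.102, "B": -0.473, "B*": -0.687}

fractions = {k: shifts_nlo[k] / masses[k] for k in masses}
# e.g. delta m_D / m_D ~ -3.9%, consistent with the -4% quoted in Table 3
print({k: f"{100 * v:.1f}%" for k, v in fractions.items()})
```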
In Table 4, I present the main uncertainties, which originate from the uncertainties of the heavy quark masses and the continuum threshold parameters.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n &$\\delta m_{D}$ &$\\delta m_{D^*}$ &$\\delta m_{D_0}$ &$\\delta m_{D_1}$ &$\\delta m_{B}$ &$\\delta m_{B^*}$ &$\\delta m_{B_0}$ &$\\delta m_{B_1}$\\\\ \\hline\n $M^2$ &$4.4-5.4$ &$4.6-5.6$ &$6.0-7.0$ &$6.6-7.6$ &$29-33$ &$30-34$ &$32-36$ &$32-36$ \\\\ \\hline\\hline\n &$\\delta f_{D}$ &$\\delta f_{D^*}$ &$\\delta f_{D_0}$ &$\\delta f_{D_1}$ &$\\delta f_{B}$ &$\\delta f_{B^*}$ &$\\delta f_{B_0}$ &$\\delta f_{B_1}$\\\\ \\hline\n $M^2$ &$1.9-2.9$ &$3.5-4.5$ &$4.3-5.3$ &$5.3-6.3$ &$25-29$ &$27-31$ &$30-34$ &$31-35$ \\\\ \\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{ The Borel parameters in the QCD sum rules for the shifts of the masses and decay constants of the heavy mesons in the nuclear matter, the unit is $\\rm{GeV}^2$. }\n\\end{table}\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n &$\\delta m_{D}$ &$\\delta m_{D^*}$ &$\\delta m_{D_0}$ &$\\delta m_{D_1}$ &$\\delta m_{B}$ &$\\delta m_{B^*}$ &$\\delta m_{B_0}$ &$\\delta m_{B_1}$\\\\ \\hline\n{\\rm NLO} &$-72$ &$-102$ &$80$ &$97$ &$-473$ &$-687$ &$295$ &$522$ \\\\ \\hline\n{\\rm LO} &$-47$ &$-70$ &$54$ &$66$ &$-329$ &$-340$ &$209$ &$260$ \\\\ \\hline\n\\cite{Hay} &$-48$ & & & & & & &\\\\ \\hline\n\\cite{Hil} &$+45$ & & & & $+60$ & & &\\\\ \\hline\n\\cite{Azi} &$-46$ & & & & $-242$ & & &\\\\ \\hline\\hline\n &$\\delta f_{D}$ &$\\delta f_{D^*}$ &$\\delta f_{D_0}$ &$\\delta f_{D_1}$ &$\\delta f_{B}$ &$\\delta f_{B^*}$ &$\\delta f_{B_0}$ &$\\delta f_{B_1}$\\\\ \\hline\n{\\rm NLO} &$-6$ &$-26$ &$11$ &$31$ &$-71$ &$-111$ &$56$ &$134$ \\\\ \\hline\n{\\rm LO} &$-4$ &$-18$ &$7$ &$21$ &$-48$ &$-55$ &$39$ &$67$ \\\\ \\hline\n\\cite{Azi} &$-2$ & & & & $-23$ & & &\\\\ \\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{ The shifts 
of the masses and decay constants of the heavy mesons in the nuclear matter, where the NLO (LO) denotes contributions up to the next-to-leading order (leading order) are included, the unit is MeV. }\n\\end{table}\n\n\n\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n &$\\frac{\\delta m_{D}}{m_{D}}$ &$\\frac{\\delta m_{D^*}}{m_{D^*}}$ &$\\frac{\\delta m_{D_0}}{m_{D_0}}$ &$\\frac{\\delta m_{D_1}}{m_{D_1}}$\n &$\\frac{\\delta m_{B}}{ m_{B}}$ &$\\frac{\\delta m_{B^*}}{m_{B^*}}$ &$\\frac{\\delta m_{B_0}}{m_{B_0}}$ &$\\frac{\\delta m_{B_1}}{m_{B_1}}$\\\\ \\hline\n{\\rm NLO} &$-4\\%$ &$-5\\%$ &$3\\%$ &$4\\%$ &$-9\\%$ &$-13\\%$ &$5\\%$ &$9\\%$ \\\\ \\hline\n{\\rm LO} &$-3\\%$ &$-3\\%$ &$2\\%$ &$3\\%$ &$-6\\%$ &$-6\\%$ &$4\\%$ &$5\\%$ \\\\ \\hline\\hline\n &$\\frac{\\delta f_{D}}{f_{D}}$ &$\\frac{\\delta f_{D^*}}{f_{D^*}}$ &$\\frac{\\delta f_{D_0}}{f_{D_0}}$ &$\\frac{\\delta f_{D_1}}{f_{D_1}}$\n &$\\frac{\\delta f_{B}}{f_{B}}$ &$\\frac{\\delta f_{B^*}}{f_{B^*}}$ &$\\frac{\\delta f_{B_0}}{f_{B_0}}$ &$\\frac{\\delta f_{B_1}}{f_{B_1}}$\\\\ \\hline\n{\\rm NLO} &$-3\\%$ &$-10\\%$ &$6\\%$ &$10\\%$ &$-37\\%$ &$-57\\%$ &$24\\%$ &$53\\%$ \\\\ \\hline\n{\\rm LO} &$-2\\%$ &$-7\\%$ &$4\\%$ &$7\\%$ &$-25\\%$ &$-28\\%$ &$17\\%$ &$26\\%$ \\\\ \\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{ The fractions of the shifts of the masses and decay constants of the heavy mesons in the nuclear matter, where the NLO (LO) denotes contributions up to the next-to-leading order (leading order) are included. 
}\n\\end{table}\n\n\n\n\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n &$\\delta (\\delta m_{D})$ &$\\delta(\\delta m_{D^*})$ &$\\delta(\\delta m_{D_0})$ &$\\delta(\\delta m_{D_1})$\n &$\\delta(\\delta m_{B})$ &$\\delta(\\delta m_{B^*})$ &$\\delta(\\delta m_{B_0})$ &$\\delta(\\delta m_{B_1})$ \\\\ \\hline\n$\\delta m_Q$ &$\\pm14$ &$\\pm4$ &$\\pm26$ &$\\pm6$\n &$\\pm18$ &$\\pm1$ &$\\pm25$ &$\\pm1$ \\\\ \\hline\n$\\delta s_0$ &$\\pm9$ &$\\pm14$ &$\\pm12$ &$\\pm13$\n &$\\pm65$ &$\\pm80$ &$\\pm39$ &$\\pm70$ \\\\ \\hline \\hline\n &$\\delta (\\delta f_{D}$) &$\\delta (\\delta f_{D^*})$ &$\\delta (\\delta f_{D_0})$ &$\\delta (\\delta f_{D_1})$\n &$\\delta (\\delta f_{B})$ &$\\delta (\\delta f_{B^*})$ &$\\delta (\\delta f_{B_0})$ &$\\delta (\\delta f_{B_1})$\\\\ \\hline\n$\\delta m_Q$ &$\\pm1$ &$\\pm1$ &$\\pm3$ &$\\pm2$\n &$\\pm2$ &$\\pm1$ &$\\pm3$ &$\\pm0$ \\\\ \\hline\n$\\delta s_0$ &$\\pm1$ &$\\pm7$ &$\\pm4$ &$\\pm8$\n &$\\pm21$ &$\\pm25$ &$\\pm15$ &$\\pm34$ \\\\ \\hline \\hline\n\\end{tabular}\n\\end{center}\n\\caption{ The uncertainties of the shifts of\n the masses and decay constants of the heavy mesons in the nuclear matter originate from the uncertainties of the heavy quark masses and continuum threshold parameters, where the unit is MeV. 
}\n\\end{table}\n\n\nThe mass-shifts of the negative (positive) parity mesons are negative (positive), so the decays of the high\ncharmonium states to the $D\\bar{D}$ and $D^*\\bar{D}^*$ ($D_0\\bar{D}_0$ and $D_1\\bar{D}_1$) pairs are enhanced (suppressed) in phase space,\n and we should take those effects into account carefully in studying the production of the $J\/\\psi$ so as to identify the quark-gluon plasma.\n\n\n The currents $\\bar{Q} q$ and $\\bar{Q}i\\gamma_5 q$ (also $\\bar{Q}\\gamma_\\mu q$ and $\\bar{Q}\\gamma_\\mu\\gamma_5 q$) mix with each other under the chiral transformation $q\\to e^{i\\alpha\\gamma_5}q$, and the currents $\\bar{Q} q$, $\\bar{Q}i\\gamma_5 q$, $\\bar{Q}\\gamma_\\mu q$, $\\bar{Q}\\gamma_\\mu\\gamma_5 q$ are not conserved in the limit $m_q \\to 0$; it is therefore better to take the doublets $(D,D_0)$ and $(D^*,D_1)$ as parity-doublets rather than chiral-doublets.\nThe quark condensate $\\langle\\bar{q}q\\rangle_{\\rho_N}$ serves as the order parameter\n and is reduced in the nuclear matter,\n so the chiral symmetry is partially restored; however, new medium-induced condensates appear, which also break the chiral symmetry. In this article, the $\\langle\n{\\cal{O}}\\rangle_N$ are accompanied by the heavy quark masses $m_Q$, $m_Q^2$ or $m_Q^3$, and the net effects cannot warrant that the chiral symmetry is monotonically restored with increasing $\\rho_N$.\n When the $\\rho_N$ is large enough, the order parameter $\\langle\\bar{q}q\\rangle_{\\rho_N} \\to 0$, the chiral symmetry is restored, the Fermi gas approximation for the nuclear matter breaks down, and the parity-doublets\n may have approximately degenerate masses. 
In this article, I study the parity-doublets at low $\\rho_N$, where the mass breaking effects of the parity-doublets may be even larger, see Table 2.\n We expect that a smaller mass splitting of the parity-doublets is favored at high nuclear density; however, a larger mass splitting of the parity-doublets at lower nuclear density cannot be excluded.\n In Refs.\\cite{Hil,Hilger2}, the mass center $\\overline{m}_P$ of the pseudoscalar mesons increases in the nuclear matter while the mass center $\\overline{m}_S$ of the scalar mesons decreases, so the mass breaking effect $\\overline{m}_S-\\overline{m}_P$ of the parity-doublets is smaller than that in the vacuum.\n\n\n\nIn Table 5, I show the scattering lengths $a_{D}$, $a_{D^*}$, $a_{D_0}$, $a_{D_1}$, $a_{B}$, $a_{B^*}$, $a_{B_0}$, $a_{B_1}$ explicitly. The $a_{D}$, $a_{D^*}$, $a_{B}$, $a_{B^*}$ are negative, which indicates that the interactions $DN$, $D^*N$, $BN$, $B^*N$ are attractive, while the $a_{D_0}$, $a_{D_1}$, $a_{B_0}$, $a_{B_1}$ are positive, which indicates that the interactions $D_0N$, $D_1N$, $B_0N$, $B_1N$ are repulsive. It is difficult (possible) to form the $D_0N$, $D_1N$, $B_0N$, $B_1N$ ($DN$, $D^*N$, $BN$, $B^*N$) bound states. 
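In the linear-density approximation commonly used in such analyses, a mass-shift and the corresponding scattering length are related by $\delta m \approx 2\pi\rho_N a\,(m_H+m_N)/(m_H m_N)$. A numerical sketch for the $D$ meson, assuming the standard saturation density $\rho_N\approx0.17\,\rm{fm}^{-3}$ and $m_N=0.938\,\rm{GeV}$ (both assumptions, not quoted in this excerpt), is roughly consistent with the NLO numbers of Tables 2 and 5:

```python
import math

# Linear-density estimate delta_m ~ 2*pi*rho_N*a*(m_D+m_N)/(m_D*m_N);
# rho_N = 0.17 fm^-3 and m_N = 0.938 GeV are assumed inputs.
hbarc = 0.1973               # GeV*fm conversion factor
m_D, m_N = 1.870, 0.938      # GeV
rho_N = 0.17 * hbarc ** 3    # fm^-3 -> GeV^3
a_D = -1.1 / hbarc           # fm -> GeV^-1, NLO value from Table 5

delta_m = 2.0 * math.pi * rho_N * a_D * (m_D + m_N) / (m_D * m_N)  # GeV
print(f"{1000 * delta_m:.0f} MeV")  # close to the -72 MeV of Table 2
```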
In Ref.\\cite{ZhangDN}, Zhang studies the $S$-wave $DN$ and $D^*N$ bound states with the QCD sum rules, the numerical results indicate that the $\\Sigma_c(2800)$ and $\\Lambda_c(2940)$ can be assigned to be the $S$-wave $DN$ state with $J^P ={\\frac{1}{2}}^-$ and the $S$-wave $D^*N$ state with $J^P ={\\frac{3}{2}}^-$, respectively.\n\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n &$a_{D}$ &$a_{D^*}$ &$a_{D_0}$ &$a_{D_1}$ &$a_{B}$ &$a_{B^*}$ &$a_{B_0}$ &$a_{B_1}$\\\\ \\hline\n{\\rm NLO} &$-1.1$ &$-1.5$ &$1.3$ &$1.6$ &$-8.9$ &$-12.9$ &$5.6$ &$9.9$ \\\\ \\hline\n{\\rm LO} &$-0.7$ &$-1.1$ &$0.9$ &$1.1$ &$-6.2$ &$-6.4$ &$4.0$ &$5.0$ \\\\ \\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{ The heavy-meson-nucleon scattering lengths, where the NLO (LO) denotes contributions up to the next-to-leading order (the leading order) are included, the unit is fm. }\n\\end{table}\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[totalheight=6cm,width=8cm]{Vacuum-PD.EPS}\n \\caption{(Color online) The contributions of the perturbative term (I) and quark condensate term (II) in the QCD sum rules for the $D$ mesons in the vacuum. Furthermore, I show the mass $m_D$ (III) and decay constant $f_D$ (IV) explicitly, which are normalized to be 1 at the value $M^2=1.5\\rm{GeV}^2$. 
}\n\\end{figure}\n\n\n\nIn the present work and Refs.\\cite{Hay,WangHuang,Azi}, the correlation functions are divided\ninto a vacuum part and a static one-nucleon part, and the nuclear matter induced effects are extracted explicitly;\n while in Refs.\\cite{Hil,Hilger2}, the pole terms of (or ground state contributions to) the hadronic spectral densities of the whole correlation functions are parameterized as $\\Delta\\Pi(\\omega)= F_{+}\\delta(\\omega-m_{+})- F_{-}\\delta(\\omega+m_{-})$, where $m_{\\pm}=m\\pm\\Delta m$ and $F_{\\pm}=F\\pm\\Delta F$,\nand QCD sum rules for the mass center $\\overline{m}$ and the mass splitting $\\Delta m$ are obtained.\nIn the leading order approximation, the present predictions of the $\\delta m_{D}$ and $\\delta m_{B}$ are compatible with those of Refs.\\cite{Hay,Azi} and differ greatly from those of Refs.\\cite{Hil,Hilger2}, see Table 2.\nThe values obtained from the QCD sum rules depend heavily on the Borel windows; the values extracted from different Borel windows, especially in different QCD sum rules, may differ greatly from each other.\n\nIn Refs.\\cite{Hil,Hilger2}, the authors study the masses of the heavy mesons in the nuclear matter directly by including both the vacuum part and the static one-nucleon part in the QCD sum rules, and then the continuum contributions are well approximated by $\\rho_{QCD}(\\omega^2)\\theta\\left(\\omega^2-{\\omega_0^{\\pm}}^2\\right)$, where the ${\\omega_0^{\\pm}}^2$ are the continuum threshold parameters; this is one of the advantages of Refs.\\cite{Hil,Hilger2}. 
However, they define the moments $S_n(M^2)$ to study the mass-shifts,\n\\begin{eqnarray}\nS_n(M^2)&=&\\int_{\\omega_0^-}^{\\omega_0^+}d\\omega \\omega^n \\Delta\\Pi(\\omega) \\exp\\left( -\\frac{\\omega^2}{M^2}\\right)\\, ,\n\\end{eqnarray}\nwith the odd moment $o=S_0(M^2)$ and the even moment $e=S_1(M^2)$, and then obtain $\\frac{do}{d(1\/M^2)}=-S_2(M^2)$ and $\\frac{de}{d(1\/M^2)}=-S_3(M^2)$ by assuming that the $F_{\\pm}$ and $m_{\\pm}$ are independent of the Borel parameters at the phenomenological side. In fact, $\\frac{do}{d(1\/M^2)}\\neq-S_2(M^2)$ and $\\frac{de}{d(1\/M^2)}\\neq-S_3(M^2)$ at the operator product expansion side, because the QCD spectral densities $\\Delta\\Pi(\\omega)$ depend on the Borel parameters explicitly; the approximations $\\frac{do}{d(1\/M^2)}=-S_2(M^2)$ and $\\frac{de}{d(1\/M^2)}=-S_3(M^2)$ lead to undetermined uncertainties.\n\nIn Refs.\\cite{Hil,Hilger2}, the perturbative $\\mathcal{O}(\\alpha_s)$ corrections to the perturbative terms are taken into account. In the QCD sum rules for the pseudoscalar $D$ mesons in the vacuum, if we take into account the perturbative $\\mathcal{O}(\\alpha_s)$ corrections to the perturbative term and vacuum condensate term, the two criteria (pole dominance and convergence of the operator product expansion) of the QCD sum rules lead to the Borel window $M^2=(1.2-1.8)\\,\\rm{GeV}^2$, and the resulting predictions\nof the mass $m_{D}$ and decay constant $f_{D}$ are consistent with the experimental data. In Fig.4, I plot the contributions of the perturbative term and quark condensate term in the operator product expansion. 
From the figure, I can see that the main contributions come from the perturbative term, while the quark condensate $\\langle \\bar{q}q\\rangle$ plays a less important role.\n\nThe modifications of the condensates in the nuclear matter are mild, for example, $\\langle\\bar{q}q\\rangle_{\\rho_N}\\approx 0.64 \\langle\\bar{q}q\\rangle $, while the perturbative contributions are not modified (or only modified slightly by introducing a minor splitting $\\Delta s_0$, ${\\omega_0^{\\pm}}^2=s_0\\pm\\Delta s_0$) by the nuclear matter.\nIf we turn on the in-medium effects, the contributions of the quark condensate are even smaller, and the Borel windows are determined dominantly\nby the perturbative terms \\cite{Hil,Hilger2}.\nIf the perturbative $\\mathcal{O}(\\alpha_s^2)$ corrections to the perturbative terms are also included, the contributions of the perturbative terms are even larger \\cite{WangJHEP}, and the QCD sum rules are dominated by the perturbative terms, which are not (or only slightly) affected by the nuclear matter.\nThis disfavors extracting the mass-shifts in the nuclear matter and impairs the predictive ability.\n\nIn the present work and Refs.\\cite{Hay,WangHuang,Azi}, the correlation functions are divided into\n the vacuum part and the static one-nucleon part, which are of the orders ${\\mathcal{O}}(0)$ and ${\\mathcal{O}}(\\rho_N)$, respectively. We can obtain independent QCD sum rules from the two parts. The QCD sum rules corresponding to the orders ${\\mathcal{O}}(0)$ and ${\\mathcal{O}}(\\rho_N)$ can have quite different Borel parameters. 
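The caveat raised above about the moments $S_n(M^2)$ can be illustrated numerically: when the spectral density $\Delta\Pi(\omega)$ carries no $M^2$ dependence, the derivative identity $\frac{do}{d(1\/M^2)}=-S_2(M^2)$ holds exactly, which is what the toy check below confirms (the density and thresholds are purely illustrative, not the actual OPE expressions):

```python
import numpy as np

# Toy check of dS_0/d(1/M^2) = -S_2 for moments
# S_n(M^2) = int dw w^n DeltaPi(w) exp(-w^2/M^2)
# with an M^2-independent toy density DeltaPi.
w = np.linspace(0.5, 3.0, 20001)       # illustrative thresholds and grid
delta_pi = np.exp(-(w - 1.5) ** 2)     # illustrative spectral density

def S(n, M2):
    f = w ** n * delta_pi * np.exp(-w ** 2 / M2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))  # trapezoid rule

tau, eps = 0.5, 1e-6                   # tau = 1/M^2 at M^2 = 2
dS0_dtau = (S(0, 1.0 / (tau + eps)) - S(0, 1.0 / (tau - eps))) / (2.0 * eps)
print(dS0_dtau, -S(2, 2.0))            # the two numbers agree
```

Once `delta_pi` itself is made a function of `M2`, as happens for the actual Borel-transformed QCD spectral densities, the two numbers no longer coincide.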
In this article, I separate the nuclear matter induced effects unambiguously, study the QCD sum rules corresponding to the order ${\\mathcal{O}}(\\rho_N)$, and determine the Borel parameters by the criteria of the QCD sum rules.\n\n\n\nIn the conventional QCD sum rules, we usually choose the Borel parameters $M^2$ to satisfy the following three criteria:\n\n$\\bf{1_\\cdot}$ Pole dominance at the phenomenological side;\n\n$\\bf{2_\\cdot}$ Convergence of the operator product expansion;\n\n$\\bf{3_\\cdot}$ Appearance of the Borel platforms.\n\n\nIn the present work and Refs.\\cite{Hay,WangHuang,Azi}, the nuclear matter induced effects are extracted explicitly, so the resulting QCD sum rules are not contaminated by the contributions of the vacuum part and the Borel windows are determined completely by the nuclear matter induced effects; this is an advantage.\nAs the QCD spectral densities are of the form $\\delta(\\omega^2-m_Q^2)$, we have to take the hadronic spectral densities to be of the form $\\delta(\\omega^2-m_{H}^2)$ and model the continuum contributions with the function $\\delta(\\omega^2-s_0)$, and determine the $s_0$ by some constraints, see Eq.(16), where the $H$ denotes the ground state and excited state heavy mesons. In this article, I attribute the higher excited states to the continuum contributions, so the $\\delta$-type hadronic spectral densities make sense. Thus the pole dominance criterion at the phenomenological side can be relaxed, as the continuum contributions are already taken into account.\nFurthermore, I expect that the couplings of\n the interpolating currents to the excited states are much weaker than those to the ground states, so the uncertainties originating from the continuum contributions are very small. 
For example, the decay constants of the pseudoscalar mesons $\\pi(140)$ and $\\pi(1300)$ obey the hierarchy $f_{\\pi(1300)}\\ll f_{\\pi(140)}$ according to the Dyson-Schwinger equation \\cite{CDRoberts}, lattice QCD \\cite{Latt-pion}, the QCD sum rules \\cite{QCDSR-pion}, etc., as well as the experimental data \\cite{pion-exp}.\n\nIn the present work and Refs.\\cite{Hay,WangHuang,Azi}, large Borel parameters are chosen to warrant the convergence of the operator product expansion and to obtain the Borel platforms; small Borel parameters cannot lead to platforms. In the Borel windows, where the platforms appear, the main contributions come from the terms $\\langle \\bar{q}q\\rangle_N$, so the operator product expansion is well convergent, and the criteria $\\bf{2}$ and $\\bf{3}$ are satisfied. The continuum contributions are not suppressed as efficiently for large Borel parameters as for small Borel parameters. In the calculations, however, I observe that the predictions are insensitive to the $s_0$; the uncertainties originating from the continuum threshold parameters $s_0$ are very small in almost all cases, so the large Borel parameters make sense. Furthermore, the continuum contributions are already taken into account. On the other hand, from Eqs.(8-9) and Eqs.(12-13), we can see that the mass-shifts $\\delta m_{D\/D_0\/D^*\/D_1}$ and decay constant shifts $\\delta f_{D\/D_0\/D^*\/D_1}$ reduce to zero in the limit $\\rho_N \\to 0$; the QCD sum rules corresponding to the nuclear matter induced effects decouple, and their Borel parameters (whether large or small) are also irrelevant to the ones in the QCD sum rules for the vacuum part of the correlation functions.\nSo the present predictions are sensible.\n\n\nThe predictions depend on the in-medium hadronic spectral functions \\cite{Kwon-2008}; for example, there are two generic prototypes of the in-medium\nspectral functions for the $\\rho$ meson, which differ in detail at the\nlow mass end of the spectrum. 
The Klingl-Kaiser-Weise spectral function\n emphasizes the role of chiral in-medium $\\pi\\pi$ interactions \\cite{KKW-97}, while the Rapp-Wambach spectral function focuses\non the role of nucleon-hole, $\\Delta(1232)$-hole and $N^*(1520)$-hole excitations \\cite{RW-99}. Both of the\nspectral functions account quite well for the low-mass enhancements observed in dilepton spectra from high-energy nuclear collisions. However, the QCD\nsum rule analysis of the lowest spectral moments reveals qualitative differences with respect to their Brown-Rho scaling properties \\cite{Kwon-2008}. If the simple spectral densities $F\\delta(\\omega^2-M_{P\/V}^2)$ analogous to the ones in Refs.\\cite{Hil,Hilger2} are taken, where $P$ denotes the pseudoscalar mesons $\\pi$, $\\eta_c$, $V$ denotes the vector mesons $\\rho$, $\\omega$, $\\phi$, $J\/\\psi$, and $F$ denotes the constant pole residues, the in-medium mass-shifts $\\delta M_{P\/V}$ are qualitatively negative \\cite{Lee-Vector}. I expect that the S-wave mesons $q^{\\prime}\\bar{q}$, $c\\bar{q}$, $c\\bar{c}$ with the spin-parity $J^P=0^-$ (or $1^-$) have analogous in-medium mass-shifts, at least qualitatively. Further studies based on more sophisticated hadronic spectral densities are needed.\n\n\nIn fact, there are controversies about the mass-shifts of the $D$ and $B$ mesons in the nuclear matter: some theoretical approaches indicate negative mass-shifts \\cite{DN-BSE-Negative}, while others indicate positive mass-shifts \\cite{Positive-BD}. The different predictions originate mainly from whether or not the heavy pseudoscalar and heavy vector mesons are treated on an equal footing in the coupled-channel approaches. 
If we obtain the meson-baryon interaction kernel by treating the heavy pseudoscalar and heavy vector mesons on an equal footing, as required by heavy quark symmetry, the mass-shift $\\delta M_{D}$ is negative \\cite{DN-BSE-Negative}, which is consistent with the present work;\nfurthermore, the attractive D-nucleus interaction can lead to the formation of $D$-nucleus bound states, which can be confronted with the experimental data directly in the future \\cite{D-Nuclei}.\n\n\n\nThe upcoming FAIR project at GSI\n provides the opportunity to study the in-medium properties of the charmonia or charmed hadrons for the first time; however,\nthe high mass of charmed hadrons requires a high momentum in the antiproton beam to produce them, and the\nconditions for observing in-medium effects seem unfavorable, as the hadrons sensitive\nto the in-medium effects are either at rest or have a small momentum relative to the nuclear\nmedium. We have to find\nprocesses that would slow down the charmed hadrons inside the nuclear matter, but this\nrequires more detailed theoretical studies.\nFurther theoretical studies on the reaction\ndynamics and on the exploration of the experimental\nability to identify more complicated\nprocesses are still needed.\n\n\n\n\n\\section{Conclusion}\n In this article, I divide the two-point correlation functions of the scalar, pseudoscalar, vector and axialvector currents in the nuclear matter\ninto two parts, i.e. the vacuum part and the static one-nucleon part, and then study the in-medium modifications of the masses and decay constants by deriving\n QCD sum rules from the static one-nucleon part of the two-point correlation functions. 
In the operator product expansion,\n I calculate the contributions of the nuclear matter induced condensates up to dimension 5; in particular, I calculate the next-to-leading order contributions of the in-medium quark condensate and obtain concise expressions, which also have applications in studying the meson properties in the vacuum.\n In the calculations, I observe that the next-to-leading order contributions of the in-medium quark condensate are very large and should be taken into account.\n\n All in all, I study the properties of the scalar, pseudoscalar, vector and axialvector heavy mesons with the QCD sum rules in a systematic way, and obtain the shifts of the masses and decay constants in the nuclear matter.\n The numerical results indicate that the mass-shifts of the negative parity and positive parity heavy mesons are negative and positive, respectively. For the pseudoscalar meson $D$, I obtain the prediction $\\delta M_{D}<0$, which is in contrast to the prediction in Refs.\\cite{Hil,Hilger2}, where the mass-shift is positive, $\\delta M_{D}>0$. In Refs.\\cite{Hil,Hilger2}, the authors study the masses of the heavy mesons in the nuclear matter directly by including both the vacuum part and\n static one-nucleon part with the QCD sum rules, and parameterize the spectral density of the whole correlation functions by a simple function\n $\\Delta\\Pi(\\omega)= F_{+}\\delta(\\omega-m_{+})- F_{-}\\delta(\\omega+m_{-})$.\n I discuss the differences between the QCD sum rules in the present work and those in Refs.\\cite{Hil,Hilger2} in detail, and show why I prefer the present predictions. 
In the present work and Refs.\\cite{Hay,WangHuang,Azi,Hil,Hilger2}, the finite widths of the mesons in the nuclear matter are neglected; further studies based on more sophisticated hadronic spectral densities including the finite widths are needed.\n\n As the masses of the heavy meson pairs, such as the $D\\bar{D}$, $D^*\\bar{D}^*$, $D_0 \\bar{D}_0$, $D_1 \\bar{D}_1$, are modified in the nuclear environment,\n we should take those effects into account carefully in studying the production of the $J\/\\psi$ (and $\\Upsilon$) so as to identify the quark-gluon plasma. Furthermore, I study the heavy-meson-nucleon scattering lengths as a byproduct,\nand reach a qualitative conclusion about the possible existence of the heavy-meson-nucleon bound states.\n\n\n\n\\section*{Acknowledgements}\nThis work is supported by National Natural Science Foundation,\nGrant Number 11375063, and Natural Science Foundation of Hebei province, Grant Number A2014502017.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\n\n\\section{Background}\n\\label{sec:Bg}\nThis section first presents the definition of the image spam email and then briefly introduces the concept of a convolutional neural network. Finally, we describe data augmentation, a widely used technique~\\cite{Krizhevsky17:dnn} for improving the robustness of deep learning models; it increases the size of the labeled training set by leveraging task-specific data transformations that preserve class labels.\n\n\\subsection{Image Spam Email}\n\nSince image spam emails appeared in 2004, several academic studies have been conducted to formally define image spam emails and construct models to detect them. Klangpraphan et al.~\\cite{klangpraphant2010pimsi} observed that image spam emails contain an image-based link to a website, which looks like text. 
Soranamageswari et al.~\\cite{soranamageswari2010statistical} introduced the definition of image spam email as spam email having at least one image containing spam content. \n\n\\begin{figure}[!ht]\n\\centering\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth,height=4.3cm]{pic\/spam_1.JPG}\n\\caption{Spam email containing a link}\n\\label{fig:spam_link}\n\\end{subfigure}\n\\hspace{0.15em}\n\\begin{subfigure}[b]{0.49\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{pic\/spam_2.JPG}\n\\caption{Spam email showing an advertisement}\n\\label{fig:spam_ad}\n\\end{subfigure}\n\\caption{Examples of image spam emails.}\n\\label{fig:imagespam}\n\\end{figure}\n\nFigure \\ref{fig:imagespam} shows two examples of image spam emails. In Figure \\ref{fig:imagespam}(a), if a user clicks the ``Verify Email'' button, the browser tries to visit an attacker's website or download malware. In Figure \\ref{fig:imagespam}(b), the spam image shows unwanted advertisement information to email recipients. Basically, the goal of image spam emails is to hide the attacker's message in an image to circumvent text-based spam filters. Based on this observation, in this paper, we define the image spam email as spam email with images displaying unwanted text information.\n\n\\subsection{Convolutional Neural Network (CNN)}\n\nA convolutional neural network (CNN) is a kind of deep learning method. Recently, CNNs have outperformed traditional machine learning methods in many classification tasks. Therefore, it is widely believed that CNNs have the potential to be used for security applications. \n\nA CNN can automatically extract features of target objects from lower to higher levels by using convolutional and pooling layers. Convolutional layers play a role in extracting the features of the input. A convolutional layer consists of a set of filters and activation functions. 
A filter is a function that emphasizes key features used to recognize target objects. The raw input data is converted into feature maps by the filters, which become clearer after the activation functions are applied. A pooling layer (or sub-sampling layer) reduces the number of features, which prevents overfitting caused by a high number of features and improves the learning rate. Finally, the feature map layers are used as the input layer for the fully connected classifier. CNNs are popularly applied to computer vision tasks such as object recognition~\\cite{bappy2016cnn}.\n\n\n\\subsection{Data Augmentation}\n\nIn a classification problem, it is widely known that the performance of classifiers deteriorates when an imbalanced training dataset is used. If the number of instances in the major class is significantly greater than that in the minor class, the classification performance on the major class will be higher, and vice versa.\n\nData augmentation is a popularly used method to solve the imbalance problem~\\cite{shorten2019survey}, which increases the number of instances in minority classes to balance the majority and minority classes. In the image domain, new samples are typically generated by applying geometric transformations or adding noise to training samples. 
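The geometric and color transformations mentioned above can be sketched with plain NumPy array operations; this is a minimal illustration on a random stand-in RGB image, not the actual augmentation pipeline of any particular tool:

```python
import numpy as np

# Minimal augmentation sketch: horizontal flip, 90-degree rotation, and a
# simple color (per-channel scaling) transform on an H x W x 3 image array.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 48, 3), dtype=np.uint8)

flipped = image[:, ::-1, :]                  # horizontal flip
rotated = np.rot90(image, k=1, axes=(0, 1))  # 90-degree rotation
scale = np.array([1.1, 0.9, 1.0])            # per-channel color jitter
recolored = np.clip(image * scale, 0, 255).astype(np.uint8)

print(flipped.shape, rotated.shape, recolored.shape)
```

Each transformed array keeps the original class label, so every transform multiplies the number of labeled minority-class samples.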
Figure \\ref{fig:mani} shows typically used image manipulation techniques such as flipping, rotation, and color transformation.\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/origin.JPG}\n\\caption{Original image}\n\\label{fig:M_origin}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/flip.JPG}\n\\caption{Flipping}\n\\label{fig:M_flip}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{pic\/rotation.JPG}\n\\caption{Rotation}\n\\label{fig:M_rotation}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/color_transform.JPG}\n\\caption{Color transformation}\n\\label{fig:M_color}\n\\end{subfigure}\n\\caption{Examples of the image manipulation techniques.}\n\\label{fig:mani}\n\\end{figure}\n\nIn the image spam detection problem, however, the effects of such general data augmentation techniques would be limited because typical ham and spam images are different from the samples generated by such augmentation techniques. Therefore, in this paper, we focus on developing data augmentation tailored to ham and spam images.\n\n\n\n\\section{Conclusion}\n\\label{sec:CS}\n\nIn this paper, we proposed a new image spam email detection tool called DeepCapture. To overcome the performance degradation of existing models on entirely new and unseen datasets, we developed a classifier using CNN-XGBoost and data augmentation techniques tailored towards the image spam detection task. 
To show the feasibility of DeepCapture, we evaluated its performance on three publicly available datasets consisting of spam and non-spam image samples. The experimental results demonstrated that DeepCapture achieves an F1-score of 88\\%, a 6\\% improvement over the best existing spam detection model, CNN-SVM~\\cite{shang2016image}, which achieved an F1-score of 82\\%. Furthermore, DeepCapture outperformed the other classifiers in cross data training scenarios designed to evaluate classifiers on new and unseen datasets.\n\nFor future work, we plan to develop more sophisticated data augmentation methods to generate more realistic synthetic samples. In addition, we will increase the size of the dataset and examine any changes in detection accuracy. It would also be interesting to add the functionality of DeepCapture to an open-source project such as SpamAssassin.\n\n\n\n\n\n\n\n\\section{Evaluation}\n\\label{sec:eva}\nThis section presents the performance evaluation results of DeepCapture (presented in Section~\\ref{sec:architecture}) compared with state-of-the-art classification methods: SVM~\\cite{annadatha2018image}, RSVM~\\cite{annadatha2018image}, and CNN-SVM~\\cite{shang2016image}.\n\n\\subsection{Dataset}\n\n\nTo evaluate the performance of image spam email detection models, we use a publicly available mixed dataset comprising two spam (``Personal spam'' and ``SpamArchive spam'') and two ham (``Personal ham'' and ``Normal image ham'') image datasets. 
Figure \\ref{fig:imagesample} shows examples of these datasets.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/Dataset_spam_1.png}\n\\caption{Personal spam}\n\\label{fig:D_spam}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/Dataset_spam_2.png}\n\\caption{SpamArchive spam}\n\\label{fig:D_spam2}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/Dataset_ham_3.png}\n\\caption{Personal ham}\n\\label{fig:D_ham}\n\\end{subfigure}\n\\hspace{0.05em}\n\\begin{subfigure}[t]{.235\\linewidth}\n\\centering\n\\captionsetup{justification=centering}\n\\includegraphics[width=\\textwidth]{pic\/Dataset_ham_4.png}\n\\caption{Normal image ham}\n\\label{fig:D_ham2}\n\\end{subfigure}\n\\caption{Examples of image datasets.}\n\\label{fig:imagesample}\n\\end{figure}\n\n\n\nFor the ``Personal'' datasets, spam images were collected from 10 email accounts over one month, and ham images were collected from two email accounts over two years. The ``SpamArchive spam'' dataset~\\cite{dredze2007learning} was constructed from images contributed by many anonymous users. The ``Normal image ham'' dataset~\\cite{gao2008image} was collected from the photo-sharing website ``Flickr'' (\\url{https:\/\/www.flickr.com\/}) and 20 scanned documents. From these datasets, we removed unnecessary image samples such as duplicated images, solid-color background images, and small or unrecognizable images. In particular, since the ``SpamArchive spam'' dataset contains many duplicated images, such as advertisements for watches or corporate logos, removing such duplicates was essential. 
After eliminating image samples that cannot be categorized as either normal ham or spam images, we were left with a dataset of 8,313 samples for our experiments. The details of the dataset are presented in Table \\ref{tab:dataset}. In the final dataset, the number of spam images is 6,000, while the number of ham images is 2,313; the ratio of ham to spam is around 1:3.\n\n\\begin{table}[ht!]\n\\centering\n\\caption{Description of the datasets.}\n\\label{tab:dataset}\n\\vspace{2mm}\n\\begin{tabular}{ |p{1.5cm}||p{3cm}|p{2cm}|}\n\\Xhline{3\\arrayrulewidth}\n\\hline\n Category & Corpus & Total count\\\\\n \\hline\n Spam & Personal spam & 786\\\\\n & SpamArchive spam & 5,214\\\\\n \\cline{2-3}\n & Total & 6,000\\\\\n \\hline\n Ham & Personal ham & 1,503\\\\\n & Normal image ham & 810 \\\\\n \\cline{2-3}\n & Total & 2,313\\\\\n\\hline\n\\Xhline{3\\arrayrulewidth}\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\subsection{Experiment setup}\n\nOur experiments were conducted in the Google Colab environment (\\url{https:\/\/colab.research.google.com\/}), which provides an Nvidia Tesla K80 GPU with 13 GB of memory and an Intel(R) Xeon(R) CPU at 2.30 GHz. We used the Keras framework with the scikit-learn library in Python 3 to implement DeepCapture. \n\nFor classification, we randomly divided the 8,313 samples into a training set (60\\%) and a testing set (40\\%) with similar class distributions.\n\nTo address the data imbalance issue and make the classifier more robust against new and unseen image datasets, we used the data augmentation (DA) techniques presented in Section~\\ref{sec:Data Augmentation in DeepCapture} to create additional image samples. Finally, we obtained 5,214 ham-like and 4,497 spam-like image samples from the images in the training set through our data augmentation techniques. 
Those image samples are used for training only.\n\n\n\\subsection{Classification results}\n\nTo evaluate the performance of classifiers, we use the following four metrics:\n\n\\begin{itemize}\n\\item \\textbf{Accuracy (Acc.)}: the proportion of correctly classified images;\n\\item \\textbf{Precision (Pre.)}: the proportion of images classified as spam that actually are spam;\n\\item \\textbf{Recall (Rec.)}: the proportion of spam images that were correctly classified;\n\\item \\textbf{F1-score (F1.)}: the harmonic mean of \\textit{precision} and \\textit{recall}.\n\\end{itemize}\n\nBecause the dataset used in our experiments is imbalanced, accuracy is not the best measure of classifier performance; the F1-score is a more informative measure since it considers both precision and recall. Table \\ref{tab:evaluation} shows the performance of classifiers with\/without the data augmentation techniques used for DeepCapture. DeepCapture produced the best results in all metrics except precision (accuracy: 85\\%, precision: 91\\%, recall: 85\\%, F1-score: 88\\%). The existing solutions (SVM~\\cite{annadatha2018image} with DA, RSVM~\\cite{annadatha2018image} with DA, and CNN-SVM~\\cite{shang2016image}) achieved high precision, but their recall was poor. Interestingly, traditional machine learning-based solutions (SVM~\\cite{annadatha2018image} and RSVM~\\cite{annadatha2018image}) performed very poorly, achieving F1-scores of less than 20\\%, without the training samples generated by the proposed data augmentation method. 
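The four metrics above can be computed from raw prediction counts as in the following minimal sketch (the label convention, 1 for spam and 0 for ham, is an assumption; in practice the equivalent scikit-learn metric functions would be used):

```python
def spam_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for spam (label 1) vs. ham (label 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)              # correctly classified images
    precision = tp / (tp + fp) if tp + fp else 0.0  # predicted spam that is spam
    recall = tp / (tp + fn) if tp + fn else 0.0     # spam that was caught
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

The harmonic mean in the F1-score penalizes classifiers whose precision and recall are far apart, which is why it is the preferred summary metric on an imbalanced dataset.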
In contrast with those existing techniques, the deep learning-based solutions (DeepCapture and CNN-SVM~\\cite{shang2016image}) achieved F1-scores of 85\\% and 82\\%, respectively, without data augmentation.\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Performance of classifiers (DA represents ``Data Augmentation'').}\n\\vspace{2mm}\n\\label{tab:evaluation}\n\\begin{tabular}{|p{4.2cm}||p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|}\n\\Xhline{3\\arrayrulewidth}\n\\hline\n Model & Acc. & Pre. & Rec. & F1.\\\\\n\\hline\n\\textbf{DeepCapture} & \\textbf{85\\%} & \\textbf{91\\%} & \\textbf{85\\%} & \\textbf{88\\%}\\\\\nDeepCapture without DA & 81\\% & 90\\% & 81\\% & 85\\%\\\\\nSVM~\\cite{annadatha2018image} & 51\\% & 50\\% & 9\\% & 15\\% \\\\ \nSVM~\\cite{annadatha2018image} with DA & 71\\% & 96\\% & 36\\% & 52\\% \\\\\nRSVM~\\cite{annadatha2018image} & 53\\% & 52\\% & 11\\% & 18\\% \\\\\nRSVM~\\cite{annadatha2018image} with DA & 73\\% & 98\\% & 42\\% & 59\\% \\\\\nCNN-SVM~\\cite{shang2016image} & 76\\% & 99\\% & 71\\% & 82\\% \\\\\nCNN-SVM~\\cite{shang2016image} with DA & 84\\% & 90\\% & 83\\% & 86\\% \\\\\n\\hline\n\\Xhline{3\\arrayrulewidth}\n\\end{tabular}\n\\end{table}\n\nWe compare DeepCapture against the existing solutions (SVM~\\cite{annadatha2018image}, RSVM~\\cite{annadatha2018image}, and CNN-SVM~\\cite{shang2016image}) with respect to training and testing times. Training time refers to the time taken to train a model with the training samples. Testing time refers to the time taken to classify all testing samples. Table~\\ref{tab:evaluation4} shows the training and testing times of all classifiers. DeepCapture took 300.27 seconds for training and 5.79 seconds for testing. CNN-based solutions such as DeepCapture and CNN-SVM outperformed SVM and RSVM with respect to the training time. However, DeepCapture produced the worst result with respect to the testing time. 
We surmise that the testing time of XGBoost is slower than that of other classifiers such as SVM and RSVM because XGBoost is an ensemble of multiple regression trees. For a single image, however, the average testing time of DeepCapture was only 0.0017 seconds. Hence, we believe that the testing time of DeepCapture would be practically acceptable.\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Training and testing times (sec.) of classifiers.}\n\\vspace{2mm}\n\\label{tab:evaluation4}\n\\begin{tabular}{|p{4.5cm}||p{2cm}|p{2cm}|}\n\\Xhline{3\\arrayrulewidth}\n\\hline\n Model & Training time & Testing time\\\\\n\\hline\n\\textbf{DeepCapture} & \\textbf{300.27} & \\textbf{5.79}\\\\\nSVM (Annadatha et al.~\\cite{annadatha2018image}) & 2000.00 & 0.01 \\\\\nRSVM (Annadatha et al.~\\cite{annadatha2018image}) & 2000.00 & 0.01 \\\\\nCNN-SVM (Shang et al.~\\cite{shang2016image}) & 320.24 & 0.03 \\\\\n\\hline\n\\Xhline{3\\arrayrulewidth}\n\\end{tabular}\n\\end{table}\n\nTo test the robustness of classifiers against new and unseen image spam emails, we evaluate the performance of DeepCapture with cross data training. For cross data training, we trained classifiers on ham and spam images collected from one specific source and evaluated them against a different, unseen dataset.\n\nFor training, we used 6,024 samples collected from the ``SpamArchive spam'' and ``Normal image ham'' datasets, while for testing, we used 2,289 samples collected from the ``Personal spam'' and ``Personal ham'' datasets. To make the classifiers more robust against the unseen dataset, we additionally created 5,190 ham-like and 786 spam-like image samples from the images in the training set through our data augmentation techniques. Those image samples are used for training only. Table~\\ref{tab:evaluation2} shows the evaluation results for the first cross data training scenario. DeepCapture achieved an F1-score of 72\\% and outperformed the other classifiers. 
Surprisingly, the F1-scores of all classifiers, including DeepCapture itself, are less than 35\\% without the training samples created by data augmentation, indicating that our data augmentation techniques are necessary to process unseen and unexpected image samples. \n\n\\begin{table}[!ht]\n\\centering\n\\caption{Performance of classifiers with a cross data training scenario (training dataset: ``SpamArchive spam'' and ``Normal image ham'' datasets; and testing dataset: ``Personal spam'' and ``Personal ham'' datasets).}\n\\vspace{2mm}\n\\label{tab:evaluation2}\n\\begin{tabular}{|p{4.2cm}||p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|}\n\\Xhline{3\\arrayrulewidth}\n\\hline\n Model & Acc. & Pre. & Rec. & F1.\\\\\n\\hline\n\\textbf{DeepCapture} & \\textbf{71\\%} & \\textbf{81\\%} & \\textbf{71\\%} & \\textbf{72\\%}\\\\\nDeepCapture without DA & 36\\% & 37\\% & 34\\% & 35\\%\\\\\nSVM~\\cite{annadatha2018image} & 89\\% & 14\\% & 10\\% & 12\\% \\\\ \nSVM~\\cite{annadatha2018image} with DA & 65\\% & 45\\% & 22\\% & 29\\% \\\\\nRSVM~\\cite{annadatha2018image} & 90\\% & 12\\% & 11\\% & 13\\% \\\\\nRSVM~\\cite{annadatha2018image} with DA & 69\\% & 58\\% & 27\\% & 30\\% \\\\\nCNN-SVM~\\cite{shang2016image} & 35\\% & 35\\% & 34\\% & 35\\% \\\\\nCNN-SVM~\\cite{shang2016image} with DA & 68\\% & 73\\% & 45\\% & 55\\% \\\\\n\\hline\n\\Xhline{3\\arrayrulewidth}\n\\end{tabular}\n\\end{table}\n\n\n\n\n\n\n\n\n\nAs another cross data training scenario, we used 2,289 samples collected from the ``Personal spam'' and ``Personal ham'' datasets for training, while we used 6,024 samples collected from the ``SpamArchive spam'' and ``Normal image ham'' datasets for testing. Again, to make the classifiers more robust against the unseen dataset, we additionally created 4,497 ham-like and 5,214 spam-like image samples from the images in the training set through our data augmentation techniques. Those image samples are used for training only. 
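The cross data training protocol described above can be sketched as follows. The feature arrays and the logistic-regression stand-in classifier are hypothetical placeholders; the actual pipeline feeds CNN features to XGBoost, but the split discipline (augmented samples in training only, testing entirely on the unseen source) is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical feature arrays standing in for features from two sources.
X_src, y_src = rng.normal(size=(600, 8)), rng.integers(0, 2, 600)  # source A (training)
X_aug, y_aug = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)  # augmented samples
X_new, y_new = rng.normal(size=(300, 8)), rng.integers(0, 2, 300)  # unseen source B (testing)

# Augmented samples are used for training only; the test set comes
# entirely from the unseen source.
X_train = np.vstack([X_src, X_aug])
y_train = np.concatenate([y_src, y_aug])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
score = f1_score(y_new, clf.predict(X_new))
```

Because training and testing sources never overlap, the resulting F1-score directly measures robustness to distribution shift rather than in-domain fit.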
Table~\\ref{tab:evaluation3} shows the evaluation results for the second cross data training scenario. DeepCapture and RSVM~\\cite{annadatha2018image} with DA achieved an F1-score of 76\\% and outperformed the other classifiers. The F1-scores of all classifiers, including DeepCapture, are less than 40\\% without the training samples created by data augmentation. \n\nWe note that in the second cross data training scenario, RSVM~\\cite{annadatha2018image} with DA also produced classification results comparable with those of DeepCapture. We surmise that underlying dataset differences may explain this. In the training set composed of the ``Personal spam'' and ``Personal ham'' datasets, the ratio of spam to ham image samples is approximately 1:1.9, while in the training set composed of the ``SpamArchive spam'' and ``Normal image ham'' datasets, it is approximately 6.4:1. These results suggest that the performance of RSVM~\\cite{annadatha2018image} with DA can be significantly affected by the class distribution of samples. In contrast, DeepCapture works well overall regardless of the imbalanced class distribution of samples.\n\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Performance of classifiers with a cross data training scenario (training dataset: ``Personal spam'' and ``Personal ham'' datasets; and testing dataset: ``SpamArchive spam'' and ``Normal image ham'' datasets).}\n\\vspace{2mm}\n\\label{tab:evaluation3}\n\\begin{tabular}{|p{4.2cm}||p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|}\n\\Xhline{3\\arrayrulewidth}\n\\hline\n Model & Acc. & Pre. & Rec. 
& F1.\\\\\n\\hline\n\\textbf{DeepCapture} & \\textbf{73\\%} & \\textbf{82\\%} & \\textbf{72\\%} & \\textbf{76\\%}\\\\\nDeepCapture without DA & 31\\% & 47\\% & 32\\% & 38\\%\\\\\nSVM~\\cite{annadatha2018image} & 74\\% & 61\\% & 14\\% & 22\\% \\\\ \nSVM~\\cite{annadatha2018image} with DA & 60\\% & 84\\% & 52\\% & 64\\% \\\\\nRSVM~\\cite{annadatha2018image} & 82\\% & 71\\% & 22\\% & 34\\% \\\\\nRSVM~\\cite{annadatha2018image} with DA & 62\\% & 94\\% & 67\\% & 76\\% \\\\\nCNN-SVM~\\cite{shang2016image} & 24\\% & 42\\% & 23\\% & 30\\% \\\\\nCNN-SVM~\\cite{shang2016image} with DA & 64\\% & 69\\% & 47\\% & 56\\% \\\\\n\\hline\n\\Xhline{3\\arrayrulewidth}\n\\end{tabular}\n\\end{table}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\n\nImage-based spam emails (also referred to as ``image spam emails'') are designed to evade traditional text-based spam detection methods by replacing sentences or words in a spam email with images that express the same meaning~\\cite{ismail2019image}. As image spam emails became popular~\\cite{emailadvertisement}, several detection methods~\\cite{Fumera06:spam,kim2017analysis,kumar2017svm} were proposed to detect them using statistical properties of image spam (e.g., the ratio of text content in an email sample). However, these countermeasures suffer from the high processing cost of text recognition in images~\\cite{attar2013survey}. Recently, a convolutional neural network (CNN)-based detection model~\\cite{shang2016image} was presented to address this processing cost issue and improve the detection accuracy. Recent advances in deep learning for the image domain have brought new approaches to security applications: a CNN can process raw data inputs (e.g., the input image itself) by extracting important (low-level) features in an automated manner~\\cite{Krizhevsky17:dnn}. 
However, we found that the detection accuracy of the existing CNN-based image spam detection model~\\cite{shang2016image} could degrade significantly against new and unseen image spam emails.\n\nTo overcome this limitation of existing image spam detectors, we propose a new image spam email detection tool called DeepCapture. DeepCapture consists of two phases: (1) data augmentation to introduce new training samples and (2) classification using a CNN-XGBoost model. In this paper, we focus on developing new data augmentation techniques tailored for image spam training datasets and on designing an effective CNN architecture for detecting spam images, with an optimized configuration for the number of layers, number of filters, filter size, activation function, number of epochs, and batch size.\n\n\n\nTo examine the feasibility of DeepCapture, we evaluate its performance against existing image spam email detectors such as the RSVM-based detector~\\cite{annadatha2018image} and the CNN-SVM-based detector~\\cite{shang2016image}. In our experiments, we use a dataset consisting of 6,000 spam and 2,313 non-spam (hereinafter referred to as ham) image samples collected from real-world user emails. We also use our data augmentation techniques to balance the distribution of ham and spam samples and to avoid performance degradation against new and unseen datasets. We evaluate the performance of DeepCapture in two ways. First, we evaluate the performance of DeepCapture with\/without data augmentation. Second, we evaluate the performance of DeepCapture via cross data training scenarios with\/without data augmentation. Our experimental results demonstrate that DeepCapture produced the best F1-score (88\\%) compared with existing solutions. 
Moreover, for two cross data training scenarios against unseen datasets, DeepCapture also produced the best F1-score results compared with the other classifiers. The use of data augmentation techniques appears necessary for processing new and unseen datasets: in the cross data training scenarios, the F1-scores of all classifiers are less than 40\\% without applying our data augmentation techniques.\n\n\nThis paper is organized as follows: Section \\ref{sec:Bg} describes the background on image spam emails, convolutional neural networks, and data augmentation. Section \\ref{sec:architecture} describes the model architecture of DeepCapture. Section \\ref{sec:eva} describes the experiment setup and evaluation results of DeepCapture. Section \\ref{sec:Rw} reviews related work on image spam detection, and we conclude in Section \\ref{sec:CS}.\n\\section{Related work}\n\\label{sec:Rw}\n\nTo avoid spam analysis and detection, spammers introduced the image spam technique, which replaces text spam messages with images. This strategy effectively circumvents the text analysis of emails that is commonly used in spam filters~\\cite{Biggio11:spam}. To detect image spam emails, several classification methods have been proposed~\\cite{Fumera06:spam,shang2016image,kim2017analysis,kumar2017svm,annadatha2018image,fatichah2019image}. However, the solutions offered so far exhibit several critical weaknesses. Existing detection techniques can be categorized into two approaches: (1) keyword-based analysis and (2) image classification.\n\n\n\n\\subsection{Keyword-based analysis} \n\nKeyword-based analysis extracts text from a given image and analyzes it using a text-based spam filter. Several techniques~\\cite{Fumera06:spam,kim2017analysis,kumar2017svm} based on keyword analysis have been introduced, and this approach has been deployed in real-world spam filters such as SpamAssassin (\\url{https:\/\/spamassassin.apache.org\/}). 
Unsurprisingly, the performance of this approach depends on the performance of optical character recognition (OCR). Sophisticated spammers can intentionally embed abnormal text characters into an image that cannot be recognized by typical OCR programs but can still be interpreted by human victims. The performance of keyword-based spam detection methods could be degraded significantly against such image spam emails. Moreover, OCR always incurs a high processing cost when analyzing images. Therefore, in this paper, we propose an image spam detection method that builds an image classifier to distinguish spam images from ham images.\n\n\n\\subsection{Image classification} \n\nTo address the high processing cost of keyword-based analysis, some researchers have tried to develop image spam detection methods using low-level features that are directly extracted from images. Annadatha et al.~\\cite{annadatha2018image} demonstrated that image spam emails could be detected with high accuracy using either Principal Component Analysis (PCA) or Support Vector Machines (SVM). To build a classifier, they manually selected 21 features (e.g., image color, object edges) that can be extracted from spam and ham images. Shang et al.~\\cite{shang2016image} proposed an alternative image classification method that combines a 13-layer CNN model with an SVM classifier. A CNN model would normally perform classification in its last fully connected layer; instead, they feed the output of the last fully connected layer into the SVM classifier. In this paper, we develop a more compact CNN-XGBoost model consisting of 8 layers. Our evaluation results show that DeepCapture outperforms Shang et al.'s architecture in terms of detection accuracy. Fatichah et al.~\\cite{fatichah2019image} also discussed the possibility of using CNN models to detect image spam. 
Unlike other previous studies, they focused on building CNN models to detect image spam on Instagram (\\url{https:\/\/www.instagram.com\/}), a social photo-sharing service. They evaluated the performance of four pre-trained CNN models (3-layer, 5-layer, AlexNet, and VGG16) with 8,000 images collected from Instagram. They found that the VGG16 architecture achieves the best accuracy (about 0.84) compared with the other models. Since VGG16 is a pre-trained network and its reported performance is not superior, we do not directly compare DeepCapture with VGG16.\n\n\nWe note that the performances of previous methods have been evaluated on different datasets with different configurations; therefore, we cannot directly compare their reported performances. In this paper, we reimplemented their models and used publicly available datasets to compare the performance of DeepCapture with those of the best existing models (SVM~\\cite{annadatha2018image}, RSVM~\\cite{annadatha2018image}, and CNN-SVM~\\cite{shang2016image}).\n\n\n\\section*{Acknowledgement} \n\\label{sec:ack}\n\\begin{small}\nHyoungshick Kim is the corresponding author. This work has been supported in part by the Cyber Security Research Centre Limited, whose activities are partially funded by the Australian Government's Cooperative Research Centres Programme, the NRF grant (No. 2017H1D8A2031628), and the ITRC Support Program (IITP-2019-2015-0-00403) funded by the Korea government. The authors would like to thank all the anonymous reviewers for their valuable feedback.\n\\end{small}\n\n\\section{Overview of DeepCapture}\n\\label{sec:architecture}\n\nWe designed DeepCapture using data augmentation and a CNN to make it robust against new and unseen datasets. 
Figure \\ref{fig:model} shows an overview of the DeepCapture architecture.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=1\\columnwidth]{pic\/model2.JPG}\n\\caption{Overview of DeepCapture.}\n\\label{fig:model}\n\\end{figure}\n\nDeepCapture consists of two phases: (1) data augmentation to introduce new training samples and (2) classification using a CNN-XGBoost model.\n\n\n\n\\subsection{Data Augmentation in DeepCapture}\n\\label{sec:Data Augmentation in DeepCapture}\n\nTo address the class imbalance problem in image spam datasets and to generalize the detection model, we introduce a new data augmentation method that creates new ham and spam samples for training. The goal of this data augmentation is to make augmented samples that are similar to real data.\n\nFor both ham and spam images, we first remove unnecessary images such as duplicate images, solid-color background images, and small or unrecognizable images. After removing unnecessary images, we apply different data augmentation methods to ham and spam images, respectively. \n\nFor ham images, we randomly choose a ham image and use an image-search API to find images similar to it. For example, the Google Image Search API can be used to crawl images similar to the ones we upload. For each uploaded image, $N$ (e.g., $N=100$) similar images can be obtained as ham-like images for training (see Figure~\\ref{fig:similar}). These images can be regarded as additional ham images because they are real images used on other websites.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=1\\columnwidth]{pic\/similar.jpg}\n\\caption{Data augmentation process for ham images.}\n\\label{fig:similar}\n\\end{figure}\n\nFor spam images, we randomly choose two spam images and split each image vertically into halves (``left'' and ``right'' parts). We then combine the left part of one image with the right part of the other. 
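This split-and-combine step can be sketched as follows; the nearest-neighbor resize helper is a deliberately simple, hypothetical stand-in for the resizing used to make the two halves compatible.

```python
import numpy as np

def resize_nn(img, h, w):
    """Nearest-neighbor resize of an H x W x C array (a simple stand-in)."""
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def combine_spam(img_a, img_b):
    """Spam-like sample: left half of img_a glued to the right half of img_b."""
    h = min(img_a.shape[0], img_b.shape[0])
    w = min(img_a.shape[1], img_b.shape[1])
    left = resize_nn(img_a[:, : img_a.shape[1] // 2], h, w // 2)
    right = resize_nn(img_b[:, img_b.shape[1] // 2 :], h, w - w // 2)
    return np.hstack([left, right])
```

Because spam images usually contain both picture and text regions, gluing halves of two real spam images tends to preserve that mixed layout in the synthetic sample.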
To combine parts from different images, we resize one part so that its size matches that of the other part (see Figure~\\ref{fig:cropflip}). Our key observation is that a spam image typically consists of both image and text parts. Therefore, it is essential to create augmented samples having both image and text parts, and our data augmentation techniques are designed to produce such samples.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=1\\columnwidth]{pic\/generate_spam.JPG}\n\\caption{Data augmentation process for spam images.}\n\\label{fig:cropflip}\n\\end{figure}\n\n\\subsection{CNN-XGBoost Classification in DeepCapture}\n\nAs shown in Figure~\\ref{fig:model}, the architecture of DeepCapture is composed of eight layers. The input image is first resized to 32x32 pixels. The first six layers are convolutional layers, and the remaining two layers are used for the XGBoost classifier that determines whether a given image is spam or not. \n\nAll convolutional layers use a 3x3 kernel and the Leaky ReLU activation function~\\cite{maas2013rectifier}. Leaky ReLU helps avoid the gradient saturation problem and improves convergence speed: unlike ReLU, which drops negative values entirely, Leaky ReLU assigns a small non-zero slope to negative inputs. We also apply 2x2 max pooling to the 3rd and 6th layers, which selects the maximum value from the prior feature map. Max pooling reduces the dimensionality of the feature maps and helps extract key features. To prevent overfitting, we use both L2 regularization~\\cite{ng2004feature} and the dropout method~\\cite{srivastava2014dropout} as regularization techniques for DeepCapture. 
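The convolutional building blocks named above (3x3 convolution, Leaky ReLU, 2x2 max pooling) can be sketched in NumPy as a minimal single-channel illustration; this is not the full eight-layer network, and the "valid" padding and stride-1 choices are simplifying assumptions.

```python
import numpy as np

def conv3x3(x, kernel):
    """'Valid' 3x3 convolution of a single-channel H x W map (no padding, stride 1)."""
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i : i + 3, j : j + 3] * kernel)
    return out

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: small non-zero slope for negative inputs instead of dropping them."""
    return np.where(x > 0, x, alpha * x)

def max_pool2x2(x):
    """2x2 max pooling: keep the maximum of each non-overlapping 2x2 block."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    x = x[: 2 * h, : 2 * w]
    return x.reshape(h, 2, w, 2).max(axis=(1, 3))
```

Stacking conv3x3 + leaky_relu, with max_pool2x2 applied after the 3rd and 6th convolutions, mirrors the front end described in the text before the extracted features are handed to the classifier.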
\n\nAfter extracting features from the input image through the convolutional layers, we use the XGBoost~\\cite{chen2016xgboost} classifier, a decision-tree-based ensemble machine learning algorithm built on a gradient boosting framework. XGBoost builds a series of gradient-boosted decision trees (parallelizing the construction of each tree) and makes the final decision by summing the predictions of those trees. A CNN model typically performs classification with fully connected layers; however, for the image spam detection problem, we found that we can improve the detection accuracy by replacing the fully connected layers with a classifier such as XGBoost. We use random search~\\cite{bergstra2012random} to optimize the hyperparameters of the XGBoost classifier.\n\n\t\\section{Introduction}\n\t\n\tOver the past decades, nanoelectromechanical systems (NEMS) have received considerable attention as versatile elements in mesoscopic devices. \n\tFor example, they are well-suited for sensor applications and have been shown to act as highly sensitive mass \n\t\\cite{hanay2012single,chaste2012nanomechanical}, force \\cite{mamin2001sub,Rugar2004,moser2013ultrasensitive,Simonsen2019}, or infrared \\cite{piller2019nanoelectromechanical} detectors. \n\tMoreover, they can serve as amplifiers or oscillators. \n\tOn the other hand, nanomechanical systems lend themselves to the exploration of fundamental physical phenomena both in the classical \\cite{Yang2019,huber2020spectral,Karg2020} and quantum regimes\n\t\\cite{sudhir2017quantum,rossi2018measurement,mason2019continuous,Guo2019}, with the prospect of serving as a hybrid transducer\n\t\\cite{Andrews2014} or as a storage element \\cite{OConnell2010} in future quantum technologies. 
\n\t\n\tMany of the underlying devices are based on a one-dimensional string or two-dimensional membrane resonators \\cite{Thompson2008,Wilson2009,Joeckel2014,Ftouni2015,Fink2016,Serra2016,Norte2016,Reinhardt2016,sudhir2017quantum,Tsaturyan2017NatNano-UltracoherentNanomechSoftclamping,Ghadimi2018StrainEngineering,rossi2018measurement,mason2019continuous,Simonsen2019,Guo2019,Yang2019,yuksel2019nonlinear,huber2020spectral,Karg2020}\n\tthat are both characterized by a strong intrinsic tensile stress in the device layer. Tensile stress in the device layer enables remarkably large mechanical quality factors as a result of a process commonly referred to as dissipation dilution \\cite{GonzalezSaulson1994BrownianMotionAnelasticWire,Unterreithmeier2010PRL-DampingNanomechRes,Yu2012PRL-ControlMaterialDampingHighQMembraneMicrores,Villanueva2014PRL-EvidenceSurfaceLossSiN}. \n\tThis process relies on the stress-induced increase of the stored vibrational energy of the resonator, while the dissipated energy remains largely unaffected. \n\t\n\tSeveral approaches have been pursued to suppress the limiting dissipation mechanism and to further enhance the potential of this class of nanomechanical resonators. For example, clamping losses can be reduced by means of mechanical impedance mismatch engineering \\cite{bib:Rieger2014} \n\tthat is exploited, e.g., in trampoline resonators \\cite{Kleckner2011,Reinhardt2016,Norte2016}. In addition, phononic bandgaps have been employed to reduce clamping losses \\cite{Alegre2011,yu2014phononic,Tsaturyan2014}. Recently, soft clamping has been presented as an innovative approach of boosting the quality factor \\cite{Tsaturyan2017NatNano-UltracoherentNanomechSoftclamping}. It is also based on phononic bandgap engineering, but additionally prevents mechanical strain at the interface between the resonator and its clamping points and thus enhances the dissipation dilution. 
\n\tSoft-clamped membranes have recently enabled measurement-based quantum control \\cite{rossi2018measurement} or measurements below the standard quantum limit \\cite{mason2019continuous}, and may find application in magnetic resonance force imaging \\cite{Simonsen2019}. \n\tFor string resonators, soft clamping can be beneficially combined with stress engineering \\cite{Ghadimi2018StrainEngineering,fedorov2019generalized}, which boosts the tensile stress to close to the mechanical limits and enables quality factors $Q$ of around $800$ million and a $Q \\times$ frequency product of more than $ 10^{15}\\,$Hz. \n\tThese developments pave the way towards an ever increasing sensitivity and the observation of mechanical quantum phenomena at room temperature.\n\t\n\tHere we describe a previously unappreciated aspect of nanomechanical string resonators: the one-dimensional tensile stress in the string is not solely determined by elastic material properties, but significantly depends on its length as well as other geometric parameters. This allows one to increase the tensile stress by approximately $50$\\,\\% by using shorter strings and thus boost the dissipation dilution. The observed length dependence of the tensile stress is material independent. We demonstrate length-dependent tensile stress for nanomechanical resonators fabricated from four different wafers featuring the three complementary, tensile-stressed device layers silicon nitride (SiN), silicon carbide (SiC), and indium gallium phosphide (InGaP). A simple elastic model is developed that captures the observed features. It describes the geometric reconstruction of the string resonator by a combination of two effects that determine the stress distribution in the device layer: the vertical release of the string leads to a deformation of the clamping structure, while the subsequent lateral release undercuts the clamping pads. 
Our results entail important insights for strain engineering of nanomechanical devices and may be exploited to boost the mechanical quality factor of string resonators.\n\t\n\t\n\t\section{Experimental results}\n\t\n\tWe investigate nanomechanical string resonators fabricated from three distinct, strongly stressed material platforms, namely amorphous silicon nitride (SiN), crystalline silicon carbide (SiC), and crystalline indium gallium phosphide (InGaP). The $100$\,nm thick amorphous stoichiometric SiN film is deposited by low-pressure chemical vapor deposition on top of two different substrates, a fused silica wafer (material denoted SiN-FS) and a sacrificial SiO$_2$ layer atop a silicon wafer (SiN-Si). The $110$\,nm thick film of crystalline 3C-SiC is epitaxially grown on a Si(111) wafer (SiC). The III-V heterostructure hosts a $100$\,nm thick In$_{0.415}$Ga$_{0.585}$P film epitaxially grown atop a sacrificial layer of Al$_y$Ga$_{1-y}$As on a GaAs wafer (InGaP).\n\tSee Tab.~S.1. of the Supplemental Material \cite{siCite} for the full details of all wafers.\n\t\n\tSeries of nanomechanical string resonators such as that depicted in Fig.~\ref{fig:SEM} are defined in all four material systems. The length of the strings increases from $10$ to $110\,\mu$m in steps of $10\,\mu$m. \n\t%\n\t\begin{figure}[t!]\n\t\t\includegraphics[width=0.9\linewidth]{SEMresonator.jpg}\n\t\t\caption{\n\t\t\t\label{fig:SEM} \n\t\t\tScanning electron micrograph of a series of nano\-string resonators with lengths increasing from $10$ to $110\,\mu$m in steps of $10\,\mu$m. \n\t\t}\n\t\end{figure}\n\t%\n\tThe resonators are characterized using piezo actuation and interferometric detection. For all wafers, we record the resonance curves of the out-of-plane flexural modes in the linear response regime. 
The measurements are performed at room temperature and inside a vacuum chamber (pressure less than $10^{-3}$\,mbar) to avoid gas damping.\n\t\n\tFor each resonator length we probe the fundamental out-of-plane mode as well as a series of up to $30$ higher-order modes, and determine the corresponding eigenfrequencies by Lorentzian fits, as shown for the case of SiN-FS in Fig.~\ref{fig:ModeFreq}.\n\t%\n\t\begin{figure}[t!]\n\t\t\includegraphics[width=0.8\linewidth]{nfOnFS}\n\t\t\caption{\n\t\t\t\label{fig:ModeFreq} \n\t\t\tEigenfrequencies of the out-of-plane modes as a function of the mode number for SiN string resonators on fused silica (SiN-FS). The resonator lengths range from $10$ to $110\,\mu$m.\n\t\t\tFits of the eigenfrequencies using Eq.~(\ref{eq:frequency}) are included as solid lines.\n\t\t}\n\t\end{figure}\n\t%\n\tFollowing Euler-Bernoulli beam theory of a doubly clamped string with simply supported boundary conditions \cite{Timoshenko1990-VibrationProblemsEngineering,Cleland2002foundations}, we can conveniently express the eigenfrequency of the $n$-th mode as\n\t%\n\t\begin{equation}\n\t\tf_n = \frac{n^2 \pi}{2 L^2} \sqrt{\frac{E_1 h_1^2}{12\rho}}\sqrt{1+ \frac{12 \sigma L^2}{n^2 \pi^2 E_1 h_1^2}} , \label{eq:frequency}\n\t\end{equation}\n\t%\n\twhere $E_1$ is Young's modulus, $\rho$ is the mass density, $h_1$ is the thickness of the resonator, and $\sigma$ is the tensile stress of the string.\n\t\n\tThe resonance frequencies shown in Fig.\,\ref{fig:ModeFreq} are fitted with Eq.~(\ref{eq:frequency}). To a very good approximation, the eigenfrequencies of all resonators scale linearly with the mode number as expected for stress-dominated nanostrings for which \n\t$f_n \approx (n\/2 L) \sqrt{\sigma \/ \rho}$.\n\tThe fits of the eigenfrequencies as a function of mode number allow us to extract the tensile stress of each nanostring resonator. 
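The eigenfrequency relation and its stress-dominated limit can be sketched numerically. The following Python snippet evaluates Eq.~(1) and the approximation $f_n \approx (n/2L)\sqrt{\sigma/\rho}$; the SiN parameter values used here (Young's modulus and density as in Tab.~S.2, a stress of order $1$ GPa) are purely illustrative, not fit results.

```python
import math

def f_n(n, L, E, h, rho, sigma):
    # Eigenfrequency of the n-th out-of-plane mode of a doubly clamped
    # string, Eq. (1) of the text; all quantities in SI units.
    pre = (n**2 * math.pi) / (2 * L**2) * math.sqrt(E * h**2 / (12 * rho))
    return pre * math.sqrt(1 + 12 * sigma * L**2 / (n**2 * math.pi**2 * E * h**2))

def f_n_string(n, L, rho, sigma):
    # Stress-dominated approximation f_n ~ (n / 2L) * sqrt(sigma / rho).
    return n / (2 * L) * math.sqrt(sigma / rho)

# Illustrative SiN values (assumptions, not measured data):
# E = 260 GPa, rho = 3.1 g/cm^3, h = 100 nm, L = 50 um, sigma = 1 GPa.
E, rho, h, L, sigma = 260e9, 3.1e3, 100e-9, 50e-6, 1.0e9
full = f_n(1, L, E, h, rho, sigma)
approx = f_n_string(1, L, rho, sigma)
print(full / 1e6, approx / 1e6)  # fundamental mode in MHz
```

For these GPa-level stresses the bending correction is sub-percent, which is why the measured dispersion in Fig.~2 looks linear in the mode number.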
The obtained values are shown as a function of the resonator length for all four materials in Fig.\,\ref{fig:sigL}. Clearly, the tensile stress is not constant, but decreases for increasing resonator length. The same qualitative behavior is observed in all four material systems.\n\t\n\t\begin{figure}[t!]\n\t\t\includegraphics[width=0.8\linewidth]{sigL}\n\t\t\caption{\n\t\t\t\label{fig:sigL} \n\t\t\tExperimentally determined tensile stress as a function of the length of the nanostring for all four material systems. \n\t\t\tFits of Eq.~(\ref{eq:sigLM}) are included as solid lines. The obtained fit parameters are summarized in Tab.~\ref{tab:disp}.\n\t\t\tThe shaded areas indicate the uncertainty resulting from measurement errors of the pedestal height $h_0$ and undercut $a_\mathrm{uc}$.\n\t\t}\n\t\end{figure}\n\t\n\t\n\t\section{Elastic model} \n\t\n\tA length dependence of the tensile stress in a nanostring has not been reported in the literature. We have developed a simple theoretical model to quantify this previously unappreciated phenomenon. Our model is based on elastic theory. As such, it is material independent and can be applied to all material systems under investigation. The model assumes a prismatic string of length $L$, width $w$ and thickness $h_1$. Its cross-sectional area is $A_{\textrm{s}} = w \, h_1$. On both ends, the string is attached to a rectangular clamping structure. It consists of a clamping pad in the device layer with lithographic dimensions $2 a_x$ and $2 a_y$, as well as thickness $h_1$ (Fig.~\ref{fig:toymodel}(a)), which is supported by a pedestal of height $h_0$ in the underlying sacrificial or substrate layer (Fig.~\ref{fig:toymodel}(b) and (c)). As a result of the wet etching process required to suspend the nanostrings, the clamping pads exhibit a certain undercut $a_\mathrm{uc}$, i.e. 
the width of the pad $2a_x$ (in the $x$ direction; the width $2a_y$ in the $y$ direction may differ) is larger than that of the remaining pedestal $2 a_\mathrm{p} = 2a_x - 2a_\mathrm{uc}$. The cross-sectional area of the clamping pad (in $yz$-plane) is $A_{\textrm{p}} = 2a_y \, h_1$. The geometric parameters of the four investigated samples are summarized in Tab.~\ref{tab:ModelParameters}. The thickness of the device layer $h_1$ is obtained from the wafer growth protocol, whereas the height of the pedestal $h_0$ is determined by atomic force microscopy. The half-widths of the clamping pad $a_x$ and $a_y$, the undercut $ a_\mathrm{uc} $ and the width of the nanostring $w$ have been extracted using scanning electron microscopy. The elastic and material parameters of the samples \cite{DosterUnpub,maluf2002introduction,li1987single,henisch2013silicon,Ioffe1999ShurEtAl-HandbookSeriesSemiconductorParametersVOL2} are listed in Tab.~S.2. in the Supplemental Material \cite{siCite}.\n\t\n\t\begin{figure}[ht]\n\t\t\includegraphics[width=\linewidth]{ToymodelSketch2}\n\t\t\caption{\n\t\t\t\label{fig:toymodel} \n\t\t\tSample geometry and parameters of the model. \n\t\t\t(a) Lithographic dimensions of the nanostring and its clamping pads. \n\t\t\t(b) Cross section through the clamping structure illustrating the shearing contraction of the pedestal following the vertical release of the structure.\n\t\t\t(c) Cross section through the clamping structure illustrating the lateral contraction of the undercut areas of the clamping pad following the horizontal release. Combining the vertical and horizontal releases leads to the string's length change of $ \Delta L $. Areas supported by a pedestal are colored in dark blue, undercut areas of the clamping pads are indicated by a lighter color and the string is marked with the lightest blue. 
Dotted lines serve as guides to the eye.\n\t\t}\n\t\\end{figure}\n\t%\n\t\\begin{table}\n\t\t\\caption{\n\t\t\t\\label{tab:ModelParameters}\n\t\t\tGeometric parameters of the investigated samples.\n\t\t}\n\t\t\\begin{ruledtabular}\n\t\t\t\\begin{tabular}{crrrrrr}\n\t\t\t\t& \\multicolumn{1}{c}{\\textrm{$h_1$}} & \\multicolumn{1}{c}{\\textrm{$h_0$}} & \n\t\t\t\t\\multicolumn{1}{c}{\\textrm{$2a_x$}} & \n\t\t\t\t\\multicolumn{1}{c}{\\textrm{$2a_y$}} &\n\t\t\t\t\\multicolumn{1}{c}{\\textrm{$a_\\mathrm{uc}$}} &\n\t\t\t\t\\multicolumn{1}{c}{\\textrm{$w$}} \\\\\n\t\t\t\t& \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{($ \\mu $m)}} & \\multicolumn{1}{c}{\\textrm{($ \\mu $m)}} & \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{(nm)}} \\\\\n\t\t\t\t\\hline\n\t\t\t\tSiN-FS\t& 100(5) & 460(20)\t& 13.7(2) & 13.6(2)\t& 570(100) \t& 420(25) \\\\\n\t\t\t\tSiN-Si\t& 100(5) & 365(20)\t& 14.1(2) & 15.0(2) & 410(150) \t& 340(25) \\\\\n\t\t\t\tSiC \t& 110(15)& 570(40) \t& 14.2(2) & 15.0(2) & 860(150) \t& 360(30) \\\\\n\t\t\t\tInGaP \t& 100(1) & 990(10)\t& 12.7(2) & 13.3(2) & 640(170)\t& 250(15) \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{ruledtabular}\n\t\\end{table}\n\t\n\tAs we show in the following, the tensile stress in the device layer atop an unstressed sacrificial layer or substrate gives rise to a balance of forces that in turn leads to a length- and geometry-dependent change in the one-dimensional tensile stress of the nanostring. To quantify the contributing forces, we roughly follow the process sequence required to fabricate a freely suspended nanostring. \n\tFirst, we consider the vertical release of the nanostructure. It comprises all vertical etching contributions, notably the anisotropic dry etch employed for the pattern transfer following electron beam lithography. 
Note that the subsequent wet etching process releasing the nanostring may also have a vertical etching component, e.g., for the case of an isotropic wet etch. The vertical release penetrates both the device layer and the sacrificial or substrate layer and defines the height of the pedestal $h_0$.\n\n\tFollowing this vertical release, the tensile-stressed device layer will contract and induce a certain amount of shear in the pedestal (Fig.~\\ref{fig:toymodel}(b)). As a result, the tensile stress in the pad relaxes to a value $\\sigma_\\mathrm{p}$. \n\t\n\tSecond, the lateral release is considered. It accounts for the lateral etching during the nanostring release.\n\tAs a result of this lateral release, the two-dimensional stress in the nanostring relaxes in the direction perpendicular to the string. At the same time, the undercut parts of the tensile-stressed clamping pad contract, which applies additional stress on the nanostring (Fig.~\\ref{fig:toymodel}(c)). The combination of the described effects gives rise to the tensile stress experienced by the nanostring $\\sigma$ (see Eq.~(\\ref{eq:frequency}) and Fig.~\\ref{fig:sigL}). The model assumes a clear separation between the vertical and lateral release, which will be described in Secs.~\\ref{Pedestal} and~\\ref{Undercut}, respectively, and neglects geometric and elastic reconfigurations of the sheared pedestal and stressed clamping pad arising from the lateral releases, which we can safely assume to be small.\n\t\n\t\\subsection{Pedestal shear from vertical release}\\label{Pedestal}\n\t\n\tTo evaluate the shear of the pedestal induced by the vertical release of the structure, we first consider an isolated clamping structure and focus on its cross section along the $x$-$z$ direction as indicated in Fig.~\\ref{fig:toymodel}(b). \n\tThe resonator will be included at a later stage. 
\n\tFollowing the vertical release of the structure, the strong tensile stress in the device layer leads to a contraction of the clamping pad in order to minimize internal forces. \n\tThis contraction produces an increasing shear of the pedestal. \n\tThe reconfiguration of the clamping structure stops once equilibrium between the reduced tensile force and the counteracting shearing force is reached. \n\tThe shear stress $\\tau$ of such a shear-constrained material system can be expressed as \\cite{Taylor1962} \n\t\n\t\\begin{equation}\n\t\t\\tau = \\sigma_{\\mathrm{2D}} h_1 k \\tanh\\left(k a_{x}\\right) , \\quad k = \\sqrt{\\frac{G_0}{h_0} \\frac{1}{E_1 h_1}} \\label{eq:mSL:tau}\n\t\\end{equation}\n\t%\n\twhere $h_0$ and $h_1$ are the heights of the pedestal and the clamping pad, respectively, $G_0$ is the shear modulus of the pedestal, $E_1$ is Young's modulus of the clamping pad, and $\\sigma_{2D}$ is the initial two-dimensional stress in the device layer.\n\tThis results in the maximum contraction of the clamping pad $\\Delta p$ from its original half-width $a_\\mathrm{x}$: \n\t%\n\t\\begin{equation}\n\t\t\\Delta p = \\frac{h_0}{G_0} \\tau = \\frac{\\sigma_{\\mathrm{2D}}}{E_1 k} \\tanh\\left(k a_{x}\\right) .\n\t\t\\label{umax}\n\t\\end{equation}\n\tIn consequence, the tensile stress in the clamping pad is reduced to $\\sigma_\\mathrm{p} = \\sigma_{2D} - E_1 \\frac{\\Delta p}{a_{x}} $ according to Hooke's law. Note that a similar model that also accounts for additional shear in the device layer is presented in Ref.~\\cite{suhir2008interfacial}. \n\t\n\t\n\tFor the sake of simplicity, we neglect the minute counterforce exerted by the presence of the resonator, which will lead to a slightly reduced contraction of the pad to which it is attached. Similarly, the effect of shear of the pedestal underneath the resonator will be ignored. 
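The shear-constrained relations above can be evaluated directly. The Python sketch below computes $k$, the shear stress $\tau$, the pad contraction $\Delta p$ and the relaxed pad stress $\sigma_\mathrm{p}$ for an SiN-FS-like geometry (pad half-width and layer thicknesses as in Tab.~I, fused-silica shear modulus as in Tab.~S.2, and a two-dimensional stress of order the fitted $3.1$ GPa); the parameter values are illustrative assumptions.

```python
import math

def pad_contraction(sigma_2d, E1, h1, G0, h0, a_x):
    # Shear-constrained pad model: returns (k, tau, dp, sigma_p),
    # following the relations for tau, Delta p, and sigma_p in the text.
    k = math.sqrt(G0 / h0 / (E1 * h1))                # inverse decay length
    tau = sigma_2d * h1 * k * math.tanh(k * a_x)      # pedestal shear stress
    dp = sigma_2d / (E1 * k) * math.tanh(k * a_x)     # pad contraction
    sigma_p = sigma_2d - E1 * dp / a_x                # relaxed pad stress (Hooke)
    return k, tau, dp, sigma_p

# SiN-FS-like example (assumed values): E1 = 260 GPa (SiN),
# G0 = 31 GPa (fused silica pedestal), h1 = 100 nm, h0 = 460 nm,
# a_x = 13.7 um / 2, sigma_2D = 3.1 GPa.
k, tau, dp, sigma_p = pad_contraction(3.1e9, 260e9, 100e-9, 31e9, 460e-9, 6.85e-6)
print(dp * 1e9, sigma_p / 1e9)  # contraction in nm, pad stress in GPa
```

With these inputs the contraction comes out at a few nanometers, in line with the $\Delta p$ values listed for the samples in Tab.~III.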
\n\t\n\tAn experimental verification of the contraction of the clamping pad following the vertical release is discussed in the Supplemental Material \\cite{siCite}. \n\t\n\t\\subsection{Undercut of clamping pads from lateral release}\\label{Undercut}\n\t\n\tThe lateral release of the nanostrings results in an undercut of the clamping pads. More specifically, the width of the pedestal is reduced by $a_\\mathrm{uc}$ from either side such that the rim of the clamping pad gets freely suspended as shown in Fig.~\\ref{fig:toymodel}(c). This enables a relaxation of the tensile force in the undercut parts of the pads that gives rise to a contraction by an amount $ \\Delta c $. The resulting contracting force acting on the interface between the clamping pad and the nanostring can be expressed as\n\t%\n\t\\begin{equation}\n\t\tF_\\mathrm{c} = \\sigma_\\mathrm{p} A_\\mathrm{p} - E_1 \\frac{\\Delta c}{a_\\mathrm{uc}} A_\\mathrm{p},\n\t\\end{equation}\n\t%\n\twhere $\\sigma_\\mathrm{p}$ is the remaining tensile stress in the clamping pad following the vertical release, and $E_1 \\frac{\\Delta c}{a_\\mathrm{uc}}$ is its reduction in the undercut part of the clamping pad, again according to Hooke's law. \n\n\tNote that in the absence of the nanostring, the suspended part of the clamping pad fully relaxes such that $F_\\mathrm{c} = 0$. \n\tIn the presence of the nanostring, however, the contracting force of the clamping pad is counteracted by a second force acting on the interface between the clamping pad and the nanostring which is associated with the elongation $\\Delta L$ of the nanostring \n\t\n\t\n\t\\begin{equation}\n\t\tF_\\mathrm{s} = \\sigma_\\infty A_\\mathrm{s} + E_1 \\frac{\\Delta L}{L} A_\\mathrm{s},\n\t\\end{equation}\n\twhere $\\sigma_\\infty$ is the one-dimensional stress of an infinitely long nanostring after the lateral release, and $E_1 \\frac{\\Delta L}{L} $ is its modification according to Hooke's law. 
\n\t\n\tThe equilibrium condition for the clamping pad--nanostring interface \n\t%\n\t\begin{equation}\n\t\tF_c = F_s \label{eq:interfaceCondition}\n\t\end{equation}\n\t%\n\tdetermines the final geometric reconfiguration of the clamping pad and the string, under the boundary condition that the total length of the compound between the centers of the clamping pads has to be conserved,\n\t%\n\t\begin{equation}\n\t\t2 \Delta p + 2 \Delta c = \Delta L . \label{eq:conservLength}\n\t\end{equation}\n\t%\n\t\n\tEquations (\ref{eq:interfaceCondition}) and (\ref{eq:conservLength}) form a system of two linear equations with the two unknowns $ \Delta L $ and $ \Delta c $. The third unknown $\Delta p$ is determined using Eq.~(\ref{umax}). Solving for the elongation of the resonator yields\n\t%\n\t\begin{equation}\n\t\t\Delta L = 2 L \, \frac{\left( A_\mathrm{p} a_\mathrm{uc} \sigma_\mathrm{p} + A_\mathrm{p} E_1 \Delta p - A_\mathrm{s} a_\mathrm{uc} \sigma_\infty \right)}{E_1 \left( 2 A_\mathrm{s} a_\mathrm{uc} + A_\mathrm{p} L \right)} .\n\t\t\label{eq:delL}\n\t\end{equation}\n\t%\n\t\n\tThis length change of the resonator directly translates into an additional strain $ \varepsilon=\Delta L\/L $, giving rise to a length-dependent stress $ \sigma(L) $ of the doubly clamped string resonator via Hooke's law\n\t%\n\t\begin{equation}\n\t\t\sigma(L) = \sigma_\infty + E_1 \frac{\Delta L}{L}. \label{eq:sigLM} \n\t\end{equation}\n\t%\n\t\n\t\n\t\section{Discussion}\n\t\n\tTo validate the theoretical model, we fit Eq.~(\ref{eq:sigLM}) to the experimental data measured on all four material systems, using the geometric and material parameters specified in Tabs.~\ref{tab:ModelParameters} and S.2 of the Supplemental Material \cite{siCite}. The initial two-dimensional stress \n\t$\sigma_\mathrm{2D}$ can be calculated from the epitaxial lattice mismatch of the crystalline InGaP sample. 
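The closed-form solution for $\Delta L$ and the resulting $\sigma(L)$ can be evaluated numerically. The sketch below uses SiN-FS-like inputs (geometry of order the values in Tab.~I; $\sigma_\mathrm{p}$, $\Delta p$ and $\sigma_\infty$ of order the derived and fitted values in Tab.~III) and is an illustration under those assumed numbers, not a reproduction of the actual fits.

```python
def delta_L(L, E1, A_p, A_s, a_uc, sigma_p, dp, sigma_inf):
    # String elongation Delta L (eq:delL in the text).
    num = A_p * a_uc * sigma_p + A_p * E1 * dp - A_s * a_uc * sigma_inf
    den = E1 * (2 * A_s * a_uc + A_p * L)
    return 2 * L * num / den

def sigma_of_L(L, E1, A_p, A_s, a_uc, sigma_p, dp, sigma_inf):
    # Length-dependent tensile stress (eq:sigLM in the text).
    return sigma_inf + E1 * delta_L(L, E1, A_p, A_s, a_uc, sigma_p, dp, sigma_inf) / L

# SiN-FS-like inputs (assumed, illustrative):
E1 = 260e9                  # Young's modulus of SiN
A_p = 13.6e-6 * 100e-9      # pad cross section 2*a_y*h1
A_s = 420e-9 * 100e-9       # string cross section w*h1
a_uc, sigma_p, dp, sigma_inf = 570e-9, 2.8e9, 7.4e-9, 1.56e9
for L in (10e-6, 110e-6):
    s = sigma_of_L(L, E1, A_p, A_s, a_uc, sigma_p, dp, sigma_inf)
    print(L * 1e6, s / 1e9)  # length in um, stress in GPa
```

Since the numerator is essentially length independent while the denominator is dominated by $A_\mathrm{p} L$, the extra stress term scales roughly as $1/L$, reproducing the trend of Fig.~3.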
The mismatch of the lattice constants of the In$_{1-x}$Ga$_x$P layer with respect to the underlying Al$_{y}$Ga$_{1-y}$As sacrificial layer induces an in-plane strain $\varepsilon^\parallel(x) = (a^\parallel_\mathrm{L} - a^\infty_\mathrm{L}(x) )\/a^\infty_\mathrm{L}(x)$, where $a^\infty_\mathrm{L}(x)$ is the lattice constant of In$_{1-x}$Ga$_x$P and $a^\parallel_\mathrm{L}$ is the in-plane lattice constant of the strained In$_{1-x}$Ga$_x$P film. This allows us to compute the two-dimensional stress of the thin In$_{1-x}$Ga$_x$P layer $\sigma_\mathrm{2D} = \varepsilon^\parallel E_1 \/(1-\nu_1)$ \cite{Bueckle2018APL-StressControl}. The ratio $ E_1\/(1-\nu_1) $ including the Poisson ratio $\nu_1$ (see Tab.~S.2. in the Supplemental Material \cite{siCite}) represents the biaxial modulus of the stressed thin film, which is required as no stress occurs in the $z$ direction normal to the substrate. The obtained $\sigma_\mathrm{2D}$ value of $0.95$\,GPa has also been used as an input parameter for the model. \n\tIn principle, the same argument can be made for SiC, which is also an epitaxially grown crystalline thin-film material. However, the crystallization of 3C-SiC atop a Si wafer is more complex than that of the III-V heterostructures. SiC and Si feature the same crystal structure, but exhibit a lattice mismatch of approximately $20\,\%$. This implies a nontrivial, commensurate growth of the SiC film that strongly depends on growth conditions, such that the two-dimensional stress cannot be predicted from the crystal structure \cite{pakula2004fabrication,zorman2008micro,iacopi2013evidence,Iacopi2013APL-QrientationDependentStressRelax3C-SiCFilms,Kermany2016JAP-FactorsAffectionFQproduct3CSiC,Romero2020}. \n\tHence, we set $ \sigma_{\mathrm{2D}} $ as an additional fit parameter for SiC. 
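The chain from lattice mismatch to $\sigma_\mathrm{2D}$ for the InGaP film can be retraced with a few lines of Python. The lattice constants below (Vegard interpolation between GaP and InP, film strained to the GaAs in-plane lattice constant) are standard literature values assumed for this sketch; they are not taken from the text.

```python
# Biaxial stress of a pseudomorphic In(1-x)Ga(x)P film on (Al)GaAs.
# Lattice constants in angstrom (assumed textbook values):
# a_GaP = 5.4505, a_InP = 5.8687, a_GaAs = 5.6533.
def sigma_2d_ingap(x, E1=124e9, nu1=0.32):
    a_relaxed = x * 5.4505 + (1 - x) * 5.8687   # Vegard's law for In(1-x)Ga(x)P
    a_parallel = 5.6533                          # film strained to the substrate
    eps = (a_parallel - a_relaxed) / a_relaxed   # in-plane strain
    return eps * E1 / (1 - nu1)                  # biaxial modulus E1/(1-nu1)

print(sigma_2d_ingap(0.585) / 1e9)  # GPa, for In_0.415 Ga_0.585 P
```

For the composition $x = 0.585$ used here this lands at roughly $0.95$ GPa, consistent with the value quoted in the text.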
\n\tThe same applies to the amorphous thin-film materials SiN-FS and SiN-Si.\n\tThe one-dimensional stress $ \sigma_\infty $ is employed as a fit parameter for all material systems. \n\t\n\tThe results of the fits are included in Fig.~\ref{fig:sigL} as solid lines. The shaded area represents the model's uncertainty arising from the error of the input parameters.\n\t%\n\tAs long as $A_\textrm{s} \ll A_\textrm{p}$ and $a_\textrm{uc} \ll L$, the length dependence of Eq.~(\ref{eq:sigLM}) can be approximated as $ \sigma(L) \propto 1\/L $. This holds true for all nanostrings under investigation, such that a $1\/L$ dependence of the stress can be assumed. \n\t%\n\tWe find remarkable agreement between the model and the experimental data. This is particularly noteworthy for the case of the InGaP samples, for which only one fit parameter, $ \sigma_\infty $, is employed, which, in the above approximation of small $A_\textrm{s} a_\textrm{uc}$, corresponds to a vertical offset and thus the limit $\sigma(L\rightarrow \infty)$. \n\tThe results for SiN and SiC, which involve two fit parameters, also show good agreement between the model and the experimental data. Again, $ \sigma_\infty $ can be interpreted as the tensile stress of an infinitely long string, whereas the two-dimensional stress in the as-grown device layer $\sigma_\mathrm{2D}$ can, at least to some extent, be compared to literature values.\n\t\n\tIn Table~\ref{tab:disp} we summarize the parameters obtained from the elastic model as well as the fit parameters for the case of the longest strings. \n\tThe as-grown two-dimensional stress in low-pressure chemical vapor deposition (LPCVD) grown stoichiometric SiN on silicon is found to depend on growth conditions, but has been reported to amount to $1.1$\,GPa~\cite{Ghadimi2018StrainEngineering,Bereyhi2019} and $1.4$\,GPa~\cite{Unterrreithmeier2009Nature-TransductionDielectricForces}, values close to the one found here. 
\n\n\tThe same applies for high stress 3C-SiC(111), for which an as-grown two-dimensional stress of $1.3$\\,GPa has been reported~\\cite{iacopi2013evidence}, which is somewhat lower than our result. The growth of high stress SiN on a fused silica substrate is poorly characterized, and no comparison with the literature could be obtained. \n\tCertainly, all observed two-dimensional stress values \n\tare well within the yield strength of the respective material, which amounts to approximately $6-7$\\,GPa (or even $12$\\,GPa according to Ref.~\\cite{Yang2002}) for high stress LPCVD-deposited SiN \\cite{Kaushik2005,Ghadimi2018StrainEngineering,Bereyhi2019}, and $21$\\,GPa for SiC \\cite{Petersen1982}. \n\t\n\t\n\n\t\n\t\\begin{table}\n\t\t\\caption{\n\t\t\t\\label{tab:disp}\n\t\t\tParameters of the elastic model for the case of a long string, as well as two-dimensional and one-dimensional stress. \n\t\t}\n\t\t\\begin{ruledtabular}\n\t\t\t\\begin{tabular}{cddddd}\n\t\t\t\t& \\multicolumn{1}{c}{\\textrm{$\\Delta p$}} & \\multicolumn{1}{c}{\\textrm{$\\Delta c$}} & \\multicolumn{1}{c}{\\textrm{$\\Delta L$}} & \n\t\t\t\t\\multicolumn{1}{c}{\\textrm{$\\sigma_\\mathrm{2D}$}} & \\multicolumn{1}{c}{\\textrm{$\\sigma_\\infty$}} \\\\\n\t\t\t\t& \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{(nm)}} & \\multicolumn{1}{c}{\\textrm{(GPa)}} & \\multicolumn{1}{c}{\\textrm{(MPa)}} \\\\\n\t\t\t\t\\hline\n\t\t\t\tSiN-FS \t& 8 \t& 6 \t& 27 \t& 3.1\\footnotemark[1] & 1560\\footnotemark[1] \\\\\n\t\t\t\tSiN-Si \t& 3 \t& 2 & 8 & 1.2\\footnotemark[1] & 900\\footnotemark[1] \\\\\n\t\t\t\tSiC \t& 3 \t& 5 & 16\t& 2.5\\footnotemark[1] & 1110\\footnotemark[1] \\\\\n\t\t\t\tInGaP \t& 3 \t& 5 \t& 15 \t& 0.95\\footnotemark[2] & 540\\footnotemark[1] \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{ruledtabular}\n\t\t\\footnotemark[1]{From fit}\n\t\t\\footnotemark[2]{Calculated with $\\sigma_\\mathrm{2D}=\\varepsilon^\\parallel E_1\/(1-\\nu_1)$ 
\cite{Bueckle2018APL-StressControl}}\n\t\end{table}\n\t\n\tA more general consideration of the length dependence of the tensile stress according to Eqs.~(\ref{eq:delL}) and (\ref{eq:sigLM}) reveals that two geometric parameters, the height of the pedestal $h_0$ and the undercut of the pedestal $a_\textrm{uc}$, dominate the stress enhancement of short nanostrings. This suggests that maximum tensile stress can be achieved for short strings with large $h_0$ and $a_\textrm{uc}$. However, it has to be noted that this limit can only be achieved for sufficiently large clamping pads, avoiding a softening of the entire clamping structure under overly large undercuts, an unwanted side effect that is not accounted for in our model.\n\t\n\tFinally, we discuss the relation between $ \sigma_\infty $ and $ \sigma_{\mathrm{2D}} $. For a one-dimensional nanostring processed from a thin film under biaxial and isotropic stress, the one-dimensional stress follows from the initial two-dimensional stress according to \n\t%\n\t\begin{equation}\n\t\t\sigma_{\mathrm{1D}} = \sigma_{\mathrm{2D}} (1-\nu_1). \label{eq:sigma1D-2D}\n\t\end{equation} \n\t%\n\tFor the nanostrings under investigation, this simple picture does not hold, as the stress relaxation along the $y$ direction upon releasing the string results in a more complicated stress configuration following the contraction of the device layer described in Sec.~\ref{Pedestal}. Not only does the contraction of the clamps by an amount $\Delta p$ reduce the two-dimensional stress in the clamping pads from $\sigma_{\mathrm{2D}}$ to $\sigma_{\mathrm{p}}$. A similar contraction also occurs along the $y$ direction of the string, such that the tensile stress in the string before the lateral release cannot be considered isotropic. 
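The failure of the naive $2\mathrm{D}\rightarrow 1\mathrm{D}$ conversion $\sigma_\mathrm{1D} = \sigma_\mathrm{2D}(1-\nu_1)$ is easy to see numerically. Using the SiN-FS values (fitted $\sigma_\mathrm{2D}$ and $\sigma_\infty$ from Tab.~III, Poisson ratio from Tab.~S.2) as an illustration:

```python
# Naive biaxial-to-uniaxial stress conversion for SiN-FS
# (values taken as stated in the text's tables; illustrative check).
nu1 = 0.25           # Poisson ratio of SiN
sigma_2d = 3.1e9     # fitted as-grown biaxial stress, Pa
sigma_inf = 1.56e9   # fitted long-string limit, Pa

sigma_1d_naive = sigma_2d * (1 - nu1)
print(sigma_1d_naive / 1e9)  # GPa; noticeably larger than sigma_inf
```

The naive estimate of about $2.3$ GPa clearly overshoots the fitted $\sigma_\infty = 1.56$ GPa, consistent with the argument that the pre-release stress state in the string is no longer isotropic.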
\n\t\n\tAdditionally, we wish to note that high-resolution x-ray diffraction measurements performed on In$_{1-x}$Ga$_x$P wafers have shown a compositional variation in the direction normal to the substrate \\cite{Bueckle2018APL-StressControl}. This can furthermore lead to strain gradients inside the device layer. A similar observation has been made for 3C-SiC in Ref.~\\cite{Romero2020}. This suggests that a more thorough analysis of the length-dependent stress should assume a three-dimensional strain tensor accounting for a vertical strain gradient rather than a biaxial isotropic thin-film stress $\\sigma_{\\mathrm{2D}}$.\n\t\n\t\n\t\\section{Conclusion}\n\t\n\tIn conclusion, we report on the observation of a length dependence of the tensile stress in nanomechanical string resonators. This previously unappreciated effect is material independent, and experimentally observed on samples fabricated from four different wafers, featuring the three complementary material platforms amorphous silicon nitride, crystalline silicon carbide and crystalline indium gallium phosphide. We develop a simple elastic model that describes the observed $1\/L$ dependence of the tensile stress, and which allows us to explain the observed length dependence by a combination of the elastic reconfiguration of the device under the vertical and lateral releases of the one-dimensional nanostring. The one-dimensional tensile stress relaxes to a value considerably smaller than the initial two-dimensional stress value for long strings. For shorter strings, this value increases by approximately $50$\\,\\%. Besides the length, particularly the height of the supporting pedestal and the size of the undercut of the clamping pads influence the resulting stress. 
Thus, the geometric parameters of the nanostring allow us to control the tensile stress, and enable stress engineering of the quality factor of the device without the need for complex phononic metamaterial processing.\n\t\n\tData and analysis code are available online \cite{zenodo}.\n\t\n\t\begin{acknowledgments}\n\t\tWe thank Ralf Messmer for technical support with the AFM measurements.\n\t\tFinancial support from the European\n\t\tUnion's Horizon 2020 programme for Research and Innovation\n\t\tunder Grant Agreement No. 732894 (FET Proactive HOT) and the German Federal Ministry of Education\n\t\tand Research through Contract No. 13N14777 funded within\n\t\tthe European QuantERA cofund project QuaSeRT is gratefully\n\t\tacknowledged. \n\t\tWe further acknowledge support from the Deutsche Forschungsgemeinschaft via\n\t\tthe Collaborative Research Center SFB 1432 and via project WE 4721\/1-1, as well as project QT-6 SPOC of the Baden-W\\\"urttemberg Foundation.\n\t\t\n\t\tM.B. and Y.S.K. contributed equally to this work.\n\t\t\n\t\end{acknowledgments}\n\t\n\t\n\t\n\section{Wafers and material parameters}\t\n\t\label{A}\n\t\n\tIn Tab.~\ref{tab:wafers} the growth parameters of the four wafers employed in this work are summarized, stating the thickness of the device layer, sacrificial layer (if the system has one), substrate, and the corresponding supplier. 
The two SiN wafers were grown by Low Pressure Chemical Vapor Deposition (LPCVD), the SiC in a two-stage Chemical Vapor Deposition process, and the InGaP using Metal-Organic Chemical Vapor Deposition (MOCVD).\n\t\n\t\n\t\begin{table*}[h!]\n\t\t\caption{\n\t\t\t\label{tab:wafers}\n\t\t\tBasic parameters of the wafers on which the string resonators were fabricated.\n\t\t}\n\t\t\begin{ruledtabular}\n\t\t\t\begin{tabular}{lllll}\n\t\t\t\t& resonator\/device layer & sacrificial layer & substrate & source \\\\\n\t\t\t\t\hline\n\t\t\t\tSiN-FS \t\t\t& 100\,nm SiN\t& --- \t& SiO$_2$ & HSG-IMIT \\\\\n\t\t\t\tSiN-Si \t\t\t& 100\,nm SiN \t& 400\,nm SiO$_2$ \t& Si(100) & HSG-IMIT \\\\\n\t\t\t\tSiC \t\t\t& 110\,nm 3C-SiC \t& --- \t& Si(111) & NOVASiC \\\\\n\t\t\t\tInGaP \t& \multicolumn{1}{r}{100\,nm In$_{0.415}$Ga$_{0.585}$P} \t& \multicolumn{1}{l}{1000\,nm Al$_{0.85}$Ga$_{0.15}$As} \t& GaAs & CNRS \\\\\n\t\t\t\end{tabular}\n\t\t\end{ruledtabular}\n\t\end{table*}\n\t\n\t\newpage\n\t\n\tAll material parameters employed in the theoretical calculations, i.e. the Young's modulus $ E $, the shear modulus $ G $, the Poisson ratio $ \nu $, and the mass density $\rho$ of the respective materials, are listed in Tab.~\ref{tab:MaterialParameters}. \n\t\n\t\begin{table}[h]\n\t\t\caption{\n\t\t\t\label{tab:MaterialParameters}\n\t\t\tYoung's modulus, shear modulus, Poisson's ratio, and density of the materials used within the main text. All shear moduli were calculated via $ G = \frac{E}{2(1+\nu)} $. 
\n\t\t}\n\t\t\\begin{ruledtabular}\n\t\t\t\\begin{tabular}{crrrr}\n\t\t\t\t& \\multicolumn{1}{c}{Young's modulus $ E $} & \\multicolumn{1}{c}{Shear modulus $ G $}& \\multicolumn{1}{c}{Poisson's ratio $ \\nu $} &\\multicolumn{1}{c}{density $\\rho$} \\\\\n\t\t\t\t& \\multicolumn{1}{c}{\\textrm{(GPa)}} & \\multicolumn{1}{c}{\\textrm{(GPa)}} & & \\multicolumn{1}{c}{(g\/cm$ ^3 $)} \\\\\n\t\t\t\t\\hline\n\t\t\t\tSiN\t& 260 \\cite{DosterUnpub} & 104 & 0.25\\cite{maluf2002introduction} & 3.1 \\cite{maluf2002introduction}\t \\\\\n\t\t\t\tSiO$_2$\t& 73 \\cite{maluf2002introduction} & 31 & 0.17 \\cite{maluf2002introduction} &2.2 \\cite{maluf2002introduction}\t \\\\\n\t\t\t\tSi & 160 \\cite{maluf2002introduction} & 66 & 0.22 \\cite{maluf2002introduction} &2.4 \\cite{maluf2002introduction}\t \\\\\n\t\t\t\tSiC\t& 419 \\cite{li1987single}\\footnotemark[1] &\t184 & 0.14 \\cite{maluf2002introduction} &3.166 \\cite{henisch2013silicon}\\\\\n\t\t\t\tInGaP\t& 124 \\cite{Ioffe1999ShurEtAl-HandbookSeriesSemiconductorParametersVOL2}\\footnotemark[1] & 47 & 0.32\\cite{Ioffe1999ShurEtAl-HandbookSeriesSemiconductorParametersVOL2}\\footnotemark[2] & 4.418 \\cite{Ioffe1999ShurEtAl-HandbookSeriesSemiconductorParametersVOL2}\t \\\\\n\t\t\t\tGaAs\t& 75 \\cite{maluf2002introduction} & 29 & 0.31 \\cite{Ioffe1999ShurEtAl-HandbookSeriesSemiconductorParametersVOL2} &5.3 \\cite{maluf2002introduction}\t \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{ruledtabular}\n\t\t\\footnotemark[1]{Determined by utilizing the elastic constants and the stiffness tensor as described in \\cite{Bueckle2018APL-StressControl}.}\n\t\t\\footnotemark[2]{Calculated with $ \\nu = \\frac{c_{12}}{c_{11}+c_{12}} $ where $ c_{ij} $ are the elastic constants.}\n\t\t\\vspace{-4mm}\n\t\\end{table}\n\t\n\t\n\t\\section{Finite Element Method Simulations}\n\t\\label{B}\n\t\n\tThe geometric reconfiguration of the pedestal, the clamping pad and the string was explored in more detail by finite element method (FEM) simulations to validate our 
theoretical considerations. To this end, a single $10\,\mu$m long and $300$\,nm wide SiN-FS string resonator held in place by two SiO$_2$ pedestals on a SiO$_2$ substrate, shown in Fig.~\ref{fig:femSimulationPedestal}, was considered. The thickness of the device layer was set to $100$\,nm, a $500$\,nm undercut and a pedestal height of $1\,\mu$m were assumed, as well as an initial two-dimensional tensile stress of $2.9$\,GPa. A perfectly matched layer was included to mimic an infinite substrate, but did not noticeably influence the result.\n\tA close look at Fig.~\ref{fig:femSimulationPedestal} clearly reveals the shearing of the pedestal as well as the contraction of the clamping pad due to the stressed device layer. Also apparent is the resulting elongation of the string and the enhanced tensile stress in the string, which, for the case of the extremely short length of the simulated string, even exceeds the remaining tensile stress in the clamping pad. \n\tThese observations qualitatively support all assumptions of the elastic model put forward in the main text.\\\n\n\t\n\t\begin{figure}[h!]\n\t\t\includegraphics[width=0.99\linewidth]{femSimulationPedestal}\n\t\t\caption{ FEM simulations of a single string resonator with an initial stress of $2.9$\,GPa. Furthermore, we set a thickness of $100$\,nm for the device layer, a $500$\,nm undercut and a pedestal height of $1\,\mu $m.\n\t\t\t\label{fig:femSimulationPedestal} \n\t\t}\n\t\end{figure}\n\t\n\t\section{Measuring the pedestal contraction}\n\t\label{C}\n\t\n\tTo further support our elastic model, we have experimentally quantified the shearing of the pedestal using the test structures discussed in the following.\n\tAn array of square pedestals is fabricated on SiN-FS (see Fig.~\ref{fig:MeasurePedCont}(a) and (c)), the material for which the largest contraction is expected (cf. Tab.~\RNum{2}). 
As shown in Fig.~\ref{fig:MeasurePedCont}(a) and (b), the uncontracted width of a pedestal is $ 2 a $ and the pedestal-pedestal distance is $ d $. An anisotropic ICP-RIE etch step (etching depth of around 350~nm) allows for a contraction of the pedestal by $ 2 \Delta p $ to $ 2 a_{\mathrm{con}} = 2 a - 2 \Delta p$. Because the contraction is in the nanometer regime and the pedestal in the micrometer regime, we cannot simply image the whole pedestal, directly measure $ 2a $ and $ 2a_{\mathrm{con}} $, and calculate the contraction $ 2 \Delta p $, as this is beyond the resolution of our scanning electron microscope. However, as indicated schematically in Fig.~\ref{fig:MeasurePedCont}(b), the separation of two closely-spaced pedestals of the test structure can be mapped out with a higher resolution. Comparison of their spacing before and after the contraction, $ d $ and $ \tilde{d} $, respectively, indeed yields an increase of the gap as shown in Fig.~\ref{fig:MeasurePedCont}(d), which indicates a contraction of the clamping structure. For our sample chip we measure an average value of $ d = 793(6) $~nm and $ \tilde{d} = 824 (7) $~nm (over the entire gap, not just the section close to the center shown in Fig.~\ref{fig:MeasurePedCont}(d)), corresponding to a contraction of $ \Delta p = \frac{\tilde{d}-d}{2} = 15(7)~ $nm. The theoretical value of $ \Delta p = 8~ $nm is just within the uncertainty of the measured value. \n\t\n\t\n\t\begin{figure}[h!]\n\t\t\includegraphics[width=0.9\linewidth]{MeasurePedestalContraction}\n\t\t\caption{ Array of pedestals (a) and a close-up (b) including length annotations. Dashed lines and solid lines correspond to the pedestal before and after contraction, respectively. (c) SEM image of the array structure before it was etched. 
(d) SEM image of the gap between two pedestals before (left) and after (right) contraction.\n\t\t\t\\label{fig:MeasurePedCont} \n\t\t}\n\t\\end{figure}\n\t\n\t\n\t\\clearpage\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaqck b/data_all_eng_slimpj/shuffled/split2/finalzzaqck new file mode 100644 index 0000000000000000000000000000000000000000..b1b1e65845451d0eaa137a06ea27fadf13ccf3f9 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaqck @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \n\nAlthough the use of low clinker ratio cements is increasing, the development of\nnew clinker types remains a reliable strategy to reduce greenhouse gas emissions\nand improve the properties of cement for concrete structure applications. In a\ncontext of durable design, considerable efforts are being made for a better\nunderstanding of ordinary Portland cement (OPC) hydration. However, considering\nthe hydration of all clinker phases together would make the study highly\ncomplex. Most of the time, tricalcium silicate (\\ce{C3S}) has received\nparticular attention, due to its predominance in OPC clinker (about\n\\SI{50}{\\percent} to \\SI{70}{\\percent} by mass). \\ce{C3S} is the main phase\nresponsible for OPC setting and strength development. The reactivity and early\nhydration of tricalcium silicate (\\ce{C3S}) are a very relevant topic towards a more sustainable design of OPC. The hydration itself\nencompasses several processes such as dissolution, phase growth, diffusion and\ncomplexation \\cite{Bullard2011}. Atomistic simulation methods have shown good\ncapabilities in predicting the reactivity of mineral surfaces and the behavior\nof solid\/liquid interfaces. Towards a better understanding of the phases and processes occurring in cementitious systems, many atomistic force fields have been optimized \\cite{Mishra2017}. 
Molecular dynamics (MD) and density\nfunctional theory (DFT) have already been used to compute surface energies and\ndetermine Wulff shapes for monoclinic \\ce{M3 C3S}\n\\cite{Manzano2015,Durgun2014,Mishra2013}. In recent studies, molecular and\ndissociative adsorption of a single water molecule were investigated on multiple\nsurface planes of the \\ce{M3 C3S} polymorph \\cite{Zhang2018b,Zhang2019,Qi2019}.\nZhang et al. have shown that the adsorption energy decreases with an increasing\namount of adsorbed molecules. Reactive MD studies indicate that after\napproximately \\SI{0.3}{\\ns}, the structural properties of the surface are lost,\nmaking the further hydration process independent of the crystallographic surface\nplane, and driven by proton hopping mechanisms towards the bulk\n\\cite{Manzano2015,Huang2015}. No correlation was found between water adsorption\nenergy and surface energy when using static computational methods\n\\cite{Manzano2015}. However, the proton diffusion after the initial stage of\nhydration was related to the location of the valence band maximum (VBM), which\nis mainly constituted of oxygen 2p orbitals \\cite{Huang2015,Huang2014}. Previous\nDFT studies reveal that the local density of states of the VBM is close to the\noxygen anions for \\ce{C3S}, and close to oxygen in silicates for \\ce{C2S}\n\\cite{Durgun2012}. The higher reactivity of \\ce{C3S}\nwhen compared to \\ce{C2S} is explained by the difference in their electronic\nstructure, arising from the presence of oxygen anions in \\ce{C3S}. Calculations\nof single water molecule sorption on a (100) surface of \\ce{T1 C3S} showed\nthat chemisorption occurred only in regions close to oxide ions. 
This behaviour\nwas attributed to the higher degree of freedom of oxide ions when compared to\noxygen in silicate \\cite{Huang2015}.\n\nProton transfer (PT) frequency strongly depends on hydrogen bond (HB)\nfluctuations due to thermal motion \\cite{Tocci2014}, and thus cannot be analyzed\nby a \\SI{0}{\\kelvin} DFT investigation. Furthermore, such a phenomenon cannot\nbe captured considering the adsorption of a single water molecule. A previous\ncomputational study found structural changes, as well as a large increase in the PT\nrate from a solid\/water monolayer interface to a thicker water film\n\\cite{Tocci2014}. Towards a better understanding of the \\ce{C3S}\/water\ninterface, we performed an \\emph{ab initio} MD (AIMD) simulation, considering a\nwater film thick enough to account for fluctuations of the HB network. AIMD is a\npowerful tool that has been used extensively to investigate the structural and\ndynamical behavior of water\/oxide interfaces at the DFT level of theory\n\\cite{Bjoerneholm2016,Tocci2014,Ngouana-Wakou2017,Rimsza2016,Cimas2014,Laporte2015,Lee2015,Liu2008}.\nHowever, only a few AIMD studies have been conducted on cementitious materials\n\\cite{Churakov2009,Churakov2009a,Churakov2017}. As far as we know, this is the\nfirst time that the very early hydration stage of \\ce{C3S} is investigated using\nAIMD. In particular, the structure of water and the PT dynamics are analysed and\nquantified, and the results are compared with reactive molecular dynamics\ncalculations, performed for that purpose.\n\n\\section{Computational Methods}\n\nA simulation of the \\ce{C3S}\/water interface was performed on the symmetric,\nCa-rich, (040) plane (as in \\cite{Mishra2013}). The \\ce{M3 C3S} model employed\nwas refined from XRD analysis by Mumme et al. \\cite{Mumme1995}. 
The unit\ncell of 54 atoms was optimized at the DFT level with the Quantum ESPRESSO code,\nusing the PBE exchange-correlation functional \\cite{Perdew1996,Perdew1997} with\na Grimme D2 correction for van der Waals interactions \\cite{Grimme2006}. The\nkinetic energy cutoffs for wave functions and charge density were\n\\SI{45}{\\rydberg} and \\SI{405}{\\rydberg}, respectively. The Monkhorst-Pack\nmethod was used for the integration of the first Brillouin zone, with a\n\\num{3x3x3} k-point mesh. During the optimization process, the atoms were\nallowed to relax. In order to build a surface model, the optimized unit cell was\nconverted to an orthorhombic supercell of 162 atoms, with lattice parameters $a$\n= \\SI{12.28}{\\angstrom}, $b$ = \\SI{7.09}{\\angstrom} and $c$ =\n\\SI{25.59}{\\angstrom}. This transformation was performed with the Atomsk code,\nwhich searches for linear combinations of the unit cell vectors that produce vectors\naligned with the Cartesian axes \\cite{Hirel2015}. The optimized monoclinic cell and\ncorresponding orthorhombic supercell are represented in \\cref{fig:cells}.\n\n\\begin{figure}\n\\centering\n\\includegraphics{figure1}\n\\caption{\\ce{M3} monoclinic cell, optimized from the model by Mumme et al.\n\\cite{Mumme1995} (right). Corresponding orthorhombic supercell (left). Color\ncode: calcium cations in green, oxygen anions and silicate oxygen in red,\nsilicon atoms in yellow.}\n\\label{fig:cells}\n\\end{figure}\n\nThe surface model was created from three orthorhombic supercells, with a\n\\SI{20}{\\angstrom} thick vacuum layer, thus resulting in a \\SI{12.28 x 25.59\nx 21.28}{\\angstrom} structure. The relaxation of the surface and the AIMD\nsimulation were performed with the CP2K code, using a PBE functional, and a\ncombination of Gaussian and plane wave basis functions (GPW), with Grimme D2\ncorrection. A \\SI{400}{\\rydberg} planewave cutoff was adopted, and the\nreciprocal space was sampled only at the $\\Gamma$ point. 
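As an aside, the fractional coordinates of the \num{3x3x3} Monkhorst-Pack mesh mentioned above follow the standard formula $u_r = (2r - q - 1)/(2q)$ along each reciprocal axis. The short Python sketch below is purely illustrative and not part of the original workflow:

```python
from itertools import product

def monkhorst_pack(q1, q2, q3):
    """Fractional Monkhorst-Pack k-point coordinates u_r = (2r - q - 1)/(2q)."""
    axes = [[(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
            for q in (q1, q2, q3)]
    return list(product(*axes))

kpts = monkhorst_pack(3, 3, 3)
print(len(kpts))  # 27 k-points for a 3x3x3 mesh
```

For $q = 3$ each axis yields $\{-1/3, 0, 1/3\}$, so this mesh contains the $\Gamma$ point.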
To relax the surface, the periodicity was applied in the in-plane directions, and removed in the\ndirection of the vacuum. The atoms of the surface were allowed to relax at\nthe DFT level, ensuring that almost no change occurs within the middle of\nthe slab. The interface model was created by adding a \\SI{15}{\\angstrom} thick\nlayer of water (157 molecules), with a \\SI{15}{\\angstrom} vacuum region. The\nstructure of the \\ce{C3S}(040)\/water interface is depicted in \\cref{fig:system}.\n\n\\begin{figure}\n\\centering\n\\includegraphics{figure2}\n\\caption{Structure of the investigated \\ce{C3S}(040)\/water interface.}\n\\label{fig:system}\n\\end{figure}\n\nWhile the atoms of the mineral surface were kept fixed, the water molecules were\nallowed to relax on the surface during a \\SI{2}{ns} classical MD run in the NVT\nensemble at \\SI{300}{\\kelvin}, using the INTERFACE FF parameters for \\ce{C3S}\n\\cite{Mishra2013} and an SPC model for water \\cite{Berendsen1981}. In order to\nminimize the computational time, the bottom layer was removed so that the\nremaining slab was composed of two orthorhombic supercells (\\SI{\\sim\n14}{\\angstrom} thick) and during the AIMD run, the lower supercell, considered\nas the bulk, was fixed. Afterwards, an \\SI{18}{\\ps} AIMD run was performed within\nthe Born-Oppenheimer approximation, in the canonical ensemble, with a\nNose-Hoover thermostat, integrating the equations of motion with a \\SI{0.5}{\\fs}\ntimestep. Based on the evolution of the energy of the system, it was considered\nthat equilibrium was reached after \\SI{6}{\\ps}, and the remaining\nsimulation time was used for analysis of equilibrium properties. A slightly higher temperature of\n\\SI{360}{\\kelvin} (compared to standard conditions) was used to balance the low\ndiffusivity of water using the PBE functional. Deuterium masses were used for\nprotons to minimize the vibrational frequency of nuclei. 
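The size of this effect can be estimated with the harmonic-oscillator relation $\omega \propto \sqrt{k/m}$: swapping H for D lowers bond-stretch frequencies by roughly 30\%, which is what relaxes the timestep requirement. A minimal illustrative sketch (not taken from the original study):

```python
import math

# Harmonic-oscillator estimate: omega scales as sqrt(k/m), so replacing
# hydrogen by deuterium scales stretch frequencies by sqrt(m_H / m_D).
m_H, m_D = 1.008, 2.014  # approximate atomic masses in u
ratio = math.sqrt(m_H / m_D)
print(round(ratio, 3))  # ~0.707, i.e. frequencies drop by ~30%
```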
It is worth highlighting that such a substitution could decrease the frequency of PT, because of the lower vibrational frequency of deuterium nuclei in comparison to hydrogen nuclei. However, this method has already been used in the literature to prevent energy drifts, and it is understood that its benefits outweigh its drawbacks\n\\cite{Leung2006,Tocci2014}. A reactive molecular dynamics simulation was\nperformed using the ReaxFF, with the current optimized set of parameters for\nCa\/O\/H\/Si elements \\cite{Fogarty2010,Manzano2012,Manzano2012a}. The simulation\nmethod and parameters, as well as the system size, are in accordance with\npreviously reported calculations \\cite{Manzano2015}.\n\n\\section{Results}\n\\subsection{Water structure}\n\nIn this article, Oi refers to oxygen anions, Os refers to oxygens in silicates,\nand Odw refers to oxygens resulting from the dissociation of water molecules.\nAt the very first steps of the simulation, three oxide ions Oi from equivalent\nsites are protonated. The hydration model for \\ce{C3S} proposed by\nPustovgar et al. considers that protonation of oxide ions occurs before\nprotonation of silicates \\cite{Pustovgar2017}. Although hydroxides are more\nstable than silanol groups, our simulation indicates that on the considered\n(040) Ca-rich surface, protonation of Oi only occurs on sites close to\nsilicates. The other superficial oxide ions are shielded by four\ncalcium cations, hindering any protonation reaction. The negatively charged region allows water molecules to form hydrogen bonds with oxide ions on one side and with oxygens in silicates on the other side, thus leading to protonation of Oi (see\n\\cref{fig:oxide_protonation}).\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure3}\n\t\\caption{Atomic structure of the (040) surface. Hydrogens in hydroxides H-Oi are colored in white. The snapshot shows the position of the water\n\tmolecule before protonation of the oxide ion. 
The ions of the surface are\n\tin the same layer, except for the calcium indicated by blue circles, having\n\ta perpendicular coordinate $z$ \\SI{\\sim 2}{\\angstrom} larger.}\n\t\\label{fig:oxide_protonation}\n\\end{figure}\n\nThe number of H-Os groups formed on silicates tends to stabilize after \\SI{\\sim\n1}{\\ps} whereas the number of H-Odw groups stabilizes after \\SI{\\sim 0.25}{\\ps}\n(see \\cref{fig:ho-coverage}). Both hydroxyl group populations fluctuate during the whole\nsimulation due to proton transfer. Conversely, the hydroxides H-Oi formed on\noxygen anions Oi are very stable and no backward PT occurs. In our simulation\nusing the ReaxFF, within the timescale of \\SI{18}{\\ps}, a steady state is\nreached very quickly, as in the AIMD simulation. However, all hydroxyl groups\nformed in the ReaxFF simulation are very stable, and the currently developed set\nof parameters for Ca\/O\/H\/Si fails to describe the PT dynamics between water and\nOi\/Os atoms that is observed in our AIMD simulation. The difficulty of optimizing\nparameters for ReaxFF lies in the fact that the same set of parameters is\nemployed for each element \\cite{Senftle2016}. This means that the parameters are\nthe same independently of the environment. Other approaches based on the\nempirical valence bond model have succeeded in reproducing \\ce{OH-} solvation and\ntransport in water solutions \\cite{Ufimtsev2009}. The implementation of a PT\nmodel almost doubled the diffusion of \\ce{OH-} ions when compared to a classical\nmodel. The hydroxyl coverage of each hydroxyl type is close to the result of the\nAIMD simulation. Therefore, within the timescale of the simulation, the ReaxFF\nrepresents the protonation state of the (040) surface in good agreement\nwith the AIMD simulation. The average total hydroxyl coverage over the last\n\\SI{12}{\\ps} of simulation is \\SI{5.36(37)}{\\hopernmsquared} for the AIMD\nsimulation, and \\SI{5.17(1)}{\\hopernmsquared} for the ReaxFF simulation. 
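The coverages quoted above are simple counts per unit area. With the in-plane cell used here ($a$ = \SI{12.28}{\angstrom}, $b$ = \SI{7.09}{\angstrom}), a snapshot count converts as in this illustrative sketch; the count of 5 OH groups is a made-up example, not a value from the simulation:

```python
def coverage_per_nm2(n_oh, a_angstrom, b_angstrom):
    """Hydroxyl surface coverage (OH/nm^2) for an a x b in-plane cell."""
    area_nm2 = (a_angstrom / 10.0) * (b_angstrom / 10.0)  # A^2 -> nm^2
    return n_oh / area_nm2

# Hypothetical snapshot count of 5 OH groups on the ~0.87 nm^2 cell:
print(round(coverage_per_nm2(5, 12.28, 7.09), 2))
```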
These\nvalues are also in agreement with previous investigations of \\ce{C3S} hydration\nusing the ReaxFF \\cite{Manzano2015,Huang2015}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{figure4}\n\t\\caption{Number of hydroxyl groups formed at the surface on oxide ions\n\t(Oi-H), oxygen in silicates (Os-H), and from water dissociation (Odw-H) for\n\t(a) the AIMD simulation, (b) the ReaxFF simulation.}\n\t\\label{fig:ho-coverage}\n\\end{figure}\nThe atomic density profile of water oxygen and hydrogen atoms, along the axis\nperpendicular to the surface, is reported in \\cref{fig:1d_density_snap}.\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics{figure5}\n\t\\caption{Atomic density profile of water molecules along the z axis for\n\tAIMD. The origin adopted is the average coordinate of the uppermost oxygen\n\tsilicate layer. The layering observed results from the hydrogen bond network\n\tcreated by the strong interaction between the water molecules and the\n\tionic surface.}\n\t\\label{fig:1d_density_snap}\n\\end{figure*}\nThe layered structure of the interfacial water results from the effects of\nexcluded volume, the electrostatic force field, and the hydrogen bonding network.\nPrevious investigations based on classical MD simulations showed that this\nlayering is lost with protonation of the surface \\cite{Claverie2019}. The\nhydrogen peak closest to the surface is at the average position of oxygen\nin silicates Os ($z = 0$), and corresponds to chemisorbed H. The thickness of\nthe layered region is approximately the same as in our previous classical MD\ninvestigation: \\SIrange{\\sim 5}{6}{\\angstrom} \\cite{Claverie2019}. \n\nThe radial distribution functions (RDF) of H-Oi, H-Os and Ow-Ca pairs are\nplotted in \\cref{fig:rdf}. \n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{figure6}\n\t\\caption{Radial distribution functions of H-Oi, H-Os, and Ow-Ca pairs employing AIMD, ReaxFF, and IFF \\cite{Claverie2019}. 
The inset shows a zoom out ($\\times 0.5$).}\n\t\\label{fig:rdf}\n\\end{figure}\nFor H-Oi and Ow-Ca pairs, sharp peaks rise at \\SI{\\sim 0.97}{\\angstrom} and\n\\SI{\\sim 2.50}{\\angstrom} respectively, indicating superficial hydroxides and\nthe first coordination shell of the calcium cations. No second coordination shell is\nobserved for either pair. For the H-Os pairs, two peaks stand at \\SI{\\sim\n1.02}{\\angstrom} and \\SI{\\sim 1.58}{\\angstrom}, corresponding to hydroxyl groups\nformed on silicates and H-bonds between water and oxygen in silicates. The RDF\nobtained in the ReaxFF simulation is very similar, with two main differences: in\none case, only one correlation peak is observed for the H-Os pair, suggesting\nan Os protonation, and the coordination peak for Ow-Ca pairs is split in two,\nindicating two different distances of correlated water molecules. The RDF for the dry\n\\ce{C3S}\/water interface, obtained from a previous MD investigation\n\\cite{Claverie2019}, is plotted for comparison. In such a classical\nsimulation, no PT occurs and the first coordination peak corresponds to H-bonds\nbetween water and superficial anions (r \\SI{\\sim 1.53}{\\angstrom}).\n\nThe orientation of water molecules in contact with the surface is a\ncharacteristic of the hydrophilic\/hydrophobic behavior of the surface. The\nprobability distribution of the angle $\\theta$ between the water dipole moment\nand the $z$ axis is depicted in \\cref{fig:angdis_water} a). Within the contact\nlayer ($z$ \\SI{< 1.6}{\\angstrom}), most of the water molecules have $\\theta$\n\\SIrange[range-phrase = --]{\\sim 20}{50}{\\degree} or $\\theta$\n\\SIrange[range-phrase = --]{\\sim 120}{160}{\\degree}, meaning that their dipole\nmoments point preferentially towards or away from the surface. This feature is\ncharacteristic of hydrophilic surfaces \\cite{Deryagin1987}. 
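The dipole angle $\theta$ analysed above can be extracted from atomic coordinates by taking the angle between the $z$ axis and the vector from O to the midpoint of the two H atoms. The following sketch uses a hypothetical water geometry and is only meant to illustrate the definition:

```python
import math

def dipole_angle_deg(o, h1, h2):
    """Angle (degrees) between the +z axis and the water dipole direction,
    approximated by the vector from O to the midpoint of the two H atoms."""
    d = [(h1[i] + h2[i]) / 2.0 - o[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    return math.degrees(math.acos(d[2] / norm))

# Hypothetical geometry: both H above O, so the dipole points along +z.
theta = dipole_angle_deg((0.0, 0.0, 0.0), (0.76, 0.0, 0.59), (-0.76, 0.0, 0.59))
print(round(theta, 1))  # 0.0
```

A molecule lying flat (both H at the same height as O) gives $\theta = 90\degree$, the parallel orientation mentioned in the text.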
Less frequently, water\nmolecules forming a single hydrogen bond with silicates orient their dipole\nmoment parallel to the surface. Water molecules in the contact layer orient\naccording to the charge of superficial ionic species. Thus, two regions can be\ndistinguished: one where the water dipole is oriented upward and H atoms\ncoordinate with Os, and a second where the water dipole is oriented downward and\nOw atoms coordinate with calcium cations. These regions are mapped on the\nsurface in \\cref{fig:angdis_water} b) by collecting $\\theta$ and the $x$ and $y$\ncoordinates of water molecules within \\SI{3}{\\angstrom} from the surface, during\nthe whole simulation. The effect of the topology of \\ce{C3S} surfaces on the\nstructure of water molecules has already been reported in a previous molecular\ndynamics investigation \\cite{Alex2017}.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure7}\n\t\\caption{(a) Probability distribution of the angle $\\theta$ between the\n\twater dipole moment and the z axis, perpendicular to the surface. Lighter\n\tregions correspond to higher probabilities. (b) Density mapping of $\\theta$\n\tfor water molecules within \\SI{3}{\\angstrom} from the surface. Color code:\n\tCa in green, Os and Oi in red, Si in yellow, H in hydroxides H-Oi in\n\twhite.}\n\t\\label{fig:angdis_water}\n\\end{figure}\nThe probability distribution of H-Os and H-Odw hydroxyl groups is plotted in\n\\cref{fig:map_oh}. H-Odw groups are principally located on Ca-rich, positively\ncharged regions, but also close to protonated silicates.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\columnwidth]{figure8}\n\\caption{Normalized probability distribution of H-Os (blue) and H-Odw (red) hydroxyl groups.}\n\\label{fig:map_oh}\n\\end{figure}\n\n\\subsection{Proton transfer analysis}\n\nThe frequency $\\nu$ of PT between water molecules and Os-H and Odw-H groups is\nreported in \\cref{fig:frequency}. 
The lifetime $\\tau$ is defined as\nthe time for a proton to return to the oxygen atom to which it was initially\nbonded.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure9}\n\t\\caption{Frequency $\\nu$ of PT as a function of its lifetime $\\tau$ (left). Evolution of the total number of hops within the simulation time (right).}\n\t\\label{fig:frequency}\n\\end{figure}\nThe inset plot shows the evolution of proton hops during the simulation. Hops\nbetween Ow and Os reach a plateau after \\SI{\\sim 8}{\\ps}. Conversely, hops\nwithin Odw-Ow pairs seem to increase steadily during the whole simulation and\nare about 5 times more frequent than within Os-Ow pairs during the first\n\\SI{8}{\\ps}. The typical lifetime of PT events is shorter than\n\\SI{100}{\\fs}, which corresponds to the time scale of Eigen-Zundel structure\ninterconversion in bulk water, obtained by femtosecond vibrational spectroscopy\n\\cite{Woutersen2006}. Lifetimes of approximately the same duration were reported\nfor PT at the water\/ZnO($10\\bar{1}0$) interface \\cite{Tocci2014}. The longest\nlifetimes reported are \\SI{\\sim 0.2}{\\ps} for transfer between Ow and Os, and\n\\SI{\\sim 2}{\\ps} for transfer between Ow and Odw.\n\nThe free energy profiles of PT between Ow and Os and between Ow and Odw pairs\nwere obtained from a standard method as follows: $F = -k_BT \\log P$ (see \\cref{fig:free-energy}) \\cite{Tocci2014, Ufimtsev2009, Zhu2002}.\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics{figure10}\n\t\\caption{Free energy contour plot of PT between Ow and Os as a function of\n\tthe distance $d_{\\text{Oa-Ob}}$ between oxygen atoms, and of\n\t$\\delta_{\\text{a-b}} = d_{\\text{Oa-H}} - d_{\\text{Ob-H}}$. The red circles\n\tshow the local free energy minima, corresponding to the most stable\n\tconfigurations before and after the proton jump. 
The red square is the\n\tsaddle point, where the proton is equidistant from both oxygen atoms.}\n\t\\label{fig:free-energy}\n\\end{figure*}\n$P$ is a probability distribution function of the distance $d_{\\text{Oa-Ob}}$\nbetween two oxygen atoms Oa and Ob, and of $\\delta_{\\text{a-b}} =\nd_{\\text{Oa-H}} - d_{\\text{Ob-H}}$. In the case of PT between Ow and Os, the\nmolecular configuration of water forming an H-bond with Os is more stable than\nthe dissociative configuration. The free energy profile of PT between Ow and Odw\nsuggests that the molecular adsorption of water on the surface is more stable\nthan the dissociated form. This observation is contrary to previous DFT and\nreactive MD calculations on a single water molecule, where dissociative\nadsorption energies were generally larger than molecular adsorption energies,\nresulting in more stable configurations \\cite{Zhang2019, Zhang2018b,Qi2019,\nManzano2015}. This consolidates the idea that static calculations on single\nmolecule adsorption cannot accurately describe the properties of a solid\/water\ninterface. Tocci and Michaelides also reported considerable differences in PT\nrate between a monolayer and a thicker water film \\cite{Tocci2014}. These\ndifferences arise from the decrease of the free energy barrier of proton\ntransfers induced by H-bond fluctuations. The free energy barrier is \\SI{\\sim\n3.24}{\\kt} for PT from Ow to Os and \\SI{\\sim 2.24}{\\kt} from Os to Ow. The\nbarrier is \\SI{\\sim 3.30}{\\kt} for transfer from Ow to Odw and \\SI{\\sim\n3.60}{\\kt} for the reverse reaction. These results suggest that hydroxides\nformed by water dissociation are more stable than silanol groups. However, the\nenergy barriers for the creation of the two hydroxyl groups are almost equal.\n\nElectron density difference analysis makes it possible to map the distribution of electrons\ninvolved in PT, and more generally in the adsorption. 
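The free energy estimate $F = -k_BT \log P$ used for the profiles above amounts to histogramming the sampled collective variables, normalizing to a probability, and taking a logarithm. The sketch below illustrates this in one dimension with made-up samples (not the simulation data):

```python
import math
from collections import Counter

K_B_T = 1.0  # report free energies in units of k_B T

def free_energy_profile(samples, bin_width=0.1):
    """Histogram a collective variable, normalize to P, take F = -k_B T ln P,
    and shift so the global minimum sits at zero."""
    counts = Counter(round(x / bin_width) for x in samples)
    total = sum(counts.values())
    f = {b * bin_width: -K_B_T * math.log(c / total) for b, c in counts.items()}
    f_min = min(f.values())
    return {x: v - f_min for x, v in f.items()}

# Hypothetical trajectory: a state visited e times less often lies ~1 k_B T higher.
samples = [0.0] * 272 + [1.0] * 100
profile = free_energy_profile(samples)
print(round(profile[1.0], 2))  # ~1.0
```

The barriers quoted in the text are read off such (two-dimensional) profiles at the saddle point.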
This analysis was carried out by\nperforming static DFT calculations on the total system, as well as on \\ce{C3S} and\nwater independently, for the system configuration at \\SI{5}{\\ps}. The electron\ndensity difference $\\Delta \\rho$ was calculated as follows:\n\\begin{equation}\n\\Delta \\rho = \\rho_{\\ce{C3S}\/\\ce{H2O}} - \\rho_{\\ce{C3S}} -\\rho_{\\ce{H2O}}\n\\end{equation}\nwhere $\\rho_{\\ce{C3S}\/\\ce{H2O}}$ corresponds to the electron density of the\ninterface system, and $\\rho_{\\ce{C3S}}$ and $\\rho_{\\ce{H2O}}$ are the electron\ndensities of \\ce{C3S} and water alone, respectively. A positive value of\n$\\Delta \\rho$ indicates electron accumulation, while a\nnegative value indicates electron depletion.\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics{figure11}\n\t\\caption{Snapshots of the isosurface of the electron density difference. Golden and cyan isosurfaces represent positive and negative $\\Delta \\rho$, respectively. a) A water molecule between a silicate and a silanol group. b1) Electron density delocalization during a proton exchange between a molecular water and a hydroxide. b2) Electron abundance and depletion regions around a hydroxide created upon water dissociation.}\n\t\\label{fig:edensity}\n\\end{figure*}\n\nThe proton transfers occurring at the surface create an electron\ndelocalization, and thus high and low electron density regions. Water molecules\nact as charge carriers, and the electron depletion or gain depends strongly on the\nlocation of the molecule (see \\cref{fig:edensity} a and b1). High electron\ndensity is observed around the silicate oxygen close to the water molecule,\ncreating a depletion region on the silicon atom. A large depletion region is\nobserved around the silanol group (\\cref{fig:edensity} a). Charge depletion\nregions are observed around the H of hydroxyl groups. Their magnitude increases in\nthis order: H-Oi $<$ H-Odw $<$ H-Os. 
In other words, the magnitude of the\ndepletion regions on H decreases as a function of the stability of the hydroxyl\ngroup. A large depletion region indicates a greater charge separation, and a\nmore ionic bond, while a small depletion region reveals a more covalent bond.\n\n\\section{Conclusion}\nThe very early hydration of the (040) surface of \\ce{C3S} was investigated\nthrough an \\SI{18}{\\ps} AIMD simulation. As a first observation, only 1\/3 of the\noxide ions on the surface were protonated during the whole simulation. The\nhydroxides formed are highly stable and no proton exchange was observed.\nAlthough the oxide ion is very unstable in water, we found that its environment\non the surface is an important factor for the creation of hydrogen bonds with\nwater molecules and for protonation to occur. Thus, the pKa of hydroxide and\nsilicic acid in solution cannot accurately predict the protonation state of the\nsurface during the very early stage of hydration. The structure of water at\nthe interface, resulting from the formation of the hydrogen bond network, is\nvery similar to that of our previous classical molecular dynamics study, with a\nthickness of the layered region of approximately \\SIrange{5}{6}{\\angstrom} from\nthe surface. The (040) surface is composed of Ca-rich regions (positively\ncharged) and Si-rich regions (negatively charged). Water molecules in the\ncontact layer orient their dipole moment in accordance with the surface charge,\neither forming H-bonds in Si-rich regions or creating strong Ca-Ow interactions\nin Ca-rich regions. Energy barrier analysis suggests that the molecular\nadsorption of water on the \\ce{C3S} surface is more stable than dissociative\nadsorption. Based on proton transfer energy analysis, the hydroxyl groups formed\nwere classified in order of stability as follows: H-Oi $>$ H-Odw $>$ H-Os. From\nthe electron density difference, regions of electron accumulation and depletion were\nobserved. 
These observations revealed that the magnitude of the electron\ndepletion region upon adsorption is smaller for more stable hydroxyl groups. \n\n\\section*{Acknowledgments}\nThe authors acknowledge Brazilian science agencies CAPES (PDSE process\nn\\si{\\degree}88881.188619\/2018-01) and CNPq for financial support, as well as\nHegoi Manzano and Gabriele Tocci for helpful discussions. The authors also thank\nthe anonymous reviewers for their careful reading of our manuscript and their\nmany insightful comments and suggestions.\n\t\n\\bibliographystyle{elsarticle-num}\n\\biboptions{sort&compress}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}}\n\\newcommand{\\subsec}[1]{\\subsection{#1}}\n\\newcommand{{\\cal O}}{{\\cal O}}\n\\newcommand{{\\cal N}}{{\\cal N}}\n\\newcommand{{\\cal A}}{{\\cal A}}\n\\newcommand{{\\cal N}}{{\\cal N}}\n\\def\\slashed#1{{\\ooalign{\\hfil\\hfil\/\\hfil\\cr $#1$}}}\n\\newcommand{\\gb}[1]{\\langle#1]}\n\\newcommand{\\rf}[1]{(\\ref{#1})}\n\\def{\\cal M}{{\\cal M}}\n\\def\\lambda{\\lambda}\n\\def\\tilde\\lambda{\\tilde\\lambda}\n\\def\\Bbb{CP}{\\Bbb{CP}}\n\\def\\Bbb{RP}{\\Bbb{RP}}\n\n\n\n\\begin{document}\n\\baselineskip=16pt\n\\pagestyle{plain}\n\\setcounter{page}{1}\n\n\\begin{titlepage}\n\n\\begin{flushright}\nhep-th\/0504194 \\\\\n\\end{flushright}\n\\vfil\n\n\\begin{center}\n{\\huge Lectures on Twistor Strings}\\\\ \\huge{ and} \\linebreak\n\\huge{Perturbative Yang-Mills Theory}\n\\end{center}\n\n\\vfil\n\\begin{center}\n{\\large Freddy Cachazo and Peter Svr\\v{c}ek\\footnote{On leave from\nPrinceton University.}\n}\\end{center}\n\n$$\\seqalign{\\span\\hfil$\\displaystyle{##}$\\, & \\sl\\span\\hbox{##}}{\n & Institute for Advanced Study,\n Princeton, NJ 08540, USA \\cr\n}$$\n\\vfil\n\n\\begin{center}\n{\\large Abstract}\n\\end{center}\n\n\\noindent Recently, Witten proposed a topological string theory in\ntwistor space that is dual to a weakly coupled gauge theory. 
In\nthese lectures we will discuss aspects of the twistor string\ntheory. Along the way we will learn new things about Yang-Mills\nscattering amplitudes. The string theory sheds light on Yang-Mills\nperturbation theory and leads to new methods for computing\nYang-Mills scattering amplitudes\\footnote{These lecture notes are\nbased on lectures given by the second author at RTN Winter School\non Strings,\n Supergravity and Gauge Theories, 31\/1-4\/2 2005 SISSA, Trieste, Italy}.\n\n\\vfil\n\\begin{flushleft}\nApril, 2005\n\\end{flushleft}\n\\end{titlepage}\n\\newpage\n\\renewcommand{\\thefootnote}{\\arabic{footnote}}\n\\setcounter{footnote}{0}\n\\renewcommand{\\baselinestretch}{1.2} \n\\tableofcontents\n\\def\\overline{\\overline}\n\n\\section{Introduction}\n\nThe idea that a gauge theory should be dual to a string theory\ngoes back to 't Hooft \\cite{'tHooft:1973jz}. 't Hooft considered\n$U(N)$ gauge theory in the large $N$ limit while keeping\n$\\lambda=g_{YM}^2 N$ fixed. He observed that the perturbative\nexpansion of Yang-Mills can be reorganized in terms of Riemann\nsurfaces, which he interpreted as evidence for a hypothetical\ndual string theory with string coupling $g_s\\sim 1\/N.$\n\nIn 1997, Maldacena proposed a concrete example of this duality\n\\cite{Maldacena:1997re}. He considered the maximally\nsupersymmetric Yang-Mills theory and conjectured that it is dual\nto type IIB string theory on $AdS_5\\times S^5.$ This duality led\nto many new insights from string theory about gauge theories and\nvice versa. At the moment, we have control over the duality only\nfor strongly coupled gauge theory. This corresponds to the limit\nof large radius of $AdS_5\\times S^5$ in which the string theory is\nwell described by supergravity. However, QCD is asymptotically\nfree, so we would also like to have a string theory description of\na weakly coupled gauge theory.\n\nIn weakly coupled field theories, the natural object to study is\nthe perturbative $S$ matrix. 
The perturbative expansion of the $S$\nmatrix is conventionally computed using Feynman rules. Starting\nfrom early studies of de Witt \\cite{DeWitt:1967uc}, it was\nobserved that scattering amplitudes show simplicity that is not\napparent from the Feynman rules. For example, the maximally\nhelicity violating (MHV) amplitudes can be expressed as simple\nholomorphic functions.\n\nRecently, Witten proposed a string theory that is dual to a weakly\ncoupled ${\\cal N}=4$ gauge theory \\cite{Witten:2003nn}. The\nperturbative expansion of the gauge theory is related to\nD-instanton expansion of the string theory. The string theory in\nquestion is the topological open string B-model on a Calabi-Yau\nsupermanifold $\\Bbb{CP}^{3|4},$ which is a supersymmetric\ngeneralization of Penrose's twistor space.\n\nAt tree level, evaluating the instanton contribution has led to\nnew insights about scattering amplitudes. `Disconnected'\ninstantons give the MHV diagram construction of amplitudes in\nterms of Feynman diagrams with vertices that are suitable\noff-shell continuations of the MHV amplitudes\n\\cite{Cachazo:2004kj}. The `connected' instanton contributions\nexpress the amplitudes as integrals over the moduli space of\nholomorphic curves in twistor space \\cite{Roiban:2004yf}.\nSurprisingly, the MHV diagram construction and the connected\ninstanton integral can be related via localization on the moduli\nspace \\cite{Gukov:2004ei}.\n\nDespite the successes of the twistor string theory at tree level,\nthere are still many open questions. The most pressing issue is\nperhaps the closed strings that give ${\\cal N}=4$ conformal\nsupergravity \\cite{Berkovits:2004jj}. At tree level, it is\npossible to recover the Yang-Mills scattering amplitudes by\nextracting the single-trace amplitudes. 
At loop level, the single\ntrace gluon scattering amplitudes receive contributions from\ninternal supergravity states, so it would be difficult to extract\nthe Yang-Mills contribution to the gluon scattering amplitudes.\nSince ${\\cal N}=4$ Yang-Mills theory is consistent without conformal\nsupergravity, it is likely that there exists a version of the\ntwistor string theory that is dual to pure Yang-Mills theory.\nIndeed, the MHV diagram construction, which at tree level has been\nderived from twistor string theory, seems to compute loop\namplitudes as well \\cite{Brandhuber:2004yw}.\n\nThe study of the twistor structure of scattering amplitudes has\ninspired new developments in perturbative Yang-Mills theory\nitself. At tree level, this has led to recursion relations for\non-shell amplitudes \\cite{Britto:2004ap}. At one loop, unitarity\ntechniques \\cite{Bern:1994cg,Bern:1994zx} have been used to find\nnew ways of computing ${\\cal N}=4$ \\cite{Britto:2004nc} and ${\\cal N}=1$\n\\cite{Britto:2005ha} Yang-Mills amplitudes.\n\nIn these lectures we will discuss aspects of twistor string\ntheory. Along the way we will learn lessons about Yang-Mills\nscattering amplitudes. The string theory sheds light on Yang-Mills\nperturbation theory and leads to new methods for computing\nYang-Mills scattering amplitudes. In the last section, we will\ndescribe further developments in perturbative Yang-Mills.\n\n\\section{Helicity Amplitudes}\n\\subsection{Spinors}\n\\label{spinors}\n\nRecall\\footnote{The sections $2-4$\n are based on lectures given by E. Witten at PITP, IAS Summer 2004}\n that the complexified Lorentz group is locally isomorphic\nto\n\\begin{equation}\nSO(3,1,\\Bbb C)\\cong Sl(2,\\Bbb C)\\times Sl(2,\\Bbb C),\n\\label{sothree}\n\\end{equation}\nhence the finite dimensional representations are classified as\n$(p,q)$ where $p$ and $q$ are integers or half-integers. 
The\nnegative and positive chirality spinors transform in the\nrepresentations $(1\/2,0)$ and $(0,1\/2)$ respectively. We write\ngenerically $\\lambda_a,a=1,2$ for a spinor transforming as\n$(1\/2,0)$ and $\\tilde\\lambda_{\\dot a}, \\dot a=1,2$ for a spinor\ntransforming as $(0,1\/2).$\n\nThe spinor indices of type $(1\/2,0)$ are raised and lowered using\nthe antisymmetric tensors $\\epsilon_{ab}$ and $\\epsilon^{ab}$\nobeying $\\epsilon_{12}=1$ and\n$\\epsilon^{ac}\\epsilon_{cb}=\\delta^a{}_b$ \\eqalign{raise}{\n\\lambda^a = \\epsilon^{ab}\\lambda_b\\qquad \\lambda_a =\n\\epsilon_{ab}\\lambda^b.} Given two spinors $\\lambda$ and\n$\\lambda',$ both of negative chirality, we can form the Lorentz\ninvariant product\n\\eqn{skew}{\\vev{\\lambda,\\lambda'}=\\epsilon_{ab}\\lambda^a\\lambda'{}^b.}\nIt follows that $\\vev{\\lambda,\\lambda'}=-\\vev{\\lambda',\\lambda}$,\nso the product is antisymmetric in its two variables. In\nparticular, $\\vev{\\lambda,\\lambda'}=0$ implies that $\\lambda$\nequals $\\lambda'$ up to a scaling $\\lambda^a=c \\lambda'^a.$\n\nSimilarly, we lower and raise the indices of positive chirality\nspinors with the antisymmetric tensor $\\epsilon_{\\dot a\\dot b}$\nand its inverse $\\epsilon^{\\dot a \\dot b}.$ For two spinors\n$\\tilde\\lambda$ and $\\tilde\\lambda',$ both of positive chirality\nwe define the antisymmetric product\n\\eqn{angled}{[\\tilde\\lambda,\\tilde\\lambda']=-[\\tilde\\lambda',\\tilde\\lambda]=\\epsilon_{\\dot\na\\dot b}\\tilde\\lambda^{\\dot a}\\tilde\\lambda'^{\\dot b}.}\n\nThe vector representation of $SO(3,1,\\Bbb C)$ is the $(1\/2,1\/2)$\nrepresentation. Thus a momentum vector $p_\\mu, \\mu=0,\\dots,3$ can\nbe represented as a bi-spinor $p_{a\\dot a}$ with one spinor index\n$a$ and $\\dot a$ of each chirality. The explicit mapping from\n$p_\\mu$ to $p_{a \\dot a}$ can be made using the chiral part of the\nDirac matrices. 
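The antisymmetric spinor products defined above are easy to experiment with numerically. The following is a minimal sketch (not part of the original notes; the helper name `angle` and the test spinors are ours), using the convention $\epsilon_{12}=1$:

```python
import numpy as np

# Antisymmetric epsilon tensor with epsilon_{12} = 1 (indices 0,1 in code).
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])

def angle(l1, l2):
    """<l1, l2> = eps_{ab} l1^a l2^b for negative-chirality spinors."""
    return l1 @ eps @ l2

l = np.array([1.0 + 2.0j, 0.5 - 1.0j])
lp = np.array([-0.3 + 0.1j, 2.0 + 0.7j])

# Antisymmetry: <l, l'> = -<l', l>, hence <l, l> = 0.
assert np.isclose(angle(l, lp), -angle(lp, l))
assert np.isclose(angle(l, l), 0.0)

# <l, l'> vanishes when l' is a multiple of l, consistent with the
# statement that <l, l'> = 0 forces proportionality.
assert np.isclose(angle(l, 3.7j * l), 0.0)
```

The same matrix contraction implements $[\tilde\lambda,\tilde\lambda']$ on dotted indices.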
In signature $+---,$ one can take the Dirac\nmatrices to be \\begin{equation}\\gamma^\\mu=\\begin{pmatrix}0 &\n\\sigma^\\mu \\cr \\overline\\sigma^\\mu &0\\end{pmatrix},\\end{equation} where\n$\\sigma^\\mu=(1,\\vec\\sigma), \\overline\\sigma^\\mu=(1,-\\vec\\sigma)$ with\n$\\vec\\sigma$ being the $2\\times 2$ Pauli matrices. For any\nvector, the relation between $p_\\mu$ and $p_{a\\dot a}$ is\n\\eqn{transf}{ p_{a\\dot a}= p_\\mu \\sigma^\\mu_{a \\dot\na}=p_0+\\vec\\sigma\\cdot\\vec p.} It follows that \\eqn{detr}{p_\\mu\np^\\mu= \\det(p_{a\\dot a}).} Hence, $p_\\mu$ is lightlike if the\ncorresponding determinant is zero. This is equivalent to the rank\nof the $2\\times2$ matrix $p_{a\\dot a}$ being less than or equal to\none. So $p^\\mu$ is lightlike precisely when it can be written as\na product \\eqn{bispi}{p_{a\\dot a}=\\lambda_a\\tilde\\lambda_{\\dot a}}\nfor some spinors $\\lambda_a$ and $\\tilde\\lambda_{\\dot a}.$ For a\ngiven null vector $p,$ the spinors $\\lambda$ and $\\tilde\\lambda$\nare unique up to a scaling\n\\eqn{scalm}{(\\lambda,\\tilde\\lambda)\\rightarrow\n(t\\lambda,t^{-1}\\tilde\\lambda)\\qquad t\\in \\Bbb C^\\ast.} There is\nno continuous way to pick $\\lambda$ as a function of $p.$ In\nMinkowski signature, the $\\lambda$'s form the Hopf line bundle\nover the sphere $S^2$ of directions of the lightlike vector $p.$\n\nFor complex momenta, the spinors $\\lambda^a$ and\n$\\tilde\\lambda^{\\dot a}$ are independent complex variables, each\nof which parameterizes a copy of $\\Bbb {CP}^1.$ Hence, the\ncomplex lightcone $p_\\mu p^\\mu=0$ is a complex cone over the\nconnected manifold $\\Bbb {CP}^1\\times \\Bbb {CP}^1.$\n\nFor real null momenta in Minkowski signature $+---$, we can fix\nthe scaling up to a $Z_2$ by requiring $\\lambda^a$ and\n$\\tilde\\lambda^{\\dot a}$ to be complex conjugates\n\\eqn{fixe}{\\overline\\lambda^{\\dot a}=\\pm\\tilde\\lambda^{\\dot a}.} Hence,\nthe negative chirality spinors $\\lambda$ are conventionally 
called\n`holomorphic' and the positive chirality spinors\n`anti-holomorphic.' In \\rf{fixe} the $+$ sign is for a future\npointing null vector $p^\\mu,$ and $-$ is for a past pointing\n$p^\\mu.$\n\nOne can also consider other signatures. For example, in the\nsignature $++--,$ the spinors $\\lambda$ and $\\tilde\\lambda$ are\nreal and independent. Indeed, with signature $++--,$ the Lorentz\ngroup is $SO(2,2),$ which is locally isomorphic to $Sl(2,\\Bbb\nR)\\times Sl(2,\\Bbb R).$ Hence, the spinor representations are\nreal.\n\nLet us remark that if $p$ and $p'$ are two lightlike vectors\ngiven by $p_{a\\dot a}=\\lambda_a\\tilde\\lambda_{\\dot a}$ and\n$p'_{a\\dot a}=\\lambda'_a\\tilde\\lambda'_{\\dot a}$ then their scalar\nproduct can be expressed as\\footnote{This differs from the `-'\nsign convention used in the perturbative QCD literature.}\n\\eqn{product}{2 p\\cdot\np'=\\vev{\\lambda,\\lambda'}[\\tilde\\lambda,\\tilde\\lambda'].}\n\nGiven $p,$ the additional physical information in $\\lambda$ is\nequivalent to a choice of wavefunction of a helicity $-1\/2$\nmassless particle with momentum $p.$ To see this, we write the\nchiral Dirac equation for a negative chirality spinor $\\psi^a$\n\\eqn{chdir}{0=i\\sigma^\\mu_{a\\dot a} \\partial_\\mu \\psi^a.} A plane\nwave $\\psi^a=\\rho^a\\exp(ip\\cdot x)$ satisfies this equation if and\nonly if $p_{a\\dot a}\\rho^a=0.$ Writing $p_{a\\dot a}=\\lambda_a\n\\tilde\\lambda_{\\dot a},$ we get $\\lambda_a\\rho^a=0,$ that is\n$\\rho^a= c\\cdot \\lambda^a$ for a constant $c.$ Hence the negative\nhelicity fermion has wavefunction \\eqn{negw}{\\psi^a=c\\lambda^a\n\\exp(ix_{a\\dot a} \\lambda^a\\tilde\\lambda^{\\dot a}).} Similarly,\n$\\tilde\\lambda$ defines a wavefunction for a helicity $+1\/2$\nfermion $\\psi^{\\dot a}=c \\tilde\\lambda^{\\dot a}\\exp(ix_{a\\dot a}\n\\lambda^a\\tilde\\lambda^{\\dot a}).$\n\nThere is an analogous description of wavefunctions of massless\nparticles of helicity $\\pm1.$ Usually, we describe massless 
gluons\nwith their momentum vector $p^\\mu$ and polarization vector\n$\\epsilon^\\mu.$ The polarization vector obeys the constraint\n\\eqn{decop}{p_\\mu\\ \\epsilon^\\mu=0} that represents the decoupling\nof longitudinal modes and it is subject to the gauge invariance\n\\eqn{gaugi}{\\epsilon^\\mu\\rightarrow\\epsilon^\\mu+wp^\\mu,} for any\nconstant $w.$ Suppose that instead of being given only a lightlike\nvector $p_{a\\dot a}$, one is also given a decomposition $p_{a\\dot\na}=\\lambda_a\\tilde\\lambda_{\\dot a}.$ Then we have enough\ninformation to determine the polarization vector up to a gauge\ntransformation once the helicity is specified. For a positive\nhelicity gluon, we take \\eqn{pospol}{\\epsilon_{a\\dot\na}^+=\\frac{\\mu_a\\tilde\\lambda_{\\dot a}}{\\vev{\\mu,\\lambda}},} where\n$\\mu$ is any negative chirality spinor that is not a multiple of\n$\\lambda.$ To get a negative helicity polarization vector, we take\n\\eqn{negpol}{\\epsilon_{a\\dot a}^-=\\frac{\\lambda_a\\tilde\\mu_{\\dot\na}}{[\\tilde\\lambda,\\tilde\\mu]},} where $\\tilde\\mu$ is any positive\nchirality spinor that is not a multiple of $\\tilde\\lambda.$ We\nwill explain the expression for the positive helicity vector. The\nnegative helicity case is similar.\n\nClearly, the constraint \\eqn{con}{p^\\mu\\ \\epsilon^+_\\mu=p^{a\\dot\na}\\epsilon^+_{a\\dot a}=0} holds because $\\tilde\\lambda^{\\dot\na}\\tilde\\lambda_{\\dot a}=0.$ Moreover, $\\epsilon^+$ is also\nindependent of $\\mu^a$ up to a gauge transformation. 
To see this,\nnotice that $\\mu$ lives in a two dimensional space that is spanned\nby $\\lambda$ and $\\mu.$ Hence, any change in $\\mu$ is of\nthe form \\eqn{vam}{\\delta \\mu^a=\\alpha\\mu^a+\\beta\\lambda^a} for some\nparameters $\\alpha$ and $\\beta.$ The polarization vector\n\\rf{pospol} is invariant under the $\\alpha$ term, because this\nsimply rescales $\\mu$ and $\\epsilon^+_{a \\dot a}$ is invariant\nunder the rescaling of $\\mu.$ The $\\beta$ term amounts to a gauge\ntransformation of the polarization vector\n\\eqn{poch}{\\delta\\epsilon^+_{a\\dot a}=\n\\beta{\\lambda_a\\tilde\\lambda_{\\dot a}\\over \\vev{\\mu,\\lambda}}.}\n\nUnder the scaling $(\\lambda,\\tilde\\lambda)\\rightarrow\n(t\\lambda,t^{-1}\\tilde\\lambda),\\, t\\in {\\Bbb C}^\\ast$ the\npolarization vectors scale like \\eqn{scal}{\\epsilon^-\\rightarrow\nt^{+2}\\epsilon^- \\qquad \\epsilon^+\\rightarrow t^{-2}\\epsilon^+.}\nThis could have been anticipated, since $\\tilde\\lambda_{\\dot a}$\ngives the wavefunction of a helicity $+1\/2$ particle, so a\nhelicity $+1$ polarization vector should scale like\n$\\tilde\\lambda^2.$ Similarly, the helicity $-1$ polarization\nvector scales like $\\lambda^2.$\n\nTo show more directly that $\\epsilon^+$ describes a massless\nparticle of helicity $+1,$ we must show that the corresponding\nlinearized field strength $F_{\\mu\\nu}=\\partial_\\mu\nA_\\nu-\\partial_\\nu A_\\mu$ is anti-selfdual. 
Indeed, the field\nstrength written in a bispinor notation has the decomposition\n\\eqn{sef}{F_{a\\dot a b\\dot b}=\\epsilon_{ab}\\tilde f_{\\dot a\\dot\nb}+\\epsilon_{\\dot a \\dot b}f_{ab},} where $f_{ab}$ and $\\tilde\nf_{\\dot a \\dot b}$ are the selfdual and anti-selfdual parts of\n$F.$ Substituting $A_{a\\dot a}=\\epsilon_{a\\dot a}^+\\exp(ix_{a\\dot\na}\\lambda^a\\tilde\\lambda^{\\dot a})$ we find that $F_{a\\dot a b\\dot b}=\n\\epsilon_{ab} \\tilde\\lambda_{\\dot a}\\tilde\\lambda_{\\dot b}\\exp(ix_{a\\dot a}\n\\lambda^a\\tilde\\lambda^{\\dot a}).$\n\nSo far, we have seen that the wavefunction of a massless particle\nwith helicity $h$ scales under\n$(\\lambda,\\tilde\\lambda)\\rightarrow(t\\lambda,t^{-1}\\tilde\\lambda)$\nas $t^{-2h}$ if $|h|\\leq1.$ This is true for any $h,$ as can be\nseen from the following argument. Consider a massless particle\nmoving in the $\\vec n$ direction. Then a rotation by angle\n$\\theta$ around the $\\vec n$ axis acts on the spinors as\n\\eqn{rota}{(\\lambda,\\tilde\\lambda)\\rightarrow\n(e^{-i\\theta\/2}\\lambda,e^{+i\\theta\/2}\\tilde\\lambda).} Hence,\n$\\lambda,\\tilde\\lambda$ carry $-\\frac{1}{2}$ or $+\\frac{1}{2}$ units of\nangular momentum around the $\\vec n$ axis. Clearly, a massless\nparticle of helicity $h$ carries $h$ units of angular momentum\naround the $\\vec n$ axis. Hence the wavefunction of the particle\ngets transformed as $\\psi\\rightarrow e^{ih\\theta}\\psi$ under the\nrotation around $\\vec n$ axis, so it obeys the auxiliary condition\n\\eqn{scali}{\\left(\\lambda^a\\frac{\\partial}{\\partial\\lambda^a}-\\tilde\\lambda^{\\dot\na}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot a}}\\right)\n\\psi(\\lambda,\\tilde\\lambda)=-2h\\psi(\\lambda,\\tilde\\lambda).}\nClearly, this constraint holds for wavefunctions of massless\nparticles of any spin. 
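The properties of the polarization vectors \rf{pospol} and \rf{negpol} can be checked numerically. Below is a sketch (not from the notes; the helper names and the particular test spinors are ours) verifying transversality, the gauge shift under a change of reference spinor, and the scaling \rf{scal}:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
angle = lambda a, b: a @ eps @ b      # <a, b> on undotted indices
square = lambda a, b: a @ eps @ b     # [a, b] on dotted indices

l  = np.array([1.0 + 1.0j, 2.0 - 0.5j])   # lambda
lt = np.array([0.5, -1.0j])               # lambda-tilde
mu = np.array([1.0, 0.3j])                # reference spinor, not prop. to l
mt = np.array([2.0, 1.0 + 1.0j])          # dotted reference spinor

P = np.outer(l, lt)                       # p_{a adot} = lambda_a lt_adot

ep = np.outer(mu, lt) / angle(mu, l)      # eps^+_{a adot}, eq. (pospol)
em = np.outer(l, mt) / square(lt, mt)     # eps^-_{a adot}, eq. (negpol)

# Transversality p.eps = 0: contract both spinor indices with eps tensors.
dot = lambda A, B: np.einsum('ab,cd,ac,bd->', eps, eps, A, B)
assert np.isclose(dot(P, ep), 0.0) and np.isclose(dot(P, em), 0.0)

# Changing the reference spinor mu shifts eps^+ by a multiple of p_{a adot},
# i.e. a gauge transformation: the difference is entrywise proportional to P.
mu2 = np.array([0.2, 1.5])
ep2 = np.outer(mu2, lt) / angle(mu2, l)
ratio = (ep - ep2) / P
assert np.allclose(ratio, ratio.flat[0])

# Scaling (l, lt) -> (t l, t^-1 lt) gives eps^+ -> t^-2 eps^+, eq. (scal).
t = 2.0
ep_t = np.outer(mu, lt / t) / angle(mu, t * l)
assert np.allclose(ep_t, ep / t**2)
```

The gauge-shift check is the Schouten identity $\mu\vev{\mu',\lambda}-\mu'\vev{\mu,\lambda}=-\lambda\vev{\mu,\mu'}$ in numerical form.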
The spinors $\\lambda,\\tilde\\lambda$ give us\na convenient way of writing the wavefunction of massless particle\nof any spin, as we have seen in detail above for particles with\n$|h|\\leq 1.$\n\n\n\n\\subsection{Scattering Amplitudes}\n\nLet us consider scattering of massless particles in four\ndimensions. Consider the situation with $n$ particles of momenta\n$p_1,p_2,\\dots, p_n.$ For scattering of scalar particles, the\ninitial and final states of the particles are completely\ndetermined by their momenta. The scattering amplitude is simply a\nfunction of the momenta $p_i,$\n\\eqn{ascal}{A_{scalar}(p_1,p_2,\\dots,p_n).} In fact, by Lorentz\ninvariance, it is a function of the Lorentz invariant products\n$p_i\\cdot p_j$ only.\n\nFor particles with spin, the scattering amplitude is a function of\nboth the momenta $p_i$ and the wavefunctions $\\psi_i$ of each\nparticle \\eqn{ascat}{A(p_1,\\psi_1;\\dots;p_n,\\psi_n).} Here, $A$ is\nlinear in each of the wavefunctions $\\psi_i.$ The description of\n$\\psi_i$ depends on the spin of the particle. As we have seen\nexplicitly above in the case of massless particles of spin $\\frac{1}{2}$\nor $1,$ the spinors $\\lambda,\\tilde\\lambda$ give a unified\ndescription of the wavefunctions of particles with spin. Hence, to\ndescribe the wavefunctions, we specify for each particle the\nhelicity $h_i$ and the spinors $\\lambda_i$ and $\\tilde\\lambda_i.$\nThe spinors determine the momenta $p_i=\\lambda_i\\tilde\\lambda_i$\nand the wavefunctions $\\psi_i(\\lambda_i,\\tilde\\lambda_i, h_i)$. 
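Since the amplitudes depend on the momenta only through the invariants $p_i\cdot p_j$, the spinors carry all of this information. A short numerical check (names ours) of the null condition \rf{detr} and of the product formula \rf{product}:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
angle = lambda a, b: a @ eps @ b      # <a, b>
square = lambda a, b: a @ eps @ b     # [a, b]

rng = np.random.default_rng(0)
spinor = lambda: rng.standard_normal(2) + 1j * rng.standard_normal(2)

l, lt = spinor(), spinor()
lp, ltp = spinor(), spinor()

P, Pp = np.outer(l, lt), np.outer(lp, ltp)   # rank-one bispinors

# p_mu p^mu = det(p_{a adot}) vanishes for a rank-one bispinor, eq. (detr).
assert np.isclose(np.linalg.det(P), 0.0)

# 2 p.p' = <l, l'>[lt, lt'], eq. (product): the Minkowski square is the
# determinant, so the cross term is det(P + P') - det(P) - det(P').
two_p_dot_pp = np.linalg.det(P + Pp) - np.linalg.det(P) - np.linalg.det(Pp)
assert np.isclose(two_p_dot_pp, angle(l, lp) * square(lt, ltp))
```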
So\nfor massless particles with spin, the scattering amplitude is a\nfunction of the spinors and helicities of the external particles\n\\eqn{scaf}{A(\\lambda_1,\\tilde\\lambda_1,h_1;\\dots;\\lambda_n,\\tilde\\lambda_n,h_n).}\nIn labelling the helicities we take all particles to be incoming.\nTo obtain an amplitude with incoming particles as well as outgoing\nparticles, we use crossing symmetry, that relates an incoming\nparticle of one helicity to an outgoing particle of the opposite\nhelicity.\n\nIt follows from \\rf{scali} that the amplitude obeys the conditions\n\\eqn{scalia}{\\left(\\lambda^a_i\\frac{\\partial}{\\partial\\lambda^a_i}-\\tilde\\lambda_i^{\\dot\na}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot a}_i}\\right)\nA(\\lambda_i,\\tilde\\lambda_i,h_i)=-2h_iA(\\lambda_i,\\tilde\\lambda_i,h_i)}\nfor each particle $i,$ with helicity $h_i.$ In summary, a general\nscattering amplitude of massless particles can be written as\n\\eqn{scatmas}{A=(2\\pi)^4 \\delta^4\\left(\\sum_i\n\\lambda_i^a\\tilde\\lambda^{\\dot a}_i\\right)\nA(\\lambda_i,\\tilde\\lambda_i,h_i),} where we have written\nexplicitly the delta function of momentum conservation.\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=2.0in]{ampl_notation.eps}\n \\caption{A scattering amplitude of $n$ gluons in Yang-Mills theory.\n Each gluon comes with the color factor $T_i,$ spinors\n $\\lambda_i,\\tilde\\lambda_i$ and helicity label $h_i=\\pm1.$}\n \\label{ampl_notation}\n\\end{figure}\n\n\n\\subsection{Maximally Helicity Violating Amplitudes}\n\nTo make the discussion more concrete, we consider tree level\nscattering of $n$ gluons in Yang-Mills theory. These amplitudes\nare of phenomenological importance. The multijet production at LHC\nwill be dominated by tree level QCD scattering.\n\nConsider Yang-Mills theory with gauge group $U(N).$ Recall that\ntree level scattering amplitudes are planar and lead to single\ntrace interactions. 
In an index loop, the gluons are attached in a\ndefinite cyclic order, say $1,2,\\dots, n.$ Then the amplitude\ncomes with a group theory factor $\\mop{Tr} T_1T_2\\dots T_n.$ It is\nsufficient to give the amplitude with one cyclic order. The full\namplitude is obtained from this by summing over the cyclic\npermutations to achieve Bose symmetry\n\\eqn{boses}{A=g^{n-2}(2\\pi)^4\\delta^4\\left(\\sum_i^n\np_i\\right){\\cal A}(1,2,\\dots, n)\\mop{Tr} (T_1T_2\\dots T_n)\\, +\\,\npermutations.} Here, $g$ is the coupling constant of the gauge\ntheory. In the rest of the lecture notes, we will always consider\ngluons in the cyclic order $1,2,\\dots, n$ and we will omit the\ngroup theory factor and the delta function of momentum\nconservation in writing the formulas. Hence we will consider the\n`reduced color ordered amplitude' ${\\cal A}(1,2,\\dots, n).$\n\nThe scattering amplitude with $n$ incoming gluons of the same\nhelicity vanishes. So does the amplitude, for $n\\geq 3,$ with\n$n-1$ incoming gluons of one helicity and one of the opposite\nhelicity. The first nonzero amplitudes, the maximally helicity\nviolating (MHV) amplitudes, have $n-2$ gluons of one helicity and\ntwo gluons of the other helicity. Suppose that gluons $r,s$ have\nnegative helicity and the rest of the gluons have positive helicity.\nThen the tree level amplitude, stripped of the momentum delta\nfunction and the group theory factor, is\n\\eqn{mhv}{{\\cal A}(r^-,s^-)=g^{n-2}\\frac{\\vev{\\lambda_r,\\lambda_s}^4}{\\prod_{k=1}^n\n\\vev{\\lambda_k,\\lambda_{k+1}}}.} The amplitude ${\\cal A}(r^+,s^+)$ with\ngluons $r,s$ of positive helicity and the rest of the gluons of\nnegative helicity follows from \\rf{mhv} by exchange\n$\\vev{}\\leftrightarrow [].$ Note that the amplitude has the\ncorrect homogeneity in each variable. It is homogeneous of degree\n$-2$ in $\\lambda_i$ for positive helicity gluons and of degree\n$+2$ in $\\lambda_i$ for the negative helicity gluons $i=r,s,$ as required by the\nauxiliary condition \\rf{scalia}. 
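The homogeneity of the MHV amplitude \rf{mhv} is easy to verify numerically. A sketch (function names and random test spinors ours), checking that scaling $\lambda_i$ by $t$ multiplies the amplitude by $t^{-2}$ for a positive-helicity leg and by $t^{+2}$ for a negative-helicity leg:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
angle = lambda a, b: a @ eps @ b

def mhv(lams, r, s, g=1.0):
    """Color-ordered MHV amplitude: gluons r, s negative helicity, rest positive."""
    n = len(lams)
    denom = np.prod([angle(lams[k], lams[(k + 1) % n]) for k in range(n)])
    return g ** (n - 2) * angle(lams[r], lams[s]) ** 4 / denom

rng = np.random.default_rng(0)
lams = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(5)]

r, s = 0, 2
a0 = mhv(lams, r, s)

t = 1.7
pos = list(lams); pos[1] = t * pos[1]     # leg 1 has helicity +1
neg = list(lams); neg[r] = t * neg[r]     # leg r has helicity -1
assert np.isclose(mhv(pos, r, s), a0 * t ** -2)
assert np.isclose(mhv(neg, r, s), a0 * t ** 2)
```

Only the holomorphic spinors enter, which is the `holomorphy' of the MHV amplitude discussed below.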
The amplitude ${\\cal A}$ is sometimes\ncalled `holomorphic' because it depends on the `holomorphic'\nspinors $\\lambda_i$ only.\n\n\\section{Twistor Space}\n\n\\subsection{Conformal Invariance of Scattering Amplitudes}\n\nBefore discussing twistor space, let us show the conformal\ninvariance of the MHV tree level amplitude. Firstly, we need to\nconstruct representation of the conformal group generators in\nterms of the spinors $\\lambda,\\tilde\\lambda.$ We will consider the conformal\ngenerators for a single particle. The generators of the\n$n$-particle system are given by the sum of the generators over\nthe $n$ particles.\n\nSome of the generators are clear. The Lorentz generators are the\nfirst order differential operators\n\\eqalign{lorentz}{J_{ab}&=&\\frac{i}{2}\\left(\\lambda_a\n\\frac{\\partial}{\\partial\\lambda^b}+\\lambda_b\\frac{\\partial}{\\partial\\lambda^a}\n\\right)\\cr \\tilde J_{\\dot a \\dot\nb}&=&\\frac{i}{2}\\left(\\tilde\\lambda_{\\dot\na}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot\nb}}+\\tilde\\lambda_{\\dot\nb}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot a}}\\right).} The\nmomentum operator is the multiplication operator\n\\eqn{momentum}{P_{a\\dot a}=\\lambda_a\\tilde\\lambda_{\\dot a}.} The\nremaining generators are the dilatation operator $D$ and the\ngenerator of special conformal transformation $K_{a\\dot a}.$ The\ncommutation relations of the dilatation operator are\n\\eqn{comd}{[D,P]=iP,\\qquad [D,K]=-iK,} so $P$ has dimension $+1$\nand $K$ has dimension $-1.$ We see from (\\ref{momentum}) that it\nis natural to take $\\lambda$ and $\\tilde\\lambda$ to have dimension\n$1\/2.$ Hence, a natural guess for the special conformal generator\nrespecting all the symmetries is \\eqn{kguess}{K_{a\\dot\na}=\\frac{\\partial^2}{\\partial\\lambda^a\\partial\\tilde\\lambda^{\\dot\na}}.} We find the dilatation operator $D$ from the closure of the\nconformal algebra. 
The commutation relation\n\\eqn{comrel}{[K_{a\\dot a},P^{b\\dot b}]=-i\\left(\\delta_a{}^b\\tilde\nJ_{\\dot a}{}^{\\dot b}+\\delta_{\\dot a}{}^{\\dot\nb}J_{a}{}^b+\\delta_a{}^b\\delta_{\\dot a}{}^{\\dot b}D\\right)}\ndetermines the dilatation operator to be\n\\eqn{dilat}{D=\\frac{i}{2}\\left(\\lambda^a\\frac{\\partial}{\\partial\\lambda^a}\n+\\tilde\\lambda^{\\dot a}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot\na}}+2\\right).}\n\nWe are now ready to verify that the MHV amplitude\n\\eqn{mhvdil}{{\\cal A}(r^-,s^-)=(2\\pi)^4\\delta^4\\left(\\sum_i\\lambda_i^a\\tilde\\lambda^{\\dot\na}_i\\right)\n\\frac{\\vev{\\lambda_r,\\lambda_s}^4}{\\prod_{k=1}^n\\vev{\\lambda_k,\\lambda_{k+1}}},}\nis invariant under the conformal group. The Lorentz generators are\nclearly symmetries of the amplitude. The momentum operator\nannihilates the amplitude thanks to the delta function of momentum\nconservation.\n\nIt remains to verify that the amplitude is annihilated by $D$ and\n$K.$ For simplicity, we will only consider the dilatation operator\n$D.$ The numerator contains the delta function of momentum\nconservation which has dimension $D=-4$ and the factor\n$\\vev{\\lambda_r,\\lambda_s}^4$ of dimension $4.$ Hence, $D$\ncommutes with the numerator. So we are left with the denominator\n\\eqn{denomi}{\\frac{1}{\\prod_{k=1}^n\\vev{\\lambda_k,\\lambda_{k+1}}}.}\nThis is annihilated by $D_k$ for each particle $k,$ since the $-2$\ncoming from the second power of $\\lambda_k$ in the denominator\ngets cancelled against the $+2$ from the definition of the\ndilatation operator \\rf{dilat}.\n\n\\subsection{Transform to Twistor Space}\n\\label{transform}\n\nWe have demonstrated conformal invariance of the MHV amplitude,\nhowever the representation of the conformal group that we have\nencountered above is quite exotic. 
The Lorentz generators are\nfirst order differential operators, but the momentum is a\nmultiplication operator and the special conformal generator is a\nsecond order differential operator.\n\nWe can put the action of the conformal group into a more standard\nform if we make the following transformation\n\\eqalign{four}{\\tilde\\lambda_{\\dot a}&\\rightarrow\ni\\frac{\\partial}{\\partial\\mu^{\\dot a}} \\cr\n\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot a}}&\\rightarrow\ni\\mu_{\\dot a}.} Making this substitution we have arbitrarily\nchosen to Fourier transform $\\tilde\\lambda$ rather than $\\lambda.$ This choice\nbreaks the symmetry between positive and negative helicities. The\namplitude with $n_1$ positive helicity and $n_2$ negative helicity\ngluons has different description in twistor space from an\namplitude with $n_2$ positive helicity gluons and $n_1$ negative\nhelicity gluons.\n\nUpon making this substitution, all operators become first order.\nThe Lorentz generators take the form \\eqalign{lorentztwo}{J_{ab}\n&=&\\frac{i}{2}\\left(\\lambda_a\n\\frac{\\partial}{\\partial\\lambda^b}+\\lambda_b\\frac{\\partial}{\\partial\\lambda^a}\n\\right)\\cr \\tilde J_{\\dot a \\dot b} &=&\\frac{i}{2}\\left(\\mu_{\\dot\na}\\frac{\\partial}{\\partial\\mu^{\\dot b}}+\\mu_{\\dot\nb}\\frac{\\partial}{\\partial\\mu^{\\dot a}}\\right).} The momentum and\nspecial conformal operators become \\eqalign{fir}{P_{a\\dot\na}&=&i\\lambda_a\\frac{\\partial}{\\partial\\mu^{\\dot a}} \\cr K_{a\\dot\na}&=&i\\mu_{\\dot a}\\frac{\\partial}{\\partial\\lambda^a}.} Finally,\nthe dilatation operator (\\ref{dilat}) becomes a {\\it homogeneous}\nfirst order operator\n\\eqn{dilafirst}{D=\\frac{i}{2}\\left(\\lambda^a{\\partial\\over\n\\partial\\lambda^a} -\\mu^{\\dot a}\\frac{\\partial}{\\partial\\mu^{\\dot\na}}\\right).}\n\nThis representation of the four dimensional conformal group is\neasy to explain. 
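One can verify symbolically that the first order operators \rf{fir} and \rf{dilafirst} still satisfy the commutation relation $[D,P]=iP$ of \rf{comd}. A sketch using SymPy (names ours; we work componentwise, and the lowering of indices by $\epsilon$ is immaterial for this check since it only mixes components with constant coefficients):

```python
import sympy as sp

l1, l2, m1, m2 = sp.symbols('l1 l2 m1 m2')
f = sp.Function('f')(l1, l2, m1, m2)

lam, mu = [l1, l2], [m1, m2]

def P(a, ad, g):
    """P_{a adot} = i lambda_a d/d mu^adot in the twistor representation."""
    return sp.I * lam[a] * sp.diff(g, mu[ad])

def D(g):
    """D = (i/2)(lambda.d_lambda - mu.d_mu), eq. (dilafirst)."""
    return (sp.I / 2) * (l1 * sp.diff(g, l1) + l2 * sp.diff(g, l2)
                         - m1 * sp.diff(g, m1) - m2 * sp.diff(g, m2))

# [D, P_{a adot}] f = i P_{a adot} f for every component, as in (comd).
for a in range(2):
    for ad in range(2):
        comm = sp.expand(D(P(a, ad, f)) - P(a, ad, D(f)))
        assert sp.expand(comm - sp.I * P(a, ad, f)) == 0
```

The second-derivative terms cancel between the two orderings, leaving exactly the $iP$ piece from the Euler operators acting on the explicit $\lambda_a$ and $\partial/\partial\mu^{\dot a}$.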
The conformal group of Minkowski space is\n$SO(4,2)$ which is the same as $SU(2,2).$ $SU(2,2),$ or its\ncomplexification $Sl(4,\\Bbb C),$ has an obvious four-dimensional\nrepresentation acting on \\eqn{zi}{Z^I=(\\lambda^a,\\mu^{\\dot a}).}\n$Z^I$ is called a twistor and the space $\\Bbb C^4$ spanned by\n$Z^I$ is called the twistor space. The action of $Sl(4,\\Bbb C)$ on\nthe $Z^I$ is generated by $15$ traceless matrices $\\Lambda^I{}_J,\\\nI,J=1,\\dots, 4,$ that correspond to the $15$ first order operators\n$J_{ab}, \\tilde J_{\\dot a \\dot b}, D, P_{a\\dot a}, K_{a\\dot a}.$\n\nIf we are in signature $++--,$ the conformal group is $SO(3,3)\n\\cong Sl(4,\\Bbb R).$ The twistor space is a copy of $\\Bbb R^4$ and\nwe can consider $\\lambda$ and $\\mu$ to be real. In the Euclidean\nsignature $++++$, the conformal group is $SO(5,1)\\cong SU^\\ast(4)$\nwhere $SU^\\ast(4)$ is the noncompact version of $SU(4),$ so we\nmust think of twistor space as a copy of $\\Bbb C^4.$\n\n\nFor signature $++--,$ where $\\tilde\\lambda$ is real, the\ntransformation from momentum space scattering amplitudes to\ntwistor space scattering amplitudes is made by a simple Fourier\ntransform that is familiar from quantum mechanics\n\\eqn{fouriert}{\\tilde {\\cal A}(\\lambda_i,\\mu_i)= \\int\\prod_{j=1}^n\n{d^2\\tilde\\lambda_j\\over (2\\pi)^2} \\exp(i[\\mu_j,\\tilde\\lambda_j])\n{\\cal A}(\\lambda_i,\\tilde\\lambda_i).} The same Fourier transform turns\na momentum space wavefunction $\\psi(\\lambda,\\tilde\\lambda)$ to a\ntwistor space wavefunction\n\\eqn{wavefo}{\\tilde\\psi(\\lambda,\\mu)=\\int {d^2\\tilde\\lambda\\over\n(2\\pi)^2} \\exp(i[\\mu,\\tilde\\lambda]) \\psi(\\lambda,\\tilde\\lambda).}\n\nRecall that the scattering amplitude of massless particles obeys\nthe auxiliary condition\n\\eqn{sara}{\\left(\\lambda^a_i\\frac{\\partial}{\\partial\\lambda^a_i}-\\tilde\\lambda_i^{\\dot\na}\\frac{\\partial}{\\partial\\tilde\\lambda^{\\dot a}_i}\\right)\n{\\cal 
A}(\\lambda_i,\\tilde\\lambda_i,h_i)=-2h_i{\\cal A}(\\lambda_i,\\tilde\\lambda_i,h_i)}\nfor each particle $i,$ with helicity $h_i.$ In terms of\n$\\lambda_i$ and $\\mu_i,$ this becomes\n\\eqn{lica}{\\left(\\lambda^a_i\\frac{\\partial}{\\partial\\lambda^a_i}+\\mu_i^{\\dot\na}\\frac{\\partial}{\\partial\\mu^{\\dot a}_i}\\right)\n\\tilde{\\cal A}(\\lambda_i,\\mu_i,h_i)=-(2h_i+2)\\tilde{\\cal A}(\\lambda_i,\\mu_i,h_i).}\nThere is a similar condition for the twistor wavefunctions of\nparticles. The operator on the left hand side coincides with\n$Z^I{\\partial\\over\n\\partial Z^I}$ that generates the scaling of the twistor coordinates\n\\eqn{rescal}{Z^I\\rightarrow t Z^I, \\qquad t\\in \\Bbb C^\\ast.}\n\n\nSo the wavefunctions and scattering amplitudes have known behavior\nunder the $\\Bbb C^\\ast$ action $Z^I\\rightarrow t Z^I.$ Hence, we\ncan identify the sets of $Z^I$ that differ by the scaling\n$Z^I\\rightarrow t Z^I$ and throw away the point $Z^I=0.$ We get\nthe projective twistor space\\footnote{The twistor wavefunctions\n\\rf{wavefo} are regular only on the subset $\\Bbb{CP}'^{3|4}$ of\n$\\Bbb{CP}^{3|4}$ with $(\\lambda^1,\\lambda^2)\\neq(0,0),$ which is the\nprecise definition of the projective twistor space. In the rest of\nthe lecture notes, we do not distinguish between these two spaces,\nunless necessary.} $\\Bbb {CP}^3$ or $\\Bbb {RP}^3$ if $Z^I$ are\ncomplex or real-valued. The $Z^I$ are the homogeneous coordinates\non the projective twistor space. It follows from \\rf{lica} that\nthe scattering amplitudes are homogeneous functions of degree\n$-2h_i-2$ in the twistor coordinates $Z_i^I$ of each\nparticle. In the complex case, this means that scattering\namplitudes are sections of the complex line bundle ${\\cal\nO}(-2h_i-2)$ over a $\\Bbb {CP}^3_i$ for each particle. For further\ndetails on the twistor transform, see any standard textbook on twistor\ntheory, e.g. 
\\cite{Huggett:1986fs,Atiyah:1979iu}.\n\n\n\\subsection{Scattering Amplitudes in Twistor Space}\n\\label{twist}\n\nIn an $n$ gluon scattering process, after the Fourier transform\ninto twistor space, the external gluons are associated with points\n$P_i$ in the projective twistor space. The scattering amplitudes\nare functions of the twistors $P_i,$ that is, they are functions\ndefined on the product of $n$ copies of twistor space, one for\neach particle.\n\nLet us see what happens to the tree level MHV amplitude with $n-2$\ngluons of positive helicity and $2$ gluons of negative helicity,\nafter Fourier transform into twistor space. We work in $++--$\nsignature, for which the twistor space is a copy of $\\Bbb {RP}^3.$\nThe advantage of $++--$ signature is that the transform to twistor\nspace is an ordinary Fourier transform and the scattering\namplitudes are ordinary functions on a product of $\\Bbb {RP}^3$'s,\none for each particle. With other signatures, the twistor\ntransform involves $\\overline\\partial$-cohomology and other\nmathematical machinery.\n\nWe recall that the MHV amplitude with negative helicity gluons\n$r,s$ is\n\\eqn{mhvrecall}{A(\\lambda_i,\\tilde\\lambda_i)=(2\\pi)^4\\delta^4(\\sum_i\n\\lambda_i\\tilde\\lambda_i) f(\\lambda_i),} where\n\\eqn{fln}{f(\\lambda_i)=g^{n-2}{\\vev{\\lambda_r,\\lambda_s}^4\\over\n\\prod_k\\vev{\\lambda_k, \\lambda_{k+1}}}.} The only property of\n$f(\\lambda_i),$ that we need is that it is a function of the\nholomorphic spinors $\\lambda_i$ only. 
It does not depend on the\nanti-holomorphic spinors $\\tilde\\lambda_i.$\n\nWe express the delta function of momentum conservation as an\nintegral \\eqn{deltan}{(2\\pi)^4\\delta^4(\\sum_i\n\\lambda_i^a\\tilde\\lambda_i^{\\dot a})=\\int d^4x^{a\\dot a}\n\\exp\\left(i x_{b\\dot b}\\sum_i \\lambda_i^b\\tilde\\lambda_i^{\\dot\nb}\\right).} Hence, we can rewrite the amplitude as \\eqn{mhvnew}{\nA(\\lambda_i,\\tilde\\lambda_i)=\\int d^4x \\exp\\left(i x_{b\\dot\nb}\\sum_i \\lambda_i^b\\tilde\\lambda_i^{\\dot b}\\right) f(\\lambda_i).}\nTo transform the amplitude into twistor space, we simply carry out\na Fourier transform with respect to all $\\tilde\\lambda$'s. Hence,\nthe twistor space amplitude is\n\\eqn{mhvint}{A(\\lambda_i,\\mu_i)=\\int\n{d^2\\tilde\\lambda_1\\over(2\\pi)^2}\\dots{d^2\\tilde\\lambda_n\\over\n(2\\pi)^2}\\exp\\left(i\\sum_{j=1}^n \\mu_{j\\dot\na}\\tilde\\lambda_j^{\\dot a}\\right)\\int d^4x\\, \\exp\\left(i x_{b\\dot\nb}\\sum_j \\lambda_j^b\\tilde\\lambda_j^{\\dot b}\\right) f(\\lambda_i).}\nThe only dependence on $\\tilde\\lambda_i$ is in the exponential factors. Hence\nthe integrals over $\\tilde\\lambda_j$ give a product of delta\nfunctions with the result \\cite{Nair:1988bq}\n\\eqn{mhvtwistor}{A(\\lambda_i,\\mu_i)=\\int d^4x \\prod_{j=1}^n\n\\delta^2(\\mu_{j\\dot a}+x_{a\\dot a}\\lambda^a_j)f(\\lambda_i).} This\nequation has a simple geometrical interpretation. Pick some\n$x^{a\\dot a}$ and consider the equation \\eqn{inci}{\\mu_{\\dot\na}+x_{a\\dot a}\\lambda^a=0.} The solution set for $x=0$ is a $\\Bbb\n{RP}^1$ or $\\Bbb {CP}^1$ depending on whether the variables are\nreal or complex. This is true for any $x$ as the equation lets us\nsolve for $\\mu_{\\dot a}$ in terms of $\\lambda^a.$ So\n$(\\lambda^1,\\lambda^2)$ are the homogeneous coordinates on the\ncurve.\n\nIn real twistor space, which is appropriate for signature $++--,$\nthe curve $\\Bbb {RP}^1$ can be described more intuitively as a\nstraight line, see fig. \\ref{cp_one}. 
Indeed, throwing away the\nset $Z^1=0,$ we can describe the rest of $\\Bbb {RP}^3$ as a copy\nof $\\Bbb R^3$ with the coordinates $x_i=Z^i\/Z^1, i=2,3,4.$ The\nequations \\rf{inci} determine two planes whose intersection is the\nstraight line in question.\n\nIn complex twistor space, the genus zero curve $\\Bbb {CP}^1$ is\ntopologically a sphere $S^2.$ The $\\Bbb {CP}^1$ is an example of\na holomorphic curve in $\\Bbb {CP}^3.$ The simplest holomorphic\ncurves are defined by the vanishing of a pair of homogeneous\npolynomials in the $Z^I$ \\eqalign{completeint}{f(Z^1,\\dots,\nZ^4)&=&0 \\cr g(Z^1,\\dots, Z^4)&=&0.} If $f$ is homogeneous of\ndegree $d_1$ and $g$ is homogeneous of degree $d_2,$ the curve has\ndegree $d_1 d_2.$ The equations \\eqn{incide}{\\mu_{\\dot b}+x_{b\\dot\nb}\\lambda^b=0, \\quad \\dot b=1,2} are both linear, $d_1=d_2=1$.\nHence the degree of the $\\Bbb {CP}^1$ is $d=d_1d_2=1.$ Moreover,\nevery degree one genus zero curve in $\\Bbb {CP}^3$ is of the form\n\\rf{incide} for some $x^{b\\dot b}.$\n\nThe area of a holomorphic curve of degree $d,$ using the natural\nmetric on $\\Bbb {CP}^3,$ is $2\\pi d.$ So the curves we found with\n$d=1$ have the minimal area among nontrivial holomorphic curves.\nThey are associated with the minimal nonzero Yang-Mills tree\namplitudes, the MHV amplitudes.\n\nGoing back to the amplitude \\rf{mhvtwistor}, the\n$\\delta$-functions mean that the amplitude vanishes unless\n$\\mu_{j\\dot a}+x_{a\\dot a}\\lambda^a_j=0, j=1,\\dots, n,$ that is,\nunless some curve of degree one determined by $x_{a\\dot a}$\ncontains all $n$ points $(\\lambda_j,\\mu_j).$ The result is that\nthe MHV amplitudes are supported on genus zero curves of degree\none.
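The geometric content of the delta functions can be checked directly: points obeying the incidence relation for a common $x^{a\dot a}$ are collinear in the affine chart of the twistor space. The following sketch (our own, with an arbitrary numerical choice of $x$) maps three such points to $\Bbb R^3$ and verifies collinearity.

```python
# Points satisfying mu_adot + x_{a adot} lambda^a = 0 for a fixed x all lie
# on one straight line in the affine chart x_i = Z^i / Z^1 (cf. fig. cp_one).

def twistor_point(x, lam):
    """Affine coordinates of the twistor (lambda^1, lambda^2, mu_1, mu_2)
    with mu_adot = -sum_a x[a][adot] * lambda^a, in the chart Z^1 != 0."""
    mu = [-(x[0][ad] * lam[0] + x[1][ad] * lam[1]) for ad in (0, 1)]
    Z = [lam[0], lam[1], mu[0], mu[1]]
    return [Z[i] / Z[0] for i in (1, 2, 3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

x = [[0.3, -1.2], [0.7, 2.5]]                     # an arbitrary choice of x^{a adot}
pts = [twistor_point(x, lam) for lam in [(1.0, 0.5), (1.0, -2.0), (1.0, 3.7)]]
d1 = [pts[1][i] - pts[0][i] for i in range(3)]
d2 = [pts[2][i] - pts[0][i] for i in range(3)]
# the three points are collinear iff d1 x d2 vanishes
```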
This localization is a consequence of the holomorphy of these amplitudes.\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=1.5in]{cp_one.eps}\n \\caption{(a) In\n complex twistor space $\\Bbb{CP}^3$, the MHV amplitude localizes to a $\\Bbb{CP}^1.$\n (b) In the real case, the amplitude is associated to a real line in $\\Bbb R^3.$}\n \\label{cp_one}\n\\end{figure}\n\n\nThe general conjecture is that an $l$-loop amplitude with $p$\ngluons of positive helicity and $q$ gluons of negative helicity is\nsupported on a holomorphic curve in twistor space. The degree of\nthe curve is determined by \\eqn{dec}{d=q-1+l.} The genus of the\ncurve is bounded by the number of loops \\eqn{lops}{g\\leq l.}\nThe MHV amplitudes are a special case of this for $q=2,l=0.$\nIndeed, the conjecture in this case gives that MHV amplitudes are\nsupported in twistor space on a genus zero curve of degree one.\n\nThe natural interpretation of this is that the curve is the\nworldsheet of a string. In some way of describing the perturbative\ngauge theory, the amplitudes arise from coupling of the gluons to\na string. In the next two sections we discuss a proposal for such\na string theory due to Witten \\cite{Witten:2003nn} in which the\nstrings in question are D1-strings. There is an alternative\nversion of twistor string theory due to Berkovits\n\\cite{Berkovits:2004hg, Berkovits:2004tx}, discussed in section\n\\ref{berko}, in which the curves come from fundamental strings.\nBerkovits's twistor string theory seems to give an equivalent\ndescription of the scattering amplitudes. Further proposals\n\\cite{Neitzke:2004pf,Aganagic:2004yh,Bars:2004dg} have not yet\nbeen used for computing scattering amplitudes.\n\n\n\n\\section{Twistor String Theory}\n\nIn this section, we will describe a string theory that gives a\nnatural framework for understanding the twistor properties of\nscattering amplitudes discussed in the previous section.
This is a\ntopological string theory whose target space is a supersymmetric\nversion of the twistor space.\n\n\\subsection{Brief Review of Topological Strings}\n\nFirst, let us consider an ${\\cal N}=2$ topological field theory\nin $D=2$ \\cite{Witten:1991zz}. The ${\\cal N}=2$ supersymmetry algebra\nhas two supersymmetry generators $Q_i, i=1,2$ that satisfy the\nanticommutation relations \\eqn{susyalg}{\\{Q_{\\alpha i}, Q_{\\beta\nj}\\}=\\delta_{ij} \\gamma^\\mu_{\\alpha\\beta}P_\\mu.} In two\ndimensions, the Lorentz group $SO(1,1)$ is generated by the\nLorentz boost $L.$ We diagonalize $L$ by going into the light-cone\nframe $P_\\pm=P_0\\pm P_1,$ \\eqalign{boostdiag}{[L,P_\\pm]&=&\\pm\nP_\\pm \\cr [L,Q_\\pm]&=&\\pm\\frac{1}{2} Q_\\pm.} The anticommutation\nrelations of the ${\\cal N}=2$ supersymmetry algebra become\n\\eqalign{susydiag}{\\{Q_{+i},Q_{+j}\\}&=&\\delta_{ij} P_+\\cr\n\\{Q_{-i}, Q_{-j}\\}&=&\\delta_{ij} P_- \\cr \\{Q_{+i}, Q_{-j}\\}&=&0.}\nWe let \\eqn{brstcharge}{Q=Q_{+1}+iQ_{+2}+Q_{-1}\\pm iQ_{-2}} with\neither choice of sign. It follows from \\rf{susydiag} that $Q$ is\nnilpotent \\eqn{br}{Q^2=0,} so we would like to consider $Q$ as a\nBRST operator.\n\nHowever, $Q$ (\\ref{brstcharge}) is not a scalar, so this\nconstruction would violate Lorentz invariance. There is a way out\nif the theory has left and right R-symmetries $R_+$ and $R_-.$\nUnder $R_+,$ the combination of supercharges $Q_{+1}\\pm iQ_{+2}$\nhas charge $\\pm1\/2$ and $Q_{-1}\\pm iQ_{-2}$ is neutral.
For $R_-,$\nthe same is true with `left' and `right' interchanged.\n\nHence, we can make $Q$ scalar if we modify the Lorentz generator\n$L$ to be \\eqn{lmod}{L'=L-\\frac{1}{2} R_+ \\mp\\frac{1}{2} R_-.} At a more\nfundamental level, this change in the Lorentz generator arises if\nwe replace the stress tensor $T_{\\mu\\nu}$ with \\eqn{tmunu}{\\tilde\nT_{\\mu\\nu}= T_{\\mu\\nu}-\\frac{1}{2}(\\partial_\\mu J^+_\\nu+\\partial_\\nu\nJ^+_\\mu) \\mp\\frac{1}{2}(\\partial_\\mu J^-_\\nu+\\partial_\\nu J^-_\\mu),}\nwhere $J^+_\\nu$ and $J^-_\\mu$ are the left and right R-symmetry\ncurrents. The substitution \\rf{tmunu} is usually referred to as\n`twisting' the stress tensor.\n\nWe give a new interpretation to the theory by taking $Q$ to be a\nBRST operator. A state $\\Psi$ is considered to be physical if it\nis annihilated by $Q$ \\eqn{qani}{Q\\Psi=0.} Two states $\\Psi$ and\n$\\Psi'$ are equivalent if \\eqn{psiequi}{\\Psi-\\Psi'= Q\\Phi,} for\nsome $\\Phi.$ Similarly, we take the physical operators to commute\nwith the BRST charge \\eqn{ocu}{[Q,{\\cal O}]=0.} Two operators are\nequivalent if they differ by an anticommutator of $Q,$\n\\eqn{eqvi}{{\\cal O}'\\sim {\\cal O}+ \\{Q,{\\cal V}\\},} for some\noperator ${\\cal V}.$\n\nThe theory with the stress tensor $\\tilde T_{\\mu\\nu}$ and BRST\noperator $Q$ is called a topological field theory. 
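The nilpotency $Q^2=0$ that underlies this construction can be verified in an explicit realization. The sketch below (our own toy model, not part of the text's formalism) represents the supercharges on two fermionic modes via a Jordan-Wigner construction, with $P_+=P_-=1$, and checks both the algebra and $Q^2=0$ for the $+$ sign choice in the definition of $Q$.

```python
import numpy as np

# Toy matrix realization of {Q_{+i},Q_{+j}} = delta_ij P_+, {Q_{-i},Q_{-j}} =
# delta_ij P_-, {Q_{+i},Q_{-j}} = 0, with P_+ = P_- = 1 (arbitrary positive values).

I2 = np.eye(2)
Z2 = np.diag([1.0, -1.0])
b = np.array([[0.0, 1.0], [0.0, 0.0]])       # single-mode fermion annihilation

b1 = np.kron(b, I2)                          # mode 1
b2 = np.kron(Z2, b)                          # mode 2, with Jordan-Wigner string

def clifford(bm):
    """Hermitian Clifford pair e1 = b + b^dag, e2 = i(b - b^dag)."""
    return bm + bm.conj().T, 1j * (bm - bm.conj().T)

e1p, e2p = clifford(b1)                      # left-moving supercharges
e1m, e2m = clifford(b2)                      # right-moving supercharges

s = np.sqrt(0.5)                             # sqrt(P/2) with P = 1
Qp1, Qp2, Qm1, Qm2 = s * e1p, s * e2p, s * e1m, s * e2m
Q = Qp1 + 1j * Qp2 + Qm1 + 1j * Qm2          # BRST candidate, '+' sign choice
```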
The basis for\nthe name is that one can use the supersymmetry algebra to show\nthat the twisted stress tensor is BRST trivial\n\\eqn{trivial}{\\tilde T_{\\mu\\nu}=\\{Q,\\Lambda_{\\mu\\nu}\\}.} It\nfollows that in some sense the worldsheet metric is irrelevant.\nThe correlation function \\eqn{cor}{\\vev{{\\cal O}_1(x_1)\n{\\cal O}_2(x_2)\\dots{\\cal O}_n(x_n)}_\\Sigma} of physical operators ${\\cal O}_i$\nobeying $[Q,{\\cal O}_i]=0$ on a fixed Riemann surface $\\Sigma$ is\nindependent of the metric on $\\Sigma.$ Indeed, varying the metric\n$g_{\\mu\\nu}\\rightarrow g_{\\mu\\nu}+\\delta g_{\\mu\\nu},$ the\ncorrelation function stays the same up to BRST trivial terms\n\\eqn{mvev}{\\vev{{\\cal O}_1(x_1)\\dots {\\cal O}_n(x_n)\\int_\\Sigma\n\\delta(\\sqrt{g} g^{\\mu\\nu})\\tilde T_{\\mu\\nu}}=\\vev{{\\cal O}_1(x_1)\\dots\n{\\cal O}_n(x_n)\\int_\\Sigma\\delta(\\sqrt{g}\ng^{\\mu\\nu})\\{Q,\\Lambda_{\\mu\\nu}\\}}=0.}\n\nMore importantly for us, we can also construct a topological\nstring theory in which one obtains the correlation functions by\nintegrating \\rf{cor} over the moduli of the Riemann surface\n$\\Sigma$, inserting $\\Lambda_{\\mu\\nu}$ where the antighost $b_{\\mu\\nu}$\nusually appears in the definition of the string measure.\n\nFor an ${\\cal N}=2$ supersymmetric field theory in two dimensions with\nanomaly-free left and right R-symmetries we get two topological\nstring theories, depending on the choice of sign in\n(\\ref{brstcharge}). We would like to consider the case that the\n${\\cal N}=2$ model is a sigma model whose target space is a complex\nmanifold $X.$ In this case, the two R-symmetries exist\nclassically, so classically we can construct the two topological\nstring theories, called the A-model and the B-model.
Quantum\nmechanically, however, there is an anomaly, and the B-model only\nexists if $X$ is a Calabi-Yau manifold.\n\n\\subsection{Open String B-model on a Super-Twistor Space}\n\nTo define open strings in the B-model, one needs BRST invariant\nboundary conditions. The simplest such conditions are Neumann\nboundary conditions \\cite{Witten:1992fb}. Putting in $N$\nspace-filling D5-branes gives $Gl(N,\\Bbb C)$ (whose compact real form is\n$U(N)$) gauge symmetry. The zero modes of the D5-D5 strings give\na $(0,1)$ form gauge field $A=A_{\\overline i}dz^{\\overline i}$ in the target\nspace. The BRST operator acts as the $\\overline\\partial$ operator and\nthe string $\\ast$ product is just the wedge product. Hence, $A$ is\nsubject to the gauge invariance \\eqn{gaui}{\\delta\nA=Q\\epsilon=\\overline\\partial \\epsilon+[A,\\epsilon],} and the string\nfield theory action reduces to the action of the holomorphic\nChern-Simons theory \\cite{Witten:1992fb} \\eqn{hcsc}{S={1\\over\n2}\\int\\Omega\\wedge \\mop{Tr}\\left(A\\wedge \\overline\\partial A+{2\\over3}\nA\\wedge A\\wedge A\\right),} where $\\Omega$ is the Calabi-Yau volume\nform.\n\nWe would like to consider the open string B-model with target\nspace $\\Bbb {CP}^3,$ but we cannot, since $\\Bbb {CP}^3$ is not a\nCalabi-Yau manifold and the B-model is well defined only on a\nCalabi-Yau manifold. On a non-Calabi-Yau manifold, the R-symmetry\nthat we used to twist the stress tensor is anomalous. A way out is\nto introduce spacetime supersymmetry.
Instead of $\\Bbb {CP}^3,$\nwhich has homogeneous coordinates $Z^I, I=1,\\dots, 4,$ we consider\na supermanifold $\\Bbb {CP}^{3|N}$ with bosonic and fermionic\ncoordinates \\eqn{coordi}{Z^I,\\quad \\psi^A\\qquad I=1,\\dots, 4,\n\\quad A=1,\\dots, N,} with identification of two sets of\ncoordinates that differ by a scaling \\eqn{suca}{(Z^I,\\psi^A)\\cong\n(tZ^I, t\\psi^A) \\qquad t\\in \\Bbb C^\\ast.} The $\\Bbb {CP}^{3|N}$ is\na Calabi-Yau supermanifold if and only if the number of fermionic\ndimensions is $N=4.$ To see this, we construct the holomorphic\nmeasure on $\\Bbb {CP}^{3|4}.$ We start with the $(4|N)$ form on\n$\\Bbb C^{4|N}$ \\eqn{ozero}{\\Omega_0=dZ^1\\dots dZ^4d\\psi^1\\dots\nd\\psi^N} and study its behavior under the rescaling symmetry\n(\\ref{suca}). For this, recall that $d\\psi$ scales oppositely to\n$\\psi$ \\eqn{suop}{(dZ^I, d\\psi^A)\\rightarrow (t dZ^I, t^{-1}\nd\\psi^A).} It follows that $\\Omega_0$ is $\\Bbb C^\\ast$ invariant\nif and only if $N=4.$ In this case we can divide by the $\\Bbb C^\\ast$\naction and get a Calabi-Yau measure on $\\Bbb {CP}^{3|4}$\n\\eqn{holf}{\\Omega={1\\over 4!}\\epsilon_{IJKL}Z^I dZ^JdZ^KdZ^L\n{1\\over 4!}\\epsilon_{ABCD} \\psi^A\\psi^B\\psi^C\\psi^D.}\n\nThe twistor space $\\Bbb {CP}^3$ has a natural $Sl(4,\\Bbb C)$ group\naction that acts as $Z^I\\rightarrow \\Lambda^I{}_J Z^J$ on the\nhomogeneous coordinates of $\\Bbb{CP}^3$. The real form $SU(2,2)$ of\n$Sl(4,\\Bbb C)$ is the conformal group of Minkowski space.\nSimilarly, the super-twistor space $\\Bbb {CP}^{3|N}$ has a natural\n$Sl(4|N,\\Bbb C)$ symmetry. The real form $SU(2,2|N)$ of this is\nthe superconformal symmetry group with $N$ supersymmetries.\n\nFor $N=4$, the superconformal group $SU(2,2|4)$ is the symmetry\ngroup of ${\\cal N}=4$ super-Yang-Mills theory. In a sense, this is the\nsimplest non-abelian gauge theory in four dimensions. The ${\\cal N}=4$\nsuperconformal symmetry uniquely determines the states and\ninteractions of the gauge theory.
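The weight counting behind the Calabi-Yau condition above is elementary but worth making explicit; the following one-liner (ours, purely illustrative) records that each $dZ^I$ carries scaling weight $+1$ while each $d\psi^A$ carries weight $-1$, so $\Omega_0$ is invariant exactly when $N=4$.

```python
# Scaling weight of Omega_0 = dZ^1...dZ^4 dpsi^1...dpsi^N under (Z,psi) -> (tZ,tpsi):
# each dZ contributes +1 and each dpsi contributes -1 (dpsi scales inversely to psi).

def measure_weight(n_bosonic=4, n_fermionic=4):
    return n_bosonic * (+1) + n_fermionic * (-1)
```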
In particular, the beta function\nof ${\\cal N}=4$ gauge theory vanishes.\n\nNow we know a new reason for ${\\cal N}=4$ to be special. The\ntopological B-model on $\\Bbb {CP}^{3|N}$ exists if and only if\n${\\cal N}=4.$ The B-model on $\\Bbb {CP}^{3|4}$ has an $SU(2,2|4)$\nsymmetry coming from the geometric transformations of the twistor\nspace. This is related via the twistor transform to the ${\\cal N}=4$\nsuperconformal symmetry.\n\nIn the topological B-model with space-filling branes on $\\Bbb\n{CP}^{3|4}$, the basic field is the holomorphic gauge field\n${\\cal A}={\\cal A}_{\\overline I} dZ^{\\overline I}$, \\eqn{abasic}{{\\cal A}(Z,\\overline Z,\n\\psi)= A(Z,\\overline Z)+\\psi^A \\xi_{A}(Z,\\overline Z)+{1\\over\n2!}\\psi^A\\psi^B \\phi_{AB}(Z,\\overline Z)+\\dots+{1\\over\n4!}\\epsilon_{ABCD}\\psi^A\\psi^B\\psi^C\\psi^D G(Z,\\overline Z).} The\naction is the same as \\rf{hcsc}, except that the gauge field ${\\cal A}$\nnow depends on $\\psi$ \\eqn{actsu}{S=\\frac{1}{2}\\int\n\\Omega\\wedge\\mop{Tr}\\left({\\cal A} \\overline\\partial {\\cal A}+\n{2\\over3}{\\cal A}\\wedge{\\cal A}\\wedge{\\cal A}\\right),} and the holomorphic three-form\nis (\\ref{holf}). The classical equations of motion\nobtained from \\rf{actsu} are\n\\eqn{cseq}{\\overline\\partial{\\cal A}+{\\cal A}\\wedge{\\cal A}=0.} Linearizing the\nequations of motion around the trivial solution ${\\cal A}=0$ gives\n\\eqn{tela}{\\overline\\partial \\Phi=0,} where $\\Phi$ is any\nof the components of ${\\cal A}.$ The gauge invariance reduces to\n$\\delta\\Phi=\\overline\\partial\\alpha.$ Hence for each component $\\Phi,$\nthe field $\\Phi$ defines an element of a cohomology group.
To\nsee this, we need to use that the elements of the $(0,1)$ cohomology\ngroups with values in ${\\cal O}(2h-2)$ are related by twistor transform to\nhelicity $h$ free fields on Minkowski space.\n\nTo figure out the degrees of the various components of ${\\cal A}$, notice that\nthe action must be invariant under the $\\Bbb C^*$ action\n$Z^I\\rightarrow tZ^I.$ Since the holomorphic measure is also\ninvariant under the scaling, the only way that the action\n\\rf{actsu} is invariant is that the superfield ${\\cal A}$ is also\ninvariant, in other words, ${\\cal A}$ is of degree zero\n\\eqn{azero}{{\\cal A}\\in H^{0,1}(\\Bbb {CP}^{3|4},{\\cal O}(0)).} Looking back\nat the expansion (\\ref{abasic}) of the superfield, we identify the\ncomponents, via the twistor correspondence, with fields in\nMinkowski space of definite helicity. $A$ is of degree zero,\njust like the superfield ${\\cal A}.$ Hence, it is related by twistor\ntransform to a field of helicity $+1.$ The field $G$ has degree\n$-4$ to offset the degree $4$ coming from the four $\\psi$'s, so it\ncorresponds to a field of helicity $-1.$ Continuing in this\nfashion, we obtain the complete spectrum of ${\\cal N}=4$ supersymmetric\nYang-Mills theory. The twistor fields\n$A,\\xi_A,\\phi_{AB},\\xi_{ABC},G$ describe, via twistor transform,\nparticles of helicities $+1,+\\frac{1}{2},0,-\\frac{1}{2},-1$ respectively.\n\nThe fields also have the correct representations under the $SU(4)$\nR-symmetry group. This symmetry is realized in twistor space by\nthe natural geometric action on the fermionic coordinates\n$\\psi^A\\rightarrow \\Lambda^A{}_B \\psi^B.$ Hence, $\\psi^A$\ntransforms in the $\\bf 4$ of the $SU(4)_R.$ The holomorphic gauge\nsuperfield ${\\cal A}(Z,\\psi)$ is invariant under the R-symmetry, hence\nthe representations of the components of ${\\cal A}$ must be conjugate\nto the representations of the $\\psi$ factors that they multiply in\n\\rf{abasic}.
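As an aside, the degree and helicity bookkeeping just described fits in two lines: a component multiplying $k$ factors of $\psi$ has degree $-k$, and inverting $\mathrm{degree}=2h-2$ gives $h=1-k/2$. A small tabulation (field names follow the expansion of ${\cal A}$ above; the script itself is ours):

```python
# Helicity of the component of the degree-zero superfield that multiplies
# k factors of psi: its degree is -k, and degree = 2h - 2 gives h = 1 - k/2.

def helicity(n_psi_factors):
    degree = -n_psi_factors          # offsets the +k weight of the psi's
    return (degree + 2) / 2          # invert degree = 2h - 2

spectrum = {name: helicity(k)
            for k, name in enumerate(["A", "xi_A", "phi_AB", "xi_ABC", "G"])}
```

This reproduces the helicities $+1,+1/2,0,-1/2,-1$ quoted in the text.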
Hence, $A, \\xi_{A}, \\phi_{AB},\\xi_{ABC}$ and $G$\ntransform in the $\\bf 1, \\overline 4, 6, 4, 1$ representations of\n$SU(4)_R$, respectively.\\footnote{The twistor string construction\nhas been generalized to theories with less supersymmetry or with\nproduct gauge groups, by orbifolding the fermionic directions of\nthe super-twistor space \\cite{Park:2004bw,Giombi:2004xv}.}\n\n\\subsection{D-Instantons}\n\nThe action (\\ref{actsu}) also describes some of the interactions\nof ${\\cal N}=4$ super Yang-Mills, but not all. It cannot describe the\nfull interactions, because an extra $U(1)$ R-symmetry gets in the\nway. The fermionic coordinates $\\psi^A, A=1,\\dots, 4$ have an\nextra $U(1)_R$ besides the $SU(4)_R$ considered above. Indeed, the\nfull R-symmetry group in twistor space is\n\\eqn{rsym}{U(4)_R=SU(4)_R\\times U(1)_R,} where we take the extra\n$U(1)_R$, which we call $S,$ to rotate the fermions by a common\nphase \\eqn{extrau}{S:\\ Z^I\\rightarrow Z^I,\\quad \\psi^A\\rightarrow\ne^{i\\theta} \\psi^A.} In the B-model, the extra $U(1)_R$ is\nanomalous, since it does not leave fixed the holomorphic measure\n$\\Omega\\sim d^3Z d\\psi^1\\dots d\\psi^4.$ Under the $S$\ntransformation, the holomorphic measure transforms as\n$\\Omega\\rightarrow e^{-4i\\theta}\\Omega$, so it has charge $S=-4$,\nhence the B-model action has $S=-4.$\n\nHowever, as we have set things up so far, the anomaly of the\nB-model is too trivial to agree with the anomaly of ${\\cal N}=4$\nYang-Mills theory. With the normalization \\rf{extrau}, the $S$\ncharges of fields are given by their degrees. The ${\\cal N}=4$\nYang-Mills action is a sum of terms with $S=-4$ and $S=-8.$ For\nillustration, consider the positive and negative helicity gluons\nthat have $S$-charge $0$ and $-4$ respectively. The kinetic term\nand the $++-$ three-gluon vertex contribute to the $S=-4$ part of\nthe Yang-Mills action.
The $--+$ and the $--++$ vertices\ncontribute to the $S=-8$ part.\n\nThe action of the open string B-model (\\ref{actsu}) has $S=-4$\ncoming from the $S$ anomaly of the holomorphic measure\n$\\Omega$. To get the $S=-8$ piece of the Yang-Mills action, we\nneed to enrich the B-model with nonperturbative instanton\ncontributions.\n\n\nThe instantons in question are Euclidean D1-branes wrapped on\nholomorphic curves in $\\Bbb {CP}^{3|4}$ on which open strings can\nend. The gauge theory amplitudes come from coupling of the open\nstrings to the D1-branes. The massless modes on the worldvolume of\na D-instanton are a $U(1)$ gauge field and the modes that describe\nthe motion of the instanton. In the following, we will study\nmostly tree level amplitudes. These get contributions from genus\nzero instantons on which the $U(1)$ line bundles have a fixed\ndegree $d=-1$. Hence the bundles do not have any discrete or\ncontinuous moduli, so we will ignore the $U(1)$ gauge field from\nnow on. The modes describing the motion of the D-instanton make up\nthe moduli space ${\\cal M}$ of holomorphic curves $C$ in the\ntwistor space. To construct scattering amplitudes we need to\nintegrate over ${\\cal M}.$\n\n\n\\section{Tree Level Amplitudes from Twistor String Theory}\n\n\\subsection{Basic Setup}\n\\label{basic}\n\nRecall that the interactions of the full gauge theory come from\nEuclidean D1-brane instantons on which the open strings can end.\nThe open strings are described by the holomorphic gauge field\n${\\cal A}.$ To find the coupling of the open strings to the\nD-instantons, let us consider the effective action of the D1-D5\nand D5-D1 strings. Quantizing the zero modes of the D1-D5 strings\nleads to a fermionic zero-form field $\\alpha^i$ living on the\nworldvolume of the D-instanton. $\\alpha^i$ transforms in the\nfundamental representation of the $Gl(N,\\Bbb C)$ gauge group\ncoming from the Chan-Paton factors.
The D5-D1 strings are\ndescribed by a fermion $\\beta_i$ transforming in the\nantifundamental representation. The kinetic operator for the\ntopological strings is the BRST operator $Q,$ which acts as\n$\\overline\\partial$ on the low energy modes. So the effective action of\nthe D1-D5 strings is \\eqn{done}{S=\\int_C \\beta(\\overline\\partial+{\\cal\nA})\\alpha,} where $C$ is the holomorphic curve wrapped by the\nD-instanton. From this we read off the vertex operator for an open\nstring with wavefunction $\\phi={\\cal A}_{\\overline I}dZ^{\\overline I}$\n\\eqn{verterxop}{V_\\phi=\\int_C J\\wedge \\phi,} where $J_i{}^j=\n\\beta_i\\alpha^j dz$ is a holomorphic current made from the free\nfermions $\\alpha^j,\\beta_i$. These currents generate a current\nalgebra on the worldvolume of the D-instanton.\n\nTo compute a scattering amplitude, we evaluate the correlation\nfunction \\eqalign{conf}{{\\cal A} =\\int d{\\cal M} \\vev{\nV_{\\phi_1}V_{\\phi_2}\\dots V_{\\phi_n}} =\\int d{\\cal M} \\left\\langle\n\\int_C J_1\\phi_1\\dots \\int_C J_n\\phi_n \\right\\rangle.} We can\nthink of this as integrating out the fermions $\\alpha,\\beta$\nliving on the D-instanton. Hence, the generating function for\nscattering amplitudes is simply the integral of the determinant of the\nDirac operator over the moduli space of D-instantons \\eqn{dii}{\\int d{\\cal M}\n\\det(\\overline\\partial+ A).} Here, $d{\\cal M} $ is the holomorphic\nmeasure on the moduli space of holomorphic curves of genus zero\nand degree $d.$ In the topological B-model, the action is a holomorphic\nfunction of the fields and all path integrals are contour\nintegrals. Hence, the integral is actually over a\nmiddle-dimensional Lagrangian cycle in the moduli space. This\nintegral is a higher dimensional generalization of the familiar\ncontour integral from complex analysis.\n\nThe correlator of the currents on the D1-instanton\\footnote{Here we\nwrite the single trace contribution to the correlator that\nreproduces the gauge theory scattering amplitude.
As discussed in\nsection \\ref{closed}, the multitrace contributions correspond to\ngluon scattering processes with exchange of internal conformal\nsupergravity states.} \\eqn{curc}{\\langle J_1(z_1) J_2(z_2) \\dots\nJ_n(z_n)\\rangle={\\mop{Tr}(T_1 T_2\\dots T_n)dz_1 dz_2\\dots dz_n\\over\n(z_1-z_2)(z_2-z_3)\\dots (z_n-z_1)}+\\,permutations} follows from\nthe free fermion correlator on a sphere\n\\eqn{fref}{\\alpha^i(z)\\beta_j(z')\\sim {\\delta^i{}_j\\over z-z'}.}\n\n\n\\vskip .2in\\noindent{\\it Scattering Wavefunctions}\n\nWe would like to compute the scattering amplitudes of plane waves\n$\\phi(x)=\\exp\\,(i \\, p\\cdot x)=\\exp\\,(i \\, \\pi^a \\tilde\\pi^{\\dot\na} x_{a \\dot a}).$ These are wavefunctions of external particles\nwith definite momentum ${p^{a \\dot a}=\\pi^a \\tilde\\pi^{\\dot a}}.$\nThe twistor wavefunctions corresponding to plane waves are\n\\eqn{twav}{\\phi(\\lambda,\\mu,\\psi)= \\overline\\delta(\\vev{\\lambda,\\pi})\n\\exp(i [{\\tilde\\pi},\\mu])g(\\psi),} where $g(\\psi)$ encodes the\ndependence on fermionic coordinates. For a positive helicity gluon\n$g(\\psi)=1$ and for a negative helicity gluon\n$g(\\psi)=\\psi^1\\psi^2\\psi^3\\psi^4.$ Here, we have introduced the\nholomorphic delta function \\eqn{hold}{\\overline\\delta(f)=\\overline\\partial\n\\overline f \\delta^2(f),} which is a closed $(0,1)$ form. 
We normalize\nit so that for any function $f(z)$, we have \\eqn{delf}{\\int dz\n\\,\\overline\\delta(z-a)f(z)=f(a).}\n\nThe idea of \\rf{twav} is that the delta function\n$\\overline\\delta(\\vev{\\lambda,\\pi})$ sets $\\lambda^a$ equal to $\\pi^a.$\nThe Fourier transform of the exponential $\\exp(i\n[{\\tilde\\pi},\\mu])$ back into Minkowski space gives another delta\nfunction that sets $\\tilde\\lambda^{\\dot a}$ equal to\n$\\tilde\\pi^{\\dot a}.$ The twistor string computation with these\nwavefunctions directly gives momentum space scattering amplitudes.\n\nActually, the wavefunctions should be modified slightly so that\nthey are invariant under the scaling of the homogeneous\ncoordinates of $\\Bbb{CP}^{3|4}.$ From the basic properties of delta\nfunctions, it follows that $\\overline\\delta(\\vev{\\lambda,\\pi})$ is\nhomogeneous of degree $-1$ in both $\\lambda$ and $\\pi.$ Hence, for\npositive helicity gluons, the wavefunction is actually\n\\eqn{poh}{\\phi^+(\\lambda,\\mu)=\\overline\\delta(\\vev{\\lambda,\\pi})(\\lambda\/\\pi)\n\\exp \\big(i[{\\tilde\\pi},\\mu](\\pi\/\\lambda)\\big).} Here,\n$\\lambda\/\\pi$ is a well-defined holomorphic function, since\n$\\lambda$ is a multiple of $\\pi$ on the support of the delta\nfunction. The power of $(\\lambda\/\\pi)$ was chosen so that the\nwavefunction is homogeneous of degree zero under an overall scaling of\n$\\lambda,\\mu,\\psi.$ Under the scaling\n\\eqn{scals}{(\\pi,\\tilde\\pi)\\rightarrow (t\\pi,t^{-1}\\tilde\\pi),}\nthe wavefunction is homogeneous of degree $-2$ as expected for a\npositive helicity gluon \\rf{scalia}. For a negative helicity gluon,\nthe wavefunction is\n\\eqn{nev}{\\phi^-(\\lambda,\\mu)=\\overline\\delta(\\vev{\\lambda,\\pi})(\\pi\/\\lambda)^3\n\\exp\\big(i[\\tilde\\pi,\\mu](\\pi\/\\lambda)\\big)\n\\psi^1\\psi^2\\psi^3\\psi^4.} Under the scaling \\rf{scals}, the\nwavefunction is homogeneous of degree $+2$ as expected.
For\nwavefunctions of particles with helicity $h,$ there are similar\nformulas with $2-2h$ factors of $\\psi.$\n\n\\vskip .2in\\noindent{\\it MHV Amplitudes}\n\nWe saw in section \\ref{twist} that MHV amplitudes, after Fourier\ntransform into twistor space, localize on a genus zero, degree one\ncurve $C$, that is, a linearly embedded copy of $\\Bbb{CP}^1.$ Here we\nwill evaluate the degree one instanton contribution and confirm\nthat it gives the MHV amplitude.\n\nConsider the moduli space of such curves. Each curve $C$ can be\ndescribed by the equations \\eqn{dege}{\\mu^{\\dot a}=x^{a {\\dot a}}\n\\lambda_a \\qquad \\psi^A=\\theta^{Aa} \\lambda_a,} where $\\lambda^a$\nare the homogeneous coordinates and $x^{a\\dot a}$ and\n$\\theta^{Aa}$ are the moduli of $C$. The holomorphic measure on\nthe moduli space is \\eqn{meas}{d{\\cal M}=d^4x d^8\\theta.} Hence, the\nmoduli space has $4$ bosonic and $8$ fermionic dimensions.\n\nIn terms of the homogeneous coordinates $\\lambda^a,$ the current\ncorrelator \\rf{curc} becomes\n\\eqn{cunu}{\\vev{J_1(\\pi_1)J_2(\\pi_2)\\dots\nJ_n(\\pi_n)}={\\prod_i\\vev{\\lambda_i,d\\lambda_i}\\over\n\\vev{\\lambda_1,\\lambda_2}\\vev{\\lambda_2,\\lambda_3}\\dots \\vev{\\lambda_n,\\lambda_1}},} which we\nfound by setting $z_i=\\lambda^2_i\/\\lambda^1_i$. We stripped away the color\nfactors and kept only the contribution of the term with\n$1,2,\\dots, n$ cyclic order. We multiply this with the\nwavefunctions\n$\\phi_i(\\lambda,\\mu,\\psi)=\\overline\\delta(\\vev{\\lambda,\\pi_i})\\exp\\left(i[\\mu,\\tilde\\pi_i]\\right)\ng_i(\\psi)$ and integrate over the positions $\\lambda_i$ of the\nvertex operators.
We perform the integral over the positions of\nthe vertex operators using the formula \\eqn{homl}{\\int_{\\Bbb{CP}^1}\n\\vev{\\lambda,d\\lambda}\\ \\overline\\delta(\\vev{\\lambda,\\pi})f(\\lambda)=f(\\pi),} where $f(\\lambda)$\nis a homogeneous function of $\\lambda^a$ of degree $-1.$ This is the\nhomogeneous version of the definition of the holomorphic delta function\n\\eqn{hdef}{\\int_{\\Bbb C} dz\\ \\overline\\delta(z-b)f(z)=f(b).} Hence,\neach wavefunction contributes a factor of \\eqn{wco}{\\int_C\n\\vev{\\lambda,d\\lambda}\\phi_i=\n\\exp\\left(i[\\tilde\\pi_i,\\mu_i]\\right)g_i(\\psi_i),} where\n$\\mu_i^{\\dot a}=x^{a\\dot\na}\\lambda_{ia},\\psi^A_i=\\theta^{Aa}\\lambda_{ia}$ and the delta\nfunction sets $\\lambda^a_i=\\pi^a_i$ in the correlation function.\nSo the amplitude becomes \\eqn{ale}{{\\cal A}={1\\over \\prod_k\n\\vev{\\pi_k,\\pi_{k+1}}} \\int d^4x d^8\\theta\\ \\exp\\Big(i\\sum_k\n[\\tilde\\pi_k,\\mu_k]\\Big) \\prod_k g_k(\\psi_k).}
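The remaining $d^8\theta$ integral is a Berezin integral over the eight fermionic moduli $\theta^{Aa}$. It can be checked with a small hand-rolled Grassmann algebra (a sketch of ours, not a standard library): a Grassmann polynomial is stored as a dict mapping a sorted tuple of generator indices to its coefficient, with generator $2A+a$ standing for $\theta^{Aa}$, and the Berezin integral extracts the top coefficient up to a conventional overall sign from the ordering of the measure.

```python
# Check the Berezin integral over d^8 theta of the product of fermionic
# wavefunction factors, psi^A = theta^{Aa} pi_a, using integer spinors
# so the comparison is exact.

def gmul(p, q):
    """Product of two Grassmann polynomials with anticommuting generators."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            if set(m1) & set(m2):
                continue                       # theta^2 = 0
            arr = list(m1) + list(m2)
            sign = 1
            for i in range(len(arr)):          # bubble sort the indices,
                for j in range(len(arr) - 1):  # tracking the permutation sign
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0) + sign * c1 * c2
    return out

def psi(A, pi):
    """psi^A = theta^{Aa} pi_a, with lambda set to pi on the support."""
    return {(2 * A,): pi[0], (2 * A + 1,): pi[1]}

def angle(a, b):
    return a[0] * b[1] - a[1] * b[0]

pi_r, pi_s = (2, 3), (-1, 4)
prod = {(): 1}
for A in range(4):
    prod = gmul(prod, psi(A, pi_r))
for A in range(4):
    prod = gmul(prod, psi(A, pi_s))
top = prod.get(tuple(range(8)), 0)             # coefficient of the top monomial
```

Up to the sign convention of $d^8\theta$, the result reproduces the fourth power of the spinor bracket of the two negative helicity gluons.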
Since we are integrating over eight fermionic moduli\n$d^8\\theta,$ we get a nonzero contribution only for amplitudes with\nexactly two negative helicity gluons $r^-,s^-.$ Setting\n$\\psi^A=\\theta^{Aa}\\pi_a,$ the integral over the fermionic dimensions\nof the moduli space gives the numerator of the MHV amplitude\n\\eqn{fermi}{\\int d^8\\theta \\prod_{A=1}^4 \\psi_r^A\n\\prod_{B=1}^4\\psi_s^B= \\vev{r,s}^4.} Setting $\\mu^{\\dot\na}_i=x^{a\\dot a}\\pi_{ia},\\ i=1,\\dots, n,$ the integral over\nbosonic moduli gives the delta function of momentum conservation\n\\eqn{bosni}{\\int d^4x \\exp\\left(ix_{a\\dot\na}\\sum_i\\pi_i^a\\tilde\\pi^{\\dot a}_i\\right)=(2\\pi)^4\\delta^4(\\sum_{i=1}^n\n\\pi_i^a\\tilde\\pi_i^{\\dot a}).}\n\nCollecting the various pieces, we get the familiar MHV amplitude\n\\eqn{mhva}{{\\cal A}(r^-,s^-)={\\vev{r,s}^4 \\over \\prod_{i=1}^n\n\\vev{i,i+1}}(2\\pi)^4\\delta^4(\\sum_{i=1}^n \\pi_i\\tilde\\pi_i).}\n\n\n\n\\subsection{Higher Degree Instantons}\n\\label{highd}\n\n\\vskip .15in\\noindent{\\it Instanton Measure}\n\nHere we will construct the measure on the moduli space of genus\nzero degree $d$ curves.
Such curves can be described as degree $d$\nmaps from an abstract $\\Bbb{CP}^1$ with homogeneous coordinates $(u,v)$\n\\eqalign{didi}{Z^I&=&P^I(u,v) \\cr \\psi^A&=&\\chi^A(u,v).} Here\n$P^I,\\chi^A$ are homogeneous polynomials of degree $d$ in $u,v.$\nThe space of homogeneous polynomials of degree $d$ is a linear\nspace of dimension $d+1,$ spanned by $u^d,u^{d-1}v,\\dots, v^d.$\nPicking a basis $b^\\alpha(u,v),\\alpha=1,\\dots,d+1$, we write\n\\eqalign{expa}{P^I&=&\\sum_\\alpha P_\\alpha^I\\ b^\\alpha\\cr \\psi^A&=&\n\\sum_\\alpha \\chi^A_\\alpha\\ b^\\alpha.} A natural measure is\n\\eqn{namo}{d{\\cal M}_0=\\prod_{\\alpha=1}^{d+1}\\prod_{A,I=1}^4\ndP^I_\\alpha\\ d\\chi^A_\\alpha.} This measure is invariant under a\ngeneral $Gl(d+1,\\Bbb C)$ transformation of the basis $b_\\alpha.$\nSince the number of bosonic and fermionic coordinates is the same,\nthe Jacobians cancel between fermionic and bosonic parts of the\nmeasure. The description \\rf{didi} is redundant: we need to divide\nby the $\\Bbb C^\\ast$ action that rescales $P^I$ and $\\chi^A$ by a\ncommon factor. This reduces the space of curves from $\\Bbb\nC^{4d+4|4d+4}$ to $\\Bbb{CP}^{4d+3|4d+4}.$ The curve $C$ also stays\ninvariant under an $Sl(2,\\Bbb C)$ transformation on $(u,v),$ so the\nactual moduli space of genus zero degree $d$ curves is\n\\eqn{modsp}{{\\cal M}=\\Bbb{CP}^{4d+3|4d+4}\/Sl(2,\\Bbb C).} As $d{\\cal M}_0$\nis $Gl(2,\\Bbb C)$ invariant, it descends to a holomorphic measure\n\\eqn{holm}{d{\\cal M}={d{\\cal M}_0\\over Gl(2,\\Bbb C)}} on ${\\cal M}.$ Hence,\n${\\cal M}$ is a Calabi-Yau supermanifold of dimension $(4d|4d+4).$\n\nWe can now understand why amplitudes with different helicities\ncome from holomorphic curves of different degrees. Integrating\nover the moduli space, the measure absorbs $4d+4$ fermion zero\nmodes. These come from the fermionic factors $g(\\psi)$ in the\nwavefunctions of the gluons \\rf{twav}.
A positive helicity gluon\ndoes not contribute any zero modes, while a negative helicity gluon\n$g^-(\\psi)=\\psi^1\\psi^2\\psi^3\\psi^4$ gives $4$ zero modes. Hence,\ninstantons of degree $d$ contribute to amplitudes with $d+1$\nnegative helicity gluons.\n\nAlternatively, we can get this from counting the $S$ charge\nanomaly. Wavefunctions of particles with different helicities\nviolate $S$ by different amounts. The positive helicity gluons do\nnot violate $S$ while the negative helicity gluons violate $S$ by\n$-4$ units. So, an amplitude with $p$ positive helicity gluons\nand $q$ negative helicity gluons violates the $S$ charge by $-4q$\nunits.\n\nIn the twistor string, there is a source of violation of $S$ from\nthe instanton measure. Since the $S$ charge of $Z$ and $\\psi$ is\n$0$ and $1$ respectively, the charges of the coefficients\n$P^I_\\alpha, \\chi^A_\\alpha$ are $0,1.$ Hence, the differentials\n$dP^I_\\alpha, d\\chi^A_\\alpha$ have charges $0,-1$ and the $S$\ncharge of the $(4d|4d+4)$ dimensional measure $d{\\cal M}$ is $-4d-4.$\n\nSo an instanton can contribute to an amplitude with $q$ negative\nhelicity gluons if and only if \\eqn{reld}{d=q-1.} This is the\nfamiliar formula discussed in subsection \\ref{twist}. For $l$\nloop amplitudes, this relation generalizes to $d=q-1+l.$\n\n \\vskip\n.15in\\noindent{\\it Evaluating the Instanton Contribution}\n\nHere we consider the connected instanton contribution along the\nlines of the calculation of the MHV amplitude. The amplitude is\n\\cite{Roiban:2004yf,Roiban:2004vt,Witten:2004cp}\n\\eqalign{wop}{{\\cal A}&=&\\int d{\\cal M}_d \\prod_i\\int_C {\\vev{u_i,du_i}\n\\over \\prod_k \\vev{u_k,u_{k+1}}}\\overline\\delta(\\vev{\\lambda(u_i),\\pi_i})\n\\exp\\left( i[\\mu(u_i),\\tilde\\pi_i]\\right) g_i(\\psi_i).} Here\n$d{\\cal M}_d$ is the measure on the moduli space of genus zero degree\n$d$ curves.
Next comes the correlator of currents on the\nworldvolume of the D1-instanton and the wavefunctions in which we\nuse the parameterization $\\lambda^a_i(u_i)=P^a(u_i), \\mu^{\\dot\na}(u_i)=P^{\\dot a}(u_i).$\n\nThis is not really an integral. The integral over the $2d+2$\nparameters $P^{\\dot a}_\\alpha$ gives $2d+2$ delta functions\nbecause $P^{\\dot a}$ appear only in the exponential\n$\\exp\\left(\\sum_i P(u_i)_{\\dot a}\\tilde\\pi_i^{\\dot a}\\right)$.\nHence, we are left with an integral over $4d-(2d+2)+2n=2d+2n-2$\nbosonic variables. Here the $2n$ integrals come from the\nintegration over the positions of the vertex operators. Now there\nare $2n$ delta functions from the wavefunctions since each\nholomorphic delta function is really a product of two real delta\nfunctions $\\overline\\delta(z)=d\\overline z\\ \\delta^2(z)$, and $2d+2$ delta\nfunctions from the integral over the exponentials, which gives a\ntotal of $2d+2n+2$. There are four more delta functions than\nintegration variables. The four extra delta functions impose\nmomentum conservation. Hence, the delta functions localize the\nintegral to a sum of contributions from a finite number of points\non the moduli space.\n\n\n\\vskip .15in\\noindent{\\it Parity Invariance}\n\nIn the helicity formalism, the parity symmetry of Yang-Mills\nscattering amplitudes is apparent. The parity changes the signs of\nthe helicities of the gluons. The parity conjugate amplitude can\nbe obtained by simply exchanging $\\lambda_i$'s with $\\tilde\\lambda_i$'s.\n\nTo go to twistor space, one Fourier transforms with respect to\n$\\tilde\\lambda_i$, which breaks the symmetry between $\\lambda$ and $\\tilde\\lambda.$ Indeed,\nthe result \\rf{wop} for the scattering amplitude treats $\\lambda$ and\n$\\tilde\\lambda$ asymmetrically. 
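The degree and delta-function bookkeeping used throughout is simple enough to check mechanically. The following sketch (plain Python; it merely re-implements the counting of the previous paragraphs, with no twistor machinery assumed) reproduces the selection rule $d=q-1+l$ and the excess of four bosonic delta functions:

```python
def instanton_degree(q_minus, loops=0):
    """Selection rule from the S-charge anomaly: the measure dM supplies
    4d + 4 fermion zero modes and each negative helicity gluon absorbs 4,
    so 4*q_minus = 4*d + 4 at tree level, i.e. d = q_minus - 1 + l."""
    return q_minus - 1 + loops

def delta_function_excess(d, n):
    """Connected degree-d instanton contribution to an n-gluon tree
    amplitude: number of delta functions minus remaining bosonic integrals."""
    integrals = (4 * d - (2 * d + 2)) + 2 * n  # moduli minus P^{dot a} integrals, plus vertex positions
    deltas = 2 * n + (2 * d + 2)               # wavefunction deltas plus deltas from the exponentials
    return deltas - integrals

assert instanton_degree(2) == 1   # MHV amplitudes: degree one curves
assert instanton_degree(3) == 2   # next-to-MHV: connected degree two
assert all(delta_function_excess(d, n) == 4 for d in range(1, 6) for n in range(4, 10))
```

The excess of four is independent of $d$ and $n$, as it must be, since it is the overall momentum-conservation delta function.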
An amplitude with $p$ positive helicities\nand $q$ negative helicities has contribution from instantons of\ndegree $q-1$, while the parity conjugate amplitude with $q$ gluons\nof positive helicity and $p$ gluons of negative helicity has\ncontribution from instantons of degree $p-1.$ To show that these\ntwo are related by an exchange of $\lambda_i$ and $\tilde\lambda_i$ requires some\namount of work. We refer the interested reader to the original\nliterature \cite{Roiban:2004yf,Roiban:2004vt,\nWitten:2004cp,Berkovits:2004tx}.\n\n\n\vskip .15in\noindent{\it Localization on the Moduli Space}\n\nRecall that a tree level amplitude with $q$ negative helicity\ngluons and an arbitrary number of positive helicity gluons receives\ncontribution from instantons wrapping holomorphic curves of degree\n$d=q-1.$ The degree $d$ instanton can consist of several disjoint\nlower degree instantons whose degrees add up to $d.$ In the\ndisconnected contributions, the instantons are connected by\nopen strings.\n\begin{figure}\n \centering\n \includegraphics[height=1.6in]{con_discon.eps}\n \caption{An amplitude with three negative helicity gluons has contribution\n from two configurations: (a) Connected $d=2$ instanton. (b) Two disjoint\n $d=1$ instantons. The dashed line represents an open string\n connecting the instantons. }\n \label{coni}\n\end{figure}\nA priori, one expects that the amplitude receives contributions\nfrom all possible instanton configurations with total degree\n$q-1.$ So for example an amplitude with three negative helicity\ngluons has contribution from a connected $d=2$ instanton and a\ncontribution from two disjoint $d=1$ instantons, fig. \ref{coni}.\n\nWhat one actually finds is that the connected and disconnected\ninstanton contributions reproduce the whole amplitude {\it\nseparately}. For example, in the case of the amplitude with three\nnegative helicity gluons, it seems that there are two different\nways to compute the same amplitude. 
One can either evaluate it\nfrom the connected $d=2$ instantons\n\cite{Roiban:2004yf,Roiban:2004vt}, see fig. \ref{coni}(a).\nAlternatively, the amplitude comes from evaluating the\ncontribution of the two disjoint $d=1$ instantons\n\cite{Cachazo:2004kj}, fig. \ref{coni}(b).\n\nWe can explain the equality of various instanton contributions\nroughly as follows \cite{Gukov:2004ei}. Consider the connected\ncontribution. The amplitude is expressed as a `contour' integral\nover a middle-dimensional Lagrangian cycle in the moduli space of\ndegree two curves. The integrand comes from the correlation\nfunction on the worldvolume of the D-instanton and from the\nmeasure on the moduli space. It has poles in the region of the\nmoduli space where the instanton degenerates to two intersecting\ninstantons of lower degrees $d_1+d_2=d,$ fig. \ref{locali}.\nPicking a contour that encircles the pole, the integral localizes\nto an integral over the moduli space ${\cal M}'$ of the intersecting\nlower degree curves.\n\nSimilarly, the disconnected contribution has a pole when the two\nends of the propagator coincide. This comes from the pole of the\nopen string propagator \eqn{opro}{\overline\partial G={\overline\n\delta}^3(Z'{}^I-Z^I) \delta^4(\psi'{}^A-\psi^A).} Hence, the\nintegral over disjoint instantons also localizes on the moduli\nspace of intersecting instantons. It can be shown that the\nlocalized integrals coming from either connected or disconnected\ninstanton configurations agree \cite{Gukov:2004ei}, which explains\nwhy the connected and disconnected instanton calculations give the\nentire scattering amplitude separately.\n\n\n\vskip .2in \noindent{\it Towards MHV Diagrams}\n\nStarting with a higher degree instanton contribution, successive\nlocalization reduces the integral to the moduli space of\nintersecting degree one curves. 
As we will review below, this\nintegral can be evaluated leading to a combinatorial prescription\nfor the scattering amplitudes \cite{Cachazo:2004kj}. Indeed,\ndegree one instantons give MHV amplitudes, so the localization of\nthe moduli integral leads to a diagrammatic construction based on\na suitable generalization of the MHV amplitudes.\n\n\n\begin{figure}\n \centering\n \includegraphics[height=1.5in]{degree_two.eps}\n \caption{Localization of the connected instanton contribution to the next-to-MHV amplitude:\n (a) the integral over the moduli space of connected degree two\n curves localizes to an integral over the degenerate\n curves of (b), that is, intersecting complex lines.\n In the figure, we draw the real section of the curves.}\n \label{locali}\n\end{figure}\n\n\n\subsection{MHV Diagrams}\n\nIn this subsection, we start with a motivation of the MHV diagram\nconstruction of amplitudes from basic properties of the twistor\ncorrespondence. We then go on to discuss simple examples and\nextensions to loop amplitudes. In the next subsection, we give a\nheuristic derivation of the MHV rules from twistor string theory.\n\nRecall that MHV scattering amplitudes are supported on $\Bbb\n{CP}^1$'s in twistor space. Each such $\Bbb {CP}^1$ can be\nassociated to a point $x^{a\dot a}$ in Minkowski space\footnote{We\nare being slightly imprecise here. 
The space of $\Bbb {CP}^1$'s is\nactually a copy of the complexified Minkowski space $\Bbb C^4.$\nThe Minkowski space $\Bbb R^{3,1}$ corresponds to $\Bbb {CP}^1$'s\nthat lie entirely in the 'null twistor space', defined by\nvanishing of the pseudo-hermitian norm $Q(\lambda,\mu)=i(\lambda^a\n\overline\mu_a-\overline\lambda^{\dot a}\mu_{\dot a}).$ Indeed, for a $\Bbb\n{CP}^1$ corresponding to a point in Minkowski space,\n$x^{a\dot a}$ is a hermitian matrix, hence it follows from\n\rf{cpone} that $Q$ vanishes.} \eqn{cpone}{\mu_{\dot a}+x_{a\dot\na}\lambda^a=0.} So, in a sense, we can think of MHV amplitudes as\nlocal interaction vertices \cite{Cachazo:2004kj}. To take this\nanalogy further, we can try to build more complicated amplitudes\nfrom Feynman diagrams with vertices that are suitable off-shell\ncontinuations of the MHV amplitudes, fig. \ref{localize}. MHV\namplitudes are functions of holomorphic spinors $\lambda_i$ only.\nHence, to use them as vertices in Feynman diagrams, we need to\ndefine $\lambda$ for internal off-shell momenta $p^2\neq0.$\n\n\begin{figure}\n \centering\n \includegraphics[height=1.5in]{localize.eps}\n \caption{Two representations of a degree three MHV diagram. 
(a) In\n Minkowski space, the MHV vertices are represented by points.\n (b) In twistor space, each MHV vertex corresponds to a line.\n The three lines pairwise intersect.}\n \label{localize}\n\end{figure}\n\nTo motivate the off-shell continuation, notice that for on-shell\nmomentum $p^{a\dot a}=\lambda^a\tilde\lambda^{\dot a}$, we can\nextract the holomorphic spinors $\lambda$ from the momentum $p$ by\npicking an arbitrary anti-holomorphic spinor $\eta^{\dot a}$ and\ncontracting it with $p^{a\dot a}.$ This gives $\lambda^a$ up to a\nscalar factor \eqn{lax}{\lambda^a= {p^{a\dot a}\eta_{\dot a}\over\n[\tilde\lambda, \eta]}.} For off-shell momenta, this strategy\nalmost works except for the factor $[\tilde\lambda,\eta]$ in the\ndenominator which depends on the undefined spinor $\tilde\lambda$.\nFortunately, $[\tilde\lambda, \eta]$ scales out of Feynman\ndiagrams, so we take as our definition\n\eqn{lambdaoffshell}{\lambda^a= p^{a\dot a}\eta_{\dot a}.} This is\nclearly well-defined for off-shell momenta. We complete the\ndefinition of the MHV rules by taking $1\/k^2$ for the propagator\nconnecting the MHV vertices.\n\nConsider an MHV diagram with $v$ vertices. Each vertex gives two\nnegative helicity gluons. To make a connected tree level graph,\nthe vertices are connected with $v-1$ propagators. The propagators\nabsorb $v-1$ negative helicities, leaving $v+1$ negative helicity\nexternal gluons. Hence, to find all MHV graphs contributing to a\ngiven amplitude, draw all possible tree graphs of $v$ vertices and\n$v-1$ links assigning opposite helicities to the two ends of\ninternal lines. The external gluons are distributed among the\nvertices while preserving cyclic ordering. 
MHV graphs are those\nfor which each vertex has two negative helicity gluons emanating\nfrom it.\n\nFor further work on the MHV vertex construction of tree-level gluon\namplitudes, see\n\cite{Bena:2004ry,Kosower:2004yz,Wu:2004fb,Zhu:2004kr,Birthwright:2005ak}.\nMHV vertices have many generalizations; in particular, to\namplitudes with fermions and scalars\n\cite{Georgiou:2004wu,Georgiou:2004by,Khoze:2004ba,Wu:2004jx,Su:2004ym},\nwith Higgses \cite{Dixon:2004za,Badger:2004ty} and with\nelectroweak vector-boson currents \cite{Bern:2004ba}. For an\nattempt to generalize MHV vertices to gravity, see\n\cite{Giombi:2004ix,Nair:2005iv,Abe:2005se}.\n\n \vskip .2in \noindent{\it Examples}\n\nHere we discuss concrete amplitudes to illustrate the method.\nConsider first the $+---$ gluon amplitude. This amplitude vanishes\nin Yang-Mills theory. It has contribution from two diagrams, see\nfig. \ref{pmmm}.\n\n\begin{figure}\n \centering\n \includegraphics[height=1.8in]{pmmm.eps}\n \caption{MHV diagrams contributing to the $+---$\n amplitude, which is expected to vanish.}\n \label{pmmm}\n\end{figure}\n\nThe first of the two diagrams gives\n\eqn{firstd}{{\vev{2,\lambda}^4 \over \vev{1,2} \vev{2,\lambda}\n\vev{\lambda, 1}} {1 \over p^2} {\vev{3,4}^4 \over \vev{3,4}\n\vev{4,\lambda} \vev{\lambda, 3}},} where we associate to the\ninternal momentum $p=p_1+p_2=-p_3-p_4$ the holomorphic spinor\n\eqn{hof}{\lambda^a=p^{a\dot a}\eta_{\dot a}=(p_1+p_2)^{a {\dot\na}}\eta_{\dot a}.} The second diagram can be obtained from the\nfirst by exchanging particles $2$ and $4$\n\eqn{secd}{{\vev{\lambda', 4}^4\over\vev{1,\n\lambda'}\vev{\lambda', 4}\vev{4, 1}}{1\over\np'^2}{\vev{2,3}^4\over\vev{2,3}\vev{3,\lambda'}\vev{\lambda',2}},}\nwhere $\lambda'^a=(p_1+p_4)^{a\dot a}\eta_{\dot a}.$ Denoting\n$\phi_i=\tilde\lambda_i^{\dot a}\eta_{\dot a},$ the first and second\ndiagrams give respectively 
\\eqn{firg}{-{\\phi_1^3\\over\n\\phi_2\\phi_3\\phi_4}{\\vev{34}\\over [21]}-{\\phi^3_1\\over\n\\phi_2\\phi_3\\phi_4}{\\vev{32}\\over[41]}.} The sum of these\ncontributions vanishes, because momentum conservation implies\n$\\vev{32}[21]+\\vev{34}[41]=\\sum_i \\vev{3i}[i1]=0.$\n\nIt is easy to compute more complicated amplitudes. For example,\nthe $n$ gluon $---++\\dots++$ amplitude is a sum of $2(n-3)$ MHV\ndiagrams, which can be obtained from fig. \\ref{pmmm} by adding\nadditional $+$ helicities on the MHV vertices. The diagrams can be\nevaluated to give \\eqalign{nmhv}{ A&=&\\sum_{i=3}^{n-1} {\\vev{1\n\\lambda_{2,i}}^3 \\over \\vev{\\lambda_{2,i} i+1} \\vev{i+1 i+2} \\dots\n\\vev{n1}}{1 \\over q_{2i}^2} {\\vev{23}^3 \\over \\vev{\\lambda_{2,i}\n2} \\vev{34}\\dots \\vev{i\\lambda_{2,i}}} \\cr &+&\\sum_{i=4}^n\n{\\vev{12}^3 \\over \\vev{2\\lambda_{3,i}} \\vev{\\lambda_{3,i}i+1}\n\\dots \\vev{n1}} {1\\over q_{3i}^2} {\\vev{\\lambda_{3,i} 3}^3 \\over\n\\vev{3,4} \\dots \\vev{i-1 i}\\vev{i\\lambda_{3,i}},}} where\n$q_{ij}=p_i+p_{i+1}+\\dots+p_j$ and the corresponding spinor\n$\\lambda_{i,j}^a$ is defined in the usual way\n$\\lambda^a_{i,j}=q_{ij}^{a\\dot a}\\ \\eta_{\\dot a}.$\n\n\n\\vskip .2in\\noindent{\\it Loop Amplitudes}\n\nSimilarly, one can compute loop amplitudes using MHV diagrams.\nThis has been carried out for the one loop MHV amplitude in\n${\\cal N}=4$ \\cite{Brandhuber:2004yw} and ${\\cal N}=1$\n\\cite{Quigley:2004pw,Bedford:2004py} Yang-Mills theory, in\nagreement with the known answers.\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=1.2in]{mhv_loop.eps}\n \\caption{Schematic representation of a hypothetical twistor string computation of\n one-loop MHV amplitude. 
The picture shows a diagram in which the\n negative helicity gluons $i^-,j^-$ are on the same MHV vertex.}\n \\label{mhv_loop}\n\\end{figure}\n\nThe expression for an MHV diagram contributing to the one-loop\nMHV amplitude is just what one would expect for a one-loop Feynman\ndiagram with MHV vertices, fig. \\ref{mhv_loop}. There are two MHV\nvertices, each coming with two negative helicity gluons. The\nvertices are connected with two Feynman propagators that absorb\ntwo negative helicities, leaving two negative helicity external\ngluons \\eqn{ampli}{{\\cal A}^{loop}=\\sum_{{\\cal D}, h} \\int\n{d^{4-2\\epsilon} p\\over (2\\pi)^4} {\\cal A}_L(\\lambda_k,\n\\lambda_p,\\lambda_{p-p_L}){1\\over\np^2(p-p_L)^2}{\\cal A}_R(\\lambda_k,\\lambda_p,\\lambda_{p-p_L}).} The off-shell\nspinors entering the MHV amplitudes ${\\cal A}_L, {\\cal A}_R$ are determined\nin terms of the momenta of the internal lines\n\\eqn{ofsh}{\\lambda_p^a=p^{a\\dot a}\\eta_{\\dot a},\\qquad\n\\lambda_{p-p_L}^a=(p-p_L)^{a\\dot a}\\eta_{\\dot a},} which is the same\nprescription as for tree level MHV diagrams. The sum in \\rf{ampli}\nis over partitions ${\\cal D}$ of the gluons among the two MHV\ndiagrams that preserve the cyclic order and over the helicities of\nthe internal particles\\footnote{Similarly, the double-trace\ncontribution to one-loop MHV amplitudes comes from Feynman\ndiagrams with double-trace MHV vertices\n\\cite{Luo:2004ss,Luo:2004nw}.}.\n\nThis calculation makes the twistor structure of one-loop MHV\namplitudes manifest. The two MHV vertices are supported on lines\nin twistor space, so the amplitude is a sum of contributions, each\nof which is supported on a disjoint union two lines. In a\nhypothetical twistor string theory computation of the amplitude,\nthese two lines are connected by open string propagators, see fig.\n\\ref{loop_twistor}. 
This picture agrees with studies of the\ntwistor structure using differential equations\n\cite{Cachazo:2004zb}, after taking into account the holomorphic\nanomaly of the differential equations\n\cite{Cachazo:2004by,Bena:2004xu}.\n\nFinally, we make a few remarks about the nonsupersymmetric\none-loop MHV amplitudes. The ${\cal N}=0$ MHV amplitudes are sums of\ncut-constructible terms and rational terms. The cut-constructible\nterms are correctly reproduced from MHV diagrams\n\cite{Bedford:2004nh}. The rational terms are single-valued\nfunctions of the spinors, hence they are free of cuts in four\ndimensions. Their twistor structure suggests that they receive\ncontribution from diagrams in which, alongside MHV vertices,\nthere are new one-loop vertices coming from one-loop all-plus\nhelicity amplitudes \cite{Cachazo:2004zb}. However, a suitable\noff-shell continuation of the one-loop all-plus amplitude has not\nbeen found yet. There has been recent progress in computing the\nrational part of some one-loop QCD amplitudes using a\ngeneralization \cite{Bern:2005hs} of the tree level recursion\nrelations reviewed in section \ref{perturb}.\n\n\n\begin{figure}\n \centering\n \includegraphics[height=1.4in]{loop_twistor.eps}\n \caption{Twistor space structure of the one-loop MHV amplitude.\n The two MHV vertices are represented by lines. In a hypothetical\n twistor string computation of the amplitude, the lines are\n connected by two twistor propagators to make a loop.}\n \label{loop_twistor}\n\end{figure}\n\n\n\subsection{Heuristic Derivation of MHV Diagrams from Twistor String Theory}\n\nHere, we will analyze the disconnected twistor\ndiagrams that contribute to tree level amplitudes\footnote{For an\nattempt to derive MHV rules from ${\cal N}=4$ superspace constraints,\nsee \cite{Abe:2004ep}.}. 
We will evaluate the twistor string\namplitude corresponding to the twistor contribution of fig.\n\ref{thret} and show how it leads to the MHV diagrammatic rules of\nthe last subsection.\n\n\nThe physical field of the open string B-model is a $(0,1)$-form\n${\cal A}$ with kinetic operator $\overline\partial$ coming from the\nChern-Simons action. The twistor propagator for ${\cal A}$ is a\n$(0,2)$-form on $\Bbb {CP}^3\times \Bbb{CP}^3$ that is a\n$(0,1)$-form on each copy of $\Bbb{CP}^3.$ The propagator obeys\nthe equation \eqn{opror}{\overline\partial G={\overline\n\delta}^3(Z_2^I-Z_1^I) \delta^4(\psi_2^A-\psi_1^A).} Here,\n$\overline\delta(z)=d{\overline z} \delta(z)\delta({\overline z})$ is the\nholomorphic delta function $(0,1)$-form.\n\nIn an axial gauge, the twistor propagator becomes\n\eqn{tiwp}{G=\overline\delta(\lambda_2^2-\lambda_1^2)\overline\delta(\mu_2^{\dot\n1}-\mu_1^{\dot 1}){1\over \mu_2^{\dot 2}-\mu_1^{\dot\n2}}\prod_{A=1}^4(\psi_2^A-\psi_1^A),} where we set\n$\lambda_1^1=\lambda_2^1=1.$\n\nFor simplicity, we evaluate the contribution from two degree-one\ninstantons $C_1$ and $C_2$ connected by a twistor propagator. This\nconfiguration contributes to amplitudes with three negative\nhelicity gluons. 
The instantons $C_i, i=1,2$ are described by the\nequations \eqn{insto}{\mu^{\dot a}_k=x_i^{a {\dot a}}\n\lambda_{ka}, \qquad \psi_k^A=\theta^{Aa}_i\lambda_{ka}\qquad\ni=1,2,\ k=1,\dots, n.} Here, $x^{a {\dot a}}_i$ and\n$\theta^{Aa}_i$ are the bosonic and fermionic moduli of $C_i$.\n\nWith our choice of gauge, the twistor propagator is supported on\npoints such that $\lambda^a_1=\lambda^a_2.$ Since $\mu^{\dot\na}_2-\mu^{\dot a}_1=y^{a\dot a}\lambda_a,$ where $y^{a\dot\na}=x^{a\dot a}_2-x^{a\dot a}_1,$ the condition $\mu^{\dot\n1}_2-\mu^{\dot 1}_1=0$ implies $\lambda^a=y^{a\dot 1}.$ Hence, the\nbosonic part of the propagator gives $1\/(\mu^{\dot 2}_2-\mu^{\dot\n2}_1)=1\/y^2.$\n\n\begin{figure}\n \centering\n \includegraphics[height=1.2in]{instantons.eps}\n \caption{Twistor string contribution to an amplitude with three negative\n helicity external gluons.\n Two disconnected degree one instantons are connected by an open string.}\n \label{thret}\n\end{figure}\n\nThe correlators of the gluon vertex operators on $C_1$ and $C_2$\nand the integral over $\theta^{Aa}_i$ give two MHV amplitudes\n${\cal A}_L$ and ${\cal A}_R$ as explained in the $d=1$ computation. So we\nare left with the integral \eqn{dtwoi}{\int d^4x_1 d^4x_2 {\cal A}_L\n{1\over (x_2-x_1)^2} {\cal A}_R \prod_{i \in L} \exp(i x_1 \cdot p_i)\n\prod_{j \in R} \exp(i x_2 \cdot p_j),} where the integral is over\na suitably chosen $4\times4$ real dimensional `contour' in the\nmoduli space $\Bbb C^4\times \Bbb C^4$ of two degree one curves.\nWe rewrite the exponential as \eqn{expo}{\exp(i y \cdot P)\n\prod_{i \in L,R} \exp(i x \cdot p_i),} where $x\equiv x_1$ and\n$P=\sum_{i \in R} p_i$ is the momentum of the off-shell line\nconnecting the two vertices. The integral \eqn{dem}{\int d^4x\n\prod_{i \in L,R} \exp(i x \cdot p_i)=(2\pi)^4\delta^4(\sum_i\np_i)} gives the delta function of momentum conservation. 
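As an aside, momentum conservation in spinor variables is easy to exhibit numerically; the same identity, $\sum_i \vev{3\,i}[i\,1]=0,$ was responsible for the vanishing of the $+---$ amplitude above. A quick check (assumptions: complex momenta, so that the $\lambda_i$ and $\tilde\lambda_i$ are independent spinors; $\vev{ij}$ and $[ij]$ are the antisymmetric spinor products):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
lam = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))

# Solve sum_i lam[i]^a lamt[i]^adot = 0 for the anti-holomorphic spinors:
# 4 linear equations in the 8 unknowns lamt[i, adot] (complex momenta).
M = np.zeros((4, 2 * n), dtype=complex)
for a in range(2):
    for ad in range(2):
        for i in range(n):
            M[2 * a + ad, 2 * i + ad] = lam[i, a]
lamt = np.linalg.svd(M)[2][-1].conj().reshape(n, 2)  # a null vector of M

angle = lambda i, j: lam[i, 0] * lam[j, 1] - lam[i, 1] * lam[j, 0]       # <i j>
square = lambda i, j: lamt[i, 0] * lamt[j, 1] - lamt[i, 1] * lamt[j, 0]  # [i j]

# Total momentum vanishes, hence so does sum_i <3 i>[i 1]
# (particle labels are 1-based in the text, 0-based here).
P = sum(np.outer(lam[i], lamt[i]) for i in range(n))
assert np.allclose(P, 0)
assert abs(sum(angle(2, i) * square(i, 0) for i in range(n))) < 1e-10
```

Here the $\tilde\lambda_i$ are obtained as a null vector of the linear system $\sum_i\lambda_i^a\tilde\lambda_i^{\dot a}=0,$ so momentum conservation holds by construction.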
We are\nleft with \\eqn{abem}{A= \\int d^4 y{1 \\over y^2} \\exp(i y \\cdot P)\n{\\cal A}_L{\\cal A}_R.} The integrand has a pole at $y^2=0,$ which is the\ncondition for the curves $C_1$ and $C_2$ to intersect. The space\n$y^2=0$ is the familiar conifold. It is a cone over $\\Bbb\n{CP}^1\\times \\Bbb {CP}^1$ so we parameterize it as \\eqn{cop}{y^{a\n{\\dot a}}=t \\lambda^a \\tilde\\lambda^{\\dot a}.} Here $\\lambda^a\\in\n{\\cal O}(1,0),\\, \\tilde\\lambda^{\\dot a} \\in {\\cal O}(0,1),$ hence\n$t\\in {\\cal O}(-1,-1)$ so that \\rf{cop} is well-defined. We choose\na contour that picks the residue at $y^2=0.$ The residue is the\nvolume form on the conifold \\eqn{refa}{ {\\rm Res} \\, {d^4 y \\over\ny^2} = t dt \\vev{\\lambda, d\\lambda} [\\tilde\\lambda,\nd\\tilde\\lambda].} Taking the residue, the integral becomes\n\\eqn{aboc}{ I= \\int t dt\\vev{\\lambda, d\\lambda} [\\tilde\\lambda,\nd\\tilde\\lambda] \\exp(i t P_{a\\dot a}\\lambda^a \\tilde\\lambda^{\\dot\na}){\\cal A}_L {\\cal A}_R,} where the MHV vertices depend on the holomorphic\nspinor $\\lambda$ only. We pick the contour $t\\in(-\\infty,\\infty)$\nand ${\\tilde\\lambda=\\overline\\lambda},$ that is, we integrate over the\nreal light-cone. 
For ${t\\in(0,\\pm\\infty)}$ we regulate the\nintegral with the prescription ${P=(p^0\\pm i\\epsilon,\\vec{p})},$\nso \\eqn{princ}{{\\int_{-\\infty}^\\infty t dt \\exp(i tP_{a\\dot\na}\\lambda^a \\tilde\\lambda^{\\dot a})=-{2\\over (P_{a\\dot a}\\lambda^a\n\\tilde\\lambda^{\\dot a})^2}}.} Hence we have \\eqn{next}{{I= \\int\n\\vev{\\lambda, d\\lambda} [\\tilde\\lambda, d\\tilde\\lambda] {1 \\over\n(P \\lambda \\tilde\\lambda)^2} {\\cal A}_L{\\cal A}_R(\\lambda)}.} To reduce the\nintegral \\rf{next} to a sum over MHV diagrams, we use the identity\n\\eqn{rein}{{ {[\\tilde\\lambda,d\\tilde\\lambda] \\over (P \\lambda\n\\tilde\\lambda )^2}= -{1\\over P\\lambda \\eta} \\, \\overline\\partial \\left(\n{[\\tilde\\lambda, \\eta] \\over P \\lambda \\tilde\\lambda } \\right)},} where\n$\\eta^{\\dot a}$ is an arbitrary positive helicity spinor to write\nthe integral as \\eqn{ipre}{{I=\\int \\vev{\\lambda ,d\\lambda} \\,\n{{\\cal A}_L {\\cal A}_R \\over (P\\lambda \\eta)} \\overline \\partial \\left(\n{[\\tilde\\lambda, \\eta] \\over (P \\lambda \\tilde\\lambda)} \\right)}.}\nNow we can integrate by parts. The $\\overline\\partial$ operator acting\non the holomorphic function on the left gives zero except for\ncontributions coming from poles of the holomorphic function,\n$\\overline\\partial\\left(1\/z\\right)=\\overline\\delta(z).$ These evaluate to a\nsum over residues \\eqn{byp}{{I=\\sum {\\rm Res} \\, \\left(\n{{\\cal A}_L{\\cal A}_R \\over P\\lambda \\eta} \\right) {[\\tilde\\lambda, \\eta]\n\\over P \\lambda \\tilde\\lambda}}.} The residues of\n${1\/(P\\lambda\\eta)}$ are at \\eqn{pol}{\\lambda^a=P^{a\\dot\na}\\eta_{\\dot a}.} Substituting this back into \\rf{byp},\n$P\\lambda\\tilde\\lambda$ evaluates to $P^2 [\\tilde\\lambda,\\eta],$ so we\nhave \\eqn{mhve}{{I={1 \\over P^2} {\\cal A}_L{\\cal A}_R(\\lambda=P\\eta)}.} But\nthis is precisely the contribution from an MHV diagram. 
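As a numerical sanity check of the prescription $\lambda^a=P^{a\dot a}\eta_{\dot a}$ (a sketch in simplified conventions: indices are contracted by plain matrix products, without $\epsilon$-tensors), when $P$ is on shell the prescription returns $\lambda$ up to a scale factor that drops out of the diagrams, and it remains well-defined off shell:

```python
import numpy as np

rng = np.random.default_rng(1)

def off_shell_spinor(P, eta):
    """lambda^a = P^{a adot} eta_{adot}, simplified index conventions."""
    return P @ eta

lam  = rng.normal(size=2) + 1j * rng.normal(size=2)   # holomorphic spinor
lamt = rng.normal(size=2) + 1j * rng.normal(size=2)   # anti-holomorphic spinor
eta  = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary reference spinor

P_on = np.outer(lam, lamt)                # on-shell bispinor: det P_on = 0
assert abs(np.linalg.det(P_on)) < 1e-12

# On shell, the prescription gives lambda times a scalar (here lamt.eta):
assert np.allclose(off_shell_spinor(P_on, eta), (lamt @ eta) * lam)

# Off shell (det P != 0) the formula still defines a spinor, unlike eq. (lax):
P_off = P_on + np.outer(rng.normal(size=2), rng.normal(size=2))
assert off_shell_spinor(P_off, eta).shape == (2,)
```
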
Summing\nover all cyclically ordered partitions of the gluons among the two\ninstantons gives the sum over MHV diagrams contributing to the\nscattering amplitude.\n\nThere are additional poles in \rf{byp} that come from\nthe MHV vertices ${\cal A}_L{\cal A}_R$ \eqn{apo}{{{1 \over\n\prod_{\alpha=1}^4 \vev{\lambda_\alpha, \lambda}}},} where\n$\alpha$ runs over the four gluons adjacent to the twistor line.\nThe poles are located at ${\lambda=\lambda_\alpha},$ which is the\ncondition for the twistor line to meet the gluon vertex operator.\nConsider the two diagrams of fig. \ref{cancellation} in which the\nfunction ${\cal A}_L{\cal A}_R$ has a pole at $\lambda=\lambda_\alpha.$ The\ngraphs differ by whether the gluon $\alpha$ is on the left vertex\njust after the propagator or on the right vertex just before the\npropagator. The reversed order of ${\lambda}$ and\n${\lambda_\alpha}$ in the two diagrams changes the sign of the\nresidue. The rest of the residue \rf{byp} stays the same after\ntaking ${\lambda=\lambda_\alpha}.$ The off-shell momenta of the\ntwo diagrams differ by ${\delta P=\lambda_\alpha\n\tilde\lambda_\alpha},$ so the diagrams have the same value of the\ndenominators ${(P\lambda_\alpha\n\tilde\lambda_\alpha)(P\lambda_\alpha \eta)}.$ Hence, all poles at\n${\lambda=\lambda_\alpha}$ get cancelled among pairs of diagrams.\n\n\begin{figure}\n \centering\n \includegraphics[height=1.6in]{cancellation.eps}\n \caption{The graphs contributing to the pole at $\lambda=\lambda_\alpha.$\n The reversed order of $\alpha$ and the internal line in the two graphs\n changes the sign of the residue of the pole.}\n \label{cancellation}\n\end{figure}\n\nThis derivation clearly generalizes to several disconnected degree\none instantons that contribute to a general tree level amplitude.\nAn amplitude with $d+1$ negative helicity gluons gets\ncontributions from diagrams with $d$ disconnected degree one\ninstantons. 
The evaluation of the twistor contributions leads to\nMHV diagrams with $d$ MHV vertices.\n\nLet us remark that the integral \rf{next} could be taken as the\nstarting point in the study of MHV diagrams. Since \rf{princ} is\nclearly Lorentz invariant,\footnote{The Lorentz invariance\nrequires some elaboration, because the choice of contour\n$\overline\lambda=\tilde\lambda$ breaks the complexified Lorentz group\n$Sl(2,\Bbb C)\times Sl(2,\Bbb C)$ to the diagonal $Sl(2,\Bbb C),$\nthe real Minkowski group. It can be argued from the holomorphic\nproperties of the integral \rf{next} that it is invariant under\nthe full $Sl(2,\Bbb C)\times Sl(2,\Bbb C)$ \cite{Cachazo:2004kj}.}\nthe MHV diagram construction must be Lorentz invariant as well.\nAlthough separate MHV diagrams depend on the auxiliary spinor\n$\eta,$ the sum of all diagrams contributing to a given amplitude\nis independent of $\eta^{\dot a}$.\n\n\vskip .2in \noindent{\it Loops in Twistor Space?}\n\nWe have just seen that the disconnected instanton contribution\nleads to tree level MHV diagrams. However, the MHV diagram\nconstruction seems to work for loop amplitudes as well, as\ndiscussed in the previous subsection. Hence, one would like to\ngeneralize the twistor string derivation to higher genus instanton\nconfigurations, which contribute to loop amplitudes in Yang-Mills\ntheory. For example, the one-loop MHV amplitude should come from a\nconfiguration of two degree one instantons connected by two\ntwistor propagators to make a loop, fig. \ref{loop_twistor}. An\nattempt to evaluate this contribution runs into difficulties: the\ntwo twistor propagators are both inserted at the same point\n$\lambda^a=y^{a\dot a}\eta_{\dot a}$ on the D-instanton\nworldvolume, making the answer ill-defined. 
Some of these\ndifficulties are presumably related to the closed string sector of\nthe twistor string theory, which we will now review.\n\n\n\n\n\section{Closed Strings}\n\label{closed}\n\n\nThe closed strings of the topological B-model on supertwistor\nspace are related by twistor transform to ${\cal N}=4$ conformal\nsupergravity \cite{Berkovits:2004jj}. The conformal group is the\ngroup of linear transformations of the twistor space, so the\ntwistor string is manifestly conformally invariant. Hence it\nnecessarily leads to a conformal theory of gravity.\n\n\n\subsection{Closed String Spectrum}\n\nLet us see how the closed strings are related to the conformal\nsupergravity fields. The most obvious closed string field is the\ndeformation of complex structure of $\Bbb{CP}'^{3|4},$ where the prime means\nthat we throw away the set $\lambda^a=0.$ In this and the\nfollowing section, we parameterize $\Bbb{CP}^{3|4}$ with homogeneous\ncoordinates $Z^I,\ I=1,\dots,8.$ Recall that the complex structure\nis conventionally defined in terms of the tensor field\n$j^A=J^A{}_B dZ^B$ obeying $J^2=-1.$ The indices $A,B$ can be either\nholomorphic or antiholomorphic. In local holomorphic coordinates\n$J^I{}_J=i$ and $J^{\overline I}{}_{\overline J}=-i$. The first order\nperturbations of the complex structure are described by a field\n$J^I{}_{\overline J}$ and its complex conjugate $J^{\overline I}{}_J$. From\n$J^I{}_{\overline J}$ we form the vector-valued $(0,1)$-form\n$j^I=J^I{}_{\overline J} dZ^{\overline J}$ with equations of motion\n\eqn{ejej}{\overline\partial j^I=0} that express the integrability\ncondition on the deformed complex structure. $j^I$ is volume\npreserving, $\partial_I J^I_{\overline J}=0$, since the holomorphic\nvolume $\Omega$ is part of the definition of the\nB-model\footnote{This extra condition is not understood from the\nB-model perspective \cite{Berkovits:2004jj}. 
One can guess it from the\nanalogous condition in Berkovits's open twistor string.},\n$j^I$ is subject to the gauge symmetry $j^I\rightarrow\nj^I+\epsilon \overline\partial \kappa^I$, where $\kappa^I$ is a volume\npreserving vector field.\n\n\nAccording to the twistor transform \cite{Penrose:1976jq}, volume\npreserving deformations of the complex structure of twistor space are\nrelated to anti-selfdual perturbations of the spacetime.\nAnti-selfdual perturbations correspond to positive helicity\nconformal supergravitons. The ${\cal N}=4$ positive helicity\nsupermultiplet contains fields going from the helicity $+2$\ngraviton to a complex scalar $\overline C.$\n\nThe negative helicity graviton is part of a separate ${\cal N}=4$\nsuperfield. It comes from an RR two-form field\n\eqn{bfield}{b=B_{\overline I J}\ d\overline Z^{\overline I}\wedge dZ^J} that\ncouples to the D1-branes of the B-model via \eqn{bcop}{\int_C b,}\nwhere $C$ is the worldvolume of the D1-brane. The equations of\nmotion of $b$ are \eqn{eomg}{\overline\partial b=0} and $b$ is subject\nto the gauge invariance $b\rightarrow b+\overline\partial \lambda.$ In\norder to relate $b$ to the fields of Berkovits's open twistor\nstring that we discuss in the next section, one needs to assume that\n$b$ is also invariant under the gauge transformation $B_{\overline\nIJ}\rightarrow B_{\overline IJ}+\partial_J\ \chi_{\overline I}.$\n\n\subsection{Conformal Supergravity}\n\nConformal supergravity in four dimensions has the action\n\eqn{weyla}{S\sim\int d^4x \sqrt{-g}W_{abcd}W^{abcd},} where $W$\nis the Weyl tensor. This theory is generally considered\nunphysical. 
Expanding the action around flat space\n$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ leads to a fourth order\nkinetic operator $S\sim\int d^4x\, h\n\partial^4h$ for the fluctuations of the metric, and thus to a\nlack of unitarity.\n\nWe can see a sign of the conformal supergravity already in the tree level\nMHV amplitude calculation of section \ref{basic}. There we found\nthat the single trace terms agree with the tree level MHV\namplitude in gauge theory. We remarked that the current algebra\ncorrelators give additional multi-trace contributions. These come\nfrom an exchange of an internal conformal supergravity state,\nwhich is a singlet under the gauge group\footnote{These\nopen-closed string interactions can be used to study deformations\nof ${\cal N}=4$ gauge theory by turning on a closed string background\nfield \cite{Kulaxizi:2004pa,Chiou:2005jn}.}. For example, the four\ngluon MHV amplitude has a contribution $\mop{Tr}\ T_1T_2 \mop{Tr}\ T_3T_4$\ncoming from an exchange of a supergravity state in the\n$12\rightarrow 34$ channel, fig. \ref{graviton}. In twistor string\ntheory, this comes from the double trace contribution of the\ncurrent algebra on the worldvolume of the D-instanton\n\eqn{twiam}{\int_{{\cal M}} d{\cal M} \left\langle V_1 V_2\right\rangle\n\left\langle V_3 V_4\right\rangle.}\n\n\begin{figure}\n \centering\n \includegraphics[height=1.1in]{graviton.eps}\n \caption{A double trace $ \mop{Tr} T_1T_2\mop{Tr} T_3T_4$ contribution to\n the tree level four gluon scattering amplitude coming from\n the exchange of a conformal supergravity particle,\n which is represented by a dashed line.}\n \label{graviton}\n\end{figure}\n\nAt tree level, it is possible to recover the pure gauge theory\nscattering amplitudes by keeping the single-trace terms. 
However,\nat the loop level, the diagrams that include conformal\nsupergravity particles can generate single-trace interactions.\nHence the presence of conformal supergravity coming from the\nclosed strings puts an obstruction to the computation of Yang-Mills\nloop amplitudes in the present formulation of twistor string\ntheory.\n\nIn twistor string theory, the conformal supergravitons have the\nsame coupling as gauge bosons, so it is not possible to remove the\nconformal supergravity states by going to weak coupling. Since\nYang-Mills theory is consistent without conformal supergravity, it\nis likely that there is a version of the twistor string theory\nthat does not contain the conformal supergravity states.\n\n\section{Berkovits's Open Twistor String}\n\label{berko}\n\nHere we will describe the open string version of the twistor\nstring \cite{Berkovits:2004hg}. In this string theory, both\nYang-Mills and conformal supergravity states come from open string\nvertex operators.\n\n\subsection{The Spectrum}\n\nThe action of the open string theory is \eqn{opac}{S=\int\nd^2z\left( Y_I\overline\nabla_{\overline z} Z^I+\overline Y_I \nabla_z\overline\nZ^I+S_C \right).} For Euclidean signature of the worldsheet,\n$Z^I=(\lambda^a,\tilde\lambda^{\dot a},\phi^A),\ a,{\dot a}=1,2,\ A=1,\dots,4,\\nI=1,\dots,8,$ are homogeneous coordinates on $\Bbb{CP}^{3|4}$ and $\overline\nZ^I$ are their complex conjugates. $Y_I$ and $\overline Y_I$ are\nconjugates to $Z^I$ and $\overline Z^I.$ Notice that $Z^I,I=5,\dots, 8$\nwere denoted as $\phi^A,A=1,\dots, 4$ in previous sections. Before\ntwisting, $Z$ and $\overline Z$ have conformal weight zero and $Y$ and\n$\overline Y$ have conformal weight one.
The covariant derivatives are\n\\eqn{covd}{\\nabla_z=\\partial_z-A_z\\qquad \\nabla_{\\overline\nz}=\\overline\\partial_{\\overline z}-A_{\\overline z},} where $A$ is a worldsheet\ngauge field that gauges the $Gl(1,\\Bbb C)$ symmetry\n$Z^I\\rightarrow tZ^I,\\ Y_I\\rightarrow t^{-1} Y_I.$ $S_C$ is the\naction of a current algebra with central charge $+28$ which\ncancels $-26$ of the conformal ghosts and $-2$ of the $Gl(1,\\Bbb\nC)$ ghosts. The open string boundary conditions are\n\\eqn{bondc}{Z^I=\\overline Z^I,\\quad Y_I=\\overline Y_I\\quad j_r=\\overline j_r,}\nwhere $j_r,\\ r=1,\\dots \\dim G$ are the currents of the current\nalgebra. On the boundary, $Z$ and $Y$ are real and the $Gl(1,\\Bbb\nC)$ gauge group is broken to the group $Gl(1,\\Bbb R)$ of real\nscalings of $Z,Y.$\n\nThe physical open string vertex operators are described by\ndimension one fields that are neutral under $Gl(1)$ and primary\nwith respect to Virasoro and $Gl(1)$ generators\n\\eqn{genp}{T=Y_I\\partial Z^I+T_C,\\qquad J=Y_IZ^I.} The fields\ncorresponding to Yang-Mills states are\n\\eqn{vym}{V_\\phi=j_r\\phi^r(Z)} where $\\phi^r(Z)$ is a dimension\nzero $Gl(1,\\Bbb R)$ neutral function of $Z.$ That is, $\\phi$ is\nany function on $\\Bbb{RP}^{3|4}.$ $V_\\phi$ has clearly dimension one.\n$\\phi$ is related by twistor transform to gauge fields on\nspacetime with signature $++--.$\n\nThe vertex operators describing the conformal supergravity are\n\\eqn{vertg}{V_f=Y_If^I(Z),\\qquad V_g=\\partial Z^I g_I(Z).} These\nhave dimension one, since $Y_I$ and $\\partial Z^I$ have dimension\none. 
The $Gl(1)$ invariance requires that $f^I$ has $Gl(1)$ charge\n$+1$ and $g_I$ has $Gl(1)$ charge $-1.$ The vertex operators are\nprimary if \eqn{pric}{\partial_I f^I=0,\qquad Z^Ig_I=0.} We\nidentify two vertex operators that differ by null states\n\eqn{din}{\delta V_f=J_{-1}\Lambda=Y_IZ^I\Lambda,\qquad \delta\nV_\phi=T_{-1}\chi=\partial Z^I\partial_I\chi.} Hence, $f^I$ and\n$g_I$ are subject to the gauge invariance \eqn{fgga}{\delta\nf^I=Z^I\Lambda,\qquad \delta g_I=\partial_I \chi.}\n\nSince $f^I$ has $Gl(1)$ charge $+1,$ we can construct the $Gl(1)$\nneutral vector field \eqn{ups}{\Upsilon=f^I{\partial\over\n\partial Z^I}.} $\Upsilon$ descends to\na vector field on the real twistor space $\Bbb{RP}^{3|4}$ thanks\nto the gauge invariance $\delta f^I=Z^I\Lambda$ that kills the\nvertical part of the vector field along the $Gl(1)$ orbits\n$Z^I\rightarrow tZ^I.$ The primary condition $\partial_I f^I=0$\nimplies that $\Upsilon$ preserves the volume measure $\Omega\sim Z\nd^7Z$ on $\Bbb{RP}^{3|4}.$ Hence $\Upsilon$ is a volume preserving\nvector field on $\Bbb{RP}^{3|4}$. Similarly, we can summarize the\nconditions on $g$ by considering the $1$ form\n\eqn{oneg}{\Theta=g_I dZ^I.} The constraint $g_I Z^I=0$ means that\n$\Theta$ annihilates the vertical vector field\n$Z^I\partial\/\partial Z^I,$ so it descends to a one form on\n$\Bbb{RP}^{3|4}.$ The gauge invariance $\delta g_I=\partial_I \chi$\nmeans that $\Theta$ is actually an abelian gauge field on\n$\Bbb{RP}^{3|4}.$\n\n\vskip .2in\noindent{\it Comparison with B-model}\n\nRecall that the B-model is defined on $\Bbb{CP}'^{3|4}$. The open\nstrings correspond to gauge fields in Minkowski space and the\nclosed strings correspond to conformal supergravity.
On the other\nhand, in the open twistor string both gauge theory and conformal\nsupergravity states come from the open string vertex operators.\nThe boundary of the worldsheet (and hence the vertex operators)\nlives in $\Bbb{RP}'^{3|4}$. Hence the twistor fields are related by\ntwistor transform to fields on spacetime with signature $++--.$\n\nThe gauge field is described in the B-model by a $(0,1)$ form ${\cal A}$\nthat is an element of $H^1(\Bbb{CP}'^{3|4},{\cal O})$. This has equations of\nmotion $\overline\partial {\cal A}=0$ and gauge invariance\n$\delta{\cal A}=\overline\partial\epsilon,$ where $\epsilon$ is a function\non $\Bbb{CP}'^{3|4}$. In the open string, the gauge field comes from a\nfunction $\phi$ on $\Bbb{RP}'^{3|4}.$ If $\phi$ is real-analytic, we\ncan extend it to a complex neighborhood of $\Bbb{RP}'^{3|4}$ in\n$\Bbb{CP}'^{3|4}$. Then the relation between the two fields is\n\cite{Atiyah:1979iu,Berkovits:2004jj,Witten:2003nn}\n\eqn{reat}{{\cal A}=\phi\ \overline\partial(\theta\left({\rm Im}\ z\right))=\n{i\over 2} \phi\ \delta\left({\rm Im}\ z\right) d\overline z,} where\n$z=\lambda^2\/\lambda^1$ and $\theta(x)=1$ for $x\geq0$ and $0$ for\n$x<0.$\n\nThe B-model closed string field giving the deformation of complex\nstructure $j^I=J^I{}_{\overline J}\ d\overline Z^{\overline J}$ is related to the open string\nvolume preserving vector field $\Upsilon=f^I\ {\partial\/\partial\nZ^I}$ as $j^I= f^I\ \overline\partial(\theta\left({\rm Im}\ z\n\right)).$ Similarly, the RR two form $b=B_{I\overline J}\ dZ^I\wedge\nd\overline Z^{\overline J}$ gets related to the abelian gauge field $\Theta=g_I\\ndZ^I$ of the open string by $b=\Theta\\n\overline\partial(\theta\left({\rm Im}\ z \right)).$\n\nHence, we get the open twistor string wavefunction by considering\n$\lambda^a,\mu^{\dot a}$ real and by replacing holomorphic delta\nfunctions $\overline\delta(\vev{\lambda,\pi})$ with real delta
functions\n\eqn{bwa}{\phi(\lambda,\mu,\psi)=\delta(\vev{\lambda,\pi})\n\exp\left(i[\tilde\pi,\mu]\right) g(\psi).}\n\n\n\subsection{ Tree Level Yang-Mills Amplitudes}\n\nA tree level $n$ gluon scattering amplitude gets a contribution from\na worldsheet $D$ of disk topology. The gluon vertex operators are\ninserted along the boundary of the worldsheet. Taking the disk to\nbe the upper half-plane ${\rm Im}\ z\geq 0$, we insert the vertex\noperators at $z_i,\ {\rm Im}\ z_i=0.$ Hence, the scattering\namplitude is \eqn{scatr}{{\cal A}=\sum_d\int d{\cal M} \vev{\int dz_1\nV_\phi(z_1) \dots \int dz_n V_\phi(z_n)},} where the sum is over\n$U(1)$ worldsheet instantons and $d{\cal M}$ is the measure.\n\nIn two dimensions, the instanton number of a $Gl(1)$ gauge bundle\nis the degree of the line bundle. Recall that the line bundle ${\cal O}(d)$,\nwhose sections are degree $d$ homogeneous functions, has degree $d$. Hence, on a\nworldsheet with instanton number $d$, the $Z^I$'s are sections of\n${\cal O}(d).$ But this is just the parametric description of an\nalgebraic curve of degree $d$ discussed in section \ref{highd}.\nWhile in the B-model we summed over D-instantons, in the open twistor\nstring we are summing over worldsheet instantons. Both\ndescriptions lead to the same curves in twistor space. The only\ndifference is that for the B-model we consider holomorphic curves,\nwhile here we are interested in real algebraic curves.\n\nThe discussion of the real case is entirely analogous to the\nholomorphic case. Each $Z^I$ has $d+1$ real zero modes that are\nlocal coordinates on the moduli space\n${\cal M}=\Bbb{RP}^{4d+3|4d+4}\/Sl(2,\Bbb R).$ The measure is just the\nholomorphic measure \rf{holm} restricted to real curves. The\nmoduli space of degree $d$ instantons has $4d+4$ fermionic\ndimensions.
Since negative helicity gluon gives $4$ zero modes and\npositive helicity gluon gives no zero modes, a degree $d$\ninstanton contributes to amplitudes with $d+1$ negative helicity\ngluons. Parameterizing the disk using $z,\\ {\\rm Im} z\\geq 0,$ the\namplitude is the real version of \\rf{wop}\n\\eqalign{rwop}{{\\cal A}&=&\\int d{\\cal M}_d \\prod_i\\int_D {dz_i \\over \\prod_k\n(z_k-z_{k+1})} \\delta(\\vev{\\lambda(z_i),\\pi_i}) \\exp\\left(\ni[\\mu(z_i),\\tilde\\pi_i]\\right) g_i(\\psi_i).} In\n\\cite{Berkovits:2004tx}, a cubic open string field theory was\nconstructed for the Berkovits's twistor string theory. Since the\ntwistor string field theory gives the correct cubic\nsuper-Yang-Mills vertices, it provides further support that\n\\rf{rwop} correctly computes tree-level Yang-Mills amplitudes.\n\n\n\n\\section{Recent Results in Perturbative Yang-Mills}\n\\label{perturb}\n\nIn this part of the lecture we shift gears and concentrate on new\ntechniques for the calculation of scattering amplitudes in gauge\ntheory. We will discuss two main results: BCFW recursion relations\n\\cite{Britto:2004ap, Britto:2005fq} for tree amplitudes of gluons\nand quadruple cuts of ${\\cal N}=4$ one-loop amplitudes of gluons\n\\cite{Britto:2004nc}.\n\n\n\\subsection{BCFW Recursion Relations}\n\nWe have seen how tree-level amplitudes of gluons can be computed\nin a simple and systematic manner by using MHV diagrams. However,\nfrom the study of infrared divergencies of one-loop ${\\cal N}=4$\namplitudes of gluons, surprisingly simple and compact forms for\nmany tree amplitudes were found in \\cite{Bern:2004ky,\nRoiban:2004ix}. These miraculously simple formulas were given an\nexplanation when a set of recursion relations for amplitudes of\ngluons was conjectured in \\cite{Britto:2004ap}. The\nBritto-Cachazo-Feng-Witten (BCFW) recursion relations were later\nproven and extended in \\cite{Britto:2005fq}. Here we review the\nBCFW proof of the general set of recursion relations. 
The reason\nwe choose to spend more time on the proof than on the recursion\nrelation itself is that the proof is constructive and the same\nmethod can be, and has been, applied to many other problems, from field\ntheory to perhaps string theory.\n\nConsider a tree-level amplitude ${\cal A} (1,2,\ldots ,n-1,n)$ of $n$\ncyclically ordered gluons, with any specified helicities. Denote\nthe momentum of the $i^{th}$ gluon by $p_i$ and the corresponding\nspinors by $\lambda_i$ and $\tilde\lambda_i$. Thus, $p_i^{a\dot a}\n=\lambda_i^a\tilde\lambda_i^{\dot a}$, as usual in these lectures.\n\nIn what follows, we single out two of the gluons for special\ntreatment. Using the cyclic symmetry, without any loss of\ngenerality, we can take these to be the gluons $k$ and $n$. We\nintroduce a complex variable $z$, and let\n\eqalign{deftwo}{ p_{k}(z) & = \lambda_{k}(\tilde\lambda_{k} -\nz\tilde\lambda_n), \cr p_n(z) & = (\lambda_n +\nz\lambda_{k})\tilde\lambda_n. } We leave the momenta of the other\ngluons unchanged, so $p_s(z)=p_s $ for $s\not= k,n$. In effect, we\nhave made the transformation\n\eqn{thetrans}{\tilde\lambda_k\to\tilde\lambda_k-z\tilde\lambda_n,\quad\n~~\lambda_n\to\lambda_n+z\lambda_k,}\nwith $\lambda_k$ and $\tilde\lambda_n$ fixed. Note that $p_k(z)$\nand $p_n(z)$ are on-shell for all $z$, and $p_k(z)+p_n(z)$ is\nindependent of $z$. As a result, we can define the following\nfunction of a complex variable $z$,\n\eqn{defz}{{\cal A}(z) = {\cal A}(p_1,\ldots ,p_{k-1}, p_{k}(z),\np_{k+1},\ldots ,p_{n-1}, p_n(z)). }\nThe right hand side is a physical, on-shell amplitude for all $z$.\nMomentum is conserved and all momenta are on-shell.\n\n\nFor any $z\not=0$, the deformation \rf{deftwo} does not make sense\nfor real momenta in Minkowski space, as it does not respect the\nMinkowski space reality condition $\tilde\lambda = \pm\n\overline\lambda$.
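The two claims just made, that $p_k(z)$ and $p_n(z)$ stay on-shell for every $z$ while their sum is $z$-independent, follow from the rank-one form $p^{a\dot a}=\lambda^a\tilde\lambda^{\dot a}$. A minimal numerical sketch of this check, representing each momentum as a $2\times 2$ complex matrix whose determinant is proportional to $p^2$ (all variable names are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
cplx = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)

# spinors of the two shifted gluons k and n
lam_k, lamt_k = cplx(), cplx()
lam_n, lamt_n = cplx(), cplx()

def shifted(z):
    """The BCFW shift: lamt_k -> lamt_k - z*lamt_n, lam_n -> lam_n + z*lam_k."""
    p_k = np.outer(lam_k, lamt_k - z * lamt_n)  # rank one => det = 0 => on-shell
    p_n = np.outer(lam_n + z * lam_k, lamt_n)
    return p_k, p_n

p_sum0 = sum(shifted(0.0))
for z in (0.0, 1.7 - 0.3j, 10j):
    p_k, p_n = shifted(z)
    on_shell = max(abs(np.linalg.det(p_k)), abs(np.linalg.det(p_n)))
    z_indep = np.abs(p_k + p_n - p_sum0).max()
    print(on_shell < 1e-9, z_indep < 1e-9)
```

The shift of $\tilde\lambda_k$ and of $\lambda_n$ is by the same amount with opposite signs in the outer products, which is why the total momentum cancels exactly.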
However, \\rf{deftwo} makes perfect sense for complex\nmomenta or (if $z$ is real) for real momenta in signature\n$++-\\,-$. In any case, we think of ${\\cal A}(z)$ as an auxiliary\nfunction. In the end, all answers are given in terms of spinor\ninner products and are valid for any signature.\n\nHere we assume that the helicities $(h_k,h_n)$ are $(-,+)$. The\nproof can be extended to helicities $(+,+)$, or $(-,-)$ but we\nrefer the reader to \\cite{Britto:2005fq}.\n\nWe claim three facts about ${\\cal A}(z)$: $(1)$ It is a rational\nfunction. $(2)$ It only has simple poles. $(3)$ It vanishes for\n$z\\to \\infty$.\n\nThese three properties of ${\\cal A}(z)$ imply that it can be written as\nfollows\n\\eqn{ratio}{{\\cal A}(z) = \\sum_{p\\in \\{ {\\rm poles}\\} }\n\\frac{c_p}{z-z_p},}\nwhere $c_p$ is the residue at a given pole and the sum is over the\nwhole set of poles. It turns out that, as we will see below, $c_p$\nis proportional to the product of two physical amplitudes with\nfewer gluons than ${\\cal A}(z)$. Therefore, \\rf{ratio} provides a\nrecursion relation for amplitudes of gluons.\n\nLet us prove the three statements. $(1)$ This is easy. Note that\nthe original tree-level amplitude is a rational function of spinor\nproducts. Since the $z$ dependence enters only via the shift\n$\\tilde\\lambda_k \\to \\tilde\\lambda_k - z\\tilde\\lambda_n$ and\n$\\lambda_n \\to \\lambda_n + z \\lambda_k$, ${\\cal A}(z)$ is clearly\nrational in $z$.\n\n$(2)$ By definition, ${\\cal A}(z)$ is constructed out of Feynman\ndiagrams. The only singularities ${\\cal A}(z)$ can have come from\npropagators. Recall that ${\\cal A}(z)$ is color-ordered. This means\nthat all propagators are of the form $1\/P^2_{ij}$ where\n$P_{ij}=p_i+\\ldots + p_j$. Clearly, $P_{ij}$ is $z$ independent if\nboth $k,n\\in \\{ i,\\ldots, j\\}$ or if $k,n\\not\\in \\{ i,\\ldots,\nj\\}$. 
By momentum conservation it is enough to consider\npropagators for which $n\\in \\{ i,\\ldots, j\\}$ and $k \\not\\in \\{\ni,\\ldots, j\\}$. Since the shift of $p_n$ is by a null vector, one\nhas\n\\eqn{porpi}{ P_{ij}^2(z) = P^2_{ij}(0) - z\n\\langle\\lambda_k|P_{ij}|\\tilde \\lambda_n],}\nwhere for any spinors $\\lambda,\\tilde\\lambda$ and vector $p$, we\ndefine $\\langle \\lambda|p|\\tilde\\lambda]=- p_{a\\dot\na}\\lambda^a\\tilde\\lambda^{\\dot a}$. Hence, the propagator\n$1\/P_{ij}(z)^2$ has only a single, simple pole, which is located\nat $z_{ij}= P_{ij}^2\/\\langle\\lambda_k|P_{ij}|\\tilde\\lambda_n]$.\n\n$(3)$ Recall that any Feynman diagram contributing to the\namplitude ${\\cal A}(z)$ is linear in the polarization vectors\n$\\epsilon_{a\\dot a}$ of the external gluons. Polarization vectors\nof gluons of negative and positive helicity and momentum $p_{a\\dot\na}= \\lambda_a \\tilde\\lambda_{\\dot a}$ can be written respectively\nas follows (see section \\ref{spinors} ),\n\\eqn{poli}{\\epsilon_{a\\dot a}^{-} = {\\lambda_a\\tilde\\mu_{\\dot\na}\\over [\\tilde\\lambda , \\tilde\\mu ] }, \\qquad \\epsilon_{a\\dot\na}^{+} = {\\mu_a \\tilde\\lambda_{\\dot a}\\over \\vev{\\mu, \\lambda}},}\nwhere $\\mu$ and $\\tilde\\mu$ are fixed reference spinors.\n\nOnly the polarization vectors of gluons $k$ and $n$ can depend on\n$z$. Consider the $k^{th}$ gluon first. Recall that $\\lambda_k$\ndoes not depend on $z$ and $\\tilde\\lambda_k(z)$ is linear in $z$.\nSince $h_k=-1$, it follows from \\rf{poli} that $\\epsilon^{-}_k$\ngoes as $1\/z$ as $z\\to \\infty$. A similar argument leads to\n$\\epsilon^{+}_n \\sim 1\/z$ as $z\\to \\infty$.\n\nThe remaining pieces in a Feynman diagram are the propagators and\nvertices. 
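Before examining how propagators and vertices scale, note that the linear $z$-dependence \rf{porpi} of the cut propagators is easy to check numerically: in the $2\times 2$ spinor representation, $P^2$ is (up to convention) $\det P$, and the rank-one shift of $p_n$ makes $\det P(z)$ exactly linear in $z$, so $1/P^2(z)$ has a single simple pole. A small sketch under these conventions (names and setup are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
cplx = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)

lam_k, lam_n, lamt_n = cplx(), cplx(), cplx()
# spectator momenta in the range i..j (rank-one 2x2 matrices, hence massless)
spectators = sum(np.outer(cplx(), cplx()) for _ in range(3))

def P2(z):
    # P_ij(z) contains the shifted p_n(z) = (lam_n + z*lam_k) lamt_n but not p_k(z);
    # in the 2x2 convention, det plays the role of the invariant P_ij^2
    return np.linalg.det(spectators + np.outer(lam_n + z * lam_k, lamt_n))

second_diff = P2(0) - 2 * P2(1) + P2(2)  # vanishes: P^2(z) is linear in z
z_ij = -P2(0) / (P2(1) - P2(0))          # location of the single simple pole
print(abs(second_diff), abs(P2(z_ij)))
```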
It is clear that the vanishing of ${\\cal A}(z)$ as $z\\to\n\\infty$ can only be spoiled by the momenta from the cubic\nvertices, since the quartic vertices have no momentum factors and\nthe propagators are either constant or vanish for $z\\to\\infty$.\n\nLet us now construct the most dangerous class of graphs and show\nthat they vanish precisely as $1\/z$. The $z$ dependence in a tree\ndiagram ``flows\" from the $k^{th}$ gluon to the $n^{th}$ gluon\nalong a unique path of propagators. Each such propagator\ncontributes a factor of $1\/z$. If there are $r$ such propagators,\nthe number of cubic vertices through which the $z$-dependent\nmomentum flows is at most $r+1$. (If all vertices are cubic, then\n starting from the $k^{th}$ gluon, we\nfind a cubic vertex and then a propagator, and so on. The final\ncubic vertex is then joined to the $n^{th}$ gluon.) So the\nvertices and propagators give a factor that grows for large $z$ at\nmost linearly in $z$.\n\nAs the product of polarization vectors vanishes as $1\/z^2$, it\nfollows that for this helicity configuration, ${\\cal A}(z)$ vanishes\nas $1\/z$ for $z\\to \\infty$.\n\nNow we can rewrite \\rf{ratio} more precisely as follows\n\\eqn{collo}{{\\cal A}(z)=\\sum_{i,j}{c_{ij}\\over z-z_{ij}},}\nwhere $c_{ij}$ is the residue of ${\\cal A}(z)$ at the pole $z=z_{ij}$.\nFrom the above discussion, the sum over $i$ and $j$ runs over all\npairs such that $n$ is in the range from $i $ to $j$ while $k$ is\nnot. At this point it is clear the smallest number of poles is\nachieved when $k$ and $n$ are adjacent, i.e., $k=n-1$. This is the\nchoice we make in the examples below.\n\nFinally, we have to compute the residues $c_{ij}$. To get a pole\nat $P_{ij}^2(z)=0$, a tree diagram must contain a propagator that\ndivides it into a ``left'' part containing all external gluons not\nin the range from $i$ to $j$, and a ``right'' part containing all\nexternal gluons that are in that range. 
The internal line\nconnecting the two parts of the diagram has momentum $P_{ij}(z)$,\nand we need to sum over the helicity $h=\\pm$ at, say, the left of\nthis line. (The helicity at the other end is opposite.) The\ncontribution of such diagrams near $z=z_{ij}$ is $\\sum_h\n{\\cal A}_L^h(z){\\cal A}_R^{-h}(z)\/P_{ij}(z)^2$, where ${\\cal A}_L^h(z)$ and\n${\\cal A}_R^{-h}(z)$ are the amplitudes on the left and the right with\nindicated helicities. Since the denominator $P_{ij}(z)^2$ is\nlinear in $z$, to obtain the function $c_{ij}\/(z-z_{ij})$ that\nappears in \\rf{collo}, we must simply set $z$ equal to $z_{ij}$ in\nthe numerator. When we do this, the internal line becomes\non-shell, and the numerator becomes a product\n${\\cal A}_L^h(z_{ij}){\\cal A}_R^{-h}(z_{ij})$ of physical, on-shell\nscattering amplitudes. More precisely we have,\n\\eqn{preci}{{\\cal A}_L^h(z_{ij}) = {\\cal A}(p_{j+1},\\ldots\n,p_k(z_{ij}),\\ldots , p_{i-1}, P^{h}_{ij}(z_{ij})), \\quad\n{\\cal A}_R^{-h}(z_{ij}) = {\\cal A}(-P^{-h}_{ij}(z_{ij}),p_i,\\ldots,\np_n(z_{ij}),\\ldots , p_j). }\n\nThe formula \\rf{collo} for the function ${\\cal A}(z)$ therefore becomes\n\\eqn{gollo}{{\\cal A}(z)=\\sum_{i,j}\\sum_h{{\\cal A}_L^h(z_{ij}){\\cal A}_R^{-h}(z_{ij})\\over\nP_{ij}(z)^2}.} To get the physical scattering amplitude\n${\\cal A}(1,2,\\ldots, n-1,n)$, we set $z$ to zero in the denominator\nwithout touching the numerator. 
Hence,\n\eqn{wollo}{{\cal A}(1,2,\ldots,\nn-1,n)=\sum_{i,j}\sum_h{{\cal A}_L^h(z_{ij}){\cal A}_R^{-h}(z_{ij})\over\nP_{ij}^2}.}\nThis is the BCFW recursion relation\n\cite{Britto:2004ap,Britto:2005fq}.\n\n\subsubsection{Examples}\n\nLet us illustrate some of the compact formulas one can obtain\nusing the recursion relations \rf{wollo}.\n\nConsider two of the six-gluon next-to-MHV amplitudes, for example,\namplitudes with three minus and three plus helicity gluons: $\n{\cal A}(1^-,2^-,3^-,4^+,5^+,6^+)$ and ${\cal A}(1^+,2^-,3^+,4^-,5^+,6^-)$.\nAs mentioned above, the recursion relations \rf{wollo} have the\nsmallest number of terms when $k$ and $n$ are chosen to be\nadjacent gluons. In the first example we choose to shift $p_3$ and\n$p_4$, while in the second we shift $p_2$ and $p_3$. The results\nare the following:\n\eqn{lepta}{ {\cal A}(1^-,2^-,3^-,4^+,5^+,6^+) = {1\over \gb{ 5 | 3+4 |\n2}}\left( {\gb{ 1 | 2+3 | 4}^3 \over [2~3][3~4]\vev{5~6}\vev{6~1}\nt_2^{[3]}} + {\gb{ 3 | 4+5 | 6}^3 \over\n[6~1][1~2]\vev{3~4}\vev{4~5} t_3^{[3]} }\right).}\n\eqalign{alos}{ {\cal A}(1^+,2^-,3^+,4^-,5^+,6^-) &=& \frac{ [1~3]^4\n\vev{4~6}^4 }{ [1~2] [2~3] \vev{4~5} \vev{5~6} t_{1}^{[3]}\n\gb{6|1+2|3} \gb{4|2+3|1}} \\ &+& \frac{ \vev{2~6}^4 [3~5]^4 }{\n\vev{6~1}\vev{1~2} [3~4] [4~5] t_{3}^{[3]} \gb{6|4+5|3}\n\gb{2|3+4|5}} \\ &+& \frac{ [1~5]^4 \vev{2~4}^4 }{\n\vev{2~3}\vev{3~4} [5~6] [6~1] t_{2}^{[3]} \gb{4|2+3|1}\n\gb{2|3+4|5}},}\nwhere $t_i^{[r]} = (p_i + \ldots + p_{i+r-1})^2$.\n\nIt is interesting to observe that while \rf{lepta} and \rf{alos}\nare simpler than the amplitudes computed by Berends, Giele,\nMangano, Parke and Xu \cite{Berends:1987me, Mangano:1987xk,\nBerends:1989hf, Mangano:1990by}, the former possess spurious\npoles, like $\gb{ 5 | 3+4 | 2}$, while the latter only have\nphysical poles.
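The compact formulas above are built entirely from the brackets $\vev{i~j}$, $[i~j]$, the invariants $t_i^{[r]}$, and chains such as $\gb{5|3+4|2}$. A short numerical sketch of two identities used constantly when manipulating them, the Schouten identity and $\det(p_i+p_j)=\vev{i~j}[i~j]$ in the $2\times 2$ determinant convention (signs and normalizations are conventions we fix here; names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
lam  = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))   # lambda_i
lamt = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))   # lambdatilde_i

ang = lambda i, j: lam[i, 0] * lam[j, 1] - lam[i, 1] * lam[j, 0]      # <i j>
sq  = lambda i, j: lamt[i, 0] * lamt[j, 1] - lamt[i, 1] * lamt[j, 0]  # [i j]

# Schouten identity: <ij><kl> + <ik><lj> + <il><jk> = 0
schouten = ang(0, 1) * ang(2, 3) + ang(0, 2) * ang(3, 1) + ang(0, 3) * ang(1, 2)

# two-particle invariant: det(p_i + p_j) = <ij>[ij],
# the building block of t_i^{[r]} and of chains like <a|P|b]
P = np.outer(lam[0], lamt[0]) + np.outer(lam[1], lamt[1])
mismatch = np.linalg.det(P) - ang(0, 1) * sq(0, 1)
print(abs(schouten), abs(mismatch))
```

Both quantities vanish identically for arbitrary spinors, which is what makes such rearrangements of the six-gluon formulas possible.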
One can use the recursion relations to find\nfurther simple formulas for tree-level gluon amplitudes\n\\cite{Britto:2005dg}.\n\nAlso note that the two-term form \\rf{lepta} was obtained in\n\\cite{Roiban:2004ix} as a collinear limit of a very compact form\nof the seven-gluon amplitude, which was originally obtained from\nthe infrared behavior of a one-loop ${\\cal N}=4$ amplitude\n\\cite{Bern:2004ky}.\n\nLet us also mention that many generalizations of the BCFW\nrecursion relations have been made, in particular, to include\namplitudes with fermions and scalars \\cite{Luo:2005rx,Luo:2005my}\nand to gravity amplitudes \\cite{Bedford:2005yy, Cachazo:2005ca}.\nThe recursion relations have also been generalized to amplitudes\nwith massive particles \\cite{Badger:2005zh}, and to some one-loop\namplitudes in QCD \\cite{Bern:2005hs}.\n\n\n\\subsection{One-Loop ${\\cal N}=4$ Amplitudes of Gluons and Quadruple Cuts}\n\nSupersymmetric amplitudes of gluons are very special. The main\nreason is that these amplitudes are four-dimensional\ncut-constructible. This means that a complete knowledge of their\nbranch cuts and discontinuities, when the dimensional\nregularization parameter is taken to zero, is enough to determine\nthe full amplitude. This is not true for non-supersymmetric\namplitudes. As an example consider the one-loop\n${\\cal A}(1^+,2^+,\\ldots , n^+)$ and ${\\cal A}(1^-,2^+,\\ldots , n^+)$\namplitudes. One can prove that both series of amplitudes are\nsingle valued functions of the kinematical invariants. This is\nenough to conclude that they vanish in any supersymmetric\ntheory\\footnote{This can also be derived using supersymmetric Ward\nidentities. For a nice review see \\cite{Dixon:1996wi}.}. In\ncontrast, in non-supersymmetric gauge theories, they are\ninteresting rational functions. 
These two series of amplitudes\nwere shown to be reproduced by a generalization of the BCFW\nrecursion relations in \\cite{Bern:2005hs}.\n\nHere we concentrate on ${\\cal N}=4$ one-loop amplitudes. The\nreason these amplitudes are special within the class of\nsupersymmetric amplitudes is that they can be expressed in terms\nof known scalar box integrals with coefficients that are rational\nfunctions of the spinor products.\n\nIn this part of the lectures, we explain a new technique that\nallows the computation of any given scalar box coefficient as the\nproduct of four tree-level amplitudes. Now recall that either by\nusing MHV diagrams or the BCFW recursion relations\\footnote{We\nalso need the corresponding generalizations to include fermions\nand scalars \\cite{Georgiou:2004by, Wu:2004fb, Wu:2004jx,\nGeorgiou:2004wu}.}, any tree-level amplitude can be easily\ncomputed. This implies that the new technique solves the problem\nof computing one-loop amplitudes of gluons in ${\\cal N}=4$ super\nYang-Mills.\n\n\n\\subsubsection{Review of The Unitarity-Based Method}\n\nOne of the most successful methods in the calculation of one-loop\namplitudes of gluons is the unitarity-based method\n\\cite{Bern:1994zx, Bern:1994cg}. This method was used to calculate\nall MHV amplitudes \\cite{Bern:1994zx} and all six-gluon\nnext-to-MHV amplitudes \\cite{Bern:1994cg} more than a decade ago.\nWe review the basic idea of the method focusing on the points that\nprepare the ground for the quadruple cut method.\n\nThe unitarity-based method can be described as a three-step\nprocedure: (1) Consider a given amplitude and use\nPassarino-Veltman or other reduction techniques \\cite{passarino}\nto find a set of basic integrals. 
In supersymmetric amplitudes of\ngluons, this means that any tensor Feynman integral that enters\na Feynman diagram calculation can be reduced to a set of scalar\nintegrals, that is, Feynman integrals in a scalar field theory with\na massless particle running in the loop, with rational\ncoefficients. In particular, for ${\cal N}=4$ super Yang-Mills,\nonly scalar box integrals appear.\n\nScalar box integrals are defined as follows,\n\eqn{ifou}{I_{(K_1,K_2,K_3,K_4)} = \int d^4\ell { 1\over\n(\ell^2+i\epsilon) ((\ell-K_1)^2 + i\epsilon )((\ell -K_1-K_2)^2 +\ni\epsilon )((\ell+K_4)^2 + i\epsilon )}.}\nThis is really a function of only three momenta $K_1, K_2, K_3$,\nfor $K_4=-K_1-K_2-K_3$ by momentum conservation. This integral is\nUV finite but it has IR divergencies when at least one $K_i$ is\nnull, i.e., $K_i^2=0$. This implies that a regularization\nprocedure, like dimensional regularization, is required. The\nstructure of the IR singular terms is well understood\n\cite{Catani:1998bh}. We do not discuss it here because it is not\nrelevant for the quadruple cut technique.\n\nIn a given amplitude, $K_i$ is the sum of consecutive momenta of\nexternal gluons. We discuss this in more detail below.\n\n(2) Consider a unitarity cut in a given channel, say the\n$s$-channel. Recall that this is defined by summing over all\nFeynman diagrams that contain two propagators whose momenta differ\nby $s$ and by cutting those two propagators. Cutting a propagator\n$1\/(P^2+i\epsilon)$ means removing the principal part, i.e.,\nreplacing the propagator by $\delta^{(+)} (P^2)$. When this is\ndone, the internal particles go on-shell and the sum over Feynman\ndiagrams produces two tree-level amplitudes while the integration\nover the internal momenta becomes an integration over the Lorentz\ninvariant phase space of two null vectors; this is known as a cut\nintegral.
As an example, consider the cut in the\n$P_{ij}^2$-channel, see figure \ref{uni}; the cut integral is given\nby \cite{cuts}\n\eqn{cutii}{C = \int d\mu ~ {\cal A}^{\rm tree} (\ell_1,i,...,j,\n\ell_2) ~ {\cal A}^{\rm tree}(-\ell_2,j+1,..., i-1,-\ell_1),}\nwhere $d\mu$ is the Lorentz invariant phase space measure for\n$(\ell_1,\ell_2)$. The measure is explicitly given by\n\eqn{mme}{ d\mu = d^4\ell_1 d^4\ell_2 \delta^{(+)} (\ell_1^2)\n\delta^{(+)} (\ell_2^2) \delta^{(4)}(\ell_1 + \ell_2 - P_{ij}),}\nwith $P_{ij}$ denoting the sum of the momenta of gluons from $i$\nto $j$.\n\n\begin{figure}\n\centering\n\includegraphics[height=1.7in]{GeneralCut.eps}\n\caption{Unitarity cut in the $P_{ij}^2-$channel.\nThe blobs represent tree-level amplitudes in which the propagator\nlines are interpreted as external on-shell particles.} \label{uni}\n\end{figure}\n\n(3) Use reduction techniques to write the integrand of \rf{cutii}\nas a sum of terms that contain a constant coefficient times two\npropagators. Once this is achieved, it is easy to construct a\nfunction of scalar box integrals with given coefficients that has\nsuch a cut. Then repeat this for all other cuts, remembering that\na given scalar box integral has cuts in several different\nchannels. This means that one should not just add the functions\nobtained from the study of each channel. Instead one has to\ncombine them while avoiding overcounting. Once a function with all\nthe correct discontinuities has been constructed, this must be the\nfinal answer for the amplitude. The reason is that supersymmetric\namplitudes are four-dimensional cut-constructible, as mentioned\nabove.\n\nUsing this technique, all MHV amplitudes and the six-gluon NMHV\namplitudes were computed more than ten years ago.
More recently,\nthe seven-gluon NMHV amplitude with all minus helicity gluons\nadjacent was computed by using a combination of this method and\nthe holomorphic anomaly of unitarity cuts \cite{Cachazo:2004by} in\n\cite{Cachazo:2004dr, Britto:2004nj}. The same result as well as\nall other helicity configurations for the seven-gluon amplitude\nwere obtained by the unitarity-based method in \cite{Bern:2004ky}.\n\nAt this point it is important to mention that the integrand in the\ncut integral \rf{cutii} is complicated because in general there\nare many scalar box integrals sharing the same branch cut. The\nreduction techniques, though systematic, can lead to very large\nexpressions for the scalar box coefficients \cite{Bern:2004ky}.\nThese large expressions can be shown to be equivalent to simple\nformulas obtained as educated guesses \cite{Bern:2004ky}. This is\na hint that there must be a more direct method for computing such\ncoefficients.\n\nA related difficulty comes from the fact that a given scalar box\nintegral has many different branch cuts. This means that after its\ncoefficient has been computed from a given cut, one still has to\ndisentangle it from other unknown coefficients over and over again\nin the other cuts. This somehow reduces the efficiency of the\nmethod.\n\nOne way to improve the situation is by cutting three propagators\n\cite{Bern:2004ky, Bern:2004bt}. Note that triple cuts where\na single gluon is trapped between two cut propagators vanish.\nThis would correspond to a cut in a one-particle channel. In the\nnext part of this lecture we will reconsider this issue.\n\nNote that the number of scalar boxes with a given triple cut is\nless than that with a given unitarity cut. However, in general one\nstill has to apply reduction techniques. A class of amplitudes for\nwhich triple cuts are very suitable are next-to-MHV (NMHV)\namplitudes \cite{Bern:2004bt}.
But one might expect that this\nprocedure becomes cumbersome already for NNMHV amplitudes.\n\nIt turns out that there is a way of avoiding the reduction\ntechniques as well as the recalculation of known coefficients.\nThis is achieved by considering quadruple cuts\n\cite{Britto:2004nc} which we now discuss.\n\n\subsubsection{Quadruple Cuts}\n\nConsider a scalar one-loop Feynman integral, $I$. The integral $I$\nis a function of the kinematical invariants constructed out of the\nexternal momenta. In general, $I$ is a complicated multi-valued\nfunction with branch cuts that are like domain walls in the space\nof kinematical invariants, $\Sigma$. As is well known, cutting\ntwo propagators in the loop computes the imaginary part of the\nintegral in a certain region of $\Sigma$. This imaginary part of\n$I$ can be thought of as the discontinuity of $I$ across the\nbranch cut of interest.\n\nNow consider unitarity cuts in several possible channels. One can\nask what is the discontinuity across the intersection of two or\nmore cuts. The answer is given by the union of the set of cut\npropagators! Of particular interest to us is the meaning of\ncutting all propagators in a one-loop integral; such a cut\nintegral computes the discontinuity across the singularity of\nhighest codimension, which is known as the leading singularity.\nFor a more extensive discussion and references see\n\cite{Eden:1966}\footnote{In \cite{Eden:1966}, the arguments are\nmade for a massive scalar field theory. However, it turns out that\nthe relevant results for our discussion can be used in massless\ntheories with little modification.}.\n\nAs mentioned above, ${\cal N}=4$ one-loop amplitudes of gluons can\nbe written as a linear combination of scalar box integrals with\nrational coefficients. The scalar box integrals can be thought of\nas a ``basis of vectors\" in some sort of vector space.
The idea is\nthat this basis is in some appropriate sense\northogonal\\footnote{To push the analogy even further, one can\nthink of the scalar box functions defined in \\cite{Bern:1994zx},\nwhich are scalar box integrals nicely normalized, as an\northonormal basis!} (In less supersymmetric theories there also\nare bubble and triangle integrals which break the orthogonality\ncondition).\n\nThe one-loop amplitude ${\\cal A}_n^{\\rm 1-loop}$ can now be interpreted\nas a general vector which can be written as a linear combination\nof the basis. All we need is the appropriate way of projecting the\n``vector\" ${\\cal A}_n^{\\rm 1-loop}$ onto a given vector $I$ in order to\ncompute the corresponding coefficient.\n\n{}From our discussion, it is clear that the natural way of doing\nthis is to consider ${\\cal A}_n^{\\rm 1-loop}$ in the region near the\nleading singularity of $I$, which is unique to $I$. The\ndiscontinuity of ${\\cal A}_n^{\\rm 1-loop}$ across such a singularity is\nthe coefficient of $I$, up to a normalization, which is the analog\nof the norm of $I$.\n\nLet us see how this works in practice. Recall that the scalar box\nintegral \\rf{ifou} is,\n\\eqn{ifout}{I_{(K_1,K_2,K_3)} = \\int d^4\\ell { 1\\over\n(\\ell^2+i\\epsilon) ((\\ell-K_1)^2 + i\\epsilon )((\\ell -K_1-K_2)^2 +\ni\\epsilon )((\\ell+K_4)^2 + i\\epsilon )}.}\nIn the expansion of ${\\cal A}^{\\rm 1-loop}_n$, each $K_i$ in \\rf{ifout}\nis the sum of the momenta of consecutive external gluons.\n\nThe discontinuity across the leading singularity $\\Delta_{LS}$ is\ncomputed by cutting all four propagators. 
This is called a\nquadruple cut:\n\\eqn{wpuvv}{\\Delta_{LS} I_{(K_1,K_2,K_3)} = \\int d^4\\ell\n~\\delta^{(+)}(\\ell^2) ~\\delta^{(+)}((\\ell-K_1)^2)\n~\\delta^{(+)}((\\ell -K_1-K_2)^2)~\\delta^{(+)}((\\ell+K_4)^2).}\n\nIn order to make the discussion more explicit we introduce\nnotation for the coefficients of $I$ in the expansion of\n${\\cal A}_n^{\\rm 1-loop}$ as follows:\n \\eqn{ampli}{ {\\cal A}_n^{\\rm 1-loop} =\n\\sum_{1 0$ is a small absolute constant.\n\\end{thm}\n\nAs a consequence of Theorem \\ref{thm:main_moments_Gaussian}, we will prove the following central limit theorem.\n\n\\begin{thm}\n\\label{thm:Gaussian_RMF}\nFor $X(n)$ drawn randomly as a Rademacher or Steinhaus random multiplicative function, define random trigonometric polynomials $P_N$ by\n$$\nP_N(\\theta) = \\frac{1}{\\sqrt{N}}\\sum_{n=1}^N X(n) e(n\\theta).\n$$\nThen almost surely,\n\\begin{equation}\n\\label{eq:as_CLT}\n\\lim_{N\\rightarrow\\infty} \\mathrm{meas}\\Big\\{ \\theta \\in [0,1]:\\, P_N(\\theta) \\in E\\Big\\} = \\frac{1}{\\pi} \\int_{E} e^{-x^2-y^2}\\, dxdy,\n\\end{equation}\nfor any rectangle $E \\subset \\mathbb{C}$.\n\\end{thm}\n\nHere ``almost surely\" means upon drawing $X(n)$, this event occurs with probability $1$. The measure in \\eqref{eq:as_CLT} is the Lebesgue measure on $[0,1]$, with $\\mathrm{meas}\\{[0,1]\\} = 1$.\n\n\nTheorem \\ref{thm:Gaussian_RMF} was first obtained by A. Harper \\cite{harper_com} using martingale methods, though this work was not published. The approach we give here is via moments.\n\nFurthermore, from the information on high moments in Theorems \\ref{thm:main_moments_Gaussian} and a few additional estimates we obtain information about the sup-norm of $P_N$. 
Recall that a sequence of events $E_n$ is said to occur \\emph{asymptotically almost surely} if $\\mathbb{P}(E_n) = 1-o(1)$ as $n\\rightarrow\\infty$.\n\n\\begin{thm}\\label{thm:supnorm}\nFor $X(n)$ a Rademacher or Steinhaus random multiplicative function, we have\n$$\n\\Big(\\frac{\\log N}{\\log_2 N}\\Big)^{1\/6} \\,\\leq \\, \\max_\\theta |P_N(\\theta)| \\,\\leq \\, \\exp\\big( 3\\sqrt{\\log N \\log_2 N}\\big),\n$$\nasymptotically almost surely as $N\\rightarrow\\infty$.\n\\end{thm}\n\nThis result allows us to see that $\\max_\\theta|P_N(\\theta)|$ will typically be at least as large as a power of $\\log N$, thus distinguishing conjectural behavior of Fekete polynomials from typical behavior of random multiplicative functions.\n\nA well-known conjecture of S\\'{a}rk\\\"{o}zy \\cite[Conj. 49]{sarkozy} is that if $f(n): \\N \\to \\{\\pm1\\}$ is \\emph{any} multiplicative function, then\n$$\n\\limsup_{N\\to\\infty} \\frac1{\\sqrt{N}} \\max_{\\theta} \\Big| \\sum_{n \\le N} f(n) e(n\\theta) \\Big| = +\\infty.\n$$\nThe lower bound in Theorem \\ref{thm:supnorm} shows that for a typical completely multiplicative function a rather stronger statement holds.\n\n\\subsection{Long tails}\n\nIn fact it seems likely that asymptotically almost surely,\n\\begin{equation}\\label{eq:supnorm_conj}\n\\max_\\theta |P_N(\\theta)| \\sim (\\log N)^{1\/2},\n\\end{equation}\nand it may even be that this is true almost surely as for independent coefficients. Nonetheless we expect the random variables $\\max_\\theta |P_N(\\theta)|$ to have rather longer tails than in the independent case, presenting a serious challenge in proving \\eqref{eq:supnorm_conj} via a moment or exponential moment method.\n\n\nIn fact, such long tails are already present in the computation of high moments. 
The moment $\\E\\, \\mathfrak{m}_N^{(k)}$ is much larger than a Gaussian moment for $k$ larger than $(\\log N)^{1\/2+\\varepsilon}$, and the standard deviation of $\\mathfrak{m}_N^{(k)}$ will be at least as large as the expected value, indicating that there is no concentration of this random variable in the mean-square sense.\n\n\\begin{thm}\n\\label{thm:main_moments_larger}\nFor $X(n)$ a Rademacher or Steinhaus random multiplicative function, we have for positive integers $k \\leq N^{1\/4}$,\n\\begin{equation}\\label{eq:moment_mean_larger}\n\\E\\, \\mathfrak{m}_N^{(k)} = e^{O(\\log N)} \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k}.\n\\end{equation}\n\nAs a consequence, for any $\\varepsilon > 0$, there is a constant $B_\\varepsilon$ such that if $N$ is sufficiently large (depending on $\\varepsilon$) and $B_\\varepsilon (\\log N \/ \\log_2 N)^{1\/2} \\leq k \\leq N^{1\/4}$,\n\\begin{equation}\\label{eq:moment_mean_lowerbound}\n\\E\\, \\mathfrak{m}_N^{(k)} \\gg \\Big(\\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k}\\Big)^{1-\\varepsilon}.\n\\end{equation}\n\nMoreover, there is an absolute constant $B$ such that for $B (\\log N \/ \\log_2 N)^{1\/2} \\leq k \\leq N^{1\/4}$,\n\\begin{equation}\\label{eq:moment_variance_larger}\n\\Var(\\mathfrak{m}_N^{(k)}) \\gg (\\E\\, \\mathfrak{m}_N^{(k)})^2.\n\\end{equation}\n\\end{thm}\n\n\\begin{rmk}\nThe implicit constants in this theorem are to be understood as absolute (independent of $k, N$ and $\\varepsilon$).\n\\end{rmk}\n\nA considerable amount is known about the moments on the right hand side of \\eqref{eq:moment_mean_larger} and \\eqref{eq:moment_mean_lowerbound}. We recall the most relevant result in Theorem \\ref{thm:harper_highmoments} below.\n\nFrom Theorem \\ref{thm:harper_highmoments} the reader should check that the right hand side of \\eqref{eq:moment_mean_lowerbound} is considerably larger than the Gaussian moment $k!$ for large $k$. 
Thus the asymptotic $\\E\\, \\mathfrak{m}_N^{(k)} \\sim k!$ implied by Theorem \\ref{thm:main_moments_Gaussian} for $k \\ll (\\log N\/\\log_2 N)^{1\/3}$ cannot be true for $k \\gg (\\log N\/\\log_2 N)^{1\/2}$.\n\nNote that while \\eqref{eq:moment_variance_larger} indicates that $\\mathfrak{m}_N^{(k)}$ will not concentrate around its mean value for such large $k$ in the mean-square sense, it is quite possible that $\\mathfrak{m}_N^{(k)}$ will concentrate around a median value in probability, even for $k$ of order $(\\log N)^{1\/2}$ or larger. The latter phenomenon would be far more subtle, and we are not able to address it here.\n\n\\begin{rmk}\nThe estimate \\eqref{eq:moment_mean_larger} is not useful for $k = o((\\log N \/ \\log_2 N)^{1\/2})$ as in this range the error term is larger than the main term. It seems possible that with more work the range of $k$ in which the moments of Theorem \\ref{thm:main_moments_Gaussian} are Gaussian in mean square can be extended to $k = o((\\log N\/\\log_2 N)^{1\/2})$, but Theorem \\ref{thm:main_moments_larger} tells us we can go no further. \n\\end{rmk}\n\n\\begin{rmk}\nAs we will discuss in section \\ref{sec:supremum}, unit-circle moments would need to be well understood for $k \\approx \\log N$ in order to capture the true order of magnitude of $\\max_\\theta \\left| P_N(\\theta) \\right|$ (see also \\cite[Sec. 4.5]{green}). Theorem \\ref{thm:main_moments_larger} thus rules out such an approach, at least if \\eqref{eq:supnorm_conj} is true.\n\\end{rmk}\n\n\\begin{rmk}\nNote that $N^{-1\/2} \\sum_{n\\leq N} X(n) = P_N(0)$. Thus the average value of $2k$th moments on the unit circle is essentially controlled for large enough $k$ by the values of the polynomial at $\\theta = 0$.\n\\end{rmk}\n\n\\subsection{Notation}\nWe will write $f \\ll g$ or alternatively $f=O(g)$, if there exists a positive constant $C$ such that $|f|\\leq C |g|$. 
A subscript $f\\ll_t g$ may be added to emphasize the dependence of the implicit constant $C$ on the parameter $t$. The symbol $\\tau_k$ is reserved for the $k$-fold divisor function and we will write $n = \\square$ to indicate that the integer $n$ is a perfect square.\n\n\\subsection{Acknowledgments}\nWe thank Adam Harper for explaining his previous work on Theorem \\ref{thm:Gaussian_RMF} to us, pointing us in the direction of \\cite{VW} for the computation of high moments, and comments on a preprint version, Trevor Wooley for explaining some of the details of a point counting method to us, and Maxim Gerspach for pointing out a mistake in Theorem \\ref{thm:main_moments_larger} in an earlier draft. Likewise we thank Oleksiy Klurman and an anonymous referee for helpful comments and corrections.\nJ.B. is supported by ERC Advanced Grant 692616. The research of A.N. was funded by ISF Grant 1903\/18. B.R. received partial support from an NSERC grant and US NSF FRG grant 1854398.\n\n\\section{Method of proof}\n\n\\subsection{Point counting}\n\nWe will explicitly compute moments. Note that for $X(n)$ a Rademacher random multiplicative function, the value of\n\\begin{equation} \\label{eq:P_N_moments_avg}\nN^{\\tfrac12 (j+k)} \\cdot \\E \\int_0^1 \\big( P_N(\\theta) \\big)^j \\big( \\overline{P_N(\\theta)} \\big)^k\\, d\\theta\n\\end{equation}\nis equal to the count of integer solutions to the system of equations\n\\begin{equation} \\label{eq:fundsystem_squares}\n\\begin{split}\nm_1 \\cdots m_{j} &n_1 \\cdots n_{k}=\\square \\\\\nm_1+\\cdots +m_j&=n_1 +\\cdots +n_k,\n\\end{split}\n\\end{equation}\nwith $m_r, n_s \\in [1,N]$ for all $r,s$. We label this count by $\\mathcal{N}^{\\, \\square}_{j,k}(N)$. 
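For very small parameters the count $\mathcal{N}^{\, \square}_{j,k}(N)$ can be checked by direct enumeration. The following Python sketch is purely our illustration (the function names are ours, and the search is exponential in $j+k$, so it is only feasible for tiny $j,k,N$):

```python
from itertools import product
from math import isqrt, prod

def is_square(v):
    # integer perfect-square test via math.isqrt
    r = isqrt(v)
    return r * r == v

def count_square_system(j, k, N):
    """Brute-force N^square_{j,k}(N): tuples (m_1..m_j, n_1..n_k) in
    [1,N]^{j+k} with m_1*...*m_j*n_1*...*n_k a perfect square and
    m_1+...+m_j = n_1+...+n_k."""
    count = 0
    for m in product(range(1, N + 1), repeat=j):
        for n in product(range(1, N + 1), repeat=k):
            if sum(m) == sum(n) and is_square(prod(m) * prod(n)):
                count += 1
    return count
```

For $j=k=1$ the only solutions are $m_1 = n_1$, giving $N$; for $j=k=2$ one can watch off-diagonal solutions such as $(m_1,m_2;n_1,n_2)=(1,9;2,8)$, with $1\cdot 9\cdot 2\cdot 8 = 144 = \square$, appear once $N \geq 9$.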
\n\nLikewise for $X(n)$ a Steinhaus random multiplicative function the value of \\eqref{eq:P_N_moments_avg} is given by the count of integer solutions to the system of equations\n\\begin{equation} \\label{eq:fundsystem_circles}\n\\begin{split}\nm_1 \\cdots m_{j} &=n_1 \\cdots n_{k} \\\\\nm_1+\\cdots +m_j&=n_1 +\\cdots +n_k,\n\\end{split}\n\\end{equation}\nwith $m_r, n_s \\in [1,N]$ for all $r,s$. We label this count by $\\mathcal{N}^{\\, \\bullet}_{j,k}(N)$.\n\nAn important subcollection of solutions to the systems \\eqref{eq:fundsystem_squares} and \\eqref{eq:fundsystem_circles} is given by the \\emph{diagonal} contributions,\nin which $\\{m_r\\}$ is a permutation of $\\{n_s\\}$.\nIn the notation of multisets, we define the count of diagonal contributions to be the count of integer solutions to\n$$\n\\{m_1,...,m_j\\} = \\{n_1,...,n_k\\},\n$$\nwith $m_r, n_s \\in [1,N]$ for all $r,s$. We label this count by $\\mathcal{D}_{j,k}(N)$.\n\n\\begin{lem} \\label{lem:diag_count}\nFor $j,k \\leq N^{1\/2}$,\n$$\n\\mathcal{D}_{j,k}(N) = \\mathbf{1}_{jk} \\, k! \\, N^k \\left( 1+ O(k^2\/N)\\right).\n$$\n\\end{lem}\n\n\\begin{proof}\nIf $j\\neq k$ obviously $\\mathcal{D}_{j,k}(N) = 0$. For $j=k$, we have\n$$\nk! N (N-1) \\cdots (N-k+1) \\leq \\mathcal{D}_{j,k}(N) \\leq k! N^k,\n$$\nwith the lower bound coming from matches of $m_1,...,m_k$ in which all $m_r$ are distinct. And a crude bound reveals\n\\begin{multline*}\nN(N-1)\\cdots (N-k+1) = N^k \\exp\\left( \\log(1-\\tfrac{1}{N}) + \\cdots + \\log(1 - \\tfrac{k-1}{N}) \\right) \\\\= N^k \\left( 1 + O(k^2\/N)\\right). \\qedhere\n\\end{multline*}\n\\end{proof}\n\nThese diagonal contributions will constitute the most important contribution to the counts \\eqref{eq:fundsystem_squares} and \\eqref{eq:fundsystem_circles}. 
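The diagonal count can likewise be enumerated directly for tiny $k$ and $N$; the Python sketch below (ours, purely illustrative) compares $\mathcal{D}_{k,k}(N)$ with the bounds $k!\,N(N-1)\cdots(N-k+1)$ and $k!\,N^k$ appearing in the proof above:

```python
from itertools import product
from math import factorial

def diagonal_count(j, k, N):
    """Brute-force D_{j,k}(N): tuples (m, n) in [1,N]^j x [1,N]^k whose
    multisets {m_1,...,m_j} and {n_1,...,n_k} coincide."""
    if j != k:
        return 0  # multisets of different sizes never match
    rng = range(1, N + 1)
    return sum(
        1
        for m in product(rng, repeat=k)
        for n in product(rng, repeat=k)
        if sorted(m) == sorted(n)  # multiset equality
    )
```

For $k = 2$ this gives $\mathcal{D}_{2,2}(N) = 2N^2 - N$ (two orderings of $n$ when $m_1 \neq m_2$, one when $m_1 = m_2$), squeezed between $2N(N-1)$ and $2N^2$ as the lemma predicts.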
We thus seek to bound \\emph{off-diagonal} counts,\n$$\n\\mathcal{E}^{\\, \\square}_{j,k}(N) = \\mathcal{N}^{\\, \\square}_{j,k}(N) - \\mathcal{D}_{j,k}(N),\n$$\nand\n$$\n\\mathcal{E}^{\\, \\bullet}_{j,k}(N) = \\mathcal{N}^{\\, \\bullet}_{j,k}(N) - \\mathcal{D}_{j,k}(N).\n$$\n\n\\subsection{A factorization of Vaughan and Wooley}\n\nOur main tool will be a variant of a device introduced by Vaughan and Wooley \\cite{VW}. Let $sq(\\nu)$ be the largest square divisor of a positive integer $\\nu$, so that $\\nu\/sq(\\nu)$ is squarefree.\n\n\\begin{lem}\\label{lem:triangular}\nIf for positive integers $\\nu_1,...,\\nu_\\ell$ we have \n$$\n\\nu_1\\cdots \\nu_\\ell = \\square,\n$$\nthen there exists an upper triangular array of $\\binom{\\ell+1}{2}$ positive integers\n\\begin{equation} \\label{eq:tri_array}\n\\begin{matrix}\nb_{11} & b_{12} & \\cdots & b_{1\\ell} \\\\\n & b_{22} & \\cdots & b_{2\\ell} \\\\\n & & \\ddots & \\vdots \\\\\n & & & b_{\\ell \\ell}\n\\end{matrix}\n\\end{equation}\nsuch that for $1 \\leq r \\leq \\ell$, we have $b_{rr}^2 = sq(\\nu_r)$, and $\\nu_r$ is the product of row $r$ and column $r$ in this array:\n$$\n\\nu_r = (b_{1r}\\cdots b_{rr}) (b_{rr}\\cdots b_{r\\ell}).\n$$\n\\end{lem}\n\n\\begin{proof}\nNote the numbers $\\nu_r\/sq(\\nu_r)$ will be squarefree for all $r$. We fix $\\ell$ and let $E = (\\nu_1\/sq(\\nu_1))\\cdots (\\nu_\\ell\/sq(\\nu_\\ell))$. Obviously such an array exists if $E=1$. We use induction on $E$, supposing we have proved the claim for all smaller values. We have $E = \\square$, and so if $E > 1$, we must have $\\gcd(\\nu_u\/sq(\\nu_u), \\nu_v\/sq(\\nu_v))~>~1$ for some distinct indices $u < v$. 
\nLet $c = \\gcd(\\nu_u\/sq(\\nu_u), \\nu_v\/sq(\\nu_v))$.\nNote that $c$ must be squarefree, and further note\n$$\n(\\nu_1\/sq(\\nu_1))\\cdots \\left(\\tfrac{\\nu_u\/sq(\\nu_u)}{c}\\right) \\cdots \\left(\\tfrac{\\nu_v\/sq(\\nu_v)}{c}\\right) \\cdots (\\nu_\\ell\/sq(\\nu_\\ell)) = \\square.\n$$\nBy inductive hypothesis an array $b_{rs}$ generates the values $\\nu_1,\\cdots, \\left(\\tfrac{\\nu_u}{c}\\right), \\cdots, \\left(\\tfrac{\\nu_v}{c}\\right), \\cdots, \\nu_\\ell$ with its row-times-column products, and thus one may check the array\n$$\n\\begin{cases}\nb_{rs} & \\textrm{if}\\; (r,s) \\neq (u,v) \\\\\nc \\, b_{uv} & \\textrm{if}\\; (r,s) = (u,v)\n\\end{cases}\n$$\ngenerates the values $\\nu_1,...,\\nu_\\ell$ with its row-times-column products, which completes the proof.\n\\end{proof}\n\nThe above lemma will allow us to consider moments for both Rademacher and Steinhaus random multiplicative functions. If we were treating the Steinhaus case alone, then it would be enough to use the original version of the device, which follows.\n\n\\begin{lem} [Vaughan-Wooley] \\label{lem:rectangular}\nIf for positive integers $m_1,...,m_j$ and $n_1,...,n_k$ we have\n$$\nm_1\\cdots m_j = n_1 \\cdots n_k,\n$$\nthen there exists a $j\\times k$ array of positive integers\n\\begin{equation} \\label{eq:rect_array}\n\\begin{matrix}\na_{11} & a_{12} & \\cdots & a_{1k} \\\\\na_{21} & a_{22} & \\cdots & a_{2k} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\na_{j1} & a_{j2} & \\cdots & a_{jk}\n\\end{matrix}\n\\end{equation}\nsuch that for $1 \\leq r \\leq j$ the product of row $r$ is equal to $m_r$:\n$$\nm_r = a_{r1}a_{r2}\\cdots a_{rk},\n$$\nand for $1 \\leq s \\leq k$ the product of column $s$ is equal to $n_s$:\n$$\nn_s = a_{1s}a_{2s}\\cdots a_{js}.\n$$\n\\end{lem}\n\nThis lemma can be proved using the same inductive procedure as above (see the original proof in \\cite[Section 8]{VW}, as well as \\cite[Section 4]{granville}). 
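The factorization in the Vaughan-Wooley lemma is effective: working prime by prime, the array can be filled by the classical northwest-corner rule for a matrix with prescribed row and column sums of exponents, which amounts to a greedy gcd sweep. The Python sketch below is our constructive illustration of this (it is not the inductive proof sketched above):

```python
from math import gcd, prod

def vw_array(ms, ns):
    """Given positive integers with prod(ms) == prod(ns), return a j x k
    array a with row products ms and column products ns.  Greedy sweep:
    a[r][s] takes the gcd of what remains of m_r and of n_s, i.e. for each
    prime it assigns min(remaining row exponent, remaining column exponent)."""
    assert prod(ms) == prod(ns)
    M, Nv = list(ms), list(ns)          # remaining factors of rows / columns
    a = [[1] * len(ns) for _ in ms]
    for r in range(len(ms)):
        for s in range(len(ns)):
            g = gcd(M[r], Nv[s])
            a[r][s] = g
            M[r] //= g
            Nv[s] //= g
    # equal totals guarantee everything is consumed
    assert all(x == 1 for x in M) and all(x == 1 for x in Nv)
    return a
```

For instance `vw_array([12, 10], [8, 15])` yields `[[4, 3], [2, 5]]`, with row products $12, 10$ and column products $8, 15$.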
\n\n\\subsection{A rough outline}\n\nWe carry out the details of bounding $\\mathcal{E}^{\\, \\square}_{j,k}(N)$ and $\\mathcal{E}^{\\, \\bullet}_{j,k}(N)$ later, but an approximation to the idea can be explained here. As observed by Vaughan and Wooley, each non-diagonal solution to \\eqref{eq:fundsystem_circles} implies at least one `non-diagonal' solution to\n$$\n(a_{11}\\cdots a_{1k}) + \\cdots + (a_{j1}\\cdots a_{jk}) = (a_{11}\\cdots a_{j1}) + \\cdots + (a_{1k} \\cdots a_{jk}),\n$$\nin which at least one of the products on the left hand side has no matching product on the right hand side, which will be seen to force a solution to one of the $a_{rs}$ in terms of the others, e.g.\n$$\na_{11}(a_{12}\\cdots a_{1k} - a_{21}\\cdots a_{j1}) = (a_{12}\\cdots a_{j2} + \\cdots + a_{1k}\\cdots a_{jk}) - (a_{21}\\cdots a_{2k} + \\cdots + a_{j1}\\cdots a_{jk}),\n$$\nand this loss in degrees of freedom entails fewer non-diagonal solutions than diagonal. This is a slight oversimplification, but with only a little more care the argument can be made rigorous. A similar argument works for the system \\eqref{eq:fundsystem_squares}. Likewise we obtain the mean-square estimate in Theorem \\ref{thm:main_moments_Gaussian} by reducing the variance of $\\mathfrak{m}_N^{(j,k)}$ to a related point count.\n\nNote however that the number of solutions to e.g. $m_1 = a_{11}\\cdots a_{1k}$ grows as $k$ grows, and this phenomenon is responsible for this argument breaking down as $k$ increases with $N$ in the form of Theorem \\ref{thm:main_moments_larger}. \n\nTheorem \\ref{thm:Gaussian_RMF} then follows from these moment estimates by using the moment method and a simple bound on $L^p$-norms of short-interval sums of random multiplicative functions to bootstrap from an asymptotically almost sure central limit theorem to an almost sure central limit theorem. 
Theorem \\ref{thm:supnorm} likewise follows from these moment estimates, using the fact that for degree $N$ trigonometric polynomials, the sup-norm is reasonably close to an $L^p$-norm, even for $p$ relatively small compared to $N$. \n\n\n\n\\section{Point counting}\n\n\\subsection{Preliminary results}\n\nWe state the following results regarding the $k$-fold divisor function for easy reference later.\n\n\\begin{lem}[Divisor sum bounds]\\label{lem:divisorsumbound}\nFor $N\\geq 3$ and $\\ell \\geq 1$,\n$$\n\\sum_{n \\leq N} \\tau_\\ell(n) \\le N (2\\log N)^{\\ell-1}.\n$$\n\\end{lem}\n\n\\begin{proof}\nWe have\n$$\n\\sum_{n\\leq N} \\tau_\\ell(n) = \\sum_{a_1\\cdots a_\\ell \\leq N} 1 \\le \\sum_{a_1\\cdots a_{\\ell-1} \\leq N} \\frac{N}{a_1\\cdots a_{\\ell-1}} \\leq N (1+ \\log N)^{\\ell-1}.\n$$\nAs $1+ \\log N \\leq 2\\log N$ for $N\\geq 3$, the result follows.\n\\end{proof}\n\nWe also note the well-known pointwise divisor bound: for all fixed $\\ell \\geq 1$ and $\\varepsilon > 0$, we have\n\\begin{equation}\\label{eq:divisor_bound}\n\\tau_\\ell(n) \\ll_{\\ell, \\varepsilon} n^\\varepsilon,\n\\end{equation}\nobtained by iterating the bound for $\\ell = 2$ (see e.g. \\cite[(2.20)]{montgomeryvaughan}).\n\nIn addition we have the crude estimates\n\\begin{equation}\\label{eq:div_jk_bound}\n\\tau_j(n)\\tau_k(n) \\leq \\tau_{jk}(n),\n\\end{equation}\n\\begin{equation}\\label{eq:div_nm_bound}\n\\tau_k(nm) \\leq \\tau_k(n)\\tau_k(m).\n\\end{equation}\nBoth are simple if slightly tedious to verify by setting $m = p^a$ and $n = p^b$ for a prime $p$, then using multiplicativity to extend to other values. 
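These divisor-function estimates are easy to test numerically. The Python sketch below (ours, illustrative only) computes $\tau_\ell$ via the recursion $\tau_\ell(n) = \sum_{d \mid n} \tau_{\ell-1}(n/d)$ and checks \eqref{eq:div_jk_bound}, \eqref{eq:div_nm_bound} and the divisor sum bound in a small range:

```python
from math import log

def tau(l, n):
    """l-fold divisor function: number of ordered factorizations
    n = a_1 * ... * a_l into positive integers."""
    if l == 1:
        return 1
    return sum(tau(l - 1, n // d) for d in range(1, n + 1) if n % d == 0)

def check_bounds(l_max, n_max):
    for n in range(1, n_max + 1):
        # tau_j(n) * tau_k(n) <= tau_{jk}(n)
        for j in range(1, l_max + 1):
            for k in range(1, l_max + 1):
                assert tau(j, n) * tau(k, n) <= tau(j * k, n)
        # submultiplicativity: tau_k(nm) <= tau_k(n) * tau_k(m), here k = 2
        for m in range(1, n_max + 1):
            assert tau(2, n * m) <= tau(2, n) * tau(2, m)
    # divisor sum bound: sum_{n <= N} tau_l(n) <= N (2 log N)^{l-1}
    for l in range(1, l_max + 1):
        total = sum(tau(l, n) for n in range(1, n_max + 1))
        assert total <= n_max * (2 * log(n_max)) ** (l - 1)
```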
\\eqref{eq:div_nm_bound} has appeared before as \\cite[(45)]{sandor}.\n\nThese estimates then imply the following bounds, for all $\\ell \\geq 1$\n\\begin{equation}\\label{eq:tau_squared}\n\\sum_{n\\leq N} \\tau_\\ell(n)^2 \\leq \\sum_{n\\leq N} \\tau_{\\ell^2}(n) \\leq N (2 \\log N)^{\\ell^2-1},\n\\end{equation}\n\\begin{equation}\\label{eq:tau_of_squares}\n\\sum_{n\\leq N} \\tau_\\ell(n^2) \\leq \\sum_{n\\leq N} \\tau_\\ell(n)^2 \\leq N (2\\log N)^{\\ell^2-1}.\n\\end{equation}\nA somewhat better estimate than \\eqref{eq:tau_squared} has appeared in \\cite[Theorem 1.7]{Nor} for a limited range of $\\ell$, but the estimate here will be sufficient for our purposes.\n\n\\subsection{The mean of moments: Gaussian range}\n\nWe compute counts which are related to $\\mathbb{E} \\, \\mathfrak{m}_N^{(j,k)}$. In the arguments that follow we label the collection of triangular arrays of positive integers \\eqref{eq:tri_array} by $T_\\ell$ (thus $T_\\ell$ is isomorphic to $\\mathbb{N}^{\\binom{\\ell+1}{2}}$) and for $\\mathbf{b} \\in T_\\ell$, we let\n$$\nb_r^\\ast = (b_{1r}\\cdots b_{rr})(b_{rr}\\cdots b_{r\\ell})\n$$\ndenote the product of the $r$th column and $r$th row.\n\n\\begin{lem}\\label{lem:square_offdiag}\nUniformly for all $1\\leq j\\leq k$ and $N\\geq 10$,\n$$\n\\Esquare_{j,k}(N) \\ll N^{(j+k)\/2} \\exp\\Big[ - \\frac{\\log N}{6k} + O(k^2 (\\log k + \\log_2 N))\\Big].\n$$\n\\end{lem}\n\n\\begin{proof}\nWe first prove an upper bound for $\\Fsquare_{j,k}(N)$ defined as follows: $\\Fsquare_{j,k}(N)$ is the count of $(m_1,...,m_j,n_1,...,n_k) \\in \\mathbb{N}^{j+k}$ such that $m_r, n_s \\in [1,N]$ for all $1\\leq r \\leq j$, $1 \\leq s \\leq k$, and\n\\begin{gather*}\nm_1 \\cdots m_{j} n_1 \\cdots n_{k}=\\square \\\\\nm_1+\\cdots m_j=n_1 +\\cdots +n_k, \\\\\n\\textrm{the sets} \\; \\{m_1,...m_j\\} \\; \\textrm{and} \\; \\{n_1,...,n_k\\} \\; \\textrm{have no elements in common.}\n\\end{gather*}\n\nWe also define $\\Gsquare_{j,k}(N)$ to be the count of $\\mathbf{b} \\in 
T_{j+k}$ satisfying\n\\begin{enumerate}[(i)]\n\\item $b_1^\\ast,...,b_{j+k}^\\ast \\in [1,N]$,\n\\item $b_1^\\ast + \\cdots + b_j^\\ast = b_{j+1}^\\ast + \\cdots + b_{j+k}^\\ast$, and\n\\item the sets $\\{b_1^\\ast,...,b_j^\\ast\\}$ and $\\{b_{j+1}^\\ast,...,b_{j+k}^\\ast\\}$ have no elements in common.\n\\end{enumerate}\nFor notational reasons let $\\ell = j+k$. For indices $u \\leq v$ of the $\\ell \\times \\ell$ upper triangle, we likewise let $\\Gsquare_{j,k}(N;\\, u,v)$ be the count of $\\mathbf{b} \\in T_\\ell$ satisfying (i), (ii), (iii) above and in addition\n\\begin{enumerate}[(iv)]\n\\item $b_{uv}$ is a maximal entry of $\\mathbf{b}$.\n\\end{enumerate}\n\nBy Lemma \\ref{lem:triangular},\n\\begin{equation}\\label{eq:F_boundedby_G}\n\\Fsquare_{j,k}(N) \\leq \\Gsquare_{j,k}(N),\n\\end{equation}\nand plainly\n\\begin{equation}\\label{eq:G_boundedby_maximalG}\n\\Gsquare_{j,k}(N) \\leq \\sum_{1 \\leq u \\leq v \\leq \\ell} \\Gsquare_{j,k}(N;\\, u,v).\n\\end{equation}\n\nSo it will be sufficient to treat $\\Gsquare_{j,k}(N;\\, u,v)$. 
When $b_{uv}$ is the largest entry of $\\mathbf{b}$, the inequalities $b_u^\\ast \\leq N$ and $b_v^\\ast \\leq N$ imply that either\n\\begin{equation}\\label{eq:bprod_small1}\n\\begin{split}\n(b_{1u}\\cdots b_{uu}) (b_{uu} \\cdots \\widehat{b_{uv}} \\cdots b_{u\\ell}) &\\leq N^{1-1\/(\\ell+1)}\\\\ \n\\textrm{and}& \\\\\n(b_{1v}\\cdots \\widehat{b_{uv}} \\cdots b_{vv}) (b_{vv} \\cdots b_{v\\ell}) &\\leq N^{1-1\/(\\ell+1)}\n\\end{split}\n\\quad (\\textrm{if}\\; u < v)\n\\end{equation}\nor\n\\begin{equation}\\label{eq:bprod_small2}\n(b_{1v}\\cdots \\widehat{b_{vv}}) (\\widehat{b_{vv}}\\cdots b_{v\\ell}) \\leq N^{1-2\/(\\ell+1)} \\quad (\\textrm{if}\\; u = v),\n\\end{equation}\nwhere $\\widehat{b_{uv}}$ or $\\widehat{b_{vv}}$ indicates these terms are excluded from the products above.\n\nFurthermore, if $1 \\leq u \\leq j$ and $j+1 \\leq v \\leq j+k$, then, utilizing that by (ii) $b_1^\\ast + \\cdots + b_j^\\ast = b_{j+1}^\\ast + \\cdots + b_{j+k}^\\ast$, we must have \n\\begin{align*}\n&b_{uv} \\left( (b_{1u}\\cdots b_{uu}) (b_{uu} \\cdots \\widehat{b_{uv}} \\cdots b_{u\\ell}) - (b_{1v}\\cdots \\widehat{b_{uv}} \\cdots b_{vv}) (b_{vv} \\cdots b_{v\\ell})\\right) \\\\ \n&\\quad\\quad = b_u^\\ast - b_v^\\ast \n= \\sum_{\\substack{j+1 \\leq r \\leq j+k \\\\ r \\neq v}} b_r^\\ast - \\sum_{\\substack{1\\leq r \\leq j \\\\ r \\neq u}} b_r^\\ast.\n\\end{align*}\nBy (iii) the sets $\\{b_1^\\ast,...,b_j^\\ast\\}$ and $\\{b_{j+1}^\\ast,...,b_{j+k}^\\ast\\}$ are disjoint, and in particular $b_u^\\ast - b_v^\\ast \\ne 0$. Hence, if $1 \\leq u \\leq j$ and $j+1 \\leq v \\leq j+k$, for $\\mathbf{b}$ contributing to the count of $\\Gsquare_{j,k}(N;\\, u,v)$, we have that $b_{uv}$ is determined by the values of $b_{rs}$ for $(r,s) \\neq (u,v)$. An easier analysis shows $b_{uv}$ is determined by other values of $b_{rs}$ for $u,v$ not in this range as well. 
It thus follows for all $u\\leq v$,\n$$\n\\Gsquare_{j,k}(N;\\, u,v) \\leq \\#\\{ (b_{11},b_{12},..., \\widehat{b_{uv}},...,b_{\\ell\\ell}):\\, b_{11}^2 b_{12}^2 \\cdots \\widehat{ (b_{uv}^2)} \\cdots b_{\\ell\\ell}^2 \\leq N^{\\ell - 1\/(\\ell+1)}\\},\n$$\nsince multiplying together the relations $b_r^\\ast \\leq N$ for all $r \\neq u,v$ with the relations \\eqref{eq:bprod_small1} or \\eqref{eq:bprod_small2} for $r = u$ or $v$ yield the inequality inside the brackets above. The above count is just the sum of a divisor function, and applying Lemma \\ref{lem:divisorsumbound} we obtain,\n$$\n\\Gsquare_{j,k}(N; \\, u,v) \\ll N^{\\tfrac{\\ell}{2} - \\tfrac{1}{2(\\ell+1)}} (\\ell \\log N)^{\\binom{\\ell+1}{2}-1},\n$$\nfor all $u,v$. Thus from \\eqref{eq:G_boundedby_maximalG} and \\eqref{eq:F_boundedby_G},\n\\begin{align}\\label{eq:F_bound}\n\\notag \\Fsquare_{j,k}(N) &\\ll \\binom{\\ell+1}{2} N^{\\tfrac{\\ell}{2} - \\tfrac{1}{2(\\ell+1)}} (\\ell \\log N)^{\\binom{\\ell+1}{2}} \\\\\n\\notag & \\ll \\ell^{\\ell^2} N^{\\tfrac{\\ell}{2}- \\tfrac{1}{3\\ell}} (\\log N)^{\\ell^2} \\\\\n& \\ll (2k)^{4k^2} N^{\\tfrac{j+k}{2}- \\tfrac{1}{6k}} (\\log N)^{4k^2},\n\\end{align}\nas $\\ell = j+k \\leq 2k$.\n\nTurning finally to $\\Esquare_{j,k}(N)$ with $j\\leq k$, if $m_1,...m_j,n_1,...,n_k$ is a non-diagonal solution to the system \\eqref{eq:fundsystem_squares}, then either (i) $m_r = n_s$ for some $1\\leq r \\leq j$ and $1 \\leq s \\leq k$ and the remaining $m, n$ are a non-diagonal solution to such a system with $j$ and $k$ reduced by $1$, or the sets $\\{m_1,...,m_j\\}$ and $\\{n_1,...,n_k\\}$ have no elements in common. 
Hence\n$$\n\\Esquare_{j,k}(N) \\leq jk \\,N \\, \\Esquare_{j-1,k-1}(N) + \\Fsquare_{j,k}(N),\n$$\nand by inductively expanding this relation\n\\begin{multline*}\n\\Esquare_{j,k}(N) \\leq \\Fsquare_{j,k}(N) + jk\\, N \\, \\Fsquare_{j-1,k-1}(N) + j(j-1)k(k-1)\\, N^2 \\, \\Fsquare_{j-2,k-2}(N) \\\\\n+ \\cdots + j!\\, k(k-1)\\cdots(k-j+1)\\, N^{j-1} \\Fsquare_{1,k-j+1}(N),\n\\end{multline*}\nnoting that $\\Esquare_{1,k}(N) = \\Fsquare_{1,k}(N)$. Hence from \\eqref{eq:F_bound},\n\\begin{align*}\n\\Esquare_{j,k}(N) &\\ll j \\cdot j!\\, k! \\, (2k)^{4k^2} \\, N^{(j+k)\/2-1\/6k} (\\log N)^{4k^2} \\\\\n&\\ll N^{(j+k)\/2} \\exp\\Big[ - \\frac{\\log N}{6k} + O(k^2 (\\log k + \\log_2 N))\\Big],\n\\end{align*}\nas claimed.\n\\end{proof}\n\nAs an immediate consequence, because any off-diagonal solution to \\eqref{eq:fundsystem_circles} also constitutes an off-diagonal solution to \\eqref{eq:fundsystem_squares}, we immediately obtain\n\n\\begin{lem}\\label{lem:circle_offdiag}\nUniformly for all $1\\leq j\\leq k$ and $N\\geq 10$,\n$$\n\\Ecircle_{j,k}(N) \\ll N^{(j+k)\/2} \\exp\\Big[ - \\frac{\\log N}{6k} + O(k^2 (\\log k + \\log_2 N))\\Big].\n$$\n\\end{lem}\n\nNote that if $A$ is a sufficiently small constant, for $k \\leq A\\, (\\log N \/ \\log_2 N)^{1\/3},$ we have $\\exp[ - \\log N\/6k + O(k^2 (\\log k + \\log_2 N))] \\ll 1\/N^{1\/7k}$. 
Thus, combining these upper-bounds for off-diagonal counts with Lemma \\ref{lem:diag_count}, we deduce\n\n\\begin{prop}\\label{prop:expectation_estimate}\nFor $X(n)$ a Rademacher or Steinhaus random multiplicative function, we have\n$$\n\\mathbb{E}\\, \\mathfrak{m}_N^{(j,k)} = k!\\, \\mathbf{1}_{jk} + O\\Big(\\frac{1}{N^{1\/7k}}\\Big),\n$$\nuniformly for $1 \\leq j \\leq k \\leq A\\, (\\log N \/ \\log_2 N)^{1\/3},$ for a sufficiently small absolute constant $A > 0$.\n\\end{prop}\n\n\n\\subsection{The variance of moments: Gaussian range}\n\nWe define the count\n$$\n\\Vcircle_{j,k}(N)= \\sideset{}{^*}\\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^j \\\\ {\\bf n}, {\\bf n'} \\in [1,N]^k}} 1\n$$\nwhere the asterisked sum above is restricted to tuples ${\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'}$ satisfying,\n\\begin{align}\\label{Vsys}\n\\begin{split}\nm_1 + \\cdots + m_j = n_1 + \\cdots + n_k&, \\quad m_1\\cdots m_j \\neq n_1 \\cdots n_k, \\\\\nm_1' + \\cdots + m_j' = n_1' + \\cdots + n_k'&,\\quad m_1'\\cdots m_j' \\neq n_1' \\cdots n_k',\\\\\nm_1\\cdots m_j m_1'\\cdots m_j' &= n_1\\cdots n_k n_1' \\cdots n_k'.\n\\end{split}\n\\end{align}\n\nThe reader can verify for $X(n)$ a Steinhaus random multiplicative function,\n\\begin{equation}\\label{eq:Steinhaus_variance}\nN^{j+k} \\cdot \\Var\\Big( \\int_0^1 (P_N(\\theta))^j (\\overline{P_N(\\theta)})^k\\, d\\theta \\Big) = \\Vcircle_{j,k}(N).\n\\end{equation}\n\n\\begin{cor}\n\\label{prop:count_twofold}\nUniformly for all $1 \\leq j \\leq k$ and $N \\geq 10$,\n\\begin{equation}\\label{Vbound}\n\\Vcircle_{j,k}(N) \\ll N^{j+k}\\exp\\Big[ - \\frac{\\log N}{12k} + O(k^2 (\\log k + \\log_2 N))\\Big].\n\\end{equation}\n\\end{cor}\n\n\n\n\\begin{proof}\nLet us first suppose that $j \\neq k$. 
Since each solution $({\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'})$ to \\eqref{Vsys} may be combined to form a pair \n\\begin{equation}\\label{concat}\n{\\bf \\underline{m} }=(m_1,...,m_j,m_1',...,m_j'), \\ \\ {\\bf \\underline{n} }=(n_1,...,n_k,n_1',...,n_k'),\n\\end{equation} \nit follows from Lemmas \\ref{lem:diag_count} and \\ref{lem:circle_offdiag} that\n$$\n\\Vcircle_{j,k}(N) \\leq \\Ncircle_{2j,2k}(N) \\ll N^{j+k} \\exp\\Big[ - \\frac{\\log N}{12k} + O(k^2 (\\log k + \\log_2 N))\\Big],\n$$\nsince for $2j \\neq 2k$ all contributions to $\\Ncircle_{2j,2k}(N)$ are off-diagonal.\n\nWe may now focus on the case $j=k$, once again arguing inductively with $k$. Writing $\\Vcircle_{k}(N) =\\Vcircle_{k,k}(N)$, we begin by observing that $\\Vcircle_{1}(N)=0$.\n\nWe make two simplifications. First, it will be sufficient to restrict our attention to solutions $({\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'})$ satisfying the following property: either $\\{m_1,...,m_k\\}$ and $\\{n_1,...,n_k\\}$ have no elements in common, or $\\{m'_1,...,m'_k\\}$ and $\\{n'_1,...,n'_k\\}$ have no elements in common. 
Indeed, letting $\\Wcircle_k(N)$ be the set of solutions for which this is not the case, reducing the system by removing two equal pairs ($m_{r_1} = n_{s_1}$) and ($m'_{r_2} = n'_{s_2}$) from the solution, we see\n$$\n\\Wcircle_k(N) \\leq k^2 N^2 \\Vcircle_{k-1}(N)\n$$\nand the quantity $\\Vcircle_{k-1}(N)$ we will deal with inductively.\n\nSecond, we may also assume that the tuple ${\\bf \\underline{m} }=(m_1,...,m_k,m'_1,...,m'_k)$ is a permutation of ${\\bf \\underline{n} }=(n_1,...,n_k,n'_1,...,n'_k)$ since if $\\Ucircle_k(N)$ is the set of solutions for which this is not the case we have\n\\begin{equation}\\label{e2kbound}\n\\Ucircle_k(N) \\leq \\Ecircle_{2k,2k}(N) \\ll N^{2k} \\exp\\Big[ - \\frac{\\log N}{12k} + O(k^2 (\\log k + \\log_2 N))\\Big]\n\\end{equation} \nby Lemma \\ref{lem:circle_offdiag}.\n\nLet $\\widetilde{V}_{k}(N)$ denote the set of solutions satisfying both simplifications. Given any $({\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'}) \\in \\widetilde{V}_{k}(N)$, we see that ${\\bf m}$ has to be a permutation of ${\\bf n'}$ and similarly that ${ \\bf m'}$ must be a permutation of ${\\bf n}$. Thus\n\\begin{equation}\\label{eq:simplification_bound}\n|\\widetilde{V}_{k}(N)| \\leq \n(k!)^2 \\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^k \\\\ m_1 + \\cdots + m_k = m'_1 + \\cdots + m'_k }} 1 \\leq k^{2k} N^{2k-1}.\n\\end{equation}\nHence\n$$\n\\Vcircle_k(N) \\ll k^{2k} N^{2k-1} + \\Ucircle_k(N) + k^2 N^2 \\Vcircle_{k-1}(N).\n$$\nExpanding inductively and using \\eqref{e2kbound} we verify the claim.\n\\end{proof}\n\nWe also have an analogous count for Rademacher random multiplicative functions. 
Define\n$$\n\\Vsquare_{j,k}(N) = \\sideset{}{^*}\\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^j \\\\ {\\bf n}, {\\bf n'} \\in [1,N]^k} } 1,\n$$\nwhere here the asterisked sum is restricted to ${\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'}$ satisfying,\n\\begin{align}\\label{vsquaresys}\n\\begin{split}\nm_1 + \\cdots + m_j = n_1 + \\cdots + n_k&, \\quad \\ m_1\\cdots m_j n_1 \\cdots n_k \\neq \\square,\\\\\nm_1' + \\cdots + m_j' = n_1' + \\cdots + n_k'&, \\quad \\ m_1'\\cdots m_j' n_1' \\cdots n_k' \\neq \\square,\\\\\nm_1\\cdots m_j m_1'\\cdots m_j' & n_1\\cdots n_k n_1' \\cdots n_k' = \\square.\n\\end{split}\n\\end{align}\nPlainly for Rademacher random multiplicative functions, \\eqref{eq:Steinhaus_variance} remains true with $\\Vsquare_{j,k}(N)$ in place of $\\Vcircle_{j,k}(N)$. \n\n\n\\begin{cor}\n\\label{prop:count_twofold_Sq}\nUniformly for all $1 \\leq j \\leq k$ and $N \\geq 10$, we have the estimate\n\\begin{equation}\\label{Vbound_square}\n\\Vsquare_{j,k}(N) \\ll N^{j+k}\\exp\\Big[ - \\frac{\\log N}{12k} + O(k^2 (\\log k + \\log_2 N))\\Big].\n\\end{equation}\n\n\\end{cor}\n\n\\begin{proof}\nAs in \\eqref{concat}, each solution $({\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'})$ to \\eqref{vsquaresys} may be concatenated to form a pair ${\\bf \\underline{m} }, {\\bf \\underline{n} }$. \nWhen $j \\neq k$ we may apply Lemma \\ref{lem:square_offdiag} to find that \n\\begin{equation}\\label{vsquarejk}\n\\Vsquare_{j,k}(N) \\leq \\mathcal{E}^{\\square}_{2j,2k}(N) \\ll N^{j+k}\\exp\\Big[ - \\frac{\\log N}{12k} + O(k^2 (\\log k + \\log_2 N))\\Big]. \n\\end{equation}\nWhen $j=k$, we consider two sets of solutions to \\eqref{vsquaresys}. First we count the number of solutions $({\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'})$ for which \n$m_1\\cdots m_j m_1'\\cdots m_j' = n_1\\cdots n_k n_1' \\cdots n_k'$. Invoking Corollary \\ref{prop:count_twofold} these contribute no more than $N^{j+k}\\exp[ - \\log N\/12k + O(k^2 (\\log k + \\log_2 N))]$ to $\\Vsquare_{k,k}(N)$. 
The remaining solutions are counted in precisely the same manner as \\eqref{vsquarejk} since we have removed all diagonal tuples $({\\bf \\underline{m} }, {\\bf \\underline{n} })$.\n\\end{proof}\n\nFrom Corollaries \\ref{prop:count_twofold} and \\ref{prop:count_twofold_Sq}, as we did in Proposition \\ref{prop:expectation_estimate}, we obtain the following estimate for the variance.\n\n\\begin{prop}\\label{prop:variance_estimate}\nFor $X(n)$ a Rademacher or Steinhaus random multiplicative function, we have\n$$\n\\Var(\\mathfrak{m}_N^{(j,k)}) = O\\Big(\\frac{1}{N^{1\/15k}}\\Big),\n$$\nuniformly for $1 \\leq j \\leq k \\leq A\\, (\\log N \/ \\log_2 N)^{1\/3},$ for a sufficiently small absolute constant $A > 0$.\n\\end{prop}\n\n\\subsection{Moments of short interval polynomials}\n\nIn bounding the mean of moments of sums $\\sum_{n \\in [M,M+L]} X(n) e(n\\theta)$, we will use the following estimate.\n\n\\begin{lem}\\label{lem:shortintervals}\nFix $\\ell \\geq 1$ and $\\varepsilon > 0$. For any $N\\geq 1$ and any collection of intervals $I_1, ..., I_\\ell \\subset [1,2N]$, we have\n\\begin{equation}\\label{eq:count_shortint_Sq}\n\\sum_{\\substack{\\nu_1\\cdots \\nu_\\ell = \\square \\\\ \\nu_r \\in I_r}} 1 \\ll_{\\ell,\\varepsilon} N^\\varepsilon \\prod_{r=1}^\\ell (|I_r|^{1\/2}+1) \n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nNote that by Lemma \\ref{lem:triangular} it will be sufficient to show\n\\begin{equation}\\label{eq:triangular_shortint}\n\\sum_{\\substack{\\mathbf{b} \\in T_\\ell \\\\ b_r^\\ast \\in I_r}} 1 \\ll_{\\ell,\\epsilon} N^\\varepsilon \\prod_{r=1}^\\ell (|I_r|^{1\/2}+1).\n\\end{equation}\nWe will prove the above estimate by induction.\n\nFor $\\ell=1$ this is just the claim\n$$\n\\sum_{b_{11}^2 \\in I_1} 1 \\ll N^\\varepsilon (|I_1|^{1\/2}+1),\n$$\nbut for $I = (T-L,T]$ the left hand side is $ \\lfloor \\sqrt{T} \\rfloor -\\lfloor \\sqrt{T-L} \\rfloor \\ll L\/\\sqrt{T}+1 \\leq L^{1\/2}+1,$ as claimed.\n\nSuppose \\eqref{eq:triangular_shortint} has 
been verified for $\\ell-1$. Then for $\\ell$, suppose without loss of generality $|I_\\ell| \\leq |I_1|, \\, |I_2|,\\, ... , |I_{\\ell-1}|$. We have by inductive hypothesis\n$$\n\\sum_{\\substack{\\mathbf{b} \\in T_\\ell \\\\ b_r^\\ast \\in I_r}} 1 = \\sum_{\\beta_1 \\cdots \\beta_{\\ell-1} \\beta_\\ell^2 \\in I_\\ell} \\sum_{\\substack{\\mathbf{b} \\in T_{\\ell-1} \\\\ \\beta_r b_r^\\ast \\in I_r \\\\ \\forall\\, 1 \\leq r \\leq \\ell-1}} 1 \\ll \\sum_{\\beta_1 \\cdots \\beta_{\\ell-1} \\beta_\\ell^2 \\in I_\\ell} \\prod_{r=1}^{\\ell-1} \\Big(\\frac{|I_r|^{1\/2}}{\\beta_r^{1\/2}} + 1\\Big) N^\\varepsilon,\n$$\nwhere we have relabeled the right column of $T_\\ell$ by $\\beta_r$ in place of $b_{r \\ell}$.\n\nNote that the number of tuples satisfying $\\beta_1 \\cdots \\beta_{\\ell-1} \\beta_\\ell^2 \\in I_\\ell$ is $\\ll_{\\ell} (|I_\\ell|+1)N^\\varepsilon$ by the pointwise divisor bound \\eqref{eq:divisor_bound}. If we expand the product above, getting $2^{\\ell-1}$ terms, the contribution from any term in which $1$ appears instead of $|I_s|^{1\/2}\/\\beta_s^{1\/2}$ is thus\n$$\n\\ll (|I_\\ell|+1) N^\\varepsilon \\cdot \\prod_{\\substack{1 \\leq r \\leq \\ell-1 \\\\ r\\neq s}} (|I_r|^{1\/2}+1) N^\\varepsilon.\n$$\nAs $|I_\\ell| \\leq |I_s|$ and $\\varepsilon > 0$ is arbitrary, this yields an acceptable contribution to \\eqref{eq:triangular_shortint}.\n\nOn the other hand, in expanding the product, the only term not considered above is\n\\begin{equation}\\label{eq:induction_leftover}\n\\sum_{\\beta_1 \\cdots \\beta_{\\ell-1} \\beta_\\ell^2 \\in I_\\ell} \\prod_{r=1}^{\\ell-1} \\frac{|I_r|^{1\/2}}{\\beta_r^{1\/2}} N^\\varepsilon = \\sum_{\\delta\\beta_\\ell^2 \\in I_\\ell} \\frac{\\tau_{\\ell-1}(\\delta)}{\\delta^{1\/2}} \\prod_{r=1}^{\\ell-1} |I_r|^{1\/2} N^\\varepsilon.\n\\end{equation}\nIf we show\n\\begin{equation}\\label{eq:simple_bound}\n\\sum_{\\delta \\beta^2 \\in I} \\frac{1}{\\delta^{1\/2}} \\ll (|I|^{1\/2}+1) N^\\varepsilon,\n\\end{equation}\nfor all 
intervals $I \\subset [1,2N]$, we will be done, since \\eqref{eq:induction_leftover} then yields an acceptable contribution to \\eqref{eq:triangular_shortint}, by the pointwise divisor bound.\n\nBut note that if $I \\subset [N, 2N]$, we can split the sum in \\eqref{eq:simple_bound} into two pieces: one in which $\\delta > |I|$, which yields a contribution no larger than the right-hand side by the divisor bound, and a second in which $\\delta \\leq |I|$, which yields a contribution no more than\n$$\n\\sum_{\\delta \\leq |I|} \\frac{1}{\\delta^{1\/2}} \\sum_{b^2 \\in I\/\\delta} 1 \\ll \\sum_{\\delta \\leq |I|} \\frac{1}{\\delta^{1\/2}} \\Big( \\frac{|I|\/\\delta}{\\sqrt{N\/\\delta}} + 1\\Big) \\ll |I|^{1\/2} \\log N.\n$$\nThis yields \\eqref{eq:simple_bound} if $I \\subset [N,2N]$. But then summing dyadically (splitting $I$ up into pieces if necessary) gives \\eqref{eq:simple_bound} for all $I \\subset [1,2N]$.\n\\end{proof}\n\n\\section{Proofs of the main results}\n\n\\subsection{Gaussian moments and an almost sure central limit theorem}\n\n\\subsubsection{A proof of Theorem \\ref{thm:main_moments_Gaussian}}\n\nIt does not take much more work to prove Theorem~\\ref{thm:main_moments_Gaussian}. Indeed, using Propositions \\ref{prop:expectation_estimate} and \\ref{prop:variance_estimate}, note that in both the Rademacher and Steinhaus cases, we have from the triangle inequality,\n$$\n(\\E\\, |\\mathfrak{m}_N^{(j,k)} - k! \\mathbf{1}_{jk}|^2)^{1\/2} \\leq (\\E\\, |\\mathfrak{m}_N^{(j,k)} - \\E\\, \\mathfrak{m}_N^{(j,k)}|^2)^{1\/2} + (\\E\\, |k! \\mathbf{1}_{jk} - \\E\\, \\mathfrak{m}_N^{(j,k)}|^2)^{1\/2} \\ll \\frac{1}{N^{1\/30k}},\n$$\nwhich implies the theorem.\n\n\\subsubsection{From asymptotically almost sure to almost sure}\n\nTheorem \\ref{thm:main_moments_Gaussian} shows that asymptotically almost surely the fixed moments of the polynomials $P_N(\\theta)$ become Gaussian. 
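As a concrete illustration of Theorem \\ref{thm:main_moments_Gaussian} (a hypothetical numerical sketch, not part of the argument; the sampling scheme and parameters are our own choices), one can sample a Steinhaus random multiplicative function and compute the low moments through the autocorrelations $b_h = N^{-1}\\sum_n X(n+h)\\overline{X(n)}$: Parseval gives $\\mathfrak{m}_N^{(1,1)} = b_0 = 1$ exactly, while $\\mathfrak{m}_N^{(2,2)} = \\sum_h |b_h|^2$ should concentrate near $2! = 2$.

```python
import numpy as np

def steinhaus(N, rng):
    """Sample X(1), ..., X(N) for a Steinhaus random multiplicative
    function: completely multiplicative, X(p) uniform on the unit circle."""
    spf = list(range(N + 1))                  # smallest-prime-factor sieve
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    X = np.ones(N + 1, dtype=complex)
    for n in range(2, N + 1):
        p = spf[n]
        X[n] = np.exp(2j * np.pi * rng.random()) if n == p else X[p] * X[n // p]
    return X[1:]

rng = np.random.default_rng(1)
N = 4096
a = steinhaus(N, rng)
# b[h] = (1/N) sum_n X(n+h) * conj(X(n)) are the Fourier coefficients of
# |P_N(theta)|^2; note np.correlate conjugates its second argument.
b = np.correlate(a, a, mode="full") / N
m2 = b[N - 1].real                  # lag-0 term: equals 1 exactly (|X(n)| = 1)
m4 = float(np.sum(np.abs(b) ** 2))  # int_0^1 |P_N(theta)|^4 dtheta
print(m2, m4)                       # m4 should be close to 2! = 2
```

The $O(N^2)$ direct correlation is the bottleneck here; an FFT-based correlation would scale the same experiment to much larger $N$.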
In this section we will prove the almost sure central limit theorem, Theorem \\ref{thm:Gaussian_RMF}, by repeatedly applying Borel-Cantelli type arguments to this result. We begin with a lemma.\n\n\\begin{lem}\n\\label{lem:sqrt_cancel}\nAlmost surely for all $p> 0$, $\\varepsilon > 0$, \nand uniformly for $L \\le M$,\n\\begin{equation}\n\\label{eq:sqrt_cancel}\n\\int_0^1 \\Big|\\sum_{M < n \\leq M+L} X(n) e(n\\theta) \\Big|^p\\, d\\theta = O_{p,\\varepsilon}(L^{p\/2+\\varepsilon} M^\\varepsilon).\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nBy H\\\"older's inequality, if \\eqref{eq:sqrt_cancel} is true for some exponent $p$, it will be true for all smaller exponents $p$, and plainly if it is true for some $\\varepsilon$, it remains true for all larger $\\varepsilon$. Therefore by examining any countable collection of $p\\rightarrow\\infty$ and $\\varepsilon \\rightarrow 0$, it will be sufficient to prove that for any fixed $p$ and $\\varepsilon$, \\eqref{eq:sqrt_cancel} is true almost surely. \n\nNow, we note that Lemma \\ref{lem:shortintervals} immediately implies that for all $k\\geq 1$ and $L \\leq M$,\n\\begin{equation}\n\\label{eq:2k_expint}\n\\EE \\int_0^1 \\Big| \\sum_{M < n \\leq M+L} X(n) e(n\\theta)\\Big|^{2k}\\, d\\theta = O_{k,\\varepsilon}( L^k M^\\varepsilon)\n\\end{equation}\n\nFix $p$ and $\\varepsilon$, and let $\\mathcal{A}$ be the event that for infinitely many $M, L$,\n$$\n\\int_0^1 \\Big| \\sum_{M < n \\leq M+L} X(n) e(n\\theta)\\Big|^p \\, d\\theta \\geq L^{p\/2+\\varepsilon} M^\\varepsilon.\n$$\nTo prove the lemma, we need only to show that $\\mathcal{A}$ is null.\n\nBut on $\\mathcal{A}$, we see by H\\\"older that for all $t\\geq 1$,\n$$\n\\int_0^1 \\Big| \\sum_{M < n \\leq M+L} X(n) e(n\\theta)\\Big|^{pt} \\, d\\theta \\geq L^{pt\/2+\\varepsilon t} M^{\\varepsilon t},\n$$\nfor infinitely many $M, L$. We choose $t$ large enough that $\\varepsilon t \\geq 2+\\varepsilon$ and also take $pt$ an even integer $2k$. 
Then on the one hand from \\eqref{eq:2k_expint}\n$$\n\\EE \\sum_{M,L\\geq 1} \\frac{1}{L^{pt\/2 + \\varepsilon t} M^{\\varepsilon t}} \\int_0^1 \\Big| \\sum_{M < n \\leq M+L} X(n) e(n\\theta) \\Big|^{pt}\\, d\\theta \\leq \\sum_{M,L\\geq 1} \\frac{1}{L^{2} M^2} < +\\infty.\n$$\nBut on the other hand the sum here is infinite on $\\mathcal{A}$, so $\\mathcal{A}$ must be a null event.\n\\end{proof}\n\nWe now turn to the almost sure central limit theorem.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:Gaussian_RMF}]\nWe use the method of moments: we will show that for all $j,k \\geq 1$, almost surely\n\\begin{equation}\n\\label{eq:as_moments}\n\\mathfrak{m}_N^{(j,k)} = \\mathbf{1}_{jk}\\cdot k! + o(1),\n\\end{equation}\nas $N\\rightarrow\\infty$. Since these are the moments of a standard complex normal random variable, the theorem will follow (see e.g. \\cite[p. 388]{billingsley}).\n\nTo establish \\eqref{eq:as_moments}, fix arbitrary $1 \\leq j \\leq k$ and let $\\lambda \\in \\N$ be such that $(\\lambda+1)\/15k > 1$. Then by Proposition \\ref{prop:variance_estimate},\n$$\n\\sum_{M=1}^\\infty \\Var\\big(\\mathfrak{m}^{(j,k)}_{M^{\\lambda+1}}\\big) < +\\infty,\n$$\nand so almost surely,\n\\begin{equation}\n\\label{eq:as_moments_subseq}\n\\mathfrak{m}^{(j,k)}_{M^{\\lambda+1}} = \\EE \\mathfrak{m}^{(j,k)}_{M^{\\lambda+1}}+ o(1) = \\mathbf{1}_{jk}\\cdot k! + o(1),\n\\end{equation}\nas $M\\rightarrow\\infty$, with the last equation following from Proposition \\ref{prop:expectation_estimate}. Note that this implies in particular almost surely, for all $p > 0$,\n$$\n\\int_0^1 |P_{M^{\\lambda+1}}(\\theta)|^p \\, d\\theta = O_p(1).\n$$\n\nIt remains to pass from the sequence $\\{M^{\\lambda+1}\\}$ to the integers. 
We note that if we take $N$ with $M^{\\lambda+1} \\leq N < (M+1)^{\\lambda+1}$, then $N = M^{\\lambda+1} + L$, for $L = O(M^\\lambda)$.\n\nWrite\n$$\n\\sqrt{N} P_N(\\theta) = M^{(\\lambda+1)\/2} P_{M^{\\lambda+1}}(\\theta) + \\sum_{M^{\\lambda+1} < n \\leq N} X(n) e(n\\theta) \\eqqcolon F + f,\n$$\nand observe that for all $X$ and $\\theta$, by a binomial expansion,\n$$\n(F+f)^j \\overline{(F+f)}^k = F^j \\overline{F}^k + O_{j,k}\\Big(\\sum_{\\substack{a\\leq j, b\\leq k \\\\ (a,b) \\neq (0,0)}} |F|^{(j-a) + (k-b)} |f|^{a+b}\\Big).\n$$\nWe seek to integrate the error term in $\\theta$; note that for any $p_1 p_2 \\geq 0$,\n$$\n\\int_0^1 |F|^{p_1} |f|^{p_2} \\, d\\theta \n\\leq \\Big(\\int_0^1 |F|^{2p_1}\\, d\\theta \\Big)^{1\/2} \\Big(\\int_0^1 |f|^{2p_2}\\, d\\theta \\Big)^{1\/2}.\n$$\nBut from our almost sure computation of moments along the sequence $\\{M^{\\lambda+1}\\}$ we have that almost surely\n$$\n\\int_0^1 |F|^p\\, d\\theta = O_p(M^{p (\\lambda+1)\/2}) = O_p(N^{p\/2}),\n$$\nfor any $p > 0$. 
Furthermore, because $L = O(M^\\lambda)$ and $N > M^{\\lambda+1}$, applying Lemma \\ref{lem:sqrt_cancel} with $\\varepsilon < 1\/4(\\lambda + 1)$ we find that\n$$\n\\int_0^1 |f|^p\\, d\\theta = O_{p,\\lambda}(N^{2\\varepsilon} N^{(p\/2)(1-1\/(\\lambda+1))}) = o(N^{p\/2}),\n$$\nfor any $p > 0$.\n\nTherefore almost surely\n$$\n\\int_0^1 (F+f)^j \\overline{(F+f)}^k \\, d\\theta = \\int_0^1 F^j \\overline{F}^k\\, d\\theta + o(N^{(j+k)\/2}).\n$$\nBut dividing by $N^{(j+k)\/2}$ and applying \\eqref{eq:as_moments_subseq}, we see that \\eqref{eq:as_moments} holds as claimed.\n\\end{proof}\n\n\\subsection{Estimates for the sup-norm}\\label{sec:supremum}\n\nBefore proving Theorem \\ref{thm:supnorm} we need the following simple lemma, which tells us the sup-norm of a degree $N$ polynomial is characterized by $L^p$ norms for $p$ of order at least $\\log N$, and gives less precise bounds in the case of smaller $p$.\n\n\\begin{lem}\\label{lem:Lp_to_sup}\nFor an arbitrary trigonometric polynomial $p_N(\\theta) = \\sum_{n\\leq N} a_n e(n\\theta)$ of degree $N$,\n\\begin{equation}\\label{eq:Lp_to_sup}\n\\Big(\\int_0^1 |p_N(\\theta)|^{2k}\\, d\\theta\\Big)^{1\/2k}\\leq \\max_\\theta|p_N(\\theta)| \\ll \\Big( N \\int_0^1 |p_N(\\theta)|^{2k}\\,d\\theta \\Big)^{1\/2k},\n\\end{equation}\nfor all $k\\geq 1$, where the implied constant is absolute.\n\\end{lem}\n\n\\begin{proof}\nLet $H = \\max_\\theta |p_N(\\theta)|$. The lower bound in \\eqref{eq:Lp_to_sup} is obvious. For the upper bound, note that Bernstein's inequality (see \\cite[Ex 7.16]{katznelson}) implies $|p_N'(\\theta)| \\leq 2\\pi N H$. 
Thus if $|p_N(\\theta)|$ achieves its maximum for $\\theta = \\theta^\\ast$, then $|p_N(\\theta^\\ast + t)| \\geq H - 2\\pi NH |t|$, so\n$$\n|p_N(\\theta^\\ast + t)| \\geq H\/2, \\quad \\textrm{for} \\; |t| \\leq 1\/4\\pi N.\n$$\nThis immediately implies the upper bound in \\eqref{eq:Lp_to_sup}.\n\\end{proof}\n\nWe now turn to:\n\n\\begin{proof}[Proof of Theorem \\ref{thm:supnorm}]\nWe will prove the lower bound first. We may take $X(n)$ either a Rademacher or Steinhaus random multiplicative function. Note by Chebyshev's inequality,\n$$\n\\mathbb{P}\\left(|\\mathfrak{m}_N^{(k)}-k!| \\,\\geq\\, k!\/2 \\right) \\leq \\frac{\\E\\, \\left| \\mathfrak{m}_N^{(k)} - k! \\right|^2}{(k!)^2\/4} = o(1),\n$$\nfor any choice of $k \\leq A\\,(\\log N\/\\log_2 N)^{1\/3}$ for a small absolute constant $A$. Thus $\\mathfrak{m}_N^{(k)} \\geq k!\/2$ with probability $1-o(1)$, for any choice of $k$ in this range. Hence using the lower bound in Lemma \\ref{lem:Lp_to_sup},\n$$\nH \\coloneqq \\max_\\theta |P_N(\\theta)| \\geq (\\mathfrak{m}_N^{(k)})^{1\/2k} \\geq (k!\/2)^{1\/2k} \\gg \\sqrt{k},\n$$\nwith probability $1-o(1)$ for any choice of $k$ in this range. Using Stirling's formula and taking $k$ to be of order $(\\log N\/\\log_2 N)^{1\/3}$ yields the lower bound in the theorem.\n\nFor the upper bound, if $X(n)$ is a Rademacher random multiplicative function,\n\\begin{multline*}\n\\E\\, \\int_0^1 |P_N(\\theta)|^{2k}\\, d\\theta = \\frac{1}{N^k} \\Nsquare_{k,k}(N) \\leq \\frac{1}{N^k} \\sum_{\\substack{m_1\\cdots m_k n_1\\cdots n_k = \\square \\\\ m_r, n_s \\leq N}} 1 \\\\\n\\leq \\frac{1}{N^k} \\sum_{n \\leq N^k} \\tau_{2k}(n^2) \\leq (2k \\log N)^{4k^2-1},\n\\end{multline*}\nusing \\eqref{eq:tau_of_squares} in the final step. 
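The two-sided bound of Lemma \\ref{lem:Lp_to_sup} is easy to sanity-check numerically (a hypothetical sketch with arbitrary coefficients; the slack constant $6$ below comfortably over-covers the absolute constant that the Bernstein argument produces). Sampling the polynomial on more than $2kN$ equally spaced points makes the discrete average of $|p_N|^{2k}$ equal to the integral exactly, since $|p_N|^{2k}$ is itself a trigonometric polynomial of degree at most $2kN$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 64, 3
a = np.zeros(N + 1, dtype=complex)           # coefficients a_0 = 0, a_1..a_N
a[1:] = rng.normal(size=N) + 1j * rng.normal(size=N)

m = 4 * k * N                                # grid finer than deg |p_N|^{2k},
coef = np.zeros(m, dtype=complex)            # so the grid mean below equals
coef[: N + 1] = a                            # the integral exactly
vals = m * np.fft.ifft(coef)                 # p_N(j/m) for j = 0, ..., m-1
H = np.abs(vals).max()
integral = float(np.mean(np.abs(vals) ** (2 * k)))

lower = integral ** (1 / (2 * k))            # L^{2k} norm <= sup norm
upper = 6 * (N * integral) ** (1 / (2 * k))  # slack Bernstein-type constant
print(lower, H, upper)
```

Here `m * np.fft.ifft(coef)` is just a fast way to evaluate the polynomial on the uniform grid $j\/m$.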
It is easy to see in the same way that if $X(n)$ is instead a Steinhaus random multiplicative function the same bound is satisfied (in fact a slightly better bound is satisfied).\n\nThus from Lemma \\ref{lem:Lp_to_sup}\n$$\n\\E\\, H^{2k} \\leq C^{2k} N (2k)^{4k^2-1} (\\log N)^{4k^2-1}.\n$$\nHence for all $\\lambda > 0$ and all integers $k \\geq 1$,\n$$\n\\PP (H \\geq \\lambda) \\leq \\frac{C^{2k} N (2k)^{4k^2-1} (\\log N)^{4k^2-1}}{\\lambda^{2k}}.\n$$\nWe approximately optimize the right-hand side in $k$ by setting $k = \\lfloor \\log \\lambda \/ 6 \\log_2 N \\rfloor$, and then we set $\\lambda = \\exp(3 \\sqrt{\\log N \\log_2 N})$ to see that for such $\\lambda$, \n$$\n\\PP (H \\geq \\lambda) \\rightarrow 0.\n$$\nThis proves the claim.\n\\end{proof}\n\n\\subsection{Large mean and variance}\n\nIn this section we prove Theorem \\ref{thm:main_moments_larger}. We first recall a recent result estimating moments of sums of random multiplicative functions.\n\n\\begin{thm}[Harper]\\label{thm:harper_highmoments}\nThere is a small positive absolute constant $c$ such that the following holds. For $X(n)$ a Steinhaus random multiplicative function, for all $1 \\leq \\ell \\leq c \\log N \/ \\log_2 N$,\n\\begin{equation}\\label{eq:steinhaus_high_moments}\n\\mathbb{E} \\, \\Big| \\sum_{n\\leq N} X(n) \\Big|^{2\\ell} = N^\\ell \\exp(-\\ell^2 \\log \\ell - \\ell^2 \\log_2(2 \\ell) + O(\\ell^2)) (\\log N)^{(\\ell-1)^2}.\n\\end{equation}\nFor $X(n)$ a Rademacher random multiplicative function, for all $2 \\leq \\ell \\leq c \\log N \/ \\log_2 N$,\n\\begin{equation}\\label{eq:rad_high_moments}\n\\mathbb{E} \\, \\Big| \\sum_{n\\leq N} X(n) \\Big|^{2\\ell} = N^\\ell \\exp(-2 \\ell^2 \\log \\ell - 2\\ell^2 \\log_2(2 \\ell) + O(\\ell^2)) (\\log N)^{2 \\ell^2+O(\\ell)}.\n\\end{equation}\n\n\\begin{proof}\nThe estimate \\eqref{eq:steinhaus_high_moments} is just a restatement of Harper \\cite[Thm. 1]{harper_high}. 
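For a concrete feel for the Steinhaus estimate \\eqref{eq:steinhaus_high_moments} at $\\ell = 2$ (a hypothetical numerical aside, not part of the proof): expanding the fourth moment gives the exact identity $\\mathbb{E}\\,|\\sum_{n \\leq N} X(n)|^4 = \\#\\{(m_1,m_2,n_1,n_2) \\in [1,N]^4 : m_1 m_2 = n_1 n_2\\}$, a lattice-point count that can be computed directly and set against the predicted order $N^2 \\log N$.

```python
import math

def fourth_moment_count(N):
    """Exact count of (m1, m2, n1, n2) in [1, N]^4 with m1*m2 = n1*n2,
    i.e. E|sum_{n <= N} X(n)|^4 for a Steinhaus random multiplicative X."""
    total = 0
    for m1 in range(1, N + 1):
        for m2 in range(1, N + 1):
            v = m1 * m2
            for d in range(1, math.isqrt(v) + 1):    # divisor pairs (d, v/d)
                if v % d == 0 and v // d <= N:       # d <= sqrt(v) <= N always
                    total += 1 if d * d == v else 2  # ordered pairs (n1, n2)
    return total

N = 120
c = fourth_moment_count(N)
# The diagonal solutions alone give 2N^2 - N; the full count has order
# N^2 log N, so the printed ratio stays bounded as N grows.
print(c, c / (N * N * math.log(N)))
```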
\\eqref{eq:rad_high_moments} is very nearly \\cite[Thm.2]{harper_high}, except the result there actually pertains to Rademacher random multiplicative functions supported on squarefree integers. However, in our setting, we may recover the estimate \\eqref{eq:rad_high_moments} by way of H\\\"older's inequality. Writing $\\flat$ to indicate a summation over squarefree integers, we see that\n\\begin{align*}\n\\Big| \\sum_{n\\leq N} X(n) \\Big|^{2\\ell} =&\\Big| \\sum_{r^2 \\leq N} \\sum_{n \\leq N\/r^2}^{\\flat} X(n) \\Big|^{2 \\ell} \n\\leq (\\log N)^{2 \\ell} \\max_{\\substack{D \\leq N^{1\/2}\\\\ \\text{dyadic} }} \\Big| \\sum_{r \\in [D,2D]} \\sum_{n \\leq N\/r^2}^{\\flat} X(n) \\Big|^{2 \\ell}\\\\\n&\\leq (\\log N)^{2 \\ell} \\sum_{\\substack{D \\leq N^{1\/2} \\\\ \\text{dyadic} }} D^{2 \\ell -1} \\sum_{r \\in [D,2D]} \\Big|\\sum_{n \\leq N\/r^2}^{\\flat} X(n) \\Big|^{2 \\ell}.\n\\end{align*}\nWe have used H\\\"older's inequality in the last step, and then replaced the maximum with a sum. Taking expectations, we obtain the desired upper bound. The matching lower bound follows upon noting that our $2\\ell$-th moments, expanded into lattice point counts, are larger than those in the squarefree Rademacher setting. \n\\end{proof}\n\\end{thm}\n\n\\begin{rmk}\nThese estimates hold for all $\\ell$, though we only need them for integer $\\ell$. 
Harper actually gives a formula for \\eqref{eq:rad_high_moments} in a slightly larger range $1\\leq \\ell \\leq c \\log N \/ \\log_2 N$, with a somewhat more complicated right hand side (involving a phase change at $\\ell = (1+\\sqrt{5})\/2 \\approx 1.618$), but we will not need this.\n\\end{rmk}\n\n\n\nWe now prove Theorem \\ref{thm:main_moments_larger}, treating the mean-value estimate \\eqref{eq:moment_mean_larger}, the lower bound \\eqref{eq:moment_mean_lowerbound}, and the variance estimate \\eqref{eq:moment_variance_larger} separately.\n\n\n\\begin{proof}[Proof of \\eqref{eq:moment_mean_larger} of Theorem \\ref{thm:main_moments_larger}]\nWe first treat the case that $X(n)$ is a Steinhaus random multiplicative function. We will find upper and lower bounds for the quantity $\\E\\, \\mathfrak{m}_N^{(k)}$. \n\nFor the upper bound, note by examining the point counts corresponding to the left- and right- hand side, we have\n\\begin{equation}\\label{eq:add_to_mult_comp}\n\\E\\, \\int_0^1 |P_N(\\theta)|^{2k}\\, d\\theta \\leq \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n) \\Big|^{2k}.\n\\end{equation}\n\nFor the lower bound, note\n\n\\begin{align*}\n\\E\\, & \\int_0^1 |P_N(\\theta)|^{2k}\\, d\\theta \\geq \\E\\, \\int_{[0,N^{-3\/2}]}|P_N(\\theta)|^{2k}\\, d\\theta \\\\\n&= \\frac1{N^k} \\int_{[0,N^{-3\/2}]} \\sum_{\\substack{m_r, n_s \\in [1,N] \\\\ m_1\\cdots m_k = n_1\\cdots n_k}} e([(m_1+\\cdots+m_k) - (n_1+\\cdots+n_k)]\\theta)\\,d\\theta\\\\\n&\\geq \\frac{1}{N^{k + 3\/2}} \\sum_{\\substack{m_r, n_s \\in [1,N] \\\\ m_1\\cdots m_k = n_1\\cdots n_k}} \\Big( 1 - O\\Big(\\frac{k}{N^{1\/2}}\\Big)\\Big)\n\\end{align*}\nusing a pointwise bound of the (positive) integrand in $\\theta$ to arrive at the last line. 
For $k \\leq N^{1\/4}$ and sufficiently large $N$ this gives\n\\begin{equation}\\label{eq:lower_add_to_mult_comp}\n\\E\\, \\int_0^1 |P_N(\\theta)|^{2k}\\, d\\theta \\gg \\frac{1}{N^{3\/2}} \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n) \\Big|^{2k}.\n\\end{equation}\nBy possibly adjusting the implicit constant and using compactness it follows that this result is true for all $N$.\n\nCombining \\eqref{eq:add_to_mult_comp} and \\eqref{eq:lower_add_to_mult_comp} then gives \\eqref{eq:moment_mean_larger}.\n\nThe case for $X(n)$ a Rademacher random multiplicative function is the same, replacing a sum over $m_1\\cdots m_k = n_1 \\cdots n_k$ with a sum over $m_1\\cdots m_k n_1 \\cdots n_k = \\square$.\n\\end{proof}\n\n\\begin{proof}[Proof of \\eqref{eq:moment_mean_lowerbound} of Theorem \\ref{thm:main_moments_larger}]\nLet $X(n)$ be either a Steinhaus or Rademacher random multiplicative function. We claim that for any $\\delta > 0$, if $N$ is sufficiently large, $C_\\delta$ is a sufficiently large constant, and $C_\\delta (\\log N\/\\log_2 N)^{1\/2} \\leq k$, then\n\\begin{equation}\\label{eq:log_lowerbound}\n\\log N \\leq \\delta \\log \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k}.\n\\end{equation}\n\nThis follows directly from Theorem \\ref{thm:harper_highmoments} if $k = k_0 = \\lceil C_\\delta (\\log N\/\\log_2 N)^{1\/2} \\rceil$ as long as $C_\\delta$ and $N$ are chosen large enough. But note for $k \\geq k_0$\n$$\n\\log \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k_0} \\leq \\frac{2k_0}{2k} \\log \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k} \\leq \\log \\E\\, \\Big| \\frac{1}{\\sqrt{N}} \\sum_{n\\leq N} X(n)\\Big|^{2k},\n$$\nusing H\\\"older's inequality to deduce the first inequality. 
This implies that \\eqref{eq:log_lowerbound} is true for all $k$ as claimed.\n\n\\eqref{eq:moment_mean_lowerbound} then follows from \\eqref{eq:moment_mean_larger} and \\eqref{eq:log_lowerbound}, choosing $\\delta$ based on $\\varepsilon$ and the implicit constant in \\eqref{eq:moment_mean_larger}.\n\\end{proof}\n\nIn order to treat the variance estimate \\eqref{eq:moment_variance_larger} we use the following lemma.\n\n\\begin{lem}\\label{lem:var_lowerbound}\nLet $X(n)$ be a Rademacher or Steinhaus random multiplicative function and suppose that $k\\leq N^{1\/4}$. We have the lower bound\n\\begin{equation}\\label{vartransfer}\nN^{2k} \\cdot \\Var(\\mathfrak{m}_N^{(k)}) \\geq N^{-3} \\ \\E \\Big|\\sum_{n \\leq N} X(n) \\Big|^{4k} - \\left( \\E \\Big|\\sum_{n \\leq N} X(n) \\Big|^{2k} \\right)^2.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nWe treat the case that $X(n)$ is a Steinhaus random multiplicative function first. Since $\\E (\\mathfrak{m}_N^{(k)}) \\leq N^{-k}\\ \\E \\left( |\\sum_{n \\leq N} X(n)|^{2k} \\right) $, we first observe that\n\n\\begin{align}\\label{Varlower}\n\\Var(\\mathfrak{m}_N^{(k)}) &=\\E[(\\mathfrak{m}_N^{(k)})^2] - (\\E\\, \\mathfrak{m}_N^{(k)})^2 \\notag \\\\\n&\\geq N^{-2k}\\sideset{}{^*}\\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^k \\\\ {\\bf n}, {\\bf n'} \\in [1,N]^k}} 1 - N^{-2k} \\left( \\E \\big|\\sum_{n \\leq N} X(n)\\big|^{2k} \\right)^2\n\\end{align}\nwhere the asterisked sum here is restricted to tuples ${\\bf m},{ \\bf m'}, {\\bf n}, {\\bf n'}$ satisfying,\n\\begin{align}\\label{Vsys1}\n\\begin{split}\n\\sum_{j \\leq k} m_j = \\sum_{j \\leq k} n_j, &\\qquad \\sum_{j \\leq k} m_j' = \\sum_{j \\leq k} n_j' \\\\\nm_1\\cdots m_k m_1'\\cdots m_k' &= n_1\\cdots n_k n_1' \\cdots n_k'.\n\\end{split}\n\\end{align}\n\n\n\nTo treat the asterisked sum on the right-hand side of \\eqref{Varlower}, consider the random trigonometric polynomial\n$$\nF_k(\\alpha,\\alpha')= \\sum_{ {\\bf n}, {\\bf n'} \\in [1,N]^k} \nX(n_1 \\cdots n_k 
n_1' \\cdots n_k') \\, e\\Big(\\alpha \\sum_{j \\leq k} n_j +\\alpha' \\sum_{j \\leq k} n_j' \\Big)\n$$\nand observe that \n\n\\begin{align}\\label{starsumlower}\n\\sideset{}{^*}\\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^k \\\\ {\\bf n}, {\\bf n'} \\in [1,N]^k}} 1 \n&= \\int_{[0,1]^2} \\E \\, | F_k(\\alpha,\\alpha')|^2 \\, d \\alpha \\, d \\alpha' \\notag \\\\\n&\\geq \\int_{[0,N^{-3\/2}]^2} \\E \\, | F_k(\\alpha,\\alpha')|^2 \\, d \\alpha \\, d \\alpha'. \n\\end{align}\n\nFor any $|\\alpha|,|\\alpha'|\\leq N^{-3\/2}$ we gather that \n\n\n\\begin{align*}\n\\E \\, |F_k(\\alpha,\\alpha')|^2 &=\n\\sideset{}{^\\dagger}\\sum_{\\substack{ {\\bf n}, {\\bf n'} \\in [1,N]^k \\\\ {\\bf m}, {\\bf m'} \\in [1,N]^k }} \ne\\Big(\\alpha \\big(\\sum_{j \\leq k} n_j - \\sum_{j \\leq k} m_j \\big)+\\alpha' \\big(\\sum_{j \\leq k} n_j' - \\sum_{j \\leq k} m_j' \\big) \\Big)\\\\\n& \\geq \\sideset{}{^\\dagger}\\sum_{\\substack{ {\\bf n}, {\\bf n'} \\in [1,N]^k \\\\ {\\bf m}, {\\bf m'} \\in [1,N]^k }} \\left( 1+O\\Big( \\frac{k}{N^{1\/2} } \\Big) \\right)\n\\gg \\sideset{}{^\\dagger}\\sum_{\\substack{ {\\bf n}, {\\bf n'} \\in [1,N]^k \\\\ {\\bf m}, {\\bf m'} \\in [1,N]^k }} 1,\n\\end{align*}\nwhere the outer sums range over ${\\bf n}, {\\bf n'}, {\\bf m}, {\\bf m'} \\in [1,N]^k $ satisfying the multiplicative constraint in \\eqref{Vsys1}. Inserting this information into \\eqref{starsumlower} and then \\eqref{Varlower}, we get that \n \n$$\\sideset{}{^*}\\sum_{\\substack{ {\\bf m}, {\\bf m'} \\in [1,N]^k \\\\ {\\bf n}, {\\bf n'} \\in [1,N]^k}} 1 \\gg N^{-3} \\sideset{}{^\\dagger}\\sum_{\\substack{ {\\bf n}, {\\bf n'} \\in [1,N]^k \\\\ {\\bf m}, {\\bf m'} \\in [1,N]^k }} 1=N^{-3} \\ \\E \\Big( \\Big|\\sum_{n \\leq N} X(n)\\Big|^{4k} \\Big).$$\nCollecting all of the previous estimates, we find the lower bound \\eqref{vartransfer}.\n\nThe proof for $X(n)$ a Rademacher random variable carries through in the same manner. 
The multiplicative constraint in \\eqref{Vsys1} is replaced by the constraint \n$$\nm_1\\cdots m_k m_1'\\cdots m_k' n_1\\cdots n_k n_1' \\cdots n_k' = \\square.\n$$\n\\end{proof}\n\n\\begin{proof}[Proof of \\eqref{eq:moment_variance_larger} of Theorem \\ref{thm:main_moments_larger}]\nLemma \\ref{lem:var_lowerbound} shows that in the range $k \\leq N^{1\/4}$, for $X(n)$ either a Steinhaus or Rademacher random multiplicative function, we will have\n\\begin{equation}\\label{eq:var_upper_const1}\n\\Var(\\mathfrak{m}_N^{(k)}) \\geq (\\E\\, \\mathfrak{m}_N^{(k)})^2\n\\end{equation}\nif\n\\begin{equation}\\label{eq:moment_ratio}\n\\Big(\\E\\, \\Big| \\sum_{n\\leq N} X(n)\\Big|^{4k}\\Big)\\bigg\/\\Big(\\E\\,\\Big| \\sum_{n\\leq N} X(n) \\Big|^{2k}\\Big)^2 \\geq 2N^3.\n\\end{equation}\nBut for $k \\leq \\tfrac{c}{2}\\log N \/ \\log_2 N$ we may apply the estimates of Theorem \\ref{thm:harper_highmoments} to the numerator and denominator of the left-hand side of \\eqref{eq:moment_ratio}, and we see with a little computation that such an inequality is true as long as $k \\geq B (\\log N \/ \\log_2 N)^{1\/2}$ for a sufficiently large absolute constant $B$, in both the Steinhaus and Rademacher cases. \n\nThis proves \\eqref{eq:moment_ratio} for $B(\\log N \/\\log_2 N)^{1\/2} \\leq k \\leq \\tfrac{c}{2} \\log N \/ \\log_2 N$. But note that if for $p \\in (0,\\infty)$ we define\n$$\n\\phi(p) = \\log\\, \\E\\, \\Big| \\sum_{n \\leq N} X(n)\\Big|^{2p},\n$$\nthen $\\phi(p)$ is a convex function (see \\cite[Thm. 5.5.1]{garling}). It follows that $\\phi(2k) - 2 \\phi(k)$ is an increasing function in $k$. Therefore, fixing $N$, if \\eqref{eq:moment_ratio} is true for some value of $k$, it will remain true for all larger values of $k$. 
This implies \\eqref{eq:var_upper_const1} for all $B(\\log N \/\\log_2 N)^{1\/2} \\leq k \\leq N^{1\/4}$, completing the proof of Theorem \\ref{thm:main_moments_larger}.\n\\end{proof}\n\n\\begin{rmk}\nWe have made no attempt to optimize the upper limit $N^{1\/4}$ above. It may be that \\eqref{eq:moment_variance_larger} of Theorem \\ref{thm:main_moments_larger} holds for all $B(\\log N \/\\log_2 N)^{1\/2} \\leq k < \\infty$. \n\\end{rmk}\n\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter{\\textbf{$\\#$ #1}}}\n\n\\newcommand{\\myrefeq}[1]{Eq.~\\eqref{#1}}\n\\newcommand{\\myreffig}[1]{Fig.~\\ref{#1}}\n\\newcommand{\\myreftab}[1]{Table~\\ref{#1}}\n\\newcommand{\\myrefsec}[1]{Sec.~\\ref{#1}}\n\\newcommand{\\myrefapp}[1]{Appendix~\\ref{#1}}\n\\newcommand{\\myrefalg}[1]{Alg.~\\ref{#1}}\n\n\\makeatletter\n\\renewcommand{\\ALG@name}{Alg.}\n\\makeatother\n\n\\newcommand{\\mathbf{v}}{\\mathbf{v}}\n\\newcommand{\\mathbf{w}}{\\mathbf{w}}\n\\newcommand{\\rho}{\\rho}\n\\renewcommand{\\vec}{\\mathbf}\n\\newcommand{\\mathbb{E}}{\\mathbb{E}}\n\\captionsetup[table]{labelformat=empty}\n\n\\begin{abstract}\nWe propose a temporally coherent generative model addressing \nthe super-resolution problem for fluid flows. 
\nOur work represents a first approach\nto synthesize four-dimensional physics fields with neural networks.\nBased on a conditional generative adversarial network\nthat is designed for the inference of three-dimensional volumetric data, \nour model generates consistent and detailed results \nby using a novel temporal discriminator, in addition to the commonly used spatial one.\nOur experiments show that the generator is able to\ninfer more realistic high-resolution details by using\nadditional physical quantities, such as low-resolution velocities or vorticities.\nBesides improvements in the training process and in the generated outputs, \nthese inputs offer means for artistic control as well.\nWe additionally employ a physics-aware data augmentation step,\nwhich is crucial to avoid overfitting and to reduce memory requirements.\nIn this way, our network learns to \ngenerate advected quantities with highly detailed, realistic, and temporally coherent features.\nOur method works instantaneously, using only a single time-step of low-resolution fluid data.\nWe demonstrate the abilities of our method using a variety of complex inputs and applications in\ntwo and three dimensions.\n\\end{abstract}\n\n\\begin{document}\n\\title{tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow}\n\\author{You Xie*}\n\\affiliation{\n\t\\institution{Technical University of Munich}}\n\\email{you.xie@tum.de}\n\\author{Erik Franz*}\n\\affiliation{\n\t\\institution{Technical University of Munich}}\n\\email{franzer@in.tum.de}\n\\author{Mengyu Chu*}\n\\affiliation{\n\t\\institution{Technical University of Munich}}\n\\email{mengyu.chu@tum.de}\n\\author{Nils Thuerey}\n\\affiliation{\n\t\\institution{Technical University of Munich}}\n\\email{nils.thuerey@tum.de}\n\\renewcommand\\shortauthors{Xie, Y., Franz, E., Chu, M., Thuerey, N.}\n\n\\begin{CCSXML}\n\n\n10010147.10010257.10010293.10010294<\/concept_id>\nComputing methodologies~Neural 
networks<\/concept_desc>\n500<\/concept_significance>\n<\/concept>\n\n10010147.10010371.10010352.10010379<\/concept_id>\nComputing methodologies~Physical simulation<\/concept_desc>\n500<\/concept_significance>\n<\/concept>\n<\/ccs2012>\n\\end{CCSXML}\n\n\\ccsdesc[500]{Computing methodologies~Neural networks}\n\\ccsdesc[500]{Computing methodologies~Physical simulation}\n\n\\keywords{physics-based deep learning, generative models, computer animation, fluid simulation}\n\n\\thanks{\n(*) Similar amount of contributions.\n }\n\n\\begin{teaserfigure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{teaser7.jpg}\n\t\\caption{Our convolutional neural network learns to generate highly detailed,\n\tand temporally coherent features based on a low-resolution field containing a single time-step\n\tof density and velocity data. We introduce a novel discriminator that ensures\n\tthe synthesized details change smoothly over time.}\n\t\\label{fig:teaser}\n\\end{teaserfigure}\n\n\\maketitle\n\n\\section{Introduction} \\label{sec:intro}\n\nGenerative models were highly successful in the last years\nto represent and synthesize complex natural images \n\\cite{goodfellow2014generative}.\nThese works demonstrated that deep convolutional neural networks (CNNs) are able to\ncapture the distribution of, e.g., photos of human faces, and generate novel,\npreviously unseen versions that are virtually indistinguishable from the original\ninputs. Likewise, similar algorithms were shown to be extremely successful at\ngenerating natural high-resolution images from a coarse input \\cite{karras2017growgan}. \nHowever, in their original\nform, these generative models do not take into account the temporal\nevolution of the data, which is crucial for realistic physical systems. 
In the\nfollowing, we will extend these methods to generate high-resolution volumetric\ndata sets of passively advected flow quantities, and ensuring temporal coherence \nis one of the core aspects that we will focus on below.\nWe will demonstrate that it is especially important to make the training process\naware of the underlying transport phenomena, such that the network can learn to\ngenerate stable and highly detailed solutions.\n\nCapturing the intricate details of turbulent flows has been a long-standing\nchallenge for numerical simulations. Resolving such details with discretized\nmodels induces enormous com\\-pu\\-tatio\\-nal costs and quickly becomes\ninfeasible for flows on human space and time scales. \nWhile algorithms to increase the apparent resolution\nof simulations can alleviate this problem \\cite{Kim:2008:wlt}, \nthey are typically based on procedural models that are only loosely inspired by\nthe underlying physics.\nIn contrast to all previous methods, our algorithm represents a\nphysically-based interpolation, that does not require any form\nof additional temporal data or quantities tracked over time. The\nsuper-resolution process is instantaneous, based on volumetric\ndata from a single frame of a fluid simulation.\nWe found that inference of \nhigh-resolution data in a fluid flow setting benefits from the availability\nof information about the flow. In our case, this takes the shape of\nadditional physical variables such as velocity and vorticity\nas inputs, which in turn yield means for artistic control.\nA particular challenge in the field of super-resolution flow is how to \nevaluate the quality of the generated output. As we are typically\ntargeting turbulent motions, a single coarse approximation can be associated\nwith a large variety of significantly different high-resolution versions.\nAs long as the output matches the correlated spatial \nand temporal distributions of the reference data, it represents a correct solution. 
To encode this requirement in the training\nprocess of a neural network, we employ so-called generative adversarial\nnetworks (GANs). These methods\ntrain a {\\em generator}, as well as a second network, the {\\em discriminator} \nthat learns to judge how closely the generated output matches the ground truth data.\nIn this way, we train a specialized, data-driven loss function\nalongside the generative network, while making sure it\nis differentiable and compatible with the training process.\nWe not only employ this adversarial approach for the\nsmoke density outputs, but we also train a specialized and novel adversarial\nloss function that learns to judge the temporal coherence of the\noutputs. \n\nWe additionally present best practices to set up a training pipeline for\nphysics-based GANs. E.g., we found it particularly useful to have physics-aware\ndata augmentation functionality in place. The large amounts of space-time data\nthat arise in the context of many physics problems quickly bring typical\nhardware environments to their limits. 
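One subtlety of the physics-aware data augmentation mentioned above is worth spelling out: whenever a spatial transformation is applied to a training sample, the velocity channels must be transformed as vectors, not merely resampled. The following is a minimal NumPy sketch under simplifying assumptions (2D fields, and a single mirroring transform standing in for the full set of rotations, flips, and scalings); the function name `flip_x` is illustrative and not taken from the original implementation.

```python
import numpy as np

def flip_x(density, velocity):
    """Mirror a 2D sample along the x axis.

    density : (H, W) array
    velocity: (H, W, 2) array with channels (vx, vy)

    Reversing the x axis of the grid also reverses the x component of
    every velocity vector; without the sign flip, the augmented sample
    would no longer be a valid flow configuration.
    """
    d = density[:, ::-1].copy()
    v = velocity[:, ::-1].copy()
    v[..., 0] *= -1.0  # vectors transform together with the grid
    return d, v

# a uniform rightward flow becomes a uniform leftward flow
dens = np.random.rand(4, 4)
vel = np.zeros((4, 4, 2))
vel[..., 0] = 1.0
d2, v2 = flip_x(dens, vel)
```

Analogously, a spatial rescaling of the grid would require rescaling the velocity magnitudes, so that each augmented pair remains a plausible flow configuration.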
\nAs such, we found data augmentation crucial to avoid overfitting.\nWe also explored a variety of approaches\nfor setting up the networks as well as training them, \nand we will evaluate them in terms of their capabilities to learn\nhigh-resolution physics functions below.\n\nTo summarize, the main contributions of our work are:\n\\begin{itemize}\n{\\vspace{-2pt}\n\\item a novel temporal discriminator, to generate consistent and high\\-ly detailed results over time,\n\\item artistic control of the outputs, in the form of additional loss terms and\n\tan intentional entangling of the physical quantities used as inputs,\n\\item a physics-aware data augmentation method,\n\\item and a thorough evaluation of adversarial training processes for physics functions.}\n\\end{itemize}\nTo the best of our knowledge, our approach is the first generative adversarial network\nfor four-dimensional functions,\nand we will demonstrate that it\nsuccessfully learns to infer solutions for flow transport processes from approximate solutions.\n\\revision{ A high-level preview of the architecture we propose can\nbe found in \\myreffig{fig:tempoGANcoarse}.}\n\\begin{figure}[tb]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{coarseview.pdf} \\\\\n\t\\caption{ \\revision{This figure gives a high-level overview of our approach: a generator on the \n\tleft is guided during training by two discriminator networks (right), one of which\n\tfocuses on space ($D_s$), while the other one focuses on temporal aspects ($D_t$).\n\tAt runtime, both are discarded, and only the generator network is evaluated.} }\n\t\\label{fig:tempoGANcoarse}\n\\end{figure}\n\\section{Related Work}\nIn the area of computer vision, deep learning techniques have achiev\\-ed significant breakthroughs\nin numerous fields such as classification \\cite{krizhevsky2012imagenet}, object detection \\cite{girshick2014rich}, \nstyle transfer \\cite{luan2017deep}, novel view synthesis \\cite{flynn2016deepstereo}, and 
\nadditionally, in the area of content creation. For more in-depth reviews of neural networks\nand deep learning techniques, we refer the readers to corresponding books \\cite{bishop2006book,Goodfellow2016}. \n\nAmong the most popular methods to generate content are the so-called {\\em generative adversarial networks} (GANs), \nintroduced by Goodfellow et al. \\cite{goodfellow2014generative}. They were shown \nto be particularly powerful at re-creating the distributions of complex data sets such as images\nof human faces.\nDepending on the kind of input data they take, GANs can be separated into unconditional and conditional ones. \nThe former generate realistic data from samples of a synthetic data distribution\nlike Gaussian noise. The DC-GAN \\cite{RadfordMC15} is a good example of an unconditional\nGAN. It was designed for generic natural images, \nwhile the cycle-consistent GAN by Zhu et al. \\shortcite{zhu2017cycle} was developed\nto translate between different classes of images.\nConditional GANs were introduced by Mirza and Osindero \\shortcite{mirza2014conditional}, and\nprovide the network with an input that is in some way related to the target function in order\nto control the generated output.\nTherefore, conditional variants are popular for transformation tasks,\nsuch as image translation problems \\cite{isola2016image} and \nsuper-resolution problems \\cite{ledig2016photo}. 
\n\nIn the field of super-resolution techniques, researchers have explored different network architectures.\nE.g., convolutional net\\-works \\cite{dong2016image} were shown to be more effective than fully connected architectures.\nThese networks can be trained with smaller tiles and\n later on be applied to images of arbitrary sizes \\cite{isola2016image}.\nBatch normalization \\cite{lim2017enhanced} significantly improves results by removing value \nshifting in hidden layers, \nand networks employing so-called {\\em residual blocks}\n\\cite{kim2016accurate, lim2017enhanced} enable the training of deep networks \nwithout strongly vanishing gradients.\n\nIn terms of loss functions, pixel-wise losses between the network output and the ground truth data, \nsuch as the $L_1$ and $L_2$ loss, used to be common \\cite{dong2016image}.\nNowadays, using the adversarial structure of GANs, or using pre-trained\nnetworks, such as the VGG net \\cite{simonyan2014}, often leads to higher perceptual quality\n\\cite{mathieu2015deep, johnson2016perceptual}.\n\nOur method likewise uses residual blocks in conjunction with a conditional GAN architecture to\ninfer three-dimensional flow quantities. Here, we aim to use standard architectures. \nWhile our generator is similar\nto approaches for image super-resolution \\cite{ledig2016photo}, \nwe show that loss terms and discriminators are crucial for high-quality outputs.\nWe also employ a fairly traditional GAN training, instead of recently proposed alternatives\n\\cite{arjovsky2017wasserstein,berthelot2017began}, which could potentially lead to additional gains in quality.\nBesides the super-resolution task, our work differs from many works in the GAN area\nwith its focus on temporal coherence, as we will demonstrate in more detail later on.\n\nWhile most works have focused on single images, several papers have addressed\ntemporal changes of data sets. 
One way to solve this problem is by \ndirectly incorporating the time axis, i.e., by using sequences of data as input and output.\nE.g., Saito et al. proposed a temporal generator in their work \\cite{saito2017temporal},\nwhile Yu et al. \\cite{yu2017seqgan} proposed a sequence generator that learns a stochastic policy.\nIn these works, results need to be generated\nsequentially, while our algorithm processes individual frames independently, and in arbitrary order, if necessary.\nIn addition, such approaches would explode in terms of weights and computational resources for\ntypical four-dimensional fluid data sets.\n\nAn alternative here is to generate single frame data with additional\nloss terms to keep the results coherent over time.\nBhattacharjee et al. \\shortcite{bhattacharjee2017temporal} achieved improved coherence in their results for video frame prediction \nby adding specially designed distance measures as a discontinuity penalty between nearby frames.\nFor video style transfer, an $L_2$ loss on warped nearby frames helped to \nalleviate temporal discontinuities, as shown by Ruder et al. \\shortcite{Ruder2016Artistic}. \nIn addition to an $L_2$ loss on nearby frames, Chen et al. \\shortcite{Chen2017ICCV} used neural networks to learn \nframe warping and frame combination in VGG feature space.\nSimilarly, Liu et al. \\cite{liu2017VideoSR} used neural networks to learn\nspatial alignment for low-resolution inputs, and adaptive aggregation for high-resolution outputs,\nwhich also improved the temporal coherence.\nDue to the three-dimensional data sets we are facing, we also adopt the single\nframe view. However, in contrast to all previous works, we propose the use\nof a temporal discriminator. We will show that relying on data-driven, learned\nloss functions in the form of a discriminator helps to improve results over manually\ndesigned losses. 
Once our networks are trained, this discriminator can be discarded.\nThus, unlike, e.g., aggregation methods, our approach does not influence runtime performance.\nWhile previous work shows that warping layers are useful in motion field \nlearning~\\cite{de2017deep, Chen2017ICCV}, our work targets the opposite direction:\nby providing our networks with velocities, warping layers \ncan likewise improve the training of temporally coherent content generation.\n\nMore recently, deep learning algorithms have begun to influence \ncomputer graphics algorithms.\nE.g., they were successfully used for efficient and noise-free renderings \\cite{bako2017kernel,chaitanya2017interactive}, \nthe illumination of volumes \\cite{kallweit2017deep}, \nfor modeling porous media \\cite{mosser2017porous}, and for character control \\cite{peng2017deeploco}.\nFirst works that target numerical simulations also exist. E.g., a conditional GAN\nwas used to compute solutions for smaller, two-dimensional advection-diffusion\nproblems \\cite{farimani2017, long2017pde}.\nOthers have demonstrated the inference of SPH forces with regression forests \\cite{ladicky2015data},\nproposed CNNs for fast pressure projections \\cite{tompson2016accelerating},\nlearned space-time deformations for interactive liquids \\cite{rtliquids2017}, and \nmodeled splash statistics with NNs \\cite{um2017mlflip}.\nCloser to our line of work, Chu et al. \\shortcite{chu2017cnnpatch} proposed\na method to look up pre-computed patches using CNN-based descriptors. Despite a similar goal, \ntheir method still requires additional Lagrangian tracking information, while our method\ndoes not require any modifications of a basic solver. In addition, our method does not use\nany stored data at runtime apart from the trained generator model.\n\nAs our method focuses on the conditional inference of\nhigh-re\\-so\\-lu\\-tion flow data sets, i.e. 
solutions of the {\\em Navier-Stokes} (NS) equations, we also give a brief overview of the\nrelated work here, with a particular focus on single-phase flows.\n\\revision{After the introduction of the stable fluids algorithm \\cite{Stam1999},} \n\\revision{a variety of extensions and variants have been developed over the years.\nE.g., more accurate Eulerian ad\\-vec\\-tion sche\\-mes \\cite{Kim05FlowFixer,Selle:2008:USM} \nare often employed, an alternative to which are Lagrangian versions \\cite{rasmussen2003smoke,magnus2011capturing}.\nWhile grids are more commonly used, particles can achieve non-dissipative results for which\nan Eulerian grid would require a significant amount of refinement.\nFurthermore, procedural turbulence methods to increase apparent resolutions are popular \nextensions \\cite{Kim:2008:wlt,narain:2008:procTurb,schechter2008evolving}.\nIn contrast to our work, the different advection schemes and procedural\nturbulence methods require a calculation of the\nhigh-resolution transport of the density field over the full simulation sequence. 
\nAdditionally, Eulerian and Lagrangian representations can be advected in a parallelized \nfashion on a per-frame basis, in line with the application of convolutions for NNs.\nOur method infers an instantaneous solution to the underlying advection problem\nbased only on a single snapshot of data, without having to compute a series of previous \ntime steps.}\n\nOur work also shares similarities in terms\nof goals with other phy\\-sics-based up-sampling algorithms \\cite{kavan2011physics},\nand, due to this goal, is related to fluid control methods \\cite{McNamaraAdjointMethod,Pan:2013}.\nThese methods would work very well in conjunction with our approach, in order to \ngenerate a coarse input with the right shape and timing.\n\n\\section{Adversarial Loss Functions}\n\\label{sec:loss}\nBased on a set of low-resolution inputs, with corresponding\nhigh-resolution references, our goal is to train a CNN that produces\na temporally coherent, high-resolution solution with adversarial train\\-ing.\nWe will first very briefly summarize the basics of adversarial training,\nand then explain our extensions for temporal coherence and for \\revision{control of the results.} \n\\subsection{Generative Adversarial Networks}\n\\label{sec:ganbasic}\nGANs consist of two models, which are trained in conjunction:\nthe generator $G$ and the discriminator $D$. Both will be realized as convolutional neural networks in our case. 
\nFor regular, i.e., unconditional GANs,\nthe goal is to train a generator $G(x)$ that maps a simple data distribution, typically noise, $x$ to a complex desired output $y$, e.g., natural images.\nInstead of using a manually specified loss term to train the generator, another NN, the discriminator, is used as a complex, learned loss function \\cite{goodfellow2014generative}.\nThis discriminator takes the form of a simple binary classifier, which is trained in a supervised manner \nto reject {\\em generated} data, i.e., it should return $D(G(x)) = 0$, and accept the {\\em real} data with $D(y) =1$.\nFor training, the loss for the discriminator is thus given by a sigmoid cross entropy\nfor the two classes ``generated'' and ``real'':\n\\revision{\n\\resizeEq{\n\\begin{split}\n\\mathcal{L}_D(D,G)= & \\mathbb{E}_{y\\sim p_{\\text{y}}(y)}[-\\log D(y)] +\\mathbb{E}_{x\\sim p_{\\text{x}}(x)}[-\\log (1-D(G(x)))] \\\\\n= & \\mathbb{E}_m [-\\log D(y_m)] + \\mathbb{E}_n [-\\log (1-D(G(x_n)))]\\ ,\n\\end{split} \n}{eq:disloss}\n}\nwhere $n$ is the number of drawn inputs $x$, while $m$ denotes the number of real data samples $y$.\n\\revision{Here we use the notation $y\\sim p_{\\text{y}}(y)$ for samples $y$ being drawn from a corresponding \nprobability data distribution $p_{\\text{y}}$, which will later on be represented by our numerical simulation framework.}\n\\revision{In the second line of \\myrefeq{eq:disloss}, the expectations over the continuous distributions reduce to averages over the discrete samples $y_m$ and $x_n$.\nWe will omit the $y\\sim p_{\\text{y}}(y)$ and $x\\sim p_{\\text{x}}(x)$ subscripts of the sigmoid cross entropy,}\nand $n$ and $m$ subscripts of $D(y_m)$ and $G(x_n)$, for clarity below.\n\nIn contrast to the discriminator, the generator is trained to ``fool'' the discriminator into accepting its samples and thus to generate output that is close to the real data from $y$.\nIn practice, this means that the generator is trained to drive the discriminator 
result for its outputs to one.\nInstead of directly using the negative discriminator loss, GANs typically use\n\\revision{\n\\resizeEq{\n\\begin{split}\n\\mathcal{L}_G(D,G)= \\mathbb{E}_{x\\sim p_{\\text{x}}(x)}[-\\log (D(G(x)))] = \\mathbb{E}_n [-\\log (D(G(x)))]\n\\end{split}}{}\n}\nas the loss function for the generator, in order to reduce vanishing gradient problems \n\\cite{goodfellow2016nips}.\nAs $D$ is realized as an NN, it is guaranteed to be sufficiently differentiable as a loss function for $G$.\nIn practice, both discriminator and generator are trained in turns and will ideally reach an\nequilibrium state.\n\nAs we target a super-resolution problem, our goal is not to generate an arbitrary high-resolution output, but\none that corresponds to a low-resolution input, and hence we employ a {\\em conditional} GAN.\nIn terms of the dual optimization problem described above, this means that the input $x$ now represents\nthe low-resolution data set, and the discriminator is provided with $x$ in order to establish and \nensure the correct relationship\nbetween input and output, i.e., we now have $D(x,y)$ and $D(x,G(x))$ \\cite{mirza2014conditional}.\nFurthermore, previous work \\cite{zhao2015loss}\nhas shown that an\nadditional $L_1$ loss term with a small weight can be added to the generator to ensure that its output stays close \nto the ground truth $y$. This yields $\\lambda _{L_1} \\mathbb{E}_n \\left \\| G(x)-{y}\\right \\|_{1}$, where $\\lambda _{L_1}$\ncontrols the strength of this term, and we use $\\mathbb{E}$ for consistency to denote the\nexpected value, in this discrete case being equivalent to an average.\n\n\\subsection{ Loss in Feature Spaces}\n\\label{sec:featureloss}\n\n\\revision{In order to further control the coupled, non-linear optimization process, the features\nof the underlying CNNs can be constrained. 
This is an important issue, as controlling\nthe training process of GANs is known to be a difficult problem.\nHere, we extend previous work on feature space losses, which\nwere shown to improve realism in natural images~\\cite{dosovitskiy2016generating}, and\nwere also shown to help with mode collapse problems \\cite{salimans2016improved}.} \nTo achieve this goal, an $L_2$ loss over parts or the whole feature space of a neural network is introduced for the generator.\nI.e., the intermediate results of the generator network are constrained w.r.t. a set of intermediate reference data.\nWhile previous work typically makes use of manually selected layers\nof pre-trained networks, such as the VGG net, we propose to use features of the discriminator as constraints instead.\n\nThus, we incorporate a novel loss term of the form\n\\begin{equation}\n\\mathcal{L}_{f}=\\mathbb{E} _{n,j} \\lambda _{f}^{j}\\left \\| F^{j}(G(x))-F^{j}(y) \\right \\|_{2}^{2} \\ ,\n\\label{eq:featureloss}\n\\end{equation}\nwhere $j$ is a layer in our discriminator network, and $F^{j}$ denotes the activations of the corresponding layer.\nThe factor $\\lambda _{f}^{j}$ is a weighting term, which can be adjusted on a per-layer basis, as we will discuss in \\myrefsec{sec:artcontrol}. \nIt is particularly important here that we can employ the discriminator, as no suitable pre-trained networks\nare available for three-dimensional flow problems.\n\n\\revision{Interestingly, these weights yield different yet realistic results for both positive and negative choices of the weights.\nFor $\\lambda _{f} > 0$\nthese loss terms effectively encourage a minimization of the mean feature space distances of real and\ngenerated data sets, such that generated features resemble features of the reference. 
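A minimal NumPy sketch of the feature space loss above, with arrays standing in for the discriminator activations $F^{j}$; the per-layer mean (instead of a plain sum) simply absorbs a normalization constant into the weights, and all names are illustrative:

```python
import numpy as np

def feature_space_loss(feats_g, feats_y, lambdas):
    """Weighted L2 distances between discriminator activations of
    generated data (feats_g) and reference data (feats_y), one array
    per selected discriminator layer j."""
    loss = 0.0
    for f_g, f_y, lam in zip(feats_g, feats_y, lambdas):
        loss += lam * np.mean((f_g - f_y) ** 2)
    return loss

# two fake discriminator layers; as discussed above, the weights
# may also be chosen negative
feats_y = [np.ones((8, 8, 4)), np.ones((4, 4, 8))]
feats_g = [np.zeros((8, 8, 4)), np.ones((4, 4, 8))]
loss = feature_space_loss(feats_g, feats_y, lambdas=[1.0, 1.0])  # -> 1.0
```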
\nSurprisingly, we found that training runs with $\\lambda _{f} < 0$\nalso yield excellent, and often slightly better, results.\nAs we are targeting conditional GANs, our networks\nare highly constrained by the inputs. \nOur explanation for this behavior is that a negative feature loss in this setting encourages the optimization\nto generate results that differ in terms of the features, but are still similar, ideally indistinguishable, in \nterms of their final output. This is possible as we are not targeting a single ground-truth result, but \nrather, we give the generator the freedom to generate any result that best fits the collection of inputs\nit receives. From our experience, this loss term drives the generator towards realistic detail,\nan example of which can be seen in \\myreffig{fig:2dlayerlossexample}. \nNote \nthat due to the non-linear nature of the optimization, linearly changing $\\lambda _{f}$ \nleads to models with significant differences in the generated small-scale features.}\n\\begin{figure}[t]\n \\centering \n\t \\begin{overpic}[width=0.24\\linewidth]{low_smallcut.png}\n\t \\put( 5,5){\\small \\color{white}{$a)$}}\n\t \\end{overpic}\n\t \\begin{overpic}[width=0.24\\linewidth]{l2_smallcut.png}\n\t \\put( 5,5){\\small \\color{white}{$b)$}}\n \t\\end{overpic}\n\t \\begin{overpic}[width=0.24\\linewidth]{test_13_smallcut.png}\n\t \\put( 5,5){\\small \\color{white}{$c)$}}\n\t \\end{overpic}\n\t \\begin{overpic}[width=0.24\\linewidth]{highreference_smallcut.png}\n\t\\put( 5,5){\\small \\color{white}{$d)$}}\n \t\\end{overpic}\n \\caption{From left to right: a) a sample low-resolution input, b) a CNN output with naive $L_2$ loss (no GAN training), c)\n our tempoGAN output, and d) the high-resolution reference. 
The $L_2$ version learns a smooth result\n without small scale details, while our output in (c) surpasses the detail of the reference in certain regions.\n } \\label{fig:2dlayerlossexample}\n\\end{figure}\n\n\\subsection{ Temporal Coherence}\n\\label{sec:tempo}\nWhile the GAN process described so far is highly successful at generating highly \ndetailed and realistic outputs for static frames, these details are particularly challenging\nin terms of their temporal coherence.\nSince both the generator and the discriminator work on every frame independently, subtle changes of the input $x$ can lead to outputs $G(x)$ with distinctly different \n details for higher spatial frequencies.\n\nWhen the ground truth data $y$ comes from a transport process, such as frame motion or flow motion, it\n typically exhibits a very high degree of temporal coherence, and a velocity field $v_y$ exists for which \n $y^{t} = \\mathcal{A}( y^{t-1}, v_y^{t-1} )$. Here, we denote the advection operator\n (also called warp or transport in other works) with\n$\\mathcal{A}$, and we assume without loss of generality\nthat the time step between frame $t$ and $t-1$ is equal to one. Discrete time steps\nwill be denoted by superscripts, i.e., for a function $y$ of space and time $y^t = y(\\vec{x},t)$ denotes\na full spatial sample at time $t$.\nSimilarly, in order to solve the temporal coherence problem, the relationship $G(x^{t}) = \\mathcal{A}( G(x^{t-1}), v_{G(x)}^{t-1})$ should hold, \nwhich assumes that we can compute a motion $v_{G(x)}$ based on the generator input $x$. \nWhile directly computing such a motion can be difficult and unnecessary for general GAN problems, \nwe can make use of the ground truth data for $y$ in our conditional setting. 
I.e., in the following,\nwe will use a velocity reference $v_y$ corresponding to the target $y$, and perform a \nspatial down-sampling to compute the velocity $v_x$ for input $x$.\n\nEquipped with $v_x$, one possibility to improve temporal coherence would be\nto add an $L_2$ loss term of the form:\n\\begin{equation}\\label{eq:l2VelSingleSide}\n\\mathcal{L}_{2,t} = \\| G(x^{t}) - \\mathcal{A}( G(x^{t-1}), v_{x}^{t-1})\\|_{2}^{2}\n\\end{equation}\nWe found that extending the forward-advection difference with backward-advection improves the results further, i.e., the \nfollowing $L_2$ loss is clearly preferable over \\myrefeq{eq:l2VelSingleSide}:\n\\resizeEq{\n\\begin{split}\n\\mathcal{L}_{2,t} = \\| G(x^{t}) - \\mathcal{A}( G(x^{t-1}), v_{x}^{t-1})\\|_{2}^{2} + \\| G(x^{t}) - \\mathcal{A}( G(x^{t+1}), -v_{x}^{t+1})\\|_{2}^{2} \\\n\\end{split}\n}{eq:l2VelDoubleSide}\n, where we align the next frame at $t+1$ by advecting with $-v_{x}^{t+1}$.\n\nWhile this $\\mathcal{L}_{2,t}$ based loss improves temporal coherence, our tests\nshow that its effect is relatively small. E.g., it can improve outlines, but leads to clearly unsatisfactory results,\nwhich are best seen in the accompanying video. One side effect of this loss term\nis that it can easily be minimized by simply reducing the values of $G(x)$. \nThis is visible, e.g., in the second column of \\myreffig{fig:tempoStill}, which contains noticeably less density\nthan the other versions and the ground truth. 
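For concreteness, the two-sided temporal $L_2$ loss above can be sketched in one spatial dimension, with a periodic grid and a first-order semi-Lagrangian step standing in for $\mathcal{A}$ (both simplifications relative to the 3D fields used here):

```python
import numpy as np

def advect(field, vel, dt=1.0):
    """First-order semi-Lagrangian step on a periodic 1D grid:
    linearly interpolate the field at the backtraced position x - dt*v."""
    n = field.size
    x = np.arange(n, dtype=np.float64)
    src = (x - dt * vel) % n        # backtraced source positions
    f = np.floor(src)
    i0 = f.astype(int) % n
    i1 = (i0 + 1) % n
    w = src - f                     # linear interpolation weight
    return (1.0 - w) * field[i0] + w * field[i1]

def temporal_l2_loss(g_prev, g_cur, g_next, v_prev, v_next):
    """Forward- and backward-advected L2 differences, cf. the loss above."""
    fwd = g_cur - advect(g_prev, v_prev)    # align frame t-1 with t
    bwd = g_cur - advect(g_next, -v_next)   # align frame t+1 with t
    return np.sum(fwd ** 2) + np.sum(bwd ** 2)

# a blob transported with unit velocity is perfectly coherent:
n = 16
blob = np.exp(-0.5 * (np.arange(n) - 5.0) ** 2)
v = np.ones(n)
seq = [blob, advect(blob, v), advect(advect(blob, v), v)]
loss = temporal_l2_loss(seq[0], seq[1], seq[2], v, v)  # -> 0
```

A field that is exactly transported by the coarse velocity incurs zero loss, while incoherent detail is penalized; as noted above, the loss can unfortunately also be lowered by simply reducing the magnitude of $G(x)$.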
However, we do not want to drive the generator\ntowards darker outputs, but rather make it aware of how the data should change over time.\n\n\\begin{figure}[b]\n\t\\centering\n\t\\begin{overpic}[width=\\linewidth]{tempofinal.png}\n\t\t\\put( 1.2,1.1){\\small \\color{white}{None}}\n\t\t\\put( 21,2){\\small \\color{white}{$\\mathcal{L}_{2,t}$}}\n\t\t\\put( 41,2){\\small \\color{white}{$\\mathcal{L}_{ \\footnotesize{D_t}' }$}}\n\t\t\\put( 61,2){\\small \\color{white}{$\\mathcal{L}_{D_t}$}}\n\t\t\\put( 81,2){\\small \\color{white}{$y$}}\n\t\\end{overpic}\n\t\\caption{ A comparison of different approaches for temporal coherence.\n\t\tThe top two rows show the inferred densities, while the bottom two rows contain the \n\t\ttime derivative of the frame content computed with a finite difference between frame $t$ and $t+1$.\n\t\tPositive and negative values are color-coded with red and blue, respectively.\n\t\tFrom left to right: no temporal loss applied, $\\mathcal{L}_{2,t}$ loss applied, \n\t\t$\\mathcal{L}_{D_t'}$, i.e., applied without advection, $\\mathcal{L}_{D_t}$ \n\t\tapplied with advection (our full tempoGAN approach), and the ground-truth $y$.\n\t\tFrom left to right across the different versions, the derivatives become \n\t\tless jagged and less noisy, as well as more structured and narrow.\n\t\tThis means the temporal coherence is improved, esp. 
for the result\n\t\tfrom our algorithm ($\\mathcal{L}_{D_t}$).}\n\t\\label{fig:tempoStill}\n\\end{figure}\n\nInstead of manually encoding the allowed temporal changes, \nwe propose to use another discriminator $D_t$ that learns from the given data which changes are admissible.\nIn this way, the original spatial discriminator, which we will denote as $D_s(x,G(x))$ from now on, guarantees that \nour generator learns to generate realistic details, while the new temporal discriminator $D_t$ mainly focuses on \ndriving $G(x)$ towards solutions that match the temporal evolution of the\nground-truth $y$.\n\nSpecifically, $D_t$ takes three frames as input. We will denote such sets of three frames with a tilde in the following.\nAs real data for the discriminator, the set $\\widetilde{Y}_{\\mathcal{A}}$ contains three consecutive and advected frames, thus\n$\\widetilde{Y}_{\\mathcal{A}} = \\{ \\mathcal{A}( y^{t-1}, v_x^{t-1} )$, $y^{t}$, $\\mathcal{A}( y^{t+1}, -v_x^{t+1} )\\}$. \nThe generated data set contains correspondingly advected samples from the generator:\n$\\widetilde{G}_{\\mathcal{A}}(${\\footnotesize{$\\widetilde{X}$}}$) = \\{ \\mathcal{A}( G(x^{t-1}), v_{x}^{t-1})$, $G(x^{t})$, $\\mathcal{A}( G(x^{t+1}), -v_{x}^{t+1}) \\}$. \nSimilar to our spatial discriminator $D_s$, the temporal discriminator $D_t$ is trained as a\nbinary classifier on the two sets of data:\n\\resizeEq{\n\\begin{split}\n\\mathcal{L}_{D_t}(D_t,G)= \\mathbb{E}_m [-\\log D_t\\Big(\\widetilde{Y}_{\\mathcal{A}}\\Big) ]\n+ \\mathbb{E}_n [-\\log (1-D_t\\Big(\\widetilde{G}_{\\mathcal{A}}\\Big(\\widetilde{X}\\Big)\\Big))] \\\n\\end{split}}{eq:tempod}\n, where set $\\widetilde{X}$ also contains three consecutive frames, i.e., $\\widetilde{X} =\\{x^{t-1},$ $ x^t, x^{t+1}\\} $.\nNote that unlike the spatial discriminator, $D_t$ is not a \nconditional discriminator. 
It does not ``see'' the conditional input $x$, and thus\n$D_t$ is forced to make its judgment purely based on the given sequence.\n\n\\begin{figure}[b]\n\t\\centering \n\t\\begin{overpic}[width=\\linewidth]{advectInput.png}\n\t\t\\put( 3.0,2.0){\\small \\color{black}{a) $\\widetilde{Y}$}}\n\t\t\\put(26.0,2){\\small \\color{black}{a) $\\widetilde{Y}_{\\mathcal{A}}$}}\n\t\t\\put(54.0,2){\\small \\color{white}{b) $\\widetilde{Y}$}}\n\t\t\\put(77.0,2){\\small \\color{white}{b) $\\widetilde{Y}_{\\mathcal{A}}$}}\n\t\\end{overpic}\n\t\\caption{ These images highlight data alignment due to advection. \n\t\tThree consecutive frames are encoded as R, G, B channels\n\t\tof a single image; thus, ideally, a fully aligned image would only contain shades of grey.\n\t\tThe top and bottom rows contain front and top views, respectively.\n We show two examples, a) and b). Each of them contains $\\widetilde{Y}$ left, \n and $\\widetilde{Y}_{\\mathcal{A}}$ right.\n\t\tThe RGB channels are the three input frames, $t-1$, $t$, and $t+1$.\n Compared with $\\widetilde{Y}$, $\\widetilde{Y}_{\\mathcal{A}}$ is significantly less saturated, \n\t\ti.e., better aligned. }\n\t\\label{fig:tempoData}\n\\end{figure}\n\nIn \\myreffig{fig:tempoStill}, we show a comparison of the different loss variants for improving temporal\ncoherence. The first column is generated with only the spatial discriminator, i.e., it \nprovides a baseline for the improvements.\nThe second column shows the result using the $L_2$-based temporal loss $\\mathcal{L}_{2,t}$ from \\myrefeq{eq:l2VelDoubleSide}, \nwhile the fourth column shows the result using $D_t$ from \\myrefeq{eq:tempod}. The last column is the ground-truth data $y$.\nThe first two rows show the generated density fields. While $\\mathcal{L}_{2,t} $ reduces overall density content, the result \nwith $D_t$ is clearly closer to the ground truth.\nThe bottom two rows show time derivatives of the densities for frames $t$ and $t+1$. 
\nAgain, the result from $D_t$ and the ground-truth $y$ match closely in terms of their time derivatives.\nThe large and jagged values of the first two rows indicate the undesirable temporal changes produced\nby the regular GAN and the $\\mathcal{L}_{2,t}$ loss.\n\nIn the third column of \\myreffig{fig:tempoStill}, we show a simpler variant of \nour temporal discriminator. Here, we employ the discriminator without aligning the\nset of inputs with advection operations, i.e.,\n\\resizeEq{\n\\begin{split}\n\\mathcal{L}_{D_t'}(D_t',G)=& \\mathbb{E}_m [-\\log D_t\\Big(\\widetilde{Y}\\Big)] + \\mathbb{E}_n \n[-\\log (1-D_t(\\widetilde{G}\\Big(\\widetilde{X}\\Big) ))]\n\\end{split}}{}\nwith $\\widetilde{Y} = \\{y^{t-1}, y^t, y^{t+1} \\}$ \nand $\\widetilde{G}\\Big(\\widetilde{X}\\Big) = \\{G(x^{t-1}), G(x^{t}), G(x^{t+1})\\}$.\n\n\\begin{figure*}[tb]\n \\centering \n\t\\includegraphics[width=0.99\\linewidth]{outline1.pdf} \\\\\n \\caption{ Here an overview of our tempoGAN architecture is shown. The three neural networks (blue boxes)\n are trained in conjunction. The data flow between them is highlighted by the red and black arrows. \\revision{Note that $x$ and $y$ denote fluid data that contains velocity and\/or vorticity fields, as well as density depending on the chosen architecture (see \\myrefsec{sec:inputs}).} }\n \\label{fig:nnArchResnet}\n\\end{figure*}\n\nThis version improves results compared to $\\mathcal{L}_{2,t}$, but does not \nreach the level of quality of $\\mathcal{L}_{D_t}$, as can be seen in \\myreffig{fig:tempoStill}.\nAdditionally, we found that $\\mathcal{L}_{D_t}$\noften exhibits a faster convergence during the training process. This is an indication that the underlying\nneural networks have difficulties aligning and comparing the data by themselves when using $\\mathcal{L}_{D_t'}$. 
\nThis intuition is illustrated in \\myreffig{fig:tempoData}, where\nwe show example content of the regular data sets $\\widetilde{Y}$ and the advected version\n$\\widetilde{Y}_{\\mathcal{A}}$ side by side. In this figure, the three chronological frames\nare visualized as red, green, and blue channels of the images. Thus, a pure gray-scale image\nwould mean perfect alignment, while increasing visibility of individual colors indicates un-aligned\nfeatures in the data. \\myreffig{fig:tempoData} shows\nthat, although not perfect, the advected one leads to clear improvements in terms of aligning the features of the data sets,\ndespite only using the approximated coarse velocity fields $v_x$. Our experiments show\nthat this alignment successfully improves the backpropagated gradients such that the\ngenerator learns to produce more coherent outputs.\nHowever, when no flow fields are available, $\\mathcal{L}_{D_t'}$ still represents a better choice than the simpler $\\mathcal{L}_{2,t}$ version.\nWe see this as another indicator of the power of adversarial training models. It seems\nto be preferable to let a neural network learn and judge the specifics of a data set, instead\nof manually specifying metrics, as we have demonstrated for \ndata sets of fluid flow motions above.\n\nIt is worth pointing out that our formulation for $D_t$ in \\myrefeq{eq:tempod} means\nthat the advection step is an inherent part of the generator training process. While\n$v_x$ can be pre-computed, it needs to be applied to the outputs of the generator during\ntraining. This in turn means that the advection needs to be tightly integrated into the training loop.\nThe results discussed in the previous paragraph indicate that if this is done correctly, the loss gradients of the\ntemporal discriminator are successfully passed through the advection steps to give the generator\nfeedback such that it can improve its results. 
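One reason the loss gradients pass through the advection steps so readily is that, for a fixed velocity field, first-order semi-Lagrangian advection is linear in the advected quantity: it can be assembled as a sparse matrix of interpolation weights, through which gradients flow as through any linear layer. A 1D, periodic NumPy sketch (the 3D version used here is analogous):

```python
import numpy as np

def advection_matrix(vel, dt=1.0):
    """Assemble M such that M @ field performs one first-order
    semi-Lagrangian advection step on a periodic 1D grid: row i holds
    the linear interpolation weights of the two backtraced source cells."""
    n = vel.size
    x = np.arange(n, dtype=np.float64)
    src = (x - dt * vel) % n          # backtraced source positions
    f = np.floor(src)
    i0 = f.astype(int) % n
    i1 = (i0 + 1) % n
    w = src - f                       # interpolation weight
    M = np.zeros((n, n))
    rows = np.arange(n)
    M[rows, i0] += 1.0 - w
    M[rows, i1] += w
    return M

n = 8
vel = np.full(n, 0.5)                 # constant velocity of half a cell
y = np.random.rand(n)
M = advection_matrix(vel)
advected = M @ y                      # equals a direct interpolated lookup
```

Each row of $M$ sums to one, and the matrix multiplication is trivially differentiable with respect to the advected field.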
In the general case, advection is a non-linear function, the discrete\napproximation for which we have abbreviated with $\mathcal{A}(y^{t}, v_y^{t})$ above. Given a known \nflow field $v_y$ and time step, we can linearize this equation to yield a matrix $M$ with $M y^{t} = \mathcal{A}(y^{t}, v_y^{t}) = y^{t+1}$.\nE.g., for a first-order approximation, $M$ would encode the Euler-step lookup of source positions\nand the linear interpolation to compute the solution. While we have found the first-order scheme (i.e., semi-Lagrangian\nadvection) to work well, $M$ could likewise\nencode higher-order methods for advection.\n\nWe have implemented this process as an advection layer in our network training, which computes\nthe advection coefficients and performs the matrix multiplication such that the \ndiscriminator receives the correct sets of inputs. When training the generator, the same \ncode is used, and the underlying NN framework can easily compute the necessary derivatives.\nIn this way, the generator actually receives three accumulated and aligned gradients\nfrom the three input frames that were passed to $D_t$.\n\n\subsection{Full Algorithm}\n\label{sec:totalloss}\nWhile the previous sections have explained the different parts of our final loss function,\nwe summarize and discuss the combined loss in the following section.\nWe will refer to our full algorithm as {\em tempoGAN}.\nThe resulting optimization problem that is solved with NN training consists\nof three coupled non-linear sub-problems: the generator, the conditional spatial discriminator,\nand the un-con\-di\-tio\-nal temporal discriminator. The generator has to effectively minimize both discriminator losses, additional feature space constraints,\nand an $L_1$ regularization term.
Thus, the loss functions can be summarized as:\n\resizeEq{\n\begin{split}\n\mathcal{L}_{D_t}(D_t,G)= & -\mathbb{E}_m [\log D_t(\widetilde{Y}_{\mathcal{A}})] -\n \mathbb{E}_n [\log \Big(1-D_t\Big(\widetilde{G}_{\mathcal{A}}\Big(\widetilde{X}\Big)\Big)\Big)] \\\n\mathcal{L}_{D_s}(D_s,G)=& -\mathbb{E}_m [\log D_s(x, y)] - \mathbb{E}_n [\log (1-D_s(x, G(x)))] \\\n\mathcal{L}_{G}(D_s,D_t,G)=& -\mathbb{E}_n [\log D_s(x, G(x)) ]\n- \mathbb{E}_n [\log D_t\Big(\widetilde{G}_{\mathcal{A}}\Big(\widetilde{X}\Big)\Big)] \\\n& + \mathbb{E} _{n,j} \lambda _{f}^{j} \left \| F^{j}(G(x))-F^{j}({y}) \right \|_{2}^{2} + \lambda _{L_1} \mathbb{E}_n \left \| G(x)-{y}\right \|_{1}\n\end{split}}{eq:totalLoss}\n\nOur generator has to effectively compete against two powerful adversaries,\nwho, along the lines of ``the enemy of my enemy is my friend'', \nimplicitly cooperate to expose the results of the generator. E.g., we have \nperformed tests without $D_s$, only using $D_t$, and the resulting generator\noutputs were smooth in time, but clearly less detailed than when using both discriminators.\n\nAmong the loss terms of the generator, the $L_1$ term plays a relatively minor role, stabilizing\nthe training by keeping the averaged output close to the target. However, due to the complex\noptimization problem, it is nonetheless helpful for successful training runs.\nThe feature space loss, on the other hand, directly influences the generated features.\nIn the adversarial setting, the discriminator most likely learns distinct features \nthat only arise for the ground truth (positive features), or those that make it easy \nto identify generated versions, i.e., negative features that are only produced by the generator.\nThus, while training, the generator will receive gradients to make it produce more\nfeatures of the targets from $F(y)$, while the gradients from $F(G(x))$ will penalize\nthe generation of recognizable negative features.
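The combined generator loss above adds four terms: two adversarial scores, the weighted feature-space differences, and the $L_1$ regularizer. The following sketch uses scalar and array stand-ins for the network outputs (all names are illustrative, not from our implementation):

```python
import numpy as np

def generator_loss(ds_out, dt_out, feats_g, feats_y, g_out, y,
                   lambda_f, lambda_l1):
    """Sketch of the combined generator loss: two adversarial terms
    (spatial and temporal discriminator scores on the generated data),
    per-layer feature-space L2 differences weighted by lambda_f^j,
    and an L1 regularizer against the ground truth."""
    adv = -np.log(ds_out) - np.log(dt_out)
    feat = sum(lf * np.sum((fg - fy) ** 2)
               for lf, fg, fy in zip(lambda_f, feats_g, feats_y))
    l1 = lambda_l1 * np.sum(np.abs(g_out - y))
    return adv + feat + l1

# at a discriminator equilibrium of 1/2, with matching features and a
# perfect reconstruction, only the adversarial part 2*log(2) remains
loss = generator_loss(0.5, 0.5, [np.zeros(2)], [np.zeros(2)],
                      np.zeros(3), np.zeros(3), [1e-5], 1e-5)
print(round(float(loss), 4))  # -> 1.3863
```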
\n\nWhile positive values for $\lambda_f$ reinforce this behavior, it is less clear\nwhy negative values can lead to even better results in certain cases. \n\revision{Our explanation\nfor this behavior is that the negative weights drive the generator towards distinct features that\nhave to adhere to the positive and negative features detected by the discriminator,\nas explained above in \myrefsec{sec:featureloss}, but at the same time differ from the average features in $y$.}\nThus, the generator cannot simply create different or no features,\nas the discriminator would easily detect this. Instead, it needs to develop features\nthat are like the ones present in the outputs $y$, but\ndo not correspond to the average features in $F(y)$,\nwhich, e.g., leads to the finely detailed outputs shown in \myreffig{fig:2dlayerlossexample}. \n\n\begin{algorithm}\n\t\revision{\n\t\t\caption{tempoGAN training algorithm}\label{alg:tempoGANalg}\t\t\n\t\t\begin{algorithmic}[1] \small \n\t\t\t\For{number of training steps}\n\t\t\t\For{$k_{D_{s}}$}\n\t\t\t\State Compute data-augmented mini batch $x, y$\n\t\t\t\State Update $D_{s}$ with $\nabla_{D_{s}}[\mathcal{L}_{D_s}(D_s,G)]$\n\t\t\t\EndFor\n\t\t\t\For{$k_{D_{t}}$}\n\t\t\t\State Compute data-augmented mini batch $\widetilde{X}, \widetilde{Y}$\n\t\t\t\State Compute advected frames $\widetilde{Y}_{\mathcal{A}}$ and $\widetilde{G}_{\mathcal{A}}\Big(\widetilde{X}\Big)$\n\t\t\t\State Update $D_{t}$ with $\nabla_{D_{t}}[\mathcal{L}_{D_t}(D_t,G)]$\n\t\t\t\EndFor\n\t\t\t\For{$k_{G}$}\n\t\t\t\State Compute data-augmented mini batch $x, y, \widetilde{X}$\n\t\t\t\State Compute advected frames $\widetilde{G}_{\mathcal{A}}\Big(\widetilde{X}\Big)$\n\t\t\t\State Update $G$ with $\nabla_{G}[\mathcal{L}_{G}(D_s,D_t,G)]$\n\t\t\t\EndFor\n\t\t\t\EndFor\n\t\t\end{algorithmic}\n\t}\n\end{algorithm}\n\n\section{Architecture and Training Data}\n\label{sec:nnarch}\nWhile our loss function theoretically works
with any realization of $G, D_s$ and $D_t$, their\nspecifics naturally have significant impact on performance and the quality of the generated outputs.\nA variety of network architectures have been proposed\nfor training generative models \cite{goodfellow2014generative,RadfordMC15,berthelot2017began}, and\nin the following, we will focus on pure convolutional networks for the generator, i.e., networks without any\nfully connected layers. \nA fully convolutional network has the advantage that the trained network can\nbe applied to inputs of arbitrary sizes later on. \nWe have experimented with a large variety of generator architectures, and while many \nsimpler networks only yielded sub-optimal results,\nwe have achieved high-quality results with generators based on the popular U-net \cite{ronneberger2015u, isola2016image}, \nas well as with residual networks ({\em res-nets}) \cite{lim2017enhanced}.\nThe U-net concatenates activations from earlier layers to later layers \n(so-called {\em skip connections}) in order to allow the network to combine high- and low-level information, \nwhile the res-net\nprocesses the data using multiple {\em residual blocks}. Each of these residual blocks convolves the\ninputs without changing their spatial size, and the result of two convolutional layers is added to the original \nsignal as a ``residual'' correction. In the following, we will focus on the latter architecture, as it gave slightly \nsharper results in our tests. \n\nWe found the discriminator architecture to be less crucial. \nAs long as enough non-linearity is introduced over the course of several hidden layers,\nand there are enough weights, changing the connectivity of the discriminator did not significantly influence\nthe generated outputs.
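Since the discriminator mainly needs sufficient non-linearity and capacity, its structure can be sketched abstractly as a stack of convolutions with non-linear activations followed by a dense score layer; the following is a schematic stand-in (plain matrix products instead of real convolutions, names ours), not our exact implementation:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    """Leaky ReLU: pass positive values, damp negative ones."""
    return np.where(x > 0, x, slope * x)

def toy_discriminator(x, kernels, dense_w):
    """Schematic discriminator: several 'convolutions' (here plain
    matrix products as stand-ins) with leaky ReLU non-linearities,
    followed by a fully connected layer and a sigmoid score."""
    h = x
    for k in kernels:
        h = leaky_relu(k @ h)
    return 1.0 / (1.0 + np.exp(-dense_w @ h))  # final real/fake score

# with all-zero weights, the score is the undecided value 0.5
print(toy_discriminator(np.ones(3), [np.zeros((3, 3))] * 2, np.zeros(3)))  # -> 0.5
```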
Thus, in the following, we will always use discriminators with four convolutional layers\nwith leaky ReLU activations~\!\footnote{With a slope of 0.2 for the negative half-space.}\nfollowed by a fully connected layer to output the final score. \n\revision{As suggested by Odena et al. \shortcite{odena2016deconv}, we use \nnearest-neighbor interpolation layers as the first two layers in our generator, instead of deconvolutional ones,\nand in the discriminator networks, the kernel size is divisible by the corresponding stride.\nAn overview of the architecture of our neural \nnetworks is shown in \myreffig{fig:nnArchResnet}, while their details, such as layer configuration and activation \nfunctions, can be found in \myrefapp{app:nnarch}.}\n\subsection{Data Generation and Training} \label{sec:datagen}\n\n\revision{We use a randomized smoke simulation setup to generate the desired number of\ntraining samples. For this \nwe employ a standard fluid solver \cite{Stam1999} with MacCormack advection and a MIC-preconditioned CG solver.}\nWe typically generate around 20 simulations,\nwith 120 frames of output per simulation. For each of these, we randomly \ninitialize a certain number of smoke inflow regions, another set of velocity inflows,\nand a randomized buoyancy force. As inputs $x$,\nwe use a down-sampled version of the simulation data sets, typically by a factor of 4,\nwhile the full-resolution data is used as ground truth $y$. \n\revision{Note that this\nsetup is inherently {\em multi-modal}: for a single low-resolution configuration,\nan infinitely large number of correct high-resolution counterparts exists.
We do not explicitly\nsample the high-resolution solution space, but the down-sampling\nin conjunction with data augmentation leads to ambiguous low- and high-resolution\npairs of input data.}\nTo prevent a large number of primarily empty samples, we discard inputs with\nan average smoke density of less than 0.02.\nDetails of the parametrization can be found in \myrefapp{app:data}, and visualizations of the training data\nsets can be found in the supplemental video.\nIn \naddition, we show examples generated from a two-dimensional rising smoke \nsimulation with a different simulation setup than the one used for generating\nthe training data. It is, e.g., used in \myreffig{fig:2dlayerlossexample}.\n\nWe use the same modalities for all training runs:\nwe employ the commonly used ADAM optimizer~\!\footnote{Parameterized with $\beta=0.5$.} \nwith an initial learning rate of $2 \cdot 10^{-4}$\nthat decays to 1\/20th for the second half of the training iterations. \n\revision{All parameters were determined experimentally; details are given in \myrefapp{app:data}.\nThe number of training iterations is typically on the order of 10k. \nWe use 20\% of the data for testing and the remaining 80\% for training.\nOur networks did not require any additional regularization such \nas dropout or weight decay. \n\revision{The training procedure is summarized again in \myrefalg{alg:tempoGANalg}. \nDue to the typically limited amount of GPU memory, especially for 3D data sets, we perform multiple training steps\nfor each of the components.
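These interleaved per-component update steps follow the structure of our training algorithm; a schematic sketch with placeholder callbacks (names ours) is:

```python
def train(steps, k_ds, k_dt, k_g, update_ds, update_dt, update_g):
    """Interleaved training: per outer step, update the spatial
    discriminator k_ds times, the temporal one k_dt times, and the
    generator k_g times, each on its own data-augmented mini batch."""
    for _ in range(steps):
        for _ in range(k_ds):
            update_ds()
        for _ in range(k_dt):
            update_dt()
        for _ in range(k_g):
            update_g()

# count the update calls for a tiny configuration
calls = {"ds": 0, "dt": 0, "g": 0}
train(3, 2, 1, 1,
      lambda: calls.__setitem__("ds", calls["ds"] + 1),
      lambda: calls.__setitem__("dt", calls["dt"] + 1),
      lambda: calls.__setitem__("g", calls["g"] + 1))
print(calls)  # -> {'ds': 6, 'dt': 3, 'g': 3}
```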
\nDetails are listed in \myrefapp{app:data}.\nIn \myrefalg{alg:tempoGANalg}, we use $k_{D_{s}}$, $k_{D_{t}}$, and $k_{G}$ to denote the number of training iterations for $D_{s}$, $D_{t}$, and $G$, respectively.\n}\\\n\nWhile the coupled non-linear optimization can yield different results\neven for runs with the same parameters due to the non-deterministic nature\nof parallelized operations, we found the results to be stable in terms of quality.\nIn particular, we did not find it necessary to change the weights of the different\ndiscriminator loss terms. However, if desired, $\lambda_{f}$ can be used to influence\nthe learned details as described above.}\nFor training and running the trained networks, we use \nNvidia GeForce GTX 1080 Ti GPUs (each with 11\,GB of RAM) and Intel Core i7-6850K CPUs,\nwhile we used the {\em tensorflow} and {\em mantaflow} software frameworks for\ndeep learning and fluid simulation implementations, respectively.\n\n\subsection{ Input Fields }\n\label{sec:inputs}\n\n\begin{figure}[tb]\n \begin{overpic}[width=0.74 \linewidth]{comp_0099_z1.png}\n\t\t\put( 2,3){\small \color{black}{a)}}\n\t\t\put(35,3){\small \color{black}{b)}} \n\t\t\put(68,3){\small \color{black}{c)}} \n\t\end{overpic}\n \begin{overpic}[width=0.2466 \linewidth]{t75_img_0099_z1.png}\n\t\t\put(6 ,6){\small \color{black}{d)}} \n\t\end{overpic}\n\t\caption{ An illustration of training results after 40k iterations with different input\n\t\tfields: a) $\rho$, b) $\rho+\mathbf{v}$, c) $\rho+\mathbf{v}+\mathbf{w}$,\n\t\tall with similar network sizes. Version d) with only $\rho$ has $2\times$ the number of weights.\n\t\tThe seams in the images show the size of the training patches.
\n\t\tSupplemental physical fields lead to clear improvements in b) and c) that even additional \n\t\tweights cannot compensate for.}\n\t\label{fig:densVelVort}\n\end{figure}\n\begin{figure}[tb]\n\t\centering \n\t\begin{overpic}[width=0.49\linewidth]{t10_img_0140_noaug_z2.png}\n\t\t\put( 3,3){\small \color{white}{a)}}\n\t\end{overpic}\n\t\begin{overpic}[width=0.49\linewidth]{t05_img_0140_resn_z2.png}\n\t\t\put( 3,3){\small \color{white}{b)}}\n\t\end{overpic}\n\t\caption{ \n\t\tAn identical GAN network trained with the same set of input data.\n\t\tWhile version a) did not use data augmentation, leading to blurry results with streak-like artifacts,\n\t\tversion b), with data augmentation, produced sharp and detailed outputs.\n\t} \label{fig:dataAugComp}\n\end{figure}\n\nAt first sight, it might seem redundant and unnecessary to input flow velocity $\mathbf{v}$ and vorticity\n$\mathbf{w}$ in addition to the density $\rho$. After all, we are only interested in the\nfinal output density, and many works on GANs demonstrate that\ndetailed images can be learned purely based on image content.\n\nHowever, over the course of numerous training runs, we noticed that giving the networks additional \ninformation about the underlying physics significantly improves convergence and quality of the inferred results. \nAn example is shown in \myreffig{fig:densVelVort}. Here, we show how the training evolves\nfor three networks with identical size, structure, and parameters, the only difference being the\ninput fields. From left to right, the networks receive $(\rho)$, $(\rho,\mathbf{v})$, and $(\rho,\mathbf{v},\mathbf{w})$.\nNote that these fields are only given to the \ngenerator, while the discriminator always only receives $(\rho)$ as input. \nThe version with only density passed to the generator, $G(\rho)$, fails to reconstruct smooth and detailed outputs.
\nEven after 40000 iterations, the results exhibit strong grid artifacts and lack detailed structures. \nIn contrast, both versions with additional inputs start to yield higher-quality outputs earlier during training.\nWhile adding $\mathbf{v}$ is crucial, the addition of $\mathbf{w}$ \nonly yields subtle improvements (most apparent at the top of the images in \myreffig{fig:densVelVort}),\nwhich is why we will use $(\rho,\mathbf{v})$ to generate our final results below. \n\newRachel{ The full training run comparison is in our supplemental video.} \n\nWe believe that the insight that auxiliary fields help improve training and inference quality is a surprising\nand important one. The networks do not get any explicit guidance on how to use the additional information.\nHowever, they clearly not only learn to use this information, but also benefit from having this\nsupporting information about the underlying physics processes. \nWhile larger networks can potentially alleviate the quality problems of\nthe den\-si\-ty-only version, as illustrated in \myreffig{fig:densVelVort} d), \nwe believe it is highly preferable to instead construct and train smaller, \nphysics-aware networks. This not only shortens training times and accelerates convergence, but also makes evaluating\nthe trained model more efficient in the long run. The availability of physical inputs turned out\nto be a crucial addition in order to successfully realize high-dimensional GAN outputs for space-time data, \nwhich we will demonstrate in \myrefsec{sec:results}. \n\subsection{ Augmenting Physical Data}\n\label{sec:dataaug}\nData augmentation turned out to be an important component\nof our pipeline due to the high dimensionality of our data sets and\nthe large amount of memory they require.
\nWithout sufficient training data, the adversarial training yields undesirable results\ndue to overfitting.\nWhile data augmentation is common practice for natural images \cite{dosovitskiy2016discriminative, krizhevsky2012imagenet},\nwe describe several aspects below that play a role for physical data sets.\n\nThe augmentation process allows us to train networks having millions of weights\nwith data sets that only contain a few hundred samples without overfitting. \nAt the same time, we can ensure that the trained networks respect the\ninvariants of the underlying physical problems, which is crucial\nfor the complex space-time data sets of flow fields that we are considering.\nE.g., we know from theory that solutions obey Galilean invariance, and we can make\nsure our networks are aware of this property not by providing large data sets, but instead\nby generating data with different inertial frames on the fly while training. \n\nIn order to minimize the necessary size of the training set without deteriorating the result quality,\nwe generate modified data sets at training time. We focus on spatial transformations, which\ntake the form of $\tilde{\vec{x}}(\vec{p}) = \vec{x}( A \vec{p} )$, where $\vec{p}$ is a spatial position, and $A$ denotes\na $4\times4$ matrix. For applying augmentation, we distinguish three types of components of a data set: \begin{itemize}\n\item {\em passive}: these components can be transformed in a straightforward manner as described above. Examples of passive\n\tcomponents are the advected smoke fields $\rho$, shown in many of our examples.\n\item {\em directional}: the content of these components needs to be transformed in conjunction with the augmentation.
A good example\n\tis velocity, whose directions need to be adjusted for rotations and flips, i.e., $\tilde{\vec{v}}(\vec{p}) = A_{3\times3}\vec{v}( A \vec{p} )$, where \n\t$A_{3\times3}$ is the upper-left $3\times3$ matrix of $A$.\n\item {\em derived}: finally, derived components would be invalid after applying augmentation, and thus need to be re-computed. \n\t\revision{Good examples are physical quantities such as vorticity,\n\twhich contain mixed derivatives that cannot be easily transformed into a new frame of reference. \n\tHowever, these quantities can typically be calculated anew from other fields after augmentation.}\n\end{itemize}\n\nIf the data set contains quantities that cannot be computed from other augmented fields, this unfortunately\nmeans that augmentation cannot be applied easily. However, we believe that a large class of typical physics data\nsets can in practice be augmented as described here.\n\n\begin{figure*}[tb]\n\t\centering \n\t\begin{overpic}[width=0.33\linewidth]{3d_lowinput.png}\n\t\t\put( 3,3){\small \color{blue1}{a)}}\n\t\end{overpic}\n\t\begin{overpic}[width=0.33\linewidth]{3d_13.png}\n\t\t\put( 3,3){\small \color{blue1}{b)}}\n\t\end{overpic}\n\t\begin{overpic}[width=0.33\linewidth]{3d_highreference.png}\n\t\t\put( 3,3){\small \color{blue1}{c)}}\n\t\end{overpic} \n\t\caption{ These images show our algorithm applied to a 3D volume. From left to right: \n\t\t\revision{a) a coarse input volume (down-sampled from the reference c, rendered with cubic up-sampling),}\n\t\tb) our result, and c) the high-resolution reference. As in 2D, our trained model generates\n\t\tsharp features and detailed sheets that are at least on par with the reference. }\n\t\label{fig:plume3d}\n\end{figure*}\n\nFor matrix $A$, we consider affine transformation matrices that contain combinations of randomized translations, uniform scaling, reflections,\nand rotations.
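The directional rule above, $\tilde{\vec{v}}(\vec{p}) = A_{3\times3}\vec{v}(A\vec{p})$, requires rotating the vector content in addition to resampling it at the transformed position; the direction part can be sketched as follows (2D for brevity, function names ours):

```python
import numpy as np

def rotation(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def augment_velocity(v, A):
    """Directional augmentation: a velocity vector sampled in the
    transformed frame must also have its direction adjusted,
    v~(p) = A . v(A p). Only the direction part is shown here."""
    return A @ v

# a 90-degree rotation turns an x-aligned velocity into a y-aligned one
A = rotation(np.pi / 2)
print(np.round(augment_velocity(np.array([1.0, 0.0]), A), 6))  # -> [0. 1.]
```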
Here, only those transformations are allowed that do not violate the physical model\nfor the data set. \revision{While shearing and non-uniform scaling could easily be added, they violate the Navier--Stokes momentum equation \nand thus should not be used for flow data.}\nWe have used values in the range $[0.85, 1.15]$ for scaling, and rotations by $[-90, 90]$\ndegrees.\nWe typically do not load derived components into memory\nfor training, as they are re-computed after augmentation. Thus, they are computed on the fly for a training batch\nand discarded afterwards.\n\nThe outputs of our simulations typically have a significantly larger size than \nthe input tiles that our networks receive.\nIn this way, we have many choices of tile offsets, which we use\nto train the networks for shift invariance. This also aligns with our\ngoal of training a network that will later on work for arbitrarily sized inputs.\nWe found it important to take special care at the spatial boundaries of the tiles.\nWhile data could be extended by Dirichlet or periodic boundary conditions, it is important that\nthe data set boundaries after augmentation do not lie outside the original data set. We enforce this \nby choosing suitable translations after applying the other transformations. This ensures that all data sets\ncontain only valid content, and the network does not learn from potentially unphysical or unrepresentative \ndata near boundaries.\nWe also do not augment the time axis in the same way as the spatial axes. We found that the spatial transformations above applied to\nvelocity fields give enough variance in terms of temporal changes.\n\begin{figure}[tb]\n\t\centering \includegraphics[width=\linewidth]{greenwlt.png}\\ \n\t\caption{ We apply our algorithm to a horizontal jet of smoke in this example. The inset\n\t\tshows the coarse input (rendered with cubic up-sampling), and the result\n\t\tof our algorithm. The diffuse streaks caused by procedural turbulence in the input\n\t\t(esp. 
near the inflow) are turned into detailed wisps of smoke by our algorithm. }\n\t\label{fig:green}\n\end{figure}\n\begin{figure}[tb]\n\t\centering \includegraphics[width=\linewidth]{house.png}\\ \n\t\caption{Our algorithm generated a high-resolution volume around an obstacle\n\t\twith a final resolution of $1024 \times 720 \times 720$.\n\t\tThe inset shows the input volume. This scene is also shown in \myreffig{fig:teaser}\n\t\twith a different visualization. }\n\t\label{fig:house}\n\end{figure}\nAn example of the huge difference that data augmentation can make is shown in \myreffig{fig:dataAugComp}.\nHere, we compare two runs with the same amount of training data (160 frames),\none with and one without data augmentation.\nWhile training a GAN directly with this data produces blurry results,\nwith data augmentation the network converges to a final state with significantly sharper results. \nThe ability to successfully train networks with only a small amount\nof training data is what makes it possible to train networks for 3D+time data, as \nwe will demonstrate in \myrefsec{sec:results}.\n\section{Results and Applications}\n\label{sec:results}\nIn the following, we will apply our method discussed so far\nto different data sets, and explore different application settings. Among others, we will discuss related topics\nsuch as art direction, training convergence, and performance.~\footnote{We will make code and trained models available upon acceptance of our work.}\n\subsection{3D Results}\nWe have primarily used the 2D rising plume example in the previous sections \nto ensure the different variants can be compared easily. \nIn \myreffig{fig:plume3d}, we demonstrate that these results directly extend\nto 3D.
\revision{We apply our method to a three-dimensional plume with a resolution of $64^3$, \nwhich in this case was generated by down-sampling a $256^3$ simulation such that we can compare our result to\nthis reference solution.\nFor this input data, the $256^3$ output produced by our tempoGAN exhibits small-scale features that are at least\nas detailed as the ground-truth reference.} The temporal coherence is especially important\nin this setting, which is best seen in the accompanying video.\n\nWe also apply our trained 3D model to two different inputs with higher resolutions. In both cases,\nwe use a regular simulation augmented with additional turbulence to generate an interesting\nset of inputs for our method. A first scene with a resolution of $150\times100\times100$ is shown in \myreffig{fig:green}, \nwhere we generate a $600\times400\times400$\noutput with our method. The output closely resembles\nthe input volumes, but exhibits a large number of fine details.\n\revision{Note that our networks were only trained with down-sampled inputs,\nbut our models generalize well to regular simulation inputs without re-sampling, as illustrated by this example.}\n\nOur method also has no problems with obstacles in the flow, as shown in \myreffig{fig:house}.\nThis example has resolutions of $256 \times 180 \times 180$ and $1024 \times 720 \times 720$ for the input and output volumes. \nThe small-scale features closely adhere to the input flow around the obstacle.
Although the obstacle is\ncompletely filled with densities towards the end of the simulation, there are no leaking artifacts,\nas our method is applied independently to each input volume in the sequence.\nWhen showing the low-resolution input, we always employ cubic up-sampling, in order\nnot to make the input look unnecessarily bad.\n\n\subsection{\revision{Fine Tuning Results}}\n\label{sec:artcontrol}\n\revision{\nGANs have a reputation for being particularly hard to influence and control,\nand influencing the outcome of simulations is an important topic for applications in computer graphics. \nIn contrast to procedural methods,\nregular GAN models typically lack intuitive control knobs to influence the generated results.}\nWhile we primarily rely on traditional guiding techniques to control the low-resolution input,\nour method offers different ways to adjust the details produced by our tempoGAN algorithm.\n\begin{figure}[tb]\n\t\centering \begin{overpic}[width=0.99\linewidth]{summary.png}\n\t\t\put( 3,53){\small \color{white}{a)}}\n\t\t\put( 53,53){\small \color{white}{b)}}\n\t\t\put( 3,3){\small \color{white}{c)}}\n\t\t\put( 53,3){\small \color{white}{d)}}\n\t\end{overpic} \t\n\t\caption{The red\&green images on the left of each pair represent the modified velocity inputs, while the corresponding result is shown on the right. For reference, pair a) shows the unmodified input velocity, and the regular output of our algorithm. }\n\t\label{fig:differentvelocity}\n\end{figure}\n\nA first control knob for fine-tuning the results is to modify the data fields of the conditional\ninputs. As described in \myrefsec{sec:inputs}, our generator receives the velocity in addition to the density, \nand it internally builds tight relationships between the two. \nWe can use these entangled\ninputs to control the features produced in the outputs.
To achieve this, we modify the velocity components\npassed to the generator with various procedural functions. \myreffig{fig:differentvelocity} shows the original input, several modified velocity\nexamples, and the resulting density configurations. \n\revision{We have also experimented with noise fields instead \cite{mirza2014conditional},\n but found that the trained networks completely ignored these fields. Instead, the strongly correlated velocity fields\nnaturally provide a much more meaningful input for our networks, and as a consequence provide means for influencing the\nresults.}\n\nIn addition, \myreffig{fig:zerovelocity} demonstrates that we can effectively suppress the generation\n of small-scale details by setting all velocities to zero. Thus, the network learns a correlation between\n velocity magnitudes and the amount of generated features. This is another indicator that the network learns to extract \n meaningful relationships from the data, as we expect turbulence and small-scale details to primarily \n form in regions with large velocities.\nThree-dimensional data can similarly be controlled, as illustrated in \myreffig{fig:3ddifferentvelocity}. \n\nIn \myrefsec{sec:featureloss}, we discussed the influence of the $\lambda_{f}$ parameter on small-scale features.\nFor situations where we might not have additional channels such as the velocity above, we can use $\lambda_{f}$\nto globally let the network generate different features. However, as this only provides a uniform change that is encoded\nin the trained network, the resulting differences are more subtle than those from the velocity modifications above.
Examples\nof different 2D and 3D outputs can be found in \myreffig{fig:styleLLCompare} and \myreffig{fig:3ddifferentstyle}, respectively.\n\begin{figure}[t]\n \centering \n\t\begin{overpic}[width=0.99\linewidth]{styl_150th.png}\n\t\end{overpic} \n \caption{\n\t\revision{An illustration of how the entangled inputs of density and velocity can be used to fine-tune the results:\n\ton the left, the velocities were scaled up by a factor of 2, while on the right-hand side they were scaled by zero.\n\tThe network has learned a relationship between detail and velocities, leading\n\tto reduced details in regions where the velocity was set to zero. }}\n\label{fig:zerovelocity}\n\end{figure}\n\begin{figure}[t]\n \centering \begin{overpic}[width=0.49\linewidth]{test_08_3.jpg}\n \put( 3,3){\small \color{blue1}{a)}}\n \end{overpic} \n \begin{overpic}[width=0.49\linewidth]{test_08_1.png}\n \t\put( 3,3){\small \color{blue1}{b)}}\n \end{overpic} \n \centering \begin{overpic}[width=0.49\linewidth]{test_08_4.jpg}\n \t\put( 3,3){\small \color{blue1}{c)}}\n \end{overpic} \n \begin{overpic}[width=0.49\linewidth]{test_08_2.png}\n \t\put( 3,3){\small \color{blue1}{d)}}\n \end{overpic} \n \caption{a) is the result of tempoGAN with velocity set to zero.\n \tThe other\n three examples were generated with modified velocity inputs to achieve more stylized outputs.}\n \label{fig:3ddifferentvelocity}\n\end{figure}\n\begin{figure}[t]\n\t\centering \begin{overpic}[width=0.49\linewidth]{test_13_comp_4_small.png}\n\t\t\put( 3,3){\small \color{white}{a)}}\n\t\end{overpic} \n\t\centering \begin{overpic}[width=0.49\linewidth]{test_9_comp_4_small.png}\n\t\t\put( 3,3){\small \color{white}{b)}}\n\t\end{overpic} \n\t\caption{ A comparison of training runs with different feature loss weights: a) \n\t$\lambda _{f}^{1,...,4}=-10^{-5}$ , \n\tb) \n\t$\lambda _{f}^{1,4}=1\/3\cdot{10^{-4}}$, $\lambda _{f}^{2,3}=-1\/3\cdot10^{-4}$.
}\n\t\\label{fig:styleLLCompare}\n\\end{figure}\n\\begin{figure}[t]\n \\centering \\begin{overpic}[width=0.32\\linewidth]{test_13_dark_cut.png}\n \t \t\\put( 3,3){\\small \\color{white}{a)}}\n \\end{overpic} \n \\begin{overpic}[width=0.32\\linewidth]{test_9_dark_cut.png}\n \t \t\\put( 3,3){\\small \\color{white}{b)}}\n \\end{overpic} \n \\begin{overpic}[width=0.32\\linewidth]{refeh_dark_cut.png}\n \t \t\\put( 3,3){\\small \\color{white}{c)}}\n \\end{overpic} \n \\caption{ A comparison of training runs with different feature loss weights in 3D:\n a) with $\\lambda _{f}^{1,...,4}=-1\/3\\cdot10^{-6}$ ,\n b) with $\\lambda _{f}^{1}=1\/3\\cdot10^{-6}$, $\\lambda _{f}^{2,3,4}=-1\/3\\cdot10^{-6}$. The latter yields a sharpened result.\n Image c) shows the high resolution reference. }\n \\label{fig:3ddifferentstyle}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering \\begin{overpic}[width=0.49\\linewidth]{crop_v13.png} \n\t\t\\put( 3,3){\\small \\color{white}{a)}}\n\t\\end{overpic} \n\t\\begin{overpic}[width=0.49\\linewidth]{crop_wlt_2x_data_pos.png} \n\t\t\\put( 3,3){\\small \\color{white}{b)}}\n\t\\end{overpic} \n\t\\caption{ Our regular model a) and one trained with wavelet turbulence data\n\t\tb). In contrast to the model trained with real simulation data, the wavelet\n\t\tturbulence model produces flat regions with sharper swirls, mimicking\n\t\tthe input data.\n\t}\n\t\\label{fig:waveletTurbulence}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering \\begin{overpic}[width=0.49\\linewidth]{plume_4x_cut.png} \n\t\t\\put( 3,3){\\small \\color{white}{a)}}\n\t\\end{overpic} \n\t\\begin{overpic}[width=0.49\\linewidth]{plume_8x_cut.png} \n\t\t\\put( 3,3){\\small \\color{white}{b)}}\n\t\\end{overpic} \n\t\\caption{a) is the network output after a single application. 
\n\tb) is the network recursively applied to a) with a scaling factor of 2, resulting in a total increase of $8\\times$.}\n\t\\label{fig:recursive}\n\\end{figure}\n\\subsection{Additional Variants}\n\nIn order to verify that our network works not only with two- or three-dimensional\ndata from a Navier-Stokes solver, we generated a more synthetic data set by applying\nstrong wavelet turbulence to a $4\\times$ up-sampled input flow. We then trained our network\nwith down-sampled inputs, i.e., giving it the task of learning the output\nof the wavelet turbulence algorithm. Note that a key difference here is that wavelet\nturbulence normally requires a full high-resolution advection over time, while our method\ninfers high-resolution data sets purely based on low-resolution data from a single frame.\n\\begin{figure*}[t]\n \\centering\n\t\t\\includegraphics[width=0.99\\linewidth, height = 3.3cm]{graphs.pdf}\n \\caption{ Several discriminator loss functions over the course of the 40k training iterations.\n a) $D_{s}$ (spatial discriminator) loss is shown in green without $D_{t}$, and orange with $D_{t}$.\n b) Temporal discriminator loss in blue with only $D_{t}$, and in red for tempoGAN (i.e., with $D_{s}$, and feature loss).\n c) Spatial discriminator loss is shown in green with $\\mathcal{L}_{f}$, and in dark blue without.\n For each graph, the dark lines show smoothed curves. The full data is shown in a lighter color in the background.\n\t}\n \\label{fig:graph-all}\n\\end{figure*}\n\nOur network successfully learns to generate structures similar \nto the wavelet turbulence outputs, shown in \\myreffig{fig:waveletTurbulence}. However, this data set \nturned out to be more difficult to learn than the original fluid simulation inputs.\nThe training runs required twice as much training data as the regular simulation runs,\nand we used a feature loss of $\\lambda _{f}^{1,...,4}=10^{-5}$. 
\nWe assume that these more difficult training conditions are caused by the more \nchaotic nature of the procedural turbulence,\nand the less meaningful correlations between density and velocity inputs. Note that\ndespite using more wavelet turbulence input data, it is still a comparatively small data set.\n\nWe were additionally curious how well our network works when it is applied\nto a generated output, i.e., a recursive application. The result can be\nfound in \\myreffig{fig:recursive}, where we applied our network \nto its own output for an additional $2\\times$ upsampling. Thus, in total this led to an $8\\times$ increase in resolution.\nWhile the output is plausible, and clearly \ncontains even finer features such as thin sheets, there is a tendency\nto amplify features generated during the first application.\n\\subsection{Training Progress}\n\nWith the training settings given in \\myrefapp{app:data}, our training\nruns typically converged to stable solutions of around $1\/2$ for the discriminator outputs after sigmoid activation. \nWhile this by itself does not guarantee that a desirable solution was found, it at least indicates convergence\ntowards one of the available local minima.\n\n\\revision{However, it is interesting to see how the discriminator loss changes \nin the presence of the temporal discriminator.\n\\myreffig{fig:graph-all} shows several graphs of discriminator losses over the course of a full training run.\nNote that we show the final loss outputs from \\myrefeq{eq:disloss} and \\myrefeq{eq:tempod} here.\nA large value means the discriminator does ``worse'', i.e.,\nit has more difficulty distinguishing real samples from the generated ones. \nCorrespondingly, lower values mean it can separate them more successfully.\nIn \\myreffig{fig:graph-all}a) it is visible that the spatial discriminator loss decreases when \nthe temporal discriminator is introduced. 
\nHere the graph only shows the spatial discriminator loss, and the discriminator itself\nis unchanged when the second discriminator is introduced. The training run corresponding to the\ngreen line uses only a spatial discriminator, while the orange line corresponds to training with both spatial and temporal discriminators.\nOur interpretation\nof the lower loss for the spatial discriminator network is that the existence of a temporal discriminator in the optimization \nprevents the generator from using\nthe part of the solution space with detailed, but flickering outputs. Hence, the generator is driven to find a solution from the temporally\ncoherent ones, and as a consequence has a harder time, which in turn makes the job easier for the spatial discriminator. This manifests\nitself as a lower loss for the spatial discriminator, i.e., the lower orange curve in \\myreffig{fig:graph-all}a).}\n\nConversely, the existence of a spatial discriminator does not noticeably influence\nthe temporal discriminator, as shown in \\myreffig{fig:graph-all}b). This is also intuitive, as the spatial discriminator does not influence temporal changes. \n\\revision{We found that a generator trained only with $D_t$ typically produces fewer details\nthan a generator trained with both. Taken together, our tests indicate that the two discriminators \nsuccessfully influence different aspects of the solution space,\nas intended.}\nLastly, \\myreffig{fig:graph-all}c) shows\nthat activating the negative feature loss from \\myrefsec{sec:featureloss} makes the task for\nthe generator slightly harder, resulting in a lowered spatial discriminator loss. \n\\subsection{Performance}\n\nTraining our two- and three-dimensional models\nis relatively expensive. Our full 2D runs typically\ntake around 14 hours to complete (1 GPU), while the 3D runs\ntook ca. 9 days using two GPUs. However, in practice, the state of the\nmodel after a quarter of this time is already indicative of\nthe final performance. 
The remainder of the time is typically\nspent fine-tuning the network.\n\nWhen using our trained network to generate high-resolution\noutputs in 3D, the limited memory of current GPUs poses a constraint\non the volumes that can be processed at once, as the intermediate\nlayers with their feature maps can take up significant amounts of memory. \nHowever, this does not pose a problem for generating larger final\nvolumes, as we can subdivide the input volumes, and process them piece by piece.\nWe generate tiles with a size of $136^3$ on one GPU, with a corresponding input of size $34^3$. \nOur 8 convolutional layers with a receptive field of 16 cells mean that up to four \ncells of an input could be influenced by a boundary. In practice, we found 3 input cells\nto be enough in terms of overlap. Generating a single $136^3$ output took ca. 2.2 seconds\non average. \nThus, generating a $256^3$ volume from a $64^3$ input took 17.9s on average.\n\\revision{Comparing the performance of our model with high resolution simulations\nis inherently difficult, due to the substantially different implementations and hardware platforms (CPU vs. GPU).\nHowever, for the example of \\myreffig{fig:green} we estimate that a fluid simulation at the full resolution would\ntake ca. 31.5 minutes per frame of animation on average, while the evaluation of all volume\ntiles with our evaluation pipeline took ca. 3.9 minutes.} \n\n\\revision{\nThe cost for the trained model scales linearly with the number of cells in the volume,\nand in contrast to all previous methods for increasing the resolution of flow simulations,\nour method does not require any additional tracking information. 
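The tile-based evaluation described above can be sketched in a few lines. This is our own illustrative reimplementation, not the paper's code; the nearest-neighbor `model` used in the usage example merely stands in for the trained generator:

```python
import numpy as np

def upscale_tiled(volume, model, tile=34, overlap=3, factor=4):
    # Subdivide a cubic low-res volume into overlapping tiles, evaluate each
    # tile with `model` (a placeholder for the trained generator, mapping an
    # (n,n,n) array to an (n*factor,)*3 array), crop away the up-sampled
    # overlap region, and stitch the tile cores back together.
    n = volume.shape[0]
    core = tile - 2 * overlap            # low-res cells each tile contributes
    assert n % core == 0, "sketch assumes the volume splits evenly"
    out = np.zeros((n * factor,) * 3, dtype=volume.dtype)
    padded = np.pad(volume, overlap, mode="edge")
    c = overlap * factor                 # crop width in high-res cells
    for i in range(0, n, core):
        for j in range(0, n, core):
            for k in range(0, n, core):
                tin = padded[i:i + tile, j:j + tile, k:k + tile]
                tout = model(tin)        # e.g. a 136^3 tile from a 34^3 input
                out[i * factor:(i + core) * factor,
                    j * factor:(j + core) * factor,
                    k * factor:(k + core) * factor] = tout[c:-c, c:-c, c:-c]
    return out
```

With `tile=34`, `overlap=3` and `factor=4`, each tile contributes a $28^3$ core of low-resolution cells, matching the $34^3$ inputs and $136^3$ outputs quoted above.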
It is also fully independent for all frames.\nThus, our method could ideally be applied on the fly before rendering a volume, after which the \nhigh resolution data could be discarded.\nAdditionally, due to GPU memory restrictions we currently evaluate our model in \nvolumetric tiles with 3 cells of overlap for the input. This overlap can potentially be reduced further,\nand become unnecessary when enough memory is available to process the full input volume at once.\n}\n\n\\subsection{Limitations and Discussion}\n\\label{sec:limitations}\n\nOne limitation of our approach is that the network encodes a fixed\nresolution difference for the generated details. While the initial up-sampling\nlayers can be stripped, and the network could thus be applied to inputs of any size,\nit will be interesting to explore different up-sampling factors beyond the factor\nof four that we have used throughout.\n\\revision{With our current implementation, our method can also be slower than, e.g., calculating\nthe advection for a high resolution grid. However, a high-res advection would typically not lead to\ndifferent dynamics than those contained in the input flow, and would require a sequential solve\nfor the whole animation sequence.}\nOur networks have so far also focused\non buoyant smoke clouds. While obstacle interactions worked in our tests,\nwe assume that networks trained for larger data sets and with other types of interactions\ncould yield even better results.\n\nOur three-dimensional networks needed a long time to train, circa nine days\nfor our final model. Luckily, this is a one-time cost, and the network can be flexibly reused\nafterwards. However, if the synthesized small-scale features need to be fine-tuned,\nwhich we luckily did not find necessary for our work, the long runtimes could make this\na difficult process. 
\nThe feature loss weights are clearly also data dependent, e.g., we used\ndifferent settings for simulation and wavelet turbulence data.\nHere, it will be an interesting direction for future work to \ngive the network additional inputs for fine-tuning the results beyond the velocity\nmodifications which we discussed in \\myrefsec{sec:artcontrol}.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have realized a first conditional GAN approach for four-di\\-men\\-sional \ndata sets, and\nwe have demonstrated that it is possible to train generators that preserve temporal coherence\nusing our novel time discriminator.\nThe network architecture of this temporal discriminator, which ensures\nthat the generator receives gradient information even for complex transport processes, makes it \npossible to robustly train networks for temporal evolutions. We have shown\nthat this discriminator improves the generation of stable details as well as the learning process itself. \nAt the same time, our fully convolutional networks can be applied to inputs of arbitrary size, and \nour approach provides basic means for art direction of the generated outputs.\nWe also found it very promising to see that our CNNs are able to benefit from coherent, physical \ninformation even in complex 3D settings, which led to reduced network sizes.\n\nOverall, we believe that our contributions yield a robust and very general method \nfor generative models of physics problems, and for super-resolution flows in particular.\nIt will be highly interesting as future work to apply our tempoGAN to other physical problem settings,\nor even to non-physical data such as video streams.\n\n\\begin{acks}\nThis work was funded by the ERC Starting Grant {\\em realFlow} (StG-2015-637014). 
We would like to thank Wei He for helping with making the videos, and all members of the graphics labs of TUM, IST Austria and ETH Zurich for the thorough discussions.\n\\end{acks}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\\label{intro}\nThree interacting quantum systems may also be viewed as a bipartite system \ncoupled to a third, ``external\" system. The Raman coupled model, introduced some years ago \n\\cite{eber90}, constitutes an example of a simple, analytically solvable model \ninvolving three quantum subsystems: a three-level atom coupled to two \nmodes of the quantized cavity field. In the limit in which the excited atomic \nstate is far off-resonance, a simpler, effective two-level \nHamiltonian may be derived, either by an adiabatic elimination of the upper atomic \nlevel \\cite{eber90} or via a unitary transformation \\cite{bose95}. In such a\nprocedure, energy (Stark) shifts arise and care must be taken, given that they may \nnot always be neglected \\cite{knig91}. In fact, the presence of the shifts\nnormally leads to a very different dynamics, e.g., from a non-periodic to a periodic \natomic inversion. Several features of the dynamics of such a model have already been \ninvestigated \\cite{eber90}; in particular, quantum entanglement and possible applications \nto quantum computation \\cite{agar04,garr04}. Here we are going to discuss a different aspect \nof that system: the influence of one of the fields (a cavity mode) on \nthe dynamics of the atom as well as on the bipartite entanglement between the atom and the \nother cavity mode. We are going to consider different field preparations, such as coherent and \nthermal states, and we will also compare our results to the case in which one of the modes is not quantized, \nbut treated as a classical field instead. For simplicity, we do not take into account cavity losses or \natomic spontaneous decay. Our paper is organized as follows: in Sec. 
II we present the model and solution. \nIn Sec. III we discuss dynamical features with different preparations. In Sec. IV we present the \nevolution of bipartite entanglement, and in Sec. V we summarize our conclusions.\n\n\\section{The model and solution}\n\\subsection{Two quantized modes}\n\nWe consider a three-level atom (levels 1,2,3) in interaction with two modes (mode 1, of frequency \n$\\omega_1$ and mode 2, of frequency $\\omega_2$) of the quantized field \nin a lambda configuration. Direct transitions between the lower levels 1 and 2 are forbidden. \nIf the upper level (level 3) is highly detuned from the fields (detuning \n$\\Delta$), the effective Hamiltonian may be written as \\cite{bose95}\n\\begin{equation}\nH_{eff} = \\sigma_{11}E_{1}+\\sigma_{22}E_{2}+\\hbar\\omega_{1}a_{1}^{\\dagger}a_{1}+\\hbar\\omega_{2}a_{2}^{\\dagger}a_{2}\n -\\hbar\\frac{g_{1}^{2}}{\\Delta}\\sigma_{11}a_{1}^{\\dagger}a_{1}-\\hbar\\frac{g_{2}^{2}}{\\Delta}\\sigma_{22}a_{2}^{\\dagger}a_{2}\\label{heffective}\n -\\hbar\\frac{g_{1}g_{2}}{\\Delta}\\left(a_{1}^{\\dagger}a_{2}\\sigma_{12}+a_{2}^{\\dagger}a_{1}\\sigma_{21}\\right),\n\\end{equation}\nwhere $\\sigma_{12}$ and $\\sigma_{21}$ are the transition operators between levels 1 and 2, \n$a_1(a_1^\\dagger)$ is the annihilation (creation) operator of mode 1, \n$a_2(a_2^\\dagger)$ is the annihilation (creation) operator of mode 2, and $g_1(g_2)$ are \nthe couplings of the transitions $1-3$ and $2-3$, respectively. The effective\nHamiltonian is valid in the limit $g_1\/\\Delta(g_2\/\\Delta)\\ll 1$, but under certain conditions, \nsimilar exact Hamiltonians may also be obtained \\cite{wu96}. The Stark shift terms \n$-\\hbar \\frac{g_1^2}{\\Delta}\\sigma_{11}a_1^\\dagger a_1$ and $-\\hbar \\frac{g_2^2}{\\Delta}\\sigma_{22}a_2^\\dagger a_2$ \nare usually neglected \\cite{eber90,garr04}, but one should be very careful, given that their inclusion results in a \nRabi frequency depending linearly on the photon numbers $n_1$ and $n_2$. 
As a consequence of the shifts, \nthe dynamics of the Raman coupled model becomes basically periodic, with Rabi frequency\n\\begin{equation}\n\\Omega_{n_{1},n_{2}}=\\frac{\\left[g_{1}^{2}n_{1}+g_{2}^{2}\\left(n_{2}+1\\right)\\right]}{2\\Delta}\\label{rabifreq}\n\\end{equation} \nin contrast to the Rabi frequency, which is proportional to $\\sqrt{n_1(n_2+1)}$, if the Stark shifts are \nneglected \\cite{eber90}. \nAssuming an initial density operator of the product form,\n\\begin{eqnarray}\n\\rho\\left(0\\right)=\\sum_{n_{1}n_{2}m_{1}m_{2}}^{\\infty}\\rho_{n_{1}n_{2}m_{1}m_{2}}\\left|1;n_{1},n_{2}\\right\\rangle \\left\\langle 1;m_{1},m_{2}\\right|,\n\\label{initialdo}\n\\end{eqnarray}\ni.e., with the atom initially prepared in level 1, and the fields in generic states characterized by the coefficients \n$\\rho_{n_{1}n_{2} m_{1}m_{2}}$, the full time-dependent density operator for the tripartite system may be written as\n\n\\begin{eqnarray}\n\\rho\\left(t\\right) && = \\sum_{n_{1}n_{2}m_{1}m_{2}}^{\\infty}\\{ A_{n_{1}n_{2}m_{1}m_{2}}\\left|1;n_{1},n_{2}\\right\\rangle \\left\\langle 1;m_{1},m_{2}\\right|\n + B_{n_{1}n_{2}m_{1}m_{2}}\\left|2;n_{1}-1,n_{2}+1\\right\\rangle \\left\\langle 2;m_{1}-1,m_{2}+1\\right|\\nonumber \\\\\n\\nonumber\\\\&& + C_{n_{1}n_{2}m_{1}m_{2}}\\left|1;n_{1},n_{2}\\right\\rangle \\left\\langle 2;m_{1}-1,m_{2}+1\\right|+h.c.\\},\\nonumber \n\\label{densityop}\\\\\n\\end{eqnarray}\nwith coefficients\n\\begin{eqnarray}\nA_{n_{1}n_{2}m_{1}m_{2}}&=&\\rho_{n_{1}n_{2}m_{1}m_{2}}e^{i\\nu_{n_{1}n_{2}m_{1}m_{2}}t}k_{1,n_{1},n_{2}}\\, k_{1,m_{1},m_{2}}^{*},\\nonumber \\\\\nB_{n_{1}n_{2}m_{1}m_{2}}&=&\\rho_{n_{1}n_{2}m_{1}m_{2}}e^{i\\nu_{n_{1}n_{2}m_{1}m_{2}}t}k_{2,n_{1},n_{2}}\\, k_{2,m_{1},m_{2}}^{*},\\nonumber \\\\ \nC_{n_{1}n_{2}m_{1}m_{2}}&=&\\rho_{n_{1}n_{2}m_{1}m_{2}}e^{i\\nu_{n_{1}n_{2}m_{1}m_{2}}t}k_{1,n_{1},n_{2}}\\, k_{2,m_{1},m_{2}}^{*},\\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\nu_{n_{1}n_{2}m_{1}m_{2}} = 
\\left(m_{1}-n_{1}\\right)\\left(\\omega_{1}-\\frac{g_{1}^{2}}{2\\Delta}\\right)+\n\\left(m_{2}-n_{2}\\right)\\left(\\omega_{2}-\\frac{g_{2}^{2}}{2\\Delta}\\right),\\nonumber\n\\end{equation}\nand\n\n\\begin{eqnarray}\nk_{1,n_{1},n_{2}}\\left(t\\right) & = & \\cos\\left(\\Omega_{n_{1},n_{2}}t\\right) \n+ i\\left[\\frac{n_{1}-r^{2}\\left(n_{2}+1\\right)}{n_{1}+r^{2}\\left(n_{2}+1\\right)}\\right]\\sin\\left(\\Omega_{n_{1},n_{2}}\\,t\\right),\\nonumber \\\\\nk_{2,n_{1},n_{2}}\\left(t\\right) & = & \\frac{2ri\\sqrt{n_{1}\\left(n_{2}+1\\right)}}{\\left[n_{1}+r^{2}\\left(n_{2}+1\\right)\\right]}\\sin\\left(\\Omega_{n_{1},n_{2}}\\,t\\right),\\nonumber \\label{coeff}\n\\end{eqnarray}\nwhere the Rabi frequency $\\Omega_{n_{1},n_{2}}$ is given in Eq. (\\ref{rabifreq}), and we have defined $r\\equiv g_2\/g_1$.\n\n\\subsubsection{Atomic dynamics with different initial field preparations}\n\nThe atomic response to the fields may be characterized by the atomic population inversion as a function of time, or\n\\begin{equation}\n\\label{atomicinv}\nW\\left(t\\right) = 2\\, \\mbox{Tr} \\left[ \\rho(t) |2\\rangle\\langle 2| \\right] - 1 \n = 8\\sum_{n_{1},n_{2}}^{\\infty}p_{n_1} p_{n_2}\n\\frac{r^{2}n_{1}\\left(n_{2}+1\\right)}{\\left[n_{1}+r^{2}\\left(n_{2}+1\\right)\\right]^{2}}\\sin^{2}\\left(\\Omega_{n_{1},n_{2}}t\\right) - 1,\\nonumber\n\\end{equation}\nwhere $p_{n_1} p_{n_2}= \\rho_{n_{1}n_{2}\\,;\\, n_{1}n_{2}}$ is the product of the photon number \ndistributions of the initial fields. The atomic inversion has\npeculiar features depending on the field statistics. 
For instance, in another well-known model of optical \nresonance having a two-level atom coupled to a single-mode field, the Jaynes-Cummings model (JCM), a field \ninitially in a Fock state leads to pure oscillations of the atomic inversion, while an initial coherent state \ncauses collapses and revivals \\cite{eber80} of the Rabi oscillations.\nOn the other hand, in the JCM, for a field initially in a thermal state, the structure of collapses and revivals \nbecomes highly disorganized \\cite{radm82} and looks random. In the Raman coupled model, in which an effective \ntwo-level atom is coupled to two fields, rather than one, we of course expect a different behaviour. Perhaps the most \nstriking difference from the JCM is the periodicity of the atomic inversion; moreover, the ``revival'' \ntimes do not depend on the intensity of the fields, and the field statistics has only\na small effect on the pattern of collapses and revivals \\cite{knig91}. \nIn order to illustrate this, in Fig. (\\ref{figure1}) we have plots of the atomic inversion as a \nfunction of the scaled time $\\tau = g_1 t$ for modes 1 and 2 prepared in coherent states: \n$p_{n_i} = e^{-\\bar{n}_{i}}\\bar{n}_{i}^{n_{i}}\/n_{i}!$ [Fig. \\ref{figure1}(a)] and for mode 1 prepared in a coherent state and mode 2 prepared in a \nthermal state: $p_{n_1} = e^{-\\bar{n}_{1}}\\bar{n}_{1}^{n_{1}}\/n_{1}!$ and $p_{n_2} = \\bar{n}_{2}^{n_{2}}\/\\left(\\bar{n}_{2}+1\\right)^{n_{2}+1}$\n[Fig. \\ref{figure1}(b)]. \nWe may note the periodic and well-defined revivals occurring at the same times in both cases. Curiously, even in the \ncase of a thermal (chaotic) initial preparation, revivals are well defined, although\ntheir amplitude is slightly suppressed, probably an effect of the broader thermal distribution. 
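The intensity-independence of the revival times is easy to verify numerically. The short script below is our own cross-check, not part of the paper: with $g_1=g_2$, all Rabi frequencies in Eq. (\ref{rabifreq}) are integer multiples of $g_1^2/2\Delta$, so the inversion is exactly periodic in the scaled variable $x=g_1^2 t/2\Delta$ with period $\pi$, whatever the values of $\bar{n}_1$ and $\bar{n}_2$:

```python
from math import exp, factorial, sin

def inversion(x, nbar1, nbar2, r=1.0, nmax=40):
    # Atomic inversion W for both modes in coherent states, summing the
    # double series of Eq. (atomicinv) at the scaled time
    # x = g_1^2 t / (2 Delta); nmax truncates the Poissonians (our choice).
    p1 = [exp(-nbar1) * nbar1 ** n / factorial(n) for n in range(nmax)]
    p2 = [exp(-nbar2) * nbar2 ** n / factorial(n) for n in range(nmax)]
    w = -1.0
    for n1 in range(nmax):
        for n2 in range(nmax):
            amp = r ** 2 * n1 * (n2 + 1) / (n1 + r ** 2 * (n2 + 1)) ** 2
            w += 8.0 * p1[n1] * p2[n2] * amp \
                     * sin((n1 + r ** 2 * (n2 + 1)) * x) ** 2
    return w
```

Replacing the Poissonian weights `p2` by a thermal distribution leaves the revival times untouched, in line with the discussion above.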
We also note that \nif one of the fields is prepared in a Fock state and the other mode in a coherent state, for instance, collapses \nand revivals will still occur, due to the presence of the ``in phase\" and ``out of phase\" \nterms in Eq. (\\ref{atomicinv}). In Fig. (\\ref{figure2}) we have a plot of the atomic inversion for mode 1 prepared in an \n$N$-photon Fock state and mode 2 prepared in a coherent state, and we note again the pattern of periodic and regular \ncollapses and revivals. In what follows, this particular preparation (Fock-coherent) is going to be compared with a situation \ntermed ``partially classical\", having one of the fields, mode 2, treated classically rather than prepared in a coherent state \n(the fully quantized case).\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics{fig1.pdf}\n\\end{center}\n\\caption{Atomic inversion as a function of the scaled time $\\tau = g_1 t$ for a) modes 1 and 2 \nprepared in coherent states and b) mode 1 prepared in a \ncoherent state and mode 2 prepared in a thermal state. \nIn both cases $\\overline{n}_1 = 10.5$ and $\\overline{n}_2 = 10.1$.\nWe have considered $g_1 = g_2$ and $r = 1.012$.}\n\\label{figure1} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics{fig2.pdf}\n\\end{center}\n\\caption{Atomic inversion for mode 1 prepared in a Fock state ($N = 5$) and mode 2 prepared in a coherent state ($\\overline{n} = 5$), with $r = 1.023$.}\n\\label{figure2} \n\\end{figure}\n\n\\subsection{One quantized mode}\n\nNow we consider mode 2 as a classical field of amplitude $\\Omega_L$ (the ``partially classical\" case), keeping mode 1 quantized. 
\nThe effective Hamiltonian in the far-off resonance limit for the excited state is, \nin this case \\cite{mato01},\n\\begin{equation}\nH_{eff}^{\\prime}=\\frac{g^{2}a^{\\dagger}a}{\\Delta}\\sigma_{11}+\\frac{g^{2}r^{\\prime}{}^2}{\\Delta}\\sigma_{22}+\\lambda\\left(\\sigma_{12}a^{\\dagger}\n+\\sigma_{21}a\\right),\\label{semclasshamil}\n\\end{equation}\nwhere $g$ is the quantum field\/atom coupling constant, $\\lambda=g\\left|\\Omega_{L}^{*}\\right|\/\\Delta$ \nis the effective coupling constant and $r^{\\prime} = \\left|\\Omega_{L}^{*}\\right|\/g$. \nNote that apart from some shifts, the effective Hamiltonian is very similar to a JCM Hamiltonian. \nThis means that the atomic response to a field (mode 1) prepared in a Fock state is going to be in the form of pure Rabi oscillations, \nin contrast to the case where mode 2 is a quantum field prepared in the ``quasi-classical\" coherent state, which shows collapses and \nrevivals. This is another example showing that regarding the atomic response to a field, an intense quantum coherent state is not \nequivalent to a classical field.\n\n\\section{Entanglement}\n\nWe would now like to discuss the entanglement between the atom and one mode of the field \n(mode 1, for instance) considering the other mode as an ``external\" coupled\nsub-system. We then trace over the variables of mode 2, initially prepared in a coherent state, \nand examine the degree of entanglement between the atom and the remaining \nfield (mode 1) initially prepared, for the sake of simplicity, in a Fock state. In order to quantify \nentanglement, we use the negativity. As we have done in the case of \nthe atomic inversion, we will compare the results with the case in which mode 2 is considered to be a \nclassical field. 
For an initial state having the atom in level 1,\nmode 1 in a Fock state $|N\\rangle$ and mode 2 in a coherent state $|\\alpha\\rangle$, i.e., a product state \n$\\left|\\psi\\left(0\\right)\\right\\rangle =\\left|1\\right\\rangle \\otimes\\left|N\\right\\rangle \\otimes\\left|\\alpha\\right\\rangle$, \nthe joint atom-mode 1 density operator, obtained after tracing over mode 2 from Eq. (\\ref{densityop}), \n$\\tilde{\\rho}(t) = Tr_2[\\rho(t)]$ is given by\n\\begin{eqnarray}\n&&\\tilde{\\rho}\\left(t\\right) = \\sum_{n=0}^{\\infty}\\left|c_{n}\\right|^{2}k_{1,N,n}\\, k_{1,N,n}^{*}\\left|1;N\\right\\rangle \\left\\langle 1;N\\right| \n+\\sum_{n=0}^{\\infty}\\left|c_{n}\\right|^{2}\\, k_{2,N,n}\\, k_{2,N,n}^{*}\\left|2;N-1\\right\\rangle \\left\\langle 2;N-1\\right|\\nonumber \\\\\n&&+F(t)\\sum_{n=0}^{\\infty}c_{n+1}c_{n}^{*}\\, k_{1,N,n+1}\\, k_{2,N,n}^{*}\\left|1;N\\right\\rangle \\left\\langle 2;N-1\\right|\n+ h.c.,\\nonumber \\label{reduceddens}\n\\end{eqnarray}\nwith $F(t) = e^{-i\\left(\\omega_{2}-\\frac{g_{2}^{2}}{2\\Delta}\\right)t}$ and $c_{n}=e^{-\\bar{n}\/2}\\alpha^{n}\/\\sqrt{n!}$.\n\nWe may then calculate the negativity as a function of time\n\n\\begin{eqnarray}\n{\\cal {N}}\\left(t\\right) =2r\\sqrt{\\bar{n}N}\\left\\{ \\Bigg[\\sum_{n=0}^{\\infty}e^{-\\bar{n}}\\frac{\\bar{n}^{n}}{n!}\\left[\\frac{N-r^{2}\\left(n+2\\right)}{N+r^{2}\\left(n+2\\right)}\\right]\\right.\n\\frac{\\sin\\left(\\Omega_{N,n}t\\right)\\sin\\left(\\Omega_{N,n+1}t\\right)}{\\left[N+r^{2}\\left(n+1\\right)\\right]}\\Bigg]^{2}\\nonumber\\\\\n\\left.+\\left[\\sum_{n=0}^{\\infty}e^{-\\bar{n}}\\frac{\\bar{n}^{n}}{n!}\\:\\frac{\\sin\\left(\\Omega_{N,n}t\\right)\\cos\\left(\\Omega_{N,n+1}t\\right)}{\\left[N+r^{2}\\left(n+1\\right)\\right]}\\right]^{2}\\right\\} ^{1\/2},\n\\end{eqnarray}\nwhere $\\bar{n} = |\\alpha|^2$ and $\\Omega_{N,n}=\\left[g_{1}^{2}N+g_{2}^{2}\\left(n+1\\right)\\right]\/2\\Delta$. 
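For reference, the double series above is straightforward to evaluate numerically. The following snippet is our own transcription of it, with a truncation `nmax` of the coherent-state sum (our choice) and the scaled time $x=g_1^2 t/2\Delta$, so that $\Omega_{N,n}t=\left[N+r^{2}(n+1)\right]x$; it reproduces, for example, ${\cal N}(0)=0$ for the initial product state:

```python
from math import exp, factorial, sin, cos, sqrt

def negativity(x, N, nbar, r, nmax=40):
    # Atom / mode-1 negativity for mode 1 in the Fock state |N> and mode 2
    # in a coherent state of mean photon number nbar, following the
    # closed-form expression in the text; x = g_1^2 t / (2 Delta).
    p = [exp(-nbar) * nbar ** n / factorial(n) for n in range(nmax)]
    s1 = s2 = 0.0
    for n in range(nmax):
        on = (N + r ** 2 * (n + 1)) * x        # Omega_{N,n}   t
        on1 = (N + r ** 2 * (n + 2)) * x       # Omega_{N,n+1} t
        den = N + r ** 2 * (n + 1)
        s1 += p[n] * (N - r ** 2 * (n + 2)) / (N + r ** 2 * (n + 2)) \
                   * sin(on) * sin(on1) / den
        s2 += p[n] * sin(on) * cos(on1) / den
    return 2.0 * r * sqrt(nbar * N) * sqrt(s1 ** 2 + s2 ** 2)
```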
\nIt is also worthwhile to compare the negativity with the linear entropy relative to the atomic state (obtained after tracing over mode 1, $\\rho_A(t) = \nTr_1[\\tilde{\\rho}(t)]$), which is defined as $\\zeta_A(t) = 1 - Tr[\\rho_A^2(t)]$ or\n\n\\begin{equation}\n\\zeta_{A}\\left(t\\right) =\n2 \\sum_{n=0}^{\\infty}e^{-\\bar{n}}\\:\\frac{\\bar{n}^{n}}{n!}\\frac{4r^{2}N\\left(n+1\\right)}{\\left[N+r^{2}\\left(n+1\\right)\\right]^{2}}\\sin^{2}\\left(\\Omega_{N,n}t\\right) \n-2\\left\\{ \\sum_{n=0}^{\\infty}e^{-\\bar{n}}\\:\\frac{\\bar{n}^{n}}{n!}\\frac{4r^{2}N\\left(n+1\\right)}\n{\\left[N+r^{2}\\left(n+1\\right)\\right]^{2}}\\sin^{2}\\left(\\Omega_{N,n}t\\right)\\right\\} ^{2}.\\label{linearentro}\n\\end{equation}\n\nThe linear entropy may be used as a measure of the quantum state purity, as it is zero for a pure \nstate and $> 0$ for a mixed state. In Fig. (\\ref{figure3}) we have plotted the negativity (\\ref{figure3}a) and \nthe linear entropy (\\ref{figure3}b) as a function of the scaled time $g_1 t$. \nWe note that entanglement is maximum approximately in the middle of the collapse region, and the atom-mode 1 \nstate tends to become separable at the revival times themselves (see Fig. \\ref{figure2}). At the same time one may notice the \nsimilarities and differences between the negativity and the linear entropy; the maximum of entanglement coincides\nwith the maximum of mixedness, and separability approximately occurs at times in which the atom is close to a pure state. \nHowever, we should point out that there are time intervals of maximum mixedness which do not correspond to maximum entanglement; \nthis explicitly shows us the inadequacy of the linear entropy as a \nmeasure of entanglement (as expected), given that the state under consideration (atom-mode 1) is generally not a pure state. \nAgain, the evolution of entanglement in the ``partially classical\" case is going to be very different. 
We would like to remark \nthat in this case a bipartite quantum system is under the action of an external classical field, instead of a \ntripartite system from which we have obtained a bipartite system by tracing over one of the subsystems. \nHaving mode 1 prepared in an $N$-photon Fock state (mode 2 being a classical field), we have the following \nexpressions for the atomic inversion (with the atom initially in state 1),\n\\begin{eqnarray}\nW^{\\prime}\\left(t\\right)=\\frac{8r^{\\prime2}N}{\\left(N+r^{\\prime2}\\right)^{2}}\\sin^{2}\\left[\\frac{\\left(N+r^{\\prime2}\\right)}{2r^{\\prime}}\\tau^{\\prime}\\right]-1, \\label{invsemiclass}\n\\end{eqnarray}\nand the negativity,\n\\begin{equation}\n{\\cal {N^{\\prime}}}\\left(t\\right) = \\frac{2r^{\\prime}\\sqrt{N}}{\\left(N+r^{\\prime2}\\right)}\n\\left\\{\\sin^{2}\\left[\\frac{\\left(N+r^{\\prime2}\\right)}{2r^{\\prime}}\\tau^{\\prime}\\right]\\right.\n\\left.-\\frac{4r^{\\prime2}N}{\\left(N+r^{\\prime2}\\right)^{2}}\\sin^{4}\\left[\\frac{\\left(N+r^{\\prime2}\\right)}{2r^{\\prime}}\\tau^{\\prime}\\right]\\right\\}^{1\/2},\n\\label{negatsemiclass} \n\\end{equation}\nwhere the scaled time is defined as $\\tau^{\\prime} = g\\left|\\Omega_{L}^{*}\\right|t\/\\Delta$ and $r^{\\prime} = \\left|\\Omega_{L}^{*}\\right|\/g$.\nCompared to the fully quantized situation, there are no collapses and revivals, as the atomic inversion \nand the negativity are now periodic functions of time. This is illustrated in Fig. (\\ref{figure4}), \nwhere we have plotted the atomic inversion in Eq. (\\ref{invsemiclass}) and the negativity in Eq. (\\ref{negatsemiclass}) \nas a function of the scaled time $\\tau^{\\prime}$. For a convenient choice of the parameter $r^{\\prime}$, the atomic population \ncompletely inverts and returns to its initial state at times in which the atom and mode 1 are in a separable state, \na very different behaviour from what happens if mode 2 is a quantized field. 
\n\n\\section{Conclusions}\n\nWe have presented a study of the dynamics of three coupled quantum systems: one three-level atom and two quantized cavity \nfields, focusing on some properties of the atomic system (population inversion) and the atom-mode 1 bipartite system \n(entanglement). The second field (mode 2) has been basically traced out and treated as an ``external\" \nsubsystem. Our study has been based on the Raman coupled model, involving an effective two-level atom coupled to two \nelectromagnetic cavity fields. We have considered different preparations for mode 2: coherent and thermal states of the \nquantized field, as well as a classical field. In order to keep the consistency of the effective Hamiltonian, as has \nalready been pointed out \\cite{knig91,agar04}, we have retained the Stark shift terms in the Raman Hamiltonian, which have the \nremarkable effect of keeping the dynamics periodic. Moreover, the periodic revival times will not depend on the statistics \nof the fields. This means that having one mode (or even two) prepared in the (highly mixed) thermal state will not change \nthe regular atomic response during the atom\/fields interaction, as we have shown explicitly here. This is a nice example of \ndynamical features robust against substantial variations in the field statistics. We have addressed the issue of bipartite \nentanglement between the atom and mode 1, after tracing out mode 2: we have found that\nfor mode 1 initially prepared in a Fock state and mode 2 in a coherent state, the atom and mode 1 reach maximum entanglement \nin the collapse region of the atomic inversion. We have also compared both the atomic response and \nentanglement in the case in which mode 2 is treated as a classical field (the ``partially classical\" case), rather than a \nquantum coherent ``quasi-classical\" field, which results in very different evolutions.\n\\\\\n\\begin{flushleft}\nWe thank Prof. C.J. 
Villas-B\\^oas for fruitful comments.\nThis work is supported by the Agencies CNPq (INCT of Quantum Information) and FAPESP (CePOF), Brazil.\n\\end{flushleft}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics{fig3.pdf}\n\\end{center}\n\\caption{Negativity (a) and linear entropy (b) as a function of the scaled time $g_1 t$\nfor mode 1 prepared in Fock state ($N = 5$) and mode 2 prepared in a coherent state ($\\overline{n} = 5$), with $r = 1.023$.}\n\\label{figure3} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics{fig4.pdf}\n\\end{center}\n\\caption{Atomic inversion (red, dashed curve) and the Negativity (blue, continuous curve) \nas a function of the scaled time $\\tau^{\\prime}$ in the ``partially classical\" case, having mode 1 prepared in a Fock state ($N = 2$) and $r^\\prime = 1.41$.}\n\\label{figure4} \n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdxyu b/data_all_eng_slimpj/shuffled/split2/finalzzdxyu new file mode 100644 index 0000000000000000000000000000000000000000..32854f44317a3f400b6e011c2fdc014aa5c53d07 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdxyu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nVector equilibrium problem is a unified model of several problems, for\ninstance, vector variational inequalities and vector optimization problems.\nFor further relevant information on this topic, the reader is referred to\nthe following recent publications available in our bibliography: [1-7],\n[9-12], [14-17].\n\nIn this paper, we will suppose that $X$\\textit{\\ }is a nonempty, convex and\ncompact set in a Hausdorff locally convex space $E$ , $A:X\\rightarrow 2^{X}$\nand $F:X\\times X\\times X\\rightarrow 2^{X}$ are correspondences and $C\\subset\nX$ is a nonempty closed convex cone with int$C\\neq \\emptyset $.\n\nWe consider the following generalized strong vector quasi-equilibrium\nproblem (in short, 
GSVQEP):\\newline\n\nfind $x^{\\ast }\\in X$ such that $x^{\\ast }\\in \\overline{A}(x^{\\ast })$ and\neach $u\\in A(x^{\\ast })$ implies that $F(u,x^{\\ast },z)\\nsubseteq $int$C$\nfor each $z\\in A(x^{\\ast }).$\n\n\\section{Preliminary results}\n\nLet $X$, $Y$ be topological spaces and $T:X\\rightarrow 2^{Y}$ be a\ncorrespondence. $T$ is said to be \\textit{upper semicontinuous} if for each \n$x\\in X$ and each open set $V$ in $Y$ with $T(x)\\subset V$, there exists an\nopen neighborhood $U$ of $x$ in $X$ such that $T(y)\\subset V$ for each $y\\in\nU$. $T$ is said to be \\textit{lower semicontinuous} if for each $x\\in X$ and\neach open set $V$ in $Y$ with $T(x)\\cap V\\neq \\emptyset $, there exists an\nopen neighborhood $U$ of $x$ in $X$ such that $T(y)\\cap V\\neq \\emptyset $\nfor each $y\\in U$. $T$ is said to have \\textit{open lower sections} if \n$T^{-1}(y):=\\{x\\in X:y\\in T(x)\\}$ is open in $X$ for each $y\\in Y.$\n\nThe following lemma will be crucial in the proofs.\n\n\\begin{lemma}\n(Yannelis and Prabhakar, \\cite{yan}). \\textit{Let }$X$\\textit{\\ be a\nparacompact Hausdorff topological space and }$Y$\\textit{\\ be a Hausdorff\ntopological vector space. Let }$T:X\\rightarrow 2^{Y}$\\textit{\\ be a\ncorrespondence with nonempty convex values\\ and for each }$y\\in Y$\\textit{, }\n$T^{-1}(y)$\\textit{\\ is open in }$X$\\textit{. 
Then, }$T$\\textit{\\ has a\ncontinuous selection, that is, there exists a continuous function }\n$f:X\\rightarrow Y$\\textit{\\ such that }$f(x)\\in T(x)$\\textit{\\ for each }\n$x\\in X$\\textit{.\\medskip }\n\\end{lemma}\n\nThe correspondence $\\overline{T}$ is defined by $\\overline{T}(x):=\\{y\\in\nY:(x,y)\\in $cl$_{X\\times Y}$ Gr $T\\}$ (the set cl$_{X\\times Y}$ Gr $(T)$ is\ncalled the adherence of the graph of $T$)$.$ It is easy to see that cl \n$T(x)\\subset \\overline{T}(x)$ for each $x\\in X.$\n\nIf $X$ and $Y$ are topological vector spaces, $K$ is a nonempty subset of \n$X,$ $C$ is a nonempty closed convex cone and $T:K\\rightarrow 2^{Y}$ is a\ncorrespondence, then \\cite{luc}, $T$ is called \\textit{upper }$C$\\textit{\n-continuous at} $x_{0}\\in K$ if, for any neighborhood $U$ of the origin in \n$Y,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\\in V,$ \n$T(x)\\subset T(x_{0})+U+C.$ $T$ is called \\textit{lower }$C$\\textit{\n-continuous at} $x_{0}\\in K$ if, for any neighborhood $U$ of the origin in \n$Y,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\\in V,$ \n$T(x_{0})\\subset T(x)+U-C.$\n\nThe property of proper $C-$quasi-convexity for correspondences is\npresented below.\n\nLet $X$ be a nonempty convex subset of a topological vector space\\textit{\\ }\n$E,$ $Y$ be a topological vector space, and $C$ be a pointed closed convex\ncone in $Y$ with its interior int$C\\neq \\emptyset .$ Let $T:X\\rightarrow\n2^{Y}$ be a correspondence with nonempty values. $T$ is said to be \\textit{\nproperly }$C-$\\textit{quasi-convex on} $X$ (\\cite{long}), if for any \n$x_{1},x_{2}\\in X$ and $\\lambda \\in \\lbrack 0,1],$ either $T(x_{1})\\subset\nT(\\lambda x_{1}+(1-\\lambda )x_{2})+C$ or $T(x_{2})\\subset T(\\lambda\nx_{1}+(1-\\lambda )x_{2})+C.\\medskip $\n\nIn order to establish our main theorems, we need to prove some auxiliary\nresults. 
The starting point is the following statement:\n\n\\begin{theorem}\nLet $X$ be \\textit{a nonempty, convex and compact set in a} \\textit{\nHausdorff locally convex space }$E$ and let $\\mathcal{C\\ }$\\ be a nonempty\nsubset of $X\\times X.$\\textit{\\ Assume that t}he following conditions are\nfulfilled:\n\\end{theorem}\n\n\\textit{a) }$\\mathcal{C}^{-}(y)$ $=\\{x\\in X:(x,y)\\in \\mathcal{C}\\}$ \\textit{is open for\n} \\textit{each } $y\\in X;$\n\n\\textit{b)} $\\mathcal{C}^{+}(x)=\\{y\\in X:(x,y)\\in \\mathcal{C}\\}\\mathit{\\ }$ \\textit{is\nconvex and nonempty for each} $x\\in X.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in X$ such that $(x^{\\ast },x^{\\ast\n})\\in \\mathcal{C}$\\textit{.\\medskip }\n\n\\begin{proof}\nLet us define the correspondence $T:X\\rightarrow 2^{X},$ by\n\n$T(x)=\\mathcal{C}^{+}(x)$ for each $x\\in X.$\n\nThe correspondence $T$ is nonempty and convex valued and it has open lower\nsections$.$\n\nWe apply Yannelis and Prabhakar's Lemma and we obtain that $T$\\textit{\\ \nhas a continuous selection }$f:X\\rightarrow X.$\n\nAccording to the Tychonoff fixed point Theorem \\cite{is}, there exists \n$x^{\\ast }\\in X$ such that $f(x^{\\ast })=x^{\\ast }.$ Hence, $x^{\\ast }\\in\nT(x^{\\ast })$ and obviously, $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}.\\medskip\n\\medskip $\n\\end{proof}\n\nThe next two results are direct consequences of Theorem 1.\\medskip\n\n\\begin{theorem}\nLet $X$ be \\textit{a nonempty, convex and compact set in a} \\textit{\nHausdorff locally convex space }$E$, and let $A:X\\rightarrow 2^{X}$ and \n$P:X\\times X\\rightarrow 2^{X}$ be correspondences such that the following\nconditions are fulfilled:\n\\end{theorem}\n\n\\textit{a)} $A$\\textit{\\ has nonempty, convex values and open lower sections\n}\n\n\\textit{b)} \\textit{the set }$\\{y\\in X:$ $A(x)\\cap P(x,y)=\\emptyset \\}\\cap\nA(x)$ \\ \\textit{is nonempty for each} $x\\in X;$\n\n\\textit{c) }$\\{y\\in X:$\\textit{\\ }$A(x)\\cap P(x,y)=\\emptyset \\}$\\textit{\\ is\nconvex for each }$x\\in 
X;$\n\n\\textit{d)} $\\{x\\in X:A(x)\\cap P(x,y)=\\emptyset \\}\\mathit{\\ }$ \\textit{is\nopen for each} $y\\in X.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in $\\textit{\\ }$X$\\textit{\\ such that \n}$\\ x^{\\ast }\\in A(x^{\\ast })$ \\textit{and }$A(x^{\\ast })\\cap P(x^{\\ast\n},x^{\\ast })=\\emptyset .\\medskip $\n\n\\begin{proof}\nLet us define the set $\\mathcal{C}=\\{(x,y)\\in X\\times X:$ $A(x)\\cap\nP(x,y)=\\emptyset \\}\\cap $Gr$A.$\n\nThen, $\\mathcal{C}^{+}(x)=\\{y\\in X:$ $A(x)\\cap P(x,y)=\\emptyset \\}\\cap A(x)$\nfor each $\\ x\\in X$ and\n\n$\\mathcal{C}^{-}(y)=A^{-1}(y)\\cap \\{x\\in X:A(x)\\cap P(x,y)=\\emptyset \\}$ \nfor each $y\\in X.$\n\nAssumption b) implies that $\\mathcal{C}$ is nonempty$.$ The set $\\mathcal{C}\n^{-}(y)$ is open for each $y\\in X$ since Assumptions a) and d) hold.\n\nAccording to Assumptions a), b) and c), $A(x)\\cap $ $\\mathcal{C}^{+}(x)$ is\nnonempty and convex for each $x\\in X.$\n\nAll hypotheses of Theorem 1 are fulfilled, and then, there exists $x^{\\ast\n}\\in $\\ $X$\\ such that $\\ x^{\\ast }\\in A(x^{\\ast })$ and $A(x^{\\ast })\\cap\nP(x^{\\ast },x^{\\ast })=\\emptyset .$\n\\end{proof}\n\nWe establish the following result as a consequence of Theorem 2. 
It will be\nused in order to prove the existence of solutions for the considered vector\nquasi-equilibrium problem.\n\n\\begin{theorem}\nLet $X$ be \\textit{a nonempty, convex and compact set in a} \\textit{\nHausdorff locally convex space }$E$, and let $A:X\\rightarrow 2^{X}$, \n$P:X\\times X\\rightarrow 2^{X}$ be correspondences such that the following\nconditions are fulfilled:\n\\end{theorem}\n\n\\textit{a)} $A$\\textit{\\ has nonempty, convex values and open lower sections\n}\n\n\\textit{b)} \\textit{the set }$\\{y\\in X:$ $\\overline{A}(x)\\cap\nP(x,y)=\\emptyset \\}\\cap A(x)$ \\ \\textit{is nonempty for each} $x\\in X;$\n\n\\textit{c) }$\\{y\\in X:$\\textit{\\ }$\\overline{A}(x)\\cap P(x,y)=\\emptyset \\}$\n\\textit{\\ is convex for each }$x\\in X;$\n\n\\textit{d)} $\\{x\\in X:\\overline{A}(x)\\cap P(x,y)=\\emptyset \\}\\mathit{\\ }$ \n\\textit{is open for each} $y\\in X.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in $\\textit{\\ }$X$\\ \\textit{\\ such\nthat }$\\ x^{\\ast }\\in A(x^{\\ast })$ \\textit{and }$A(x^{\\ast })\\cap P(x^{\\ast\n},x^{\\ast })=\\emptyset .\\medskip $\n\nWe note that, according to Theorem 2, there exists $x^{\\ast }\\in $\\textit{\\ }\n$X$\\textit{\\ }such that $\\ x^{\\ast }\\in A(x^{\\ast })$ and\\textit{\\ }\n$\\overline{A}(x^{\\ast })\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .$ Obviously, \n$\\overline{A}(x^{\\ast })\\cap P(x^{\\ast },x^{\\ast })=\\emptyset $ implies \n$A(x^{\\ast })\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $\n\nIf $A(x)=X$ for each $x\\in X$, Theorem 2 implies the following corollary.\n\n\\begin{corollary}\nLet $X$ be \\textit{a nonempty, convex and compact set in a} \\textit{\nHausdorff locally convex space }$E$, and let $P:X\\times X\\rightarrow 2^{X}$\nbe a correspondence such that the following conditions are fulfilled:\n\\end{corollary}\n\n\\textit{a) }$\\{y\\in X:$\\textit{\\ }$P(x,y)=\\emptyset \\}$\\textit{\\ is nonempty\nand convex for each }$x\\in X;$\n\n\\textit{b)} $\\{x\\in 
X:P(x,y)=\\emptyset \\}\\mathit{\\ }$ \\textit{is open for\neach} $y\\in X.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in $\\textit{\\ }$X$\\textit{\\ such that \n}$\\ P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $\n\nBy applying an approximation method of proof, we can prove Theorem 4.\n\n\\begin{theorem}\nLet $X$ be \\textit{a nonempty, convex and compact set in a} \\textit{\nHausdorff locally convex space }$E$ and $\\mathcal{C\\ }$\\ be a nonempty,\nclosed subset of $X\\times X.$\\textit{\\ Assume that there exists a sequence }\n$(G_{k})_{k\\in \\mathbb{N}^{\\ast }}$ \\textit{of subsets of }$X\\times X$ such\nthat \\textit{t}he following conditions are fulfilled:\n\\end{theorem}\n\n\\textit{a) }for each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{-}(y)$ $=\\{x\\in\nX:(x,y)\\in G_{k}\\}$ \\textit{is open for} \\textit{each } $y\\in X;$\n\n\\textit{b)} for each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{+}(x)=\\{y\\in\nX:(x,y)\\in G_{k}\\}\\mathit{\\ }$ \\textit{is convex and nonempty for each} \n$x\\in X;$\n\n\\textit{c) }$G_{k}\\supseteq G_{k+1}$\\textit{\\ for each }$k\\in \\mathbb{N}\n^{\\ast };$\n\n\\textit{d) for every open set }$G$\\textit{\\ with }$G\\supset \\mathcal{C},$\n\\textit{\\ there exists }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\ such that }\n$G_{k}\\subseteq G.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in X$\\textit{\\ such that }$(x^{\\ast\n},x^{\\ast })\\in \\mathcal{C}$\\textit{.\\medskip }\n\n\\begin{proof}\nFor each $k\\in \\mathbb{N}^{\\ast },$ we apply Theorem 1. 
Let $x^{k}\\in X$\nsuch that $(x^{k},x^{k})\\in G_{k}.$ Since $X$ is a compact set, we can\nconsider that the sequence $(x^{k})_{k}$ converges to some $x^{\\ast }\\in X.$\nWe claim that $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}.$\n\nIndeed, let us suppose, by way of contradiction, that $(x^{\\ast },x^{\\ast\n})\\notin \\mathcal{C}.$ Since $\\mathcal{C}$\\textit{\\ }is nonempty and\ncompact, we can choose a neighborhood $V_{(x^{\\ast },x^{\\ast })}$ of \n(x^{\\ast },x^{\\ast })$ and an open set $G$ such that $G\\supset \\mathcal{C}$\nand $V_{(x^{\\ast },x^{\\ast })}\\cap G=\\emptyset .$ According to Assumptions\nd) and c), there exists $k_{1}\\in \\mathbb{N}^{\\ast }$ such that \nG_{k}\\subseteq G$ for each $k\\geq k_{1}.$ Since $V_{(x^{\\ast },x^{\\ast })}$\nis a neighborhood of $(x^{\\ast },x^{\\ast }),$ there exists $k_{2}\\in \\mathbb\nN}^{\\ast }$ such that $(x^{k},x^{k})\\in V_{(x^{\\ast },x^{\\ast })}$ for each \nk\\geq k_{2}.$ $\\ $Hence, for $k\\geq $max($k_{2},k_{1}),$ \n(x^{k},x^{k})\\notin G_{k},$ which is a contradiction.\n\nConsequently, $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}.$\n\\end{proof}\n\nTheorem 5 is a consequence of Theorem 4 and it will be used in Section 3 in\norder to prove the existence of solutions for GSVQEP.\n\n\\begin{theorem}\n\\textit{Let }$X$\\textit{\\ be a nonempty, convex and compact set in a\nHausdorff locally convex space }$E$\\textit{\\ , and let }$A:X\\rightarrow\n2^{X} $\\textit{\\ and }$P:X\\times X\\rightarrow 2^{X}$\\textit{\\ be\ncorrespondences such that the following conditions are fulfilled:}\n\\end{theorem}\n\n\\textit{a) }$A$\\textit{\\ has nonempty, convex values and open lower sections\n}\n\n\\textit{b) the set }$U=\\{(x,y)\\in X\\times X:$\\textit{\\ }$\\overline{A}(x)\\cap\nP(x,y)=\\emptyset \\}$\\textit{\\ \\ is closed} \\textit{and} $U\\cap $Gr$\\overline\nA}$ \\textit{is nonempty;}\n\n\\textit{c) there exists a sequence }$(P_{k})_{k\\in N}$\\textit{\\ of\ncorrespondences, where, for each }$k\\in 
\\mathbb{N}^{\\ast },$\\textit{\\ }\nP_{k}:X\\times X\\rightarrow 2^{X}$\\textit{\\ and let }$U_{k}=\\{(x,y)\\in\nX\\times X:$\\textit{\\ }$A(x)\\cap P_{k}(x,y)=\\emptyset \\}$\\textit{. Assume\nthat:}\n\n\\textit{\\ }$\\ \\ \\ $\\textit{c1) }$U_{k}^{+}(x)=\\{y\\in X:$\\textit{\\ }\n\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}$\\textit{\\ is convex for each }\nx\\in X$\\textit{\\ and }$U_{k}^{+}(x)\\cap A(x)\\neq \\emptyset $\\textit{\\ for\neach }$x\\in X;$\n\n\\textit{\\ \\ \\ \\ c2 ) }$U_{k}^{-}(y)=\\{x\\in X:\\overline{A}(x)\\cap\nP_{k}(x,y)=\\emptyset \\}\\ $\\textit{\\ is open for each }$y\\in X;$\n\n\\textit{\\ \\ \\ \\ c3) for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ }\nP_{k}(x,y)\\subseteq P_{k+1}(x,y)$\\textit{\\ for each }$(x,y)\\in X\\times X;$\n\n\\textit{\\ \\ \\ \\ \\ c4) for every open set }$G$\\textit{\\ with }$G\\supset U\\cap \n$Gr$\\overline{A},$\\textit{\\ there exists }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\\nsuch that }$G\\supseteq U_{k}\\cap $Gr$A.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in $\\textit{\\ }$X$\\textit{\\ such that \n}$\\ x^{\\ast }\\in \\overline{A}(x^{\\ast })$\\textit{\\ and }$A(x^{\\ast })\\cap\nP(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $\n\n\\begin{proof}\nLet us define $\\mathcal{C=}U\\cap $Gr$\\overline{A}.$ According to Assumptions\nb) and c), $\\mathcal{C\\ }$\\ is a nonempty and closed subset of $X\\times X.$\n\nFurther, \\textit{\\ }for each\\textit{\\ }$k\\in \\mathbb{N}^{\\ast },$ let us\ndefine $G_{k}=U_{k}\\cap $Gr$A\\subseteq X\\times X.$\n\nThen, for each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{+}(x)$ $=\\{y\\in X:(x,y)\\in\nG_{k}\\}=U_{k}^{+}(x)\\cap A(x)$ is nonempty and convex for each\\textit{\\ }\nx\\in X,$ since Assumptions a) and c1) hold.\n\nFor each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{-}(y)$ $=\\{x\\in X:(x,y)\\in\nG_{k}\\}=U_{k}^{-}(y)\\cap A^{-1}(y)$ is open for each\\textit{\\ }$y\\in X,$\nsince Assumptions a) and c2) hold.\n\nAssumption c3) implies that, \\textit{\\ }for 
each\\textit{\\ }$k\\in \\mathbb{N\n^{\\ast },$ $U_{k+1}\\subseteq U_{k}$ and then, $G_{k}\\supseteq G_{k+1}$ and\nAssumption c4) implies that for every open set $G$ with $G\\supset \\mathcal{C\n,$ there exists $k\\in \\mathbb{N}^{\\ast }$ such that $G_{k}\\subseteq \\mathcal\nC}.$\n\nAll hypotheses of Theorem 4 are verified. Therefore, there exists $x^{\\ast\n}\\in X$ such that $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}$.\n\nConsequently, there exists $x^{\\ast }\\in X$ such that $\\ x^{\\ast }\\in \n\\overline{A}(x^{\\ast })$\\textit{\\ }and\\textit{\\ }$A(x^{\\ast })\\cap P(x^{\\ast\n},x^{\\ast })=\\emptyset .\\medskip $\n\\end{proof}\n\n\\section{Main results}\n\nThis section is devoted to the study of the existence of solutions for the\nconsidered generalized strong vector quasi-equilibrium problem. We derive\nour main results by using the auxiliary theorems concerning correspondences,\nwhich have been established in the previous section. This new approach to\nsolve GSVQEP is intended to provide new conditions under which the solutions\nexist.\\medskip\n\nThe first theorem states that GSVQEP has solutions if $F(\\cdot ,y,\\cdot )$\nis lower ($-C$)-semicontinuous for each $y\\in X$ and $F(u,\\cdot ,z)$\\ is\nproperly $C-$\\ quasi-convex for each $(u,z)\\in X\\times X.$\n\n\\begin{theorem}\n\\textit{Let }$F:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ be a\ncorrespondence with nonempty values. 
Suppose that:}\n\\end{theorem}\n\n\\textit{a) }$A$\\textit{\\ has nonempty, convex values and open lower sections\n}\n\n\\textit{b) for each }$x\\in X,$\\textit{\\ there exists }$y\\in A(x)$\\textit{\\\nsuch that each }$u\\in \\overline{A}(x)$\\textit{\\ implies that }\nF(u,y,z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in \\overline{A}(x);\n$\n\n\\textit{c) }$F(\\cdot ,y,\\cdot ):$\\textit{\\ }$X\\times X\\rightarrow 2^{X}\n\\textit{\\ is lower (}$-C$\\textit{)-semicontinuous for each }$y\\in X;$\n\n\\textit{d) for each }$(u,z)\\in X\\times X,$\\textit{\\ }$F(u,\\cdot\n,z):X\\rightarrow 2^{X}$\\textit{\\ is properly }$C-$\\textit{\\ quasi-convex.}\n\n\\textit{Then, there exists }$x^{\\ast }\\in X$\\textit{\\ such that }$x^{\\ast\n}\\in A(x^{\\ast })$\\textit{\\ and each }$u\\in A(x^{\\ast })$\\textit{\\ implies\nthat }$F(u,x^{\\ast },z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in\nA(x^{\\ast }),$\\textit{\\ that is, }$x^{\\ast }$\\textit{\\ is a solution for\nGSVQEP.\\medskip }\n\n\\begin{proof}\nLet us define $P:X\\times X\\rightarrow 2^{X},$ by\n\n$P(x,y)=\\{u\\in X:$ $\\exists z\\in \\overline{A}(x)$ such that \nF(u,y,z)\\subseteq C\\}$ for each $(x,y)\\in X\\times X.$\n\nAssumption b) implies that the set $\\{y\\in X:$ $\\overline{A}(x)\\cap\nP(x,y)=\\emptyset \\}\\cap A(x)$ \\ is nonempty for each $x\\in X.$\n\nWe claim that the set $E(x)$ $\\ $is convex for each $x\\in X,$ where\\ \nE(x)=\\{y\\in X:$\\textit{\\ }$\\overline{A}(x)\\cap P(x,y)=\\emptyset \\}=$\n\n$=\\{y\\in X:$\\textit{\\ }each $u\\in \\overline{A}(x)$ implies that \nF(u,y,z)\\nsubseteq C$ for each $z\\in \\overline{A}(x)\\}.$\n\nIndeed, let us fix $x_{0}\\in X$ and let us consider $y_{1},y_{2}\\in\nE(x_{0}). 
$ This means that each $u\\in \\overline{A}(x_{0})$ implies that \n$F(u,y_{1},z)\\nsubseteq C$ and $F(u,y_{2},z)\\nsubseteq C$ for each $z\\in \n\\overline{A}(x_{0}).$\n\nLet $y(\\lambda )=\\lambda y_{1}+(1-\\lambda )y_{2}$ be defined for each \n$\\lambda \\in \\lbrack 0,1].$\n\nWe claim that $y(\\lambda )\\in E(x_{0})$ for each $\\lambda \\in \\lbrack 0,1].$\n\nSuppose, on the contrary, that there exist $\\lambda _{0}\\in \\lbrack 0,1],$\n$u^{\\prime }\\in \\overline{A}(x_{0})$ and $z^{\\prime }\\in \\overline{A}(x_{0})$\nsuch that $F(u^{\\prime },y(\\lambda _{0}),z^{\\prime })\\subseteq C.$ Since \n$F(u^{\\prime },\\cdot ,z^{\\prime }):X\\rightarrow 2^{X}$ is\\textit{\\ }properly\n\\textit{\\ }$C-$\\textit{\\ }quasi-convex$,$ we have that:\n\n$F(u^{\\prime },y_{1},z^{\\prime })\\subseteq F(u^{\\prime },y(\\lambda\n_{0}),z^{\\prime })+C$ or $F(u^{\\prime },y_{2},z^{\\prime })\\subseteq F(u^{\\prime\n},y(\\lambda _{0}),z^{\\prime })+C.$\n\nOn the other hand, it is true that $F(u^{\\prime },y(\\lambda _{0}),z^{\\prime\n})\\subseteq C.$ We obtain that:\n\n$F(u^{\\prime },y_{j},z^{\\prime })\\subseteq C+C\\subseteq C$ for $j=1$ or for \n$j=2$.\n\nThis contradicts the assumption that $y_{1},y_{2}\\in E(x_{0})$.\nConsequently, $E(x_{0})$ is convex and Assumption c) from Theorem 3 is\nfulfilled.\n\nNow, we will prove that $D(y)=\\{x\\in X:\\overline{A}(x)\\cap P(x,y)=\\emptyset\n\\}\\mathit{\\ }$ is open for each $y\\in X.$\n\nIn order to do this, we will show that $^{C}D(y)$ is closed for each $y\\in X,\n$ where $\\ ^{C}D(y)=\\{x\\in X:\\overline{A}(x)\\cap P(x,y)\\neq \\emptyset \\}\n\\mathit{=}\\{x\\in X:$ there exist $u,z\\in \\overline{A}(x)$ such that \n$F(u,y,z)\\subseteq C\\}.$\n\nLet $(x_{\\alpha })_{\\alpha \\in \\Lambda }$ be a net in $^{C}D(y)$ such that\nlim$_{\\alpha }x_{\\alpha }=x_{0}.$ Then, there exist $u_{\\alpha },z_{\\alpha\n}\\in \\overline{A}(x_{\\alpha })$ such that $F(u_{\\alpha },y,z_{\\alpha\n})\\subseteq C.$\n\nSince $X$ is a compact set, we can suppose that 
$(u_{\\alpha })_{\\alpha \\in\n\\Lambda },(z_{\\alpha })_{\\alpha \\in \\Lambda }$ are convergent nets and let\nlim$_{\\alpha }u_{\\alpha }=u_{0}$ and lim$_{\\alpha }z_{\\alpha }=z_{0}.$\n\nThe closedness of $\\overline{A}$ implies that $u_{0},z_{0}\\in \\overline{A\n(x_{0}).$\n\nNow, we claim that $F(u_{0},y,z_{0})\\subseteq C.$\n\nSince $F(u_{\\alpha },y,z_{\\alpha })\\subseteq C$ and $F(\\cdot ,y,\\cdot ):\n\\textit{\\ }$X\\times X\\rightarrow 2^{X}$\\textit{\\ }is lower ($-C\n)-semicontinuous, for each neighborhood $U$ of the origin in $X,$ there\nexists a subnet $(u_{\\beta },z_{\\beta })_{\\beta }$ of $(u_{\\alpha\n},z_{\\alpha })_{\\alpha }$ such that $F(u_{0},y,z_{0})\\subset F(u_{\\beta\n},y,z_{\\beta })+U+C.$ Hence, $F(u_{0},y,z_{0})\\subset U+C.$\n\nWe will show that $F(u_{0},y,z_{0})\\subset C.$ Suppose, by way of\ncontradiction, that there exists $t\\in F(u_{0},y,z_{0})\\cap ^{C}C.$ We note\nthat $B=C-t$ is a closed set which does not contain $0.$ It follows that \n^{C}B$ is open and contains $0.$ Since $X$ is locally convex, there exists a\nconvex neighborhood $U_{1}$ of origin such that $U_{1}\\subset X\\backslash B$\nand $U_{1}=-U_{1}$. Thus, $0\\notin B+U_{1}$ and then, $t\\notin C+U_{1},$\nwhich is a contradiction. Therefore, $F(u_{0},y,z_{0})\\subset C.$\n\nWe proved that there exist $u_{0},z_{0}\\in \\overline{A}(x_{0})$ such that \nF(u_{0},y,z_{0})\\subseteq C.$ It follows that $^{C}D(y)$ is closed. Then, \nD(y)$ is an open set and Assumption d) from Theorem 3 is fulfilled.\n\nConsequently, all conditions of Theorem 3 are verified, so that there exists \n$x^{\\ast }\\in $\\ $X$\\ such that $\\ x^{\\ast }\\in A(x^{\\ast })$ and $A(x^{\\ast\n})\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $ Obviously, $x^{\\ast }\n\\textit{\\ }is a solution for GSVQEP.\\medskip\n\\end{proof}\n\nIn order to obtain a second result concerning the existence of solutions of\nGSVQEP, we use an approximation method and Theorem 5. 
We mention that this\nresult does not require convexity properties for the correspondence $F.$\n\n\\begin{theorem}\n\\textit{Let }$F:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ be a\ncorrespondence. Suppose that:}\n\\end{theorem}\n\n\\textit{a) }$A$\\textit{\\ has nonempty, convex values and open lower\nsections; }$\\overline{A}$\\textit{\\ is lower semicontinuous;}\n\n\\textit{b) }$F$\\textit{\\ is upper semicontinuous with nonempty, closed\nvalues;}\n\n\\textit{c) }$U\\cap $Gr$\\overline{A}$\\textit{\\ is nonempty, where }$U=\\{\n\\textit{\\ }$(x,y)\\in X\\times X:u\\in \\overline{A}(x)$\\textit{\\ implies that }\nF(u,y,z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in \\overline{A\n(x)\\};$\n\n\\textit{d) there exists a sequence }$(F_{k})_{k\\in N}$\\textit{\\ of\ncorrespondences, such that, for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ }\nF_{k}:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ and let }\nU_{k}=\\{(x,y)\\in X\\times X:u\\in \\overline{A}(x)$\\textit{\\ implies that }\nF_{k}(u,y,z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in \\overline{A\n(x)\\}$\\textit{. 
Assume that:}\n\n\\textit{d1) for each }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\ and for each }$x\\in\nX,$\\textit{\\ there exists }$y\\in A(x)$\\textit{\\ such that each }$u\\in \n\\overline{A}(x)$\\textit{\\ implies that }$F_{k}(u,y,z)\\nsubseteq $\\textit{int\n$C$\\textit{\\ for each }$z\\in A(x);$\n\n\\textit{d2) for each }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\ and for each }\n(u,z)\\in X\\times X,$\\textit{\\ }$F_{k}(u,\\cdot ,z):X\\rightarrow 2^{X}$\\textit\n\\ is properly }$C-$\\textit{\\ quasi-convex;}\n\n\\textit{d3) for each }$k\\in \\mathbb{N}^{\\ast }$ \\textit{and} \\textit{\\ for\neach }$y\\in X,$\\textit{\\ }$F_{k}(\\cdot ,y,\\cdot ):$\\textit{\\ }$X\\times\nX\\rightarrow 2^{X}$\\textit{\\ is lower (}$-C$\\textit{)-semicontinuous}$;$\n\n\\textit{d4) for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ for each }\n(x,y)\\in X\\times X,$\\textit{\\ and for each }$u\\in X$\\textit{\\ with the\nproperty that }$\\exists z\\in \\overline{A}(x)$\\textit{\\ such that }\nF_{k}(u,y,z)\\subseteq C,$\\textit{\\ there exists }$z^{\\prime }\\in \\overline{A\n(x)$\\textit{\\ such that }$F_{k+1}(u,y,z^{\\prime })\\subseteq C;$\n\n\\textit{d5) for every open set }$G$\\textit{\\ with }$G\\supset U\\cap $Gr\n\\overline{A},$\\textit{\\ there exists }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\\nsuch that }$G\\supseteq U_{k}\\cap $\\textit{Gr}$A.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in X$\\textit{\\ such that }$x^{\\ast\n}\\in \\overline{A}(x^{\\ast })$\\textit{\\ and each }$u\\in A(x^{\\ast })$\\textit\n\\ implies that }$F(u,x^{\\ast },z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for\neach }$z\\in A(x^{\\ast }),$\\textit{\\ that is, }$x^{\\ast }$\\textit{\\ is a\nsolution for GSVQEP.\\medskip }\n\n\\begin{proof}\nLet us define $P:X\\times X\\rightarrow 2^{X},$ by\n\n$P(x,y)=\\{u\\in X:$ $\\exists z\\in \\overline{A}(x)$ such that \nF(u,y,z)\\subseteq C\\}$ for each $(x,y)\\in X\\times X.$\n\nWe claim that, $U=\\{$\\textit{\\ }$(x,y)\\in X\\times X:u\\in 
\\overline{A}(x)\n\\textit{\\ }implies that $F(u,y,z)\\nsubseteq $int$C$ for each $z\\in \\overline{\nA}(x)\\}$ is closed$.$\n\nLet $(x^{0},y^{0})\\in $cl$U.$ Then, there exists a net $(x^{\\alpha },y^{\\alpha\n})_{\\alpha \\in \\Lambda }$ in $U$ such that $\\lim_{\\alpha }(x^{\\alpha\n},y^{\\alpha })=(x^{0},y^{0})\\in X\\times X.$ Let $u\\in \\overline{A}(x^{0})$\nand $z\\in \\overline{A}(x^{0}).$ Since $\\overline{A}$ is lower semicontinuous\nand $\\lim_{\\alpha }x^{\\alpha }=x^{0},$ there exist nets $(u^{\\alpha\n})_{\\alpha \\in \\Lambda }\\ $\\ and $(z^{\\alpha })_{\\alpha \\in \\Lambda }$ in $X$\nsuch that $u^{\\alpha },z^{\\alpha }\\in \\overline{A}(x^{\\alpha })$ for each \n$\\alpha \\in \\Lambda $ and $\\lim_{\\alpha }u^{\\alpha }=u,$ $\\lim_{\\alpha\n}z^{\\alpha }=z.$ Since $(x^{\\alpha },y^{\\alpha })_{\\alpha \\in \\Lambda }$ is\na net in $U,$ then, for each $\\alpha \\in \\Lambda ,$ $F(u^{\\alpha\n},y^{\\alpha },z^{\\alpha })\\nsubseteq $int$C,$ that is, $F(u^{\\alpha\n},y^{\\alpha },z^{\\alpha })\\cap W\\neq \\emptyset ,$ where $W=X\\backslash $int$\nC,$ that is, there exists a net $(t^{\\alpha })_{\\alpha \\in \\Lambda }$ in $X$\nsuch that $t^{\\alpha }\\in F(u^{\\alpha },y^{\\alpha },z^{\\alpha })\\cap W$ for\neach $\\alpha \\in \\Lambda .$\n\nSince $X$ is compact, we can suppose that $\\lim_{\\alpha }t^{\\alpha }=t^{0}.$\nThe closedness of $W$ implies that $t^{0}\\in W.$ We invoke here the\nclosedness of $F$ and we conclude that $t^{0}\\in F(u,y^{0},z).$ Therefore, \n$F(u,y^{0},z)\\cap W\\neq \\emptyset ,$ and, thus, $u\\in \\overline{A}(x^{0})$\nimplies $F(u,y^{0},z)\\nsubseteq $int$C$ for each $z\\in \\overline{A}(x^{0}).$\nHence, $U$ is closed.\n\nFor each $k\\in \\mathbb{N}^{\\ast },$ let us define $P_{k}:X\\times\nX\\rightarrow 2^{X},$ by\n\n$P_{k}(x,y)=\\{u\\in X:$ $\\exists z\\in \\overline{A}(x)$ such that \n$F_{k}(u,y,z)\\subseteq C\\}$ for each $(x,y)\\in X\\times X$ and\n\n$U_{k}=\\{(x,y)\\in X\\times X:u\\in \\overline{A}(x)$ 
implies that \n$F_{k}(u,y,z)\\nsubseteq $int$C$ for each $z\\in \\overline{A}(x)\\}=\\{(x,y)\\in\nX\\times X:$\\textit{\\ }$\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}.$\n\n\\textit{\\ }Let\\textit{\\ }$k\\in \\mathbb{N}^{\\ast }.$\n\nAssumption d1) implies that $U_{k}^{+}(x)\\cap A(x)\\neq \\emptyset $\\textit{\\ }\nfor each\\textit{\\ }$x\\in X$ and Assumption d2) implies that $U_{k}^{+}(x)$\nis convex \\textit{\\ }for each\\textit{\\ }$x\\in X$ (we use a proof similar\nto that of Theorem 6).\n\nSince $F_{k}(\\cdot ,y,\\cdot ):$\\textit{\\ }$X\\times X\\rightarrow 2^{X}$\n\\textit{\\ }is lower ($-C$)-semicontinuous for each $y\\in X,$ by following an\nargument similar to the one from the proof of Theorem 6, we can prove that\n\n$U_{k}^{-}(y)=\\{x\\in X:\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}\\ $ is\nopen for each\\textit{\\ }$y\\in X.$\n\nAssumption d4) implies that $P_{k}(x,y)\\subseteq P_{k+1}(x,y)$\\textit{\\ }for\neach\\textit{\\ }$(x,y)\\in X\\times X.$\n\nAll conditions of Theorem 5 are verified, so that there exists $x^{\\ast }\\in \n$\\ $X$\\ such that $\\ x^{\\ast }\\in \\overline{A}(x^{\\ast })$ and $A(x^{\\ast\n})\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $ Obviously, $x^{\\ast }$\n\\textit{\\ }is a solution for GSVQEP.\\medskip\n\\end{proof}\n\n\\section{Concluding remarks}\n\nThis paper developed a framework for discussing the existence of solutions\nfor a generalized strong vector quasi-equilibrium problem. The results have\nbeen obtained under assumptions which are different from the existing ones\nin the literature. An approximation technique of proof has been developed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Sec-1}\n\n\\subsection{Background}\n\\label{Sec-1.1}\n\nMicroorganisms are ubiquitous in nature and manage critical ecosystem services, ranging from global nutrient cycling to human health. 
Microbes do not exist in a vacuum in nature, but instead are members of diverse communities known as microbiomes, wherein microbes may interact with other microbes of different species. These microbial interactions may be favorable or antagonistic and are crucial for the successful establishment and maintenance of a microbial community, and frequently result in important pathogenic or beneficial effects on the host or environment \\citep{braga2016microbial}. A thorough understanding of how microbes interact with one another is critical to uncovering the underlying role microorganisms play in the host or environment.\nHowever, this is a highly challenging biological task as it is estimated that only 1\\% of bacteria are cultivatable. The inability to culture the majority of microbial species has motivated the use of culture-independent methods for microbiome studies in different environments \\citep{faust2012microbial, tang2019microbial}. \n\nFortunately, recent innovations in in situ DNA sequencing provide opportunities to infer how microbes interact with one another or their environments. Modern studies on microbial interactions frequently rely on DNA sequencing techniques through the bioinformatic analysis of taxonomically diagnostic genetic markers (e.g., 16S rRNA) sequenced directly from a sample. The counts of the taxonomically diagnostic genetic markers can be used to represent the abundance of microbial species, e.g., Operational Taxonomic Units (OTUs) or phylotypes (e.g., genera), in a sample. Here, the frequency with which a taxon's marker is observed in a sequence library represents its relative abundance in the community. When such abundance data are available from many communities, interactions among microbiota can be inferred through \\textcolor{blue}{statistical} correlation analysis \\citep{faust2012microbial}. 
\\textcolor{blue}{For example}, if the relative abundances of two microbial taxa are statistically correlated, then it is inferred that they interact on some level. This approach has been used to document interaction networks in the healthy human microbiome \\citep{faust2012microbial}, as well as free-living microbial communities \\citep{freilich2010large}, and has been useful for generating hypotheses of host-microbiome interaction \\citep{morgan2015associations}.\n\n\\subsection{Methodological Innovations}\n\\label{Sec-1.2}\n\nDespite the great potential of microbiome interaction networks as a tool to advance microbiome research, the power of this approach has been limited by the availability of effective statistical methods and computationally efficient estimation techniques. Microbial abundance data possess a few important features that pose tremendous challenges to standard statistical tools. First, the data are represented as compositional counts of the 16S rRNA sequences because the total count of sequences per sample is predetermined by how deeply the sequencing is conducted, a concept named sequencing depth. The counts only carry information about the relative abundances of the taxa instead of their absolute abundances. Second, the sequencing depth is always finite and often varies considerably across samples in a microbiome dataset. Thus, the observed relative abundance of a taxon in a sample is only an estimator of its true relative abundance with the variance depending on sample-specific sequencing depth, causing the ``heteroscedasticity'' issue \\citep{mcmurdie2014waste}. Third, the data are high-dimensional in nature. It is likely that the number of taxa is far more than the number of samples in any biological experiment.\n\n\nWhen such abundance data are available, one common strategy to resolve the interactions among microbial taxa is to use correlation-type analyses. 
For example, after the sample correlations are calculated between \\textcolor{blue}{the relative abundances of} each pair of microbial taxa, a threshold is then applied such that an interaction is deemed present if the sample correlation exceeds the threshold. More \\textcolor{blue}{recently developed methods} have started to \\textcolor{blue}{account for} the compositional feature and aim to construct sparse networks for the absolute abundances instead of relative abundances, \\textcolor{blue}{including} SparCC \\citep{friedman2012inferring}, CCLasso \\citep{fang2015cclasso}, and REBACCA \\citep{ban2015investigating}, \\textcolor{blue}{among others.}\n\n\\textcolor{blue}{The above-mentioned network methods based on marginal correlations} could lead to spurious correlations that are caused by confounding factors such as other \\textcolor{blue}{species} in the same community. \\textcolor{blue}{To eliminate the detection of spurious correlations}, interactions among taxa can be modeled through their conditional dependencies given the other taxa. The Gaussian graphical model \\textcolor{blue}{is a classical approach to modeling} the conditional dependency, \\textcolor{blue}{in which the conditional dependency is determined by the nonzero entries of the inverse covariance matrix of a multivariate normal distribution used to model the data.} Graphical lasso \\citep{yuan2007model, banerjee2008model, friedman2008sparse} and neighborhood selection \\citep{meinshausen2006high} are two commonly used methods to estimate sparse inverse covariance matrix under the \\textcolor{blue}{high-dimensional} Gaussian graphical model. 
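The distinction drawn above between marginal correlation and the conditional dependency encoded by the inverse covariance matrix can be illustrated with a small self-contained numeric sketch (the 3x3 precision matrix below is made up for illustration and is not from the paper): a zero entry in the precision matrix means two variables are conditionally independent given the rest, yet their marginal covariance can still be nonzero, which is exactly how a thresholded correlation network can report a spurious edge.

```python
import numpy as np

# Hypothetical precision (inverse covariance) matrix: theta[0, 2] == 0
# encodes "variable 0 and variable 2 are conditionally independent
# given variable 1" in a Gaussian graphical model.
theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

sigma = np.linalg.inv(theta)  # implied covariance matrix

# The marginal covariance between variables 0 and 2 is NOT zero, so a
# correlation-threshold network could draw a (spurious) edge here.
print(sigma[0, 2])

# Inverting the covariance recovers the zero pattern, i.e., no edge in
# the conditional-dependence network.
print(np.linalg.inv(sigma)[0, 2])  # ~0 up to floating-point error
```

This is only a finite-dimensional illustration of the zero-pattern argument; estimating such a sparse precision matrix from data is what graphical lasso and neighborhood selection are designed for.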
However, \\textcolor{blue}{when applied to microbiome abundance data, the multivariate normality assumption is violated by both the count and the compositional features of the data.}\n\nSeveral methods have been proposed to infer microbial conditional dependence networks based on Gaussian graphical models, such as SPIEC-EASI \\citep{kurtz2015sparse}, gCoda \\citep{fang2017gcoda}, CD-trace \\citep{yuan2019compositional}, and SPRING \\citep{yoon2019microbial}. In order to transform the discrete counts to continuous variables and to remove the compositionality constraint, all these methods apply the centered log-ratio transformation \\citep{aitchison1986statistical} to the observed counts as their first step. However, the centered log-ratio transformation suffers from an undefined inverse covariance matrix of the transformed data. To partially address this issue, these methods impose a sparsity assumption on the inverse covariance matrix and add an $L_1$-norm penalty to their objective functions. Additionally, these methods ignore the heteroscedasticity issue as they simply treat the observed relative abundances as the truth. Ignoring the heteroscedasticity issue could impact downstream analyses, including the construction of microbial interaction networks \\citep{mcmurdie2014waste}.\n\nIn this article, we provide a new statistical tool to help unleash the full potential of microbiome interaction networks as a research tool in the microbiome field. We adopt the logistic normal multinomial distribution to model the compositional count data \\citep{aitchison1986statistical, billheimer2001statistical, xia2013logistic}. Compared to previous methods, this model accounts for the heteroscedasticity issue as the sequencing depth is treated as the number of trials in the multinomial distribution.
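The transformation issue mentioned above is easy to verify numerically: centered log-ratio vectors sum to zero by construction, so their covariance matrix is singular and its inverse is undefined, whereas the additive log-ratio transform does not suffer from this rank deficiency. A small NumPy check (the Dirichlet compositions are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5), size=200)   # 200 compositions over 5 taxa

clr = np.log(p) - np.log(p).mean(axis=1, keepdims=True)   # centered log-ratio
alr = np.log(p[:, :4] / p[:, [4]])                        # additive log-ratio

# every clr vector sums to zero, so the clr covariance annihilates the
# all-ones vector: it is singular and has no inverse
cov_clr = np.cov(clr, rowvar=False)
assert np.allclose(cov_clr @ np.ones(5), 0.0)

# the alr covariance is full rank, so its inverse is well defined
cov_alr = np.cov(alr, rowvar=False)
assert np.linalg.eigvalsh(cov_alr).min() > 0
```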
Additionally, the additive log-ratio transformation applied to the multinomial probabilities results in a well-defined inverse covariance matrix in contrast to the centered log-ratio transformation. Based on this model, we develop an efficient algorithm that iterates between Newton-Raphson and graphical lasso for estimating a sparse inverse covariance matrix. We call this new approach ``compositional graphical lasso''. We establish the theoretical convergence of the algorithm and illustrate the advantage of compositional graphical lasso in comparison to current methods under a variety of simulation scenarios. We further apply the developed method to the data from the Zebrafish Parasite Infection Study \\citep{gaulke2019longitudinal} (see Section \\ref{Sec-1.3}) to investigate how microbial interactions associate with parasite infection.\n\n\n\\subsection{Zebrafish Parasite Infection Study}\n\\label{Sec-1.3}\n\nHelminth parasites represent a significant threat to the health of human and animal populations, and there is a growing need for tools to treat, diagnose, and prevent these infections. A growing body of evidence points to the gut microbiome as an agent that interacts with parasites to influence their success in the gut. To clarify how the gut microbiome varies in accordance with parasitic infection dynamics, the Zebrafish Parasite Infection Study was a recent effort \\citep{gaulke2019longitudinal} conducted at Oregon State University, which assessed the association between an intestinal helminth of zebrafish, \\textit{Pseudocapillaria tomentosa}, and the gut microbiome of 210 4-month-old 5D line zebrafish. Among these fish, 105 were exposed to \\textit{P.\\ tomentosa} and the remaining 105 were unexposed controls. At each of the seven time points after exposure, a randomly selected group of 30 fish (15 exposed and 15 unexposed) was euthanized and fecal samples were collected.
The parasite burden and tissue damage in \\textit{P.\\ tomentosa}-infected fish were also monitored over 12 weeks of infection.\n\nPrevious analyses \\citep{gaulke2019longitudinal} of the Zebrafish Parasite Infection Study data have revealed that parasite exposure, burden, and intestinal lesions were correlated with gut microbial diversity. They also identified individual taxa whose abundance was associated with parasite burden, suggesting that gut microbiota may influence \\textit{P.\\ tomentosa} success. Numerous associations between taxon abundance, burden, and gut pathologic changes were also observed, indicating that the magnitude of microbiome disruption during infection varies with infection severity. However, it remains unclear how parasite success may disrupt or be modulated by the microbial interactions in the gut. Understanding how the microbial interactions associate with parasitic infection can help identify potential drugs or diagnostic tests for parasitic infection.\n\n\\subsection{Benchmarking and Novel Biological Discoveries}\n\\label{Sec-1.4}\n\nTo evaluate the performance of our method and to benchmark it against previously available tools, we take advantage of a unique data resource provided by the \\textit{Tara} Oceans Project on ocean plankton. This data set is particularly suitable for method comparison because it enjoys an experimentally validated sub-network of the plankton interactome that has served as a gold standard for method benchmarking. Compared to other methods, compositional graphical lasso performs better in reconstructing the microbial interactions that are validated by the literature.
In addition, it performs the best in identifying the keystone taxa, namely those with a large number of interactions with other taxa documented in the literature, such as \\textit{Amoebophyra} \\citep{chambouvet2008control}, \\textit{Blastodinium} \\citep{skovgaard2012parasitic}, \\textit{Phaeocystis} \\citep{verity2007current}, and \\textit{Syndinium} \\citep{skovgaard2005phylogenetic}. All these genera are well-described keystone organisms in marine ecosystems. Finally, compositional graphical lasso affords an opportunity to resolve novel modulators of community composition. For example, as one of the few described genera within the syndinean dinoflagellates, an enigmatic lineage with abundant diversity in marine environmental clone libraries, \\textit{Euduboscquella} \\citep{bachvaroff2012molecular} ranked high in the degree distribution under compositional graphical lasso alone.\n\nTo investigate the role the gut microbiome plays in parasite infection, we apply compositional graphical lasso to the Zebrafish Parasite Infection Study data. {\\color{blue}Interestingly, compositional graphical lasso identifies changes in interaction degree between infected and uninfected individuals for three taxa, \\textit{Photobacterium}, \\textit{Gemmobacter}, and \\textit{Paucibacter}, which are inversely predicted by other methods. Further investigation of these method-specific taxon interaction changes reveals their biological plausibility, and provides insight into their relevance in the context of parasite-linked changes in the zebrafish gut microbiome. In particular, based on our observations, we speculate on the potential pathobiotic roles of \\textit{Photobacterium} and \\textit{Gemmobacter} in the zebrafish gut, and the potential probiotic role of \\textit{Paucibacter}.
Future studies should seek to experimentally validate the ecological roles of \\textit{Photobacterium}, \\textit{Gemmobacter}, and \\textit{Paucibacter} in the zebrafish gut, including their impacts on the rest of the microbial community and their roles in infection-induced tissue damage.}\n\n\n\n\n\\section{Compositional Graphical Lasso}\n\\label{Sec-2}\n\n\\subsection{Logistic Normal Multinomial Model}\n\\label{Sec-2.1}\n\nConsider a microbiome abundance dataset with $n$ independent samples, each of which is composed of the observed counts of $K + 1$ taxa, denoted by $\\mathbf{x}_i = (x_{i,1}, \\ldots, x_{i,K+1})'$ for the $i$-th sample, $i = 1,\\ldots,n$. Due to the compositional property of the data, the total count of all taxa for each sample $i$ is a fixed number, denoted by $M_i$. Naturally, a multinomial distribution is imposed on the observed counts:\n\\begin{equation} \n\\mathbf{x}_i | \\mathbf{p}_i \\sim \\text{Multinomial}(M_i; p_{i,1}, \\ldots, p_{i,K +1}), \\label{multinomial}\n\\end{equation}\nwhere $\\mathbf{p}_i = (p_{i,1}, \\ldots, p_{i,K+1})'$ are the multinomial probabilities for all taxa and $\\sum_{k=1}^{K+1} p_{i,k} = 1$.\n\nTo apply the additive log-ratio transformation \\citep{aitchison1986statistical} on the multinomial probabilities, we choose one taxon, without loss of generality the $(K+1)$-th taxon, as a reference to which all the other taxa are compared. The transformed multinomial probabilities are given by\n\\begin{equation}\nz_{i,k} = \\log (\\frac{p_{i, k}}{p_{i, K + 1}}),\\ i = 1,\\ldots,n, \\ k = 1,\\ldots,K. \\label{log.ratio.transformation}\n\\end{equation}\nLet $\\mathbf{z}_i = (z_{i,1}, \\ldots, z_{i,K})'$ for $i = 1,\\ldots,n$, and further assume that they follow an i.i.d.
multivariate normal distribution\n\\begin{equation}\n\\mathbf{z}_1, \\ldots, \\mathbf{z}_n \\stackrel{i.i.d.}{\\sim} N (\\boldsymbol\\mu, \\boldsymbol\\Sigma), \\label{logistic.normal}\n\\end{equation}\nwhere $\\boldsymbol\\mu$ is the mean and $\\boldsymbol\\Sigma$ is the covariance matrix. Let $\\boldsymbol\\Omega = \\boldsymbol\\Sigma^{-1}$ be the inverse covariance matrix or the precision matrix.\n\nThe above model given in (\\ref{multinomial})--(\\ref{logistic.normal}) is often referred to as the logistic normal multinomial model. In this model, a multinomial distribution is imposed on the compositional counts, which is the distribution of the observed data given the multinomial probabilities. In addition, to capture the variation of the multinomial probabilities across samples, we impose a logistic normal distribution on the multinomial probabilities as a prior distribution. We thereby obtain as our final model the logistic normal multinomial model, which is a hierarchical model with two levels.\n\nThe logistic normal multinomial model has a long history in modeling compositional count data, and it has also been applied to analyze microbiome abundance data. For example, \\citet{xia2013logistic} proposed a penalized regression under this model to identify a subset of covariates that are associated with the taxon composition. Our objective is different from \\citet{xia2013logistic} as we aim to reveal the microbial interaction network by finding a sparse estimator of the inverse covariance matrix $\\boldsymbol{\\Omega}$ in (\\ref{logistic.normal}). It is also noteworthy that \\cite{jiang2020microbial} has the same objective as ours. However, \\cite{jiang2020microbial} did not make full use of the logistic normal multinomial model as it focused on correcting the bias of a naive estimator of $\\boldsymbol{\\Sigma}$ that does not require the logistic normal part of the model.
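For concreteness, data can be generated from the model (\\ref{multinomial})--(\\ref{logistic.normal}) as in the following sketch (our illustration in NumPy; the function name and all dimension choices are ours):

```python
import numpy as np

def simulate_lnm(n, K, M, mu, Sigma, seed=0):
    """Draw counts from the logistic normal multinomial model:
    z_i ~ N(mu, Sigma), p_i = inverse additive log-ratio of z_i,
    x_i | p_i ~ Multinomial(M, p_i)."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(mu, Sigma, size=n)            # (n, K)
    # inverse additive log-ratio: the reference taxon K+1 has z fixed at 0
    expZ = np.hstack([np.exp(Z), np.ones((n, 1))])
    P = expZ / expZ.sum(axis=1, keepdims=True)                # (n, K+1)
    X = np.vstack([rng.multinomial(M, P[i]) for i in range(n)])
    return X, Z, P
```

Note that applying the additive log-ratio transformation to each row of `P` recovers the corresponding row of `Z` exactly, mirroring (\\ref{log.ratio.transformation}).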
By contrast, we aim to find an estimator of $\\boldsymbol{\\Omega}$ directly based on the logistic normal multinomial model.\n\n\n\\subsection{Objective Function}\n\\label{Sec-2.2}\n\n{\\color{blue}From the logistic normal multinomial model in (\\ref{multinomial})--(\\ref{logistic.normal}), we aim to derive an objective function of $\\boldsymbol{\\Omega}$ to estimate the microbial interaction network. To this end, we take a two-step procedure similar to SPIEC-EASI \\citep{kurtz2015sparse}. In the first step, we find estimated values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ based on the logistic normal multinomial model given $\\mu$ and $\\boldsymbol{\\Sigma}$; in the second step, we find an estimate of $\\boldsymbol{\\Omega}$ based on the estimated values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$.\n\nIn the first step, we consider the posterior distribution of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ given the data $\\mathbf{x}_1,\\ldots,\\mathbf{x}_n$ and find the maximum a posteriori (MAP) estimates of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$. For $i=1,\\ldots,n$, the logarithm of the posterior density function of $\\mathbf{z}_i$ given $\\mathbf{x}_i$ is\n\\begin{align*} \n& \\log[f_{\\boldsymbol{\\mu},\\boldsymbol{\\Omega}}(\\mathbf{z}_i | \\mathbf{x}_i)] \\propto \\log[f_{\\boldsymbol{\\mu},\\boldsymbol{\\Omega}}(\\mathbf{x}_i, \\mathbf{z}_i)] \\\\\n\\propto{}& \\sum_{k=1}^{K+1} x_{i,k} \\log p_{i,k} + \\frac12 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}) \\\\\n={} &\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1) + \\frac12 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}),\n\\end{align*}\nwhere $\\propto$ denotes that two quantities are equal up to a term not depending on $\\mathbf{z}_i$. 
In the above derivation, we ignored the marginal density function of $\\mathbf{x}_i$ that does not depend on $\\mathbf{z}_i$, which does not affect the estimation of $\\mathbf{z}_i$.}\n\nBy independence across the samples, the logarithm of the posterior density function of $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$ given the data $(\\mathbf{x}_1,\\ldots,\\mathbf{x}_n)$ can be written as (again, ignoring a term independent of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$):\n\\begin{equation} \n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right] + \\frac{n}2 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}). \\label{objective.function.1}\n\\end{equation}\nGiven the values of the multivariate normal parameters $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Omega}$, one can maximize (\\ref{objective.function.1}) with respect to $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$. This leads to the MAP estimator $(\\hat\\mathbf{z}_1,\\ldots,\\hat\\mathbf{z}_n)$.\n\n{\\color{blue}In the second step, we find a sparse inverse covariance estimator of $\\boldsymbol{\\Omega}$ based on the estimated values $(\\hat\\mathbf{z}_1,\\ldots,\\hat\\mathbf{z}_n)$. Here, we use the graphical lasso estimator, which minimizes the $L_1$ penalized negative log-likelihood function as follows:\n\\begin{equation}\n-\\frac{1}2 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac1{2n} \\sum_{i=1}^n (\\hat\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\hat\\mathbf{z}_i - \\boldsymbol{\\mu}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1.
\\label{objective.function.2}\n\\end{equation}\n\nIt turns out that the above two-step procedure is equivalent to minimizing an overall objective function with respect to both $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ and $(\\boldsymbol{\\mu},\\boldsymbol{\\Omega})$:\n\\begin{align}\n\\ell(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n,\\boldsymbol{\\mu},\\boldsymbol{\\Omega}) ={}& -\\frac1n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right] \\notag\\\\\n& -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1. \\label{objective.function}\n\\end{align}\nIn other words, $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$ are all treated as unknown parameters in the minimization of the objective function (\\ref{objective.function}).\n\nIt is noteworthy that similar ideas to the above two-step procedure have been used in existing microbial interaction network estimation methods. For example, SPIEC-EASI is also a two-step procedure. In its first step, the abundance counts are converted into their centered log-ratio transformed data; in its second step, either graphical lasso or neighborhood selection is applied to estimate a sparse inverse covariance matrix based on the centered log-ratio transformed data in the first step.\n\nAlthough both our method and SPIEC-EASI can be regarded as two-step procedures, we underline two important distinctions between them. First, our method is based on the additive log-ratio transformation and SPIEC-EASI uses the centered log-ratio transformation. As mentioned in the Introduction, the centered log-ratio transformation suffers from an undefined inverse covariance matrix of the transformed data while the additive log-ratio transformation does not. 
Second, in the first step, our method accounts for the sequencing depths [$M_i$'s in (\\ref{multinomial})], and hence the uncertainty of the observed relative abundances, and thus addresses the heteroscedasticity issue. However, the heteroscedasticity issue is ignored in SPIEC-EASI.} \n\n\n\\subsection{Computational Algorithm}\n\\label{Sec-2.3}\n\nThe objective function (\\ref{objective.function}) naturally includes three sets of parameters $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$, which motivates us to apply a block coordinate descent algorithm. A block coordinate descent algorithm minimizes the objective function iteratively for each set of parameters given current values of the other sets. Given the initial values $(\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)})$, $\\boldsymbol{\\mu}^{(0)}$, and $\\boldsymbol{\\Omega}^{(0)}$, the block coordinate descent algorithm repeats the following steps cyclically for iteration $t= 0,1,2,\\ldots$ until the algorithm converges.\n\\begin{enumerate}[nosep]\n\\item Given $\\boldsymbol{\\mu}^{(t)}$ and $\\boldsymbol{\\Omega}^{(t)}$, find $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ that minimizes (\\ref{objective.function}).\n\\item Given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\Omega}^{(t)}$, find $\\boldsymbol{\\mu}^{(t+1)}$ that minimizes (\\ref{objective.function}).\n\\item Given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\mu}^{(t+1)}$, find $\\boldsymbol{\\Omega}^{(t+1)}$ that minimizes (\\ref{objective.function}).\n\\end{enumerate}\n\nBelow, we will present the details of this algorithm in each iteration.
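The three steps can be sketched end to end as follows. This is our simplified illustration, not the paper's implementation: the $\\mathbf{z}_i$ update uses SciPy's BFGS in place of the Newton-Raphson scheme with Armijo line search described below, the $\\boldsymbol{\\Omega}$ update calls scikit-learn's graphical lasso solver, a small ridge term is added to the sample covariance for numerical stability, and a fixed number of outer iterations stands in for a convergence check:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.covariance import graphical_lasso

def compositional_glasso(X, M, lam, n_outer=5, ridge=1e-3):
    """Block coordinate descent sketch for the compositional graphical
    lasso objective. X: (n, K+1) counts, M: (n,) depths, lam: penalty."""
    n, K = X.shape[0], X.shape[1] - 1
    # initialization: additive log-ratios of counts, with 0.5 added to
    # avoid log(0) (the paper adds "a small constant"; 0.5 is our choice)
    Z = np.log((X[:, :K] + 0.5) / (X[:, [K]] + 0.5))
    mu = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False, bias=True)
    _, Omega = graphical_lasso(S + ridge * np.eye(K), alpha=lam)
    for _ in range(n_outer):
        # Step 1: update each z_i by minimizing its convex per-sample objective
        for i in range(n):
            x, Mi = X[i, :K], M[i]
            def obj(z):
                quad = 0.5 * (z - mu) @ Omega @ (z - mu)
                loglik = x @ z - Mi * np.log(np.exp(z).sum() + 1.0)
                return quad - loglik
            def grad(z):
                prob = np.exp(z) / (np.exp(z).sum() + 1.0)
                return Omega @ (z - mu) - (x - Mi * prob)
            Z[i] = minimize(obj, Z[i], jac=grad, method="BFGS").x
        # Step 2: closed-form mean update
        mu = Z.mean(axis=0)
        # Step 3: graphical lasso on the sample covariance of the z's
        S = np.cov(Z, rowvar=False, bias=True)
        _, Omega = graphical_lasso(S + ridge * np.eye(K), alpha=lam)
    return Z, mu, Omega
```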
For the initial values $(\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)})$, we use their maximum likelihood estimators from the multinomial distribution, i.e.,\n\\[ {z}_{i,k}^{(0)} = \\log (\\frac{x_{i, k}}{x_{i, K + 1}}),\\ i = 1,\\ldots,n, \\ k = 1,\\ldots,K.\\] \nIf $x_{i, k} = 0$ for some $i$ and $k$, we add a small constant to it to evaluate the log ratio. For the initial value $\\boldsymbol{\\mu}^{(0)}$, we have a closed form minimizer of $\\boldsymbol{\\mu}$ for (\\ref{objective.function}) given the values of $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, which is $\\boldsymbol{\\mu} = \\bar{\\mathbf{z}} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i$. Therefore, we set the initial value as $\\boldsymbol{\\mu}^{(0)} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i^{(0)}$. Finally, for the initial value $\\boldsymbol{\\Omega}^{(0)}$, we use the estimate of the graphical lasso algorithm taking the sample covariance matrix computed from $\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)}$ as input.\n\nIn step 1, given $\\boldsymbol{\\mu}^{(t)}$ and $\\boldsymbol{\\Omega}^{(t)}$, minimizing the objective function (\\ref{objective.function}) with respect to $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$ is equivalent to minimizing the following objective function with respect to each $\\mathbf{z}_i$ separately, for $i = 1,\\ldots,n$:\n\\begin{equation}\n\\ell_i^{(t)}(\\mathbf{z}_i) = \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu}^{(t)})' \\boldsymbol{\\Omega}^{(t)} (\\mathbf{z}_i - \\boldsymbol{\\mu}^{(t)}) - \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right]. 
\\label{objective.function.zi}\n\\end{equation}\nThe above objective function is a smooth and convex function in $\\mathbf{z}_i$ with the following positive definite Hessian matrix\n\\begin{equation*}\n\\boldsymbol{\\Omega}^{(t)} + \\frac{M_i}{\\left(\\sum_{k=1}^K e^{z_{i,k}} + 1\\right)^2} \\left\\{\\left(\\sum_{k=1}^K e^{z_{i,k}} + 1\\right) \\text{diag} (e^{\\mathbf{z}_i}) - (e^{\\mathbf{z}_i})(e^{\\mathbf{z}_i})'\\right\\},\n\\end{equation*}\nwhere $e^{\\mathbf{z}_i} = (e^{z_{i,1}}, \\ldots, e^{z_{i,K}})'$ and $\\text{diag}(e^{\\mathbf{z}_i})$ is the diagonal matrix with the diagonal elements $e^{z_{i,1}}, \\ldots, e^{z_{i,K}}$. Therefore, we apply the Newton-Raphson algorithm to find the minimizer numerically. In addition, we implement a line search procedure in each Newton-Raphson iteration following the Armijo rule \\citep{armijo1966minimization}. This procedure ensures sufficient decrease in the objective function at each iteration to prevent possible divergence of the algorithm.\n\nStep 2 is similar to the initialization step, in which $\\boldsymbol{\\mu}$ has a closed-form solution and is updated as $\\bar{\\mathbf{z}}^{(t+1)} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i^{(t+1)}$ from the current numerical values of $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ that are computed from the Newton-Raphson algorithm in step 1.\n\nIn step 3, given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\mu}^{(t+1)} = \\bar{\\mathbf{z}}^{(t+1)}$, the objective function for $\\boldsymbol{\\Omega}$ can be simplified as\n\\begin{align}\n\\ell^{(t)}(\\boldsymbol{\\Omega}) &= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i^{(t+1)} - \\boldsymbol{\\mu}^{(t+1)})' \\boldsymbol{\\Omega} (\\mathbf{z}_i^{(t+1)} - \\boldsymbol{\\mu}^{(t+1)}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1, \\notag \\\\\n&= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2} \\mathrm{tr} (\\mathbf{S}^{(t+1)} 
\\boldsymbol{\\Omega}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1, \\label{objective.function.omega} \n\\end{align}\nwhere $\\mathbf{S}^{(t+1)} = \\frac{1}{n} \\sum_{i=1}^n (\\mathbf{z}_i^{(t+1)} - \\bar{\\mathbf{z}}^{(t+1)}) (\\mathbf{z}_i^{(t+1)} - \\bar{\\mathbf{z}}^{(t+1)})'$. Minimizing the objective function (\\ref{objective.function.omega}) thus reduces to a graphical lasso problem \\citep{yuan2007model, banerjee2008model, friedman2008sparse}. It is well known that the graphical lasso objective function is a convex function in $\\boldsymbol{\\Omega}$ \\citep{banerjee2008model} and efficient algorithms have been developed for its optimization \\citep{friedman2008sparse}. In this paper, we implement this step using the graphical lasso algorithm included in the \\texttt{huge} \\citep{zhao2012huge} package in R.\n\nThe above block coordinate descent algorithm iterates between Newton-Raphson and graphical lasso and is designed specifically to optimize the objective function (\\ref{objective.function}) for compositional count data. Therefore, we name this algorithm the compositional graphical lasso algorithm, and the entire approach, including both the model and the algorithm for the analysis of microbiome abundance data, the compositional graphical lasso method.\n\n\\subsection{Theoretical Convergence}\n\\label{Sec-2.4}\n\nUnfortunately, the objective function (\\ref{objective.function}) is not necessarily a convex function jointly in $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$. However, \nwe have shown that it is convex in each of the three subsets of its parameters. The convergence property of such an optimization problem has been studied in the literature. For example, \\cite{tseng2001convergence} studied the convergence property of a block coordinate descent method applied to minimize a nonconvex function with certain separability and regularity properties.
We will establish the convergence property of the compositional graphical lasso algorithm following \\cite{tseng2001convergence}.\n\nRecall that our algorithm treats the three sets of parameters $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$ as three blocks and optimizes for each block iteratively. In addition, as in \\cite{tseng2001convergence}, the objective function (\\ref{objective.function}) can be regarded as the sum of two parts, the first of which is an inseparable but differentiable function given by\n\\begin{equation}\n\\ell_0(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n,\\boldsymbol{\\mu},\\boldsymbol{\\Omega}) = \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}), \\label{objective.function.inseparable}\n\\end{equation}\nand the second of which is a sum of separable functions given by $\\ell_1(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n) + \\ell_2(\\boldsymbol{\\Omega})$, where\n\\begin{align}\n\\ell_1(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n) &= -\\frac1n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right], \\label{objective.function.separable.1}\\\\\n\\ell_2(\\boldsymbol{\\Omega}) &= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\lambda \\|\\boldsymbol{\\Omega}\\|_1. \\label{objective.function.separable.2}\n\\end{align} \n\\cite{tseng2001convergence} established the convergence property of a block coordinate descent algorithm under regularity conditions on $\\ell_0$, $\\ell_1$, and $\\ell_2$.\n\nTo present the main convergence property of the compositional graphical lasso algorithm, we first review the definition of a cluster point in real analysis.
A cluster point of a set $\\mathcal{A} \\subset \\mathbb{R}^n$ is a real vector $\\mathbf{a} \\in \\mathbb{R}^n$ such that for every $\\delta > 0$, there exists a point $\\mathbf{x}$ in $\\mathcal{A}\\setminus \\{\\mathbf{a}\\}$ satisfying $\\|\\mathbf{x} - \\mathbf{a}\\|_2 < \\delta$. Obviously, any limit point of the set $\\mathcal{A}$ is a cluster point. Furthermore, define a cluster point of the compositional graphical lasso algorithm to be a cluster point of the set $\\{(\\mathbf{z}_1^{(t)},\\ldots,\\mathbf{z}_n^{(t)},\\boldsymbol{\\mu}^{(t)},\\boldsymbol{\\Omega}^{(t)}): t = 0,1,2,\\ldots\\}$, which are minimizers found at each iteration $t$. The following theorem presents a theoretical property of every cluster point of our algorithm.\n\n\\begin{thm} \\label{Thm-1}\nAny cluster point of the compositional graphical lasso algorithm is a stationary point of the objective function (\\ref{objective.function}).\n\\end{thm}\n\nThe proof of Theorem \\ref{Thm-1} can be found in the supplementary materials. This theorem guarantees that a cluster point, usually a limit point, of the compositional graphical lasso algorithm is at least a stationary point. It is noteworthy that there exists a global minimizer for the objective function in (\\ref{objective.function}) because we have proved the coerciveness of the objective function in the proof \\citep[Lemma 8.4]{calafiore2014optimization}. Therefore, to achieve global optimization in practice, one can run the algorithm multiple times starting with different initial values and choose the solution that yields the smallest objective function value.\n\nIn addition, the values of the objective function at each iteration, i.e., \\\\\n$\\{\\ell(\\mathbf{z}_1^{(t)},\\ldots,\\mathbf{z}_n^{(t)},\\boldsymbol{\\mu}^{(t)},\\boldsymbol{\\Omega}^{(t)}): t = 0,1,2,\\ldots\\}$, will always converge.
This is because the objective function is bounded below (as $\\ell_0 + \\ell_2$ is bounded below as shown in the proof and $\\ell_1 \\ge 0$ by definition)\nand our algorithm results in non-increasing objective function values between two iterations. Therefore, the values of the objective function will always converge to a limit. In practice, we have always observed the numerical convergence of both the minimizers and the values of the objective function after a certain number of iterations.\n\n\n\n\n\n\n\\subsection{Tuning Parameter Selection}\n\\label{Sec-2.5}\n\nThere is a large body of literature on the selection of a tuning parameter in the variable selection framework. Common approaches can be broadly categorized into three types: criterion-based methods, prediction-based methods, and stability-based methods. Criterion-based methods such as the Akaike information criterion (AIC) \\citep{akaike1974new} and the Bayesian information criterion (BIC) \\citep{schwarz1978estimating} balance the model complexity and the goodness of fit; prediction-based methods such as cross validation \\citep{stone1974cross, geisser1975predictive} and generalized cross validation \\citep{golub1979generalized} aim to minimize the expected prediction error of the selected model on independent datasets. Stability-based methods such as stability selection \\citep{meinshausen2010stability} and the Stability Approach to Regularization Selection (StARS) \\citep{liu2010stability} select a model with high stability under subsampling or bootstrapping of the original data.\n\nIn this work, we apply StARS to select the tuning parameter $\\lambda$ in our objective function (\\ref{objective.function}). In StARS, we draw $N$ subsamples without replacement from the original dataset with $n$ observations, each subsample of size $b$. For each value of the tuning parameter $\\lambda$, we obtain an estimate of $\\boldsymbol{\\Omega}$, i.e., a network for each subsample.
Then, we measure the total instability of these resultant networks across the $N$ subsamples. The total instability of these networks is defined by averaging the instabilities of each edge across the $N$ subsamples over all possible edges, where the instability of each edge is estimated as twice the sample variance of the Bernoulli indicator of whether this edge is selected or not in each of the $N$ subsamples.\n\nStarting from a large penalty, which corresponds to the empty network, the instability of networks increases as $\\lambda$ decreases. StARS stops and selects the tuning parameter to be the smallest value of $\\lambda$ for which the instability of the resultant networks is less than a threshold $\\beta > 0$. In principle, StARS selects a tuning parameter so that the resultant network is the densest among networks whose total instability is less than the threshold $\\beta$, without violating the sparsity assumption. The selected network is the ``densest on the sparse side,'' as it starts with the empty network and stops when the instability first crosses the threshold.\n\n\\section{Simulation Studies}\n\\label{Sec-3}\n\n\\subsection{Simulation Settings}\n\\label{Sec-3.1}\n\nTo evaluate the performance of compositional graphical lasso, we conduct \\textcolor{blue}{simulation studies} and compare it with other network estimation methods.\n\nGiven that our goal is to estimate the true network, i.e., $\\boldsymbol{\\Omega}$ in (\\ref{logistic.normal}), we consider the following three types of true precision matrices $\\boldsymbol{\\Omega} = (\\omega_{kl})_{1\\le k,l \\le K}$, which are different in the pattern of edge distributions as well as the degree of connectedness.\n\n\\begin{enumerate}[nosep]\n\\item Chain network: $\\omega_{kk} = 1.5$, $\\omega_{kl} = 0.5$ if $|k - l| = 1$, and $\\omega_{kl} = 0$ if $|k - l| > 1$.
A node is designed to be connected to its adjacent nodes, and the connectedness of nodes is balanced.\n\\item Random network: $\\omega_{kl} = 1$ with probability $3\/K$ for $k \\ne l$. A node is connected to all other nodes randomly with a fixed probability, set to be $3\/K$. Similar to the chain structure, the connectedness of nodes is balanced.\n\\item Hub network: All nodes are randomly split into $\\lceil K \/ 20 \\rceil$ disjoint groups, and a hub node $k$ is selected from each group. For any other node $l$ in the same group, $\\omega_{kl} = 1$. All the remaining entries of $\\boldsymbol{\\Omega}$ are zero. Here, nodes are partitioned into groups at random, but each non-hub node is then connected to the hub node of its group with certainty. The degree of connectedness among nodes is extremely unbalanced in this case: each hub node is connected to all the other nodes in its group (around 20 nodes), while each non-hub node is connected only to the hub node of its group, i.e., just one node.\n\\end{enumerate}\n\nIn addition to the true network, we also consider two other factors that are expected to influence the result. The first factor is the sequencing depth, $M_i$, in the multinomial distribution (\\ref{multinomial}). \\textcolor{blue}{We simulate $M_i$ from uniform distributions, the details of which will be discussed in the following subsections.} The second factor is the degree of variation in the logistic normal distribution (\\ref{logistic.normal}). For each of the three types of precision matrices, we consider an additional factor by multiplying $\\boldsymbol{\\Omega}$ by a positive constant $c$ so that the true precision matrix is $c\\boldsymbol{\\Omega}$. We choose $c = 1$ and $c = 1\/5$ separately and call the two settings low and high compositional variation, respectively.\n\nThe data are simulated following the logistic normal multinomial model in (\\ref{multinomial})--(\\ref{logistic.normal}).
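For illustration, the three types of precision matrices can be constructed as in the following sketch (NumPy). The off-diagonal patterns follow the definitions above, but since the text does not specify how positive definiteness is enforced for the random and hub patterns, the diagonal shift in `make_pd` is our assumption:

```python
import numpy as np

def make_pd(Om, delta=0.1):
    """Shift the diagonal so the smallest eigenvalue is at least delta
    (an assumed post-processing step; only the off-diagonal patterns
    are specified in the text)."""
    lam_min = np.linalg.eigvalsh(Om).min()
    return Om + max(delta - lam_min, 0.0) * np.eye(Om.shape[0])

def chain_precision(K):
    Om = 1.5 * np.eye(K)
    i = np.arange(K - 1)
    Om[i, i + 1] = Om[i + 1, i] = 0.5
    return Om  # tridiagonal and already positive definite

def random_precision(K, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.triu(rng.random((K, K)) < 3.0 / K, k=1)  # each pair w.p. 3/K
    Om = edges.astype(float)
    return make_pd(Om + Om.T)

def hub_precision(K, group_size=20, seed=0):
    rng = np.random.default_rng(seed)
    Om = np.zeros((K, K))
    groups = np.array_split(rng.permutation(K), int(np.ceil(K / group_size)))
    for g in groups:
        hub, rest = g[0], g[1:]   # first node of each random group is the hub
        Om[hub, rest] = Om[rest, hub] = 1.0
    return make_pd(Om)
```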
We first simulate $\\mathbf{z}_i \\sim N (\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ independently for $i = 1,\\ldots,n$; then, we perform the inverse log-ratio transformation (also known as the softmax transformation, the inverse transformation of (\\ref{log.ratio.transformation})) to obtain the multinomial probabilities $\\mathbf{p}_i$ for $i = 1,\\ldots,n$; last, we simulate multinomial counts $\\mathbf{x}_i$ from a multinomial distribution with sequencing depth $M_i$ and probabilities $\\mathbf{p}_i$. Throughout this simulation study, we fix $n = 100$ and $K = 200$.\n\nThe simulation results are based on 100 replicates. For each replicate, we apply compositional graphical lasso, neighborhood selection, and graphical lasso separately to obtain a sparse estimator of $\\boldsymbol{\\Omega}$. For neighborhood selection and graphical lasso, we first obtain an estimate of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ from the multinomial distribution via the additive log-ratio transformation\n\\begin{equation}\n\\tilde{z}_{i,k}= \\log \\left(\\frac{x_{i, k}}{x_{i, K + 1}}\\right),\\ i = 1,\\ldots,n, \\ k = 1,\\ldots,K, \\label{z.surrogates}\n\\end{equation}\nand then apply neighborhood selection and graphical lasso directly to the estimates $\\tilde\\mathbf{z}_1,\\ldots,\\tilde\\mathbf{z}_n$ by treating them as surrogates for their true counterparts, i.e., $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$. These methods are almost identical to SPIEC-EASI, although we replace the centered log-ratio transformation in SPIEC-EASI with the additive log-ratio transformation for a fair comparison.\n\nTo compare the performance of the three methods in terms of network recovery, all three methods are applied with a sequence of tuning parameter values, and their true positive rates (TPR) and false positive rates (FPR) in terms of edge selection are recorded for each value of $\\lambda$. 
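The three-step generative procedure and the additive log-ratio surrogates described above can be sketched as follows. This is a minimal illustration: the zero mean, identity covariance, depth range, and the pseudo-count used to guard the log against zero counts are placeholder assumptions for the sketch, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(n, K, mu, Sigma, depth_low, depth_high):
    # Step 1: z_i ~ N(mu, Sigma) for the K log-ratio coordinates
    z = rng.multivariate_normal(mu, Sigma, size=n)
    # Step 2: inverse log-ratio (softmax) with taxon K+1 as the reference
    z_full = np.hstack([z, np.zeros((n, 1))])
    p = np.exp(z_full - z_full.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Step 3: sequencing depths M_i and multinomial counts x_i
    M = rng.integers(depth_low, depth_high + 1, size=n)
    return np.vstack([rng.multinomial(M[i], p[i]) for i in range(n)])

def alr_surrogates(x, pseudo=0.5):
    # Additive log-ratio surrogates z~_{i,k} = log(x_{i,k} / x_{i,K+1});
    # the pseudo-count for zeros is an assumption of this sketch
    xp = x + pseudo
    return np.log(xp[:, :-1] / xp[:, -1:])

n, K = 100, 200
x = simulate_counts(n, K, np.zeros(K), np.eye(K), 20 * K, 40 * K)
z_tilde = alr_surrogates(x)
```

Neighborhood selection and graphical lasso would then be applied to `z_tilde` directly, while compositional graphical lasso works with the counts `x` themselves.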
An ROC curve is plotted from the average TPR and the average FPR over the 100 replicates for each value of the tuning parameter.\n\nIn addition, we apply StARS to select an optimal tuning parameter $\\lambda$. Following the recommendation in \\citet{liu2010stability}, we set the threshold for the total instability to be $\\beta = 0.05$, the size of each subsample $b = 7 \\sqrt{n}$, and the number of subsamples $N = 50$. Once the optimal tuning parameter is determined by StARS, we fit the whole dataset with the selected tuning parameter and evaluate the resultant network using three criteria: precision, recall, and F1 score, which are defined as\n\\begin{equation*}\n\\text{Precision} = \\frac{\\text{TP}}{\\text{TP} + \\text{FP}},\\quad \\text{Recall} = \\frac{\\text{TP}}{\\text{TP} + \\text{FN}}, \\quad \\text{F1} = \\frac{2 \\times \\text{Precision} \\times \\text{Recall}}{\\text{Precision} + \\text{Recall}},\n\\end{equation*}\nwhere TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively. \n\n\\subsection{\\textcolor{blue}{Simulation Results for Dense Data}}\n\\label{Sec-3.2}\n\n\\textcolor{blue}{In this subsection, we evaluate the performance of compositional graphical lasso on dense data, in which most of the simulated counts are nonzero. This corresponds to a simulation setting where the sequencing depths $M_i$ are large relative to the number of taxa $K + 1$. 
Still, to evaluate the effect of the sequencing depth on the performance of network estimation methods, we simulate $M_i$ from two uniform distributions, Uniform$(20K, 40K)$ and Uniform$(100K, 200K)$, and call the two settings low and high sequencing depth in this subsection, respectively.}\n\nFigure \\ref{ROC} presents the ROC curves for compositional graphical lasso (Comp-gLASSO), neighborhood selection (MB), and graphical lasso (gLASSO), from which we can see that compositional graphical lasso dominates its competitors in terms of edge selection in all settings. In particular, the advantage of compositional graphical lasso over neighborhood selection and graphical lasso is the most obvious when the compositional variation is high and the sequencing depth is low, no matter which type of network structure is considered. By contrast, the three methods perform very similarly for all types of network structures when the compositional variation is low and the sequencing depth is high. The difference between compositional graphical lasso and the rest is intermediate in the other two settings, when both compositional variation and sequencing depth are high or when both are low. Graphical lasso and neighborhood selection perform more similarly to each other, although graphical lasso seems to outperform neighborhood selection in some settings by a small margin.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{ROC_MB.pdf}\n\\caption{ROC curves for compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). Solid blue: Comp-gLASSO; dashed red: gLASSO; dotted black: MB. 
$\\mathbf{h\/l, h\/l}$: high\/low sequencing depth, high\/low compositional variation.}\n\\label{ROC}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{bar_graph_MB.pdf}\n\\caption{Recall, precision and F1 score for the network selected by StARS for compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). Red (left): Comp-gLASSO; green (middle): gLASSO; blue (right): MB. $\\mathbf{h\/l, h\/l}$: high\/low sequencing depth, high\/low compositional variation.}\n\\label{Recall.Precision.F1}\n\\end{figure}\n\nThe above observations agree with our expectation about how the two factors, compositional variation and sequencing depth, affect the comparison between the methods. Recall that neighborhood selection and graphical lasso replace the true values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ by their estimates\/surrogates $\\tilde\\mathbf{z}_1,\\ldots,\\tilde\\mathbf{z}_n$ as in (\\ref{z.surrogates}) without taking into account the estimation accuracy or uncertainty of these surrogates. First, a higher sequencing depth leads to more accurate surrogates $\\tilde\\mathbf{z}_1,\\ldots,\\tilde\\mathbf{z}_n$; therefore, it is not surprising to see that the three methods perform more similarly when the sequencing depth is high. Second, a higher compositional variation results in a higher variation in the $\\mathbf{z}_i$'s and further in the $\\mathbf{p}_i$'s, the multinomial probabilities. Since neighborhood selection and graphical lasso ignore the multinomial component in the model, it is also not surprising to see that their performance deteriorates under high compositional variation.\n\nFigure \\ref{Recall.Precision.F1} presents the recall, precision, and F1 score from 50 replicates of the estimated network resulting from the tuning parameter selected by StARS. 
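The StARS procedure used to select these networks (draw subsamples, record each edge's selection frequency, average the per-edge instability, and take the smallest penalty whose monotonized instability stays below the threshold) can be sketched as follows. This is a minimal sketch: `fit_edges` is a hypothetical interface standing in for any of the three estimators, and the toy estimator below is purely illustrative.

```python
import numpy as np

def stars_select(X, lambdas, fit_edges, b, N=50, beta=0.05, seed=0):
    """Sketch of StARS. `lambdas` must be in decreasing order;
    `fit_edges(X_sub, lam)` returns a boolean K x K adjacency matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    subs = [rng.choice(n, size=b, replace=False) for _ in range(N)]
    total_instab = []
    for lam in lambdas:
        freq = None
        for idx in subs:
            A = np.asarray(fit_edges(X[idx], lam), dtype=float)
            freq = A if freq is None else freq + A
        freq /= N                       # selection frequency theta per edge
        xi = 2 * freq * (1 - freq)      # twice the Bernoulli variance (up to N/(N-1))
        K = freq.shape[0]
        total_instab.append(np.triu(xi, 1).sum() / (K * (K - 1) / 2))
    # monotonize along decreasing penalties, then take the smallest admissible one
    total_instab = np.maximum.accumulate(total_instab)
    admissible = [lam for lam, d in zip(lambdas, total_instab) if d <= beta]
    return admissible[-1] if admissible else lambdas[0]

# Toy demonstration with a deterministic, hypothetical estimator:
# one stable edge appears once the penalty drops below 1.0.
def fit_edges_demo(X_sub, lam):
    A = np.zeros((4, 4), dtype=bool)
    if lam < 1.0:
        A[0, 1] = A[1, 0] = True
    return A

X_demo = np.zeros((30, 4))
lam_star = stars_select(X_demo, [2.0, 1.5, 0.5, 0.1], fit_edges_demo, b=10, N=5)
```

Because the toy estimator is deterministic, every penalty has zero instability and the smallest one is selected; with a real estimator the instability rises as the penalty shrinks.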
The first observation is that the precisions of both compositional graphical lasso and graphical lasso are much worse than their recalls, whereas the precisions and recalls are more comparable for neighborhood selection. Interestingly, StARS results in a much sparser network for neighborhood selection than for the other methods under the same stability threshold, suggesting that fewer edges selected by neighborhood selection are stable enough (within the instability threshold). When it comes to method comparison, compositional graphical lasso has much higher recall than neighborhood selection in most settings, but has comparable or lower precision in most of the settings with high sequencing depth. The network from compositional graphical lasso has a higher F1 score than the ones from neighborhood selection in most settings, except when sequencing depth is high and compositional variation is low for chain and hub networks. In addition, the network from compositional graphical lasso has precision, recall, and F1 score higher than or comparable to the ones from graphical lasso in all settings. Similar to the observations from the ROC curves, the advantage of compositional graphical lasso is more obvious with a low sequencing depth or a high compositional variation.\n\n\\subsection{\\textcolor{blue}{Simulation Results for Sparse Data}}\n\\label{Sec-3.3}\n\n\\textcolor{blue}{In this subsection, we evaluate the performance of compositional graphical lasso on sparse data, in which a range of proportions of the compositional counts are simulated as zero. We only present the simulation results for the chain network, given that the simulation results for the other network types are similar. To examine how our method performs for different sparsity levels, we simulate $M_i$ from four uniform distributions, Uniform$(8K, 16K)$, Uniform$(4K, 8K)$, Uniform$(2K, 4K)$, and Uniform$(K, 2K)$, respectively. 
Note that the sparsity level of the simulated data also depends on the compositional variation (see Section \\ref{Sec-3.1}). In a typical simulated dataset with high compositional variation, the sparsity level is around 40\\%, 50\\%, 60\\%, and 70\\% with the four uniform distributions for $M_i$; in a typical simulated dataset with low compositional variation, the sparsity level is around 5\\%, 10\\%, 25\\%, and 40\\% with the four uniform distributions for $M_i$. In both cases, we refer to these four sparsity levels as 1, 2, 3, and 4, with 1 the least sparse and 4 the most sparse. Due to the limited space, we place the figures resulting from this simulation in the supplementary materials.}\n\n\\textcolor{blue}{Figure \\ref{ROC_MB_Sparse} presents the ROC curves for compositional graphical lasso (Comp-gLASSO), neighborhood selection (MB), and graphical lasso (gLASSO). Similar to Figure \\ref{ROC}, we can see that compositional graphical lasso dominates its competitors in terms of edge selection across all sparsity levels. As the sequencing depths for sparse data are relatively small compared to the ones used for dense data, the advantage of compositional graphical lasso over neighborhood selection and graphical lasso is clearly and consistently observed in all simulation settings. In addition, graphical lasso and neighborhood selection tend to perform more similarly, although graphical lasso seems to outperform neighborhood selection by a small margin. This result is consistent with that observed for dense data in Section \\ref{Sec-3.2}.}\n\n\\textcolor{blue}{Figure \\ref{Recall.Precision.F1.Sparse} presents the recall, precision, and F1 score from 50 replicates of the estimated network resulting from the tuning parameter selected by StARS. 
Comparing Figure \\ref{Recall.Precision.F1.Sparse} with Figure \\ref{Recall.Precision.F1}, the first observation is that all methods perform worse for sparse data than for dense data; moreover, all methods perform worse as the sparsity level increases. Similar to Figure \\ref{Recall.Precision.F1}, Figure \\ref{Recall.Precision.F1.Sparse} implies that compositional graphical lasso outperforms graphical lasso and neighborhood selection, with a higher recall, precision, and F1 score consistently observed in most settings. The only exception is that, when data are extremely sparse with a low compositional variation, graphical lasso seems to outperform compositional graphical lasso by a very small margin, although neither method performs well in this case. For sparse data, StARS results in an almost empty network for neighborhood selection, which yields zero recall, precision, and F1 score for most settings. This agrees with our observation from the simulation results for dense data that neighborhood selection tends to produce a much sparser network than compositional graphical lasso and graphical lasso when its tuning parameter is selected by StARS.}\n\n\\section{Real Data}\n\\label{Sec-4}\n\n\\subsection{Benchmark Study: \\textit{Tara} Oceans Project}\n\\label{Sec-4.1}\nTo better understand the ocean, the largest ecosystem on Earth, the \\textit{Tara} Oceans Project aims to build the global ocean interactome that can be used to predict the dynamics and structure of ocean ecosystems. To achieve this, the \\textit{Tara} Oceans Consortium sampled both plankton and environmental data at 210 sites from the world's oceans using the 110-foot research schooner \\textit{Tara} during the \\textit{Tara} Oceans Expedition (2009-2013). The data collected were later processed using sequencing and imaging techniques. 
One unique advantage of the \\textit{Tara} Oceans Project is that it has generated a list of 91 genus-level marine planktonic interactions that have been validated in the literature \\citep{lima2015determinants}. Though this list only comprises interactions between microbes that represent a small fraction of the total marine eukaryotic diversity and is therefore far from complete, it can serve as partial ground truth for us to evaluate the interactions identified by different methods. Thus, our major goal is to use the \\textit{Tara} Oceans Project as a benchmark study in order to compare the performance of different methods in constructing the planktonic interactions.\n\nAs the partial ground truth is a list of genus-level interactions, we choose to analyze the genus-level abundance data, which are aggregated from the original OTU abundance data downloadable from the \\textit{Tara} Oceans Project data repository (\\url{http:\/\/taraoceans.sb-roscoff.fr\/EukDiv\/}). As a benchmark study, we only include the 81 genera that are involved in the list of gold-standard interactions in our analysis. In addition, we discard the samples with too few sequence reads (fewer than 100), leaving 324 samples in our analysis.\n\nSimilar to the simulation study, we apply compositional graphical lasso, graphical lasso, and neighborhood selection to estimate the interaction network among the 81 genera. We first pick the genus \\textit{Acrosphaera}, which has the largest average relative abundance among those genera not involved in the gold-standard list, and use this genus as the reference taxon for all three methods. Then, we apply each method with a sequence of 70 decreasing tuning parameter values, resulting in a sequence of interaction networks starting from an empty network. 
Finally, we apply StARS to find the optimal tuning parameter, in which the parameters $\\beta$, $b$, and $N$ are set the same as in the simulation study.\n\n\\begin{figure}[htbp]\n\\centering\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{TARA_path_half.pdf}\n \\caption{}\n \\label{tara_path}\n\\end{subfigure}%\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{TARA_degree_half.pdf}\n \\caption{}\n \\label{tara_degree}\n\\end{subfigure}\n\\caption{(a): Number of identified literature interactions versus number of edges of the estimated network from the \\textit{Tara} dataset. (b): The degree distribution of vertices from the networks selected by StARS. Solid red: compositional graphical lasso; dashed green: graphical lasso; dashed dotted blue: neighborhood selection.}\n\\label{network.P}\n\\end{figure}\n\nFirst, we compare the three methods in terms of how highly they rank the literature validated interactions amongst their top reported edges. Specifically, we start with a large value of the tuning parameter that results in an empty network, then decrease the tuning parameter so that the network becomes denser, and stop when the network has about 200 edges (out of a total of 3240 possible edges). At each value of the tuning parameter, we plot the number of literature validated interactions included in the network versus the total number of edges of the network, resulting in a step function for each method (Figure \\ref{tara_path}). From Figure \\ref{tara_path}, we observe that compositional graphical lasso identifies slightly more literature validated interactions than graphical lasso until the total number of edges reaches 175, after which graphical lasso identifies one more literature validated interaction. Neighborhood selection selects far fewer literature validated interactions than either compositional graphical lasso or graphical lasso. 
These observations imply that compositional graphical lasso slightly outperforms graphical lasso in reconstructing the literature validated interactions, while its advantage over neighborhood selection is much more obvious.\n\nSecond, we compare the overall topologies of the three interaction networks with tuning parameters selected by StARS. We find that compositional graphical lasso, graphical lasso, and neighborhood selection identify 749, 921, and 190 edges, respectively, with the same instability threshold used in StARS. This agrees with our observation in the simulation study that the network from neighborhood selection is much sparser than those from compositional graphical lasso and graphical lasso. The degree distributions from the networks estimated by the three methods are shown in Figure \\ref{tara_degree}.\nThe centers of the three degree distributions are ranked as neighborhood selection, compositional graphical lasso, and graphical lasso in ascending order, which is also reflected in the densities of the three interaction networks.\n\nThird, we investigate the high-degree nodes, i.e., hub genera, in the interaction networks. High-degree nodes are often thought to represent taxa that elicit an effect on a large number of other members of their community, such as keystone taxa and generalist parasites. It is observed that there are a few hub genera that have an excessive number of interactions with other genera in the literature, such as \\textit{Amoebophrya}, \\textit{Blastodinium}, and \\textit{Parvilucifera}, which are referred to as benchmark hub genera. Although the literature validated interactions are rather incomplete, it is still of interest to evaluate how well the three methods pick up those benchmark hub genera. 
Since the densities of the networks from the three methods are rather different, it is hard to compare the degrees of the hub genera from the three networks directly, but it is reasonable to compare the ranks of those degrees within each degree distribution. The method that generates lower ranks (degrees of genera ranked in descending order) for those hubs in its degree distribution is considered to pick up the benchmark hub genera better. \n\nA list of 7 benchmark hubs (each with degree $\\geq 5$) along with their degrees from the incomplete network constructed from the literature is shown in Table \\ref{tab:degree rank of lit hubs}, followed by the corresponding ranks of those genera in the degree distributions from each of the three methods and their corresponding degrees in parentheses. We can see that compositional graphical lasso generates lower ranks than graphical lasso for all 7 genera, while neighborhood selection generates lower ranks than compositional graphical lasso for 3 genera, and the opposite holds for the other 4 genera. 
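The rank-based comparison above can be sketched as follows; the adjacency matrix and node names are hypothetical, and ties in degree are given the same ("min") rank, since estimated networks often contain many tied degrees.

```python
import numpy as np

def degree_ranks(adj, names):
    """Degrees and descending-order ranks (rank 1 = highest degree);
    tied degrees share the same rank."""
    adj = np.asarray(adj, dtype=bool)
    deg = adj.sum(axis=0).astype(int)
    # min-rank for ties: 1 + number of nodes with strictly larger degree
    ranks = {names[k]: int(1 + (deg > deg[k]).sum()) for k in range(len(names))}
    degrees = {names[k]: int(deg[k]) for k in range(len(names))}
    return ranks, degrees

# Toy star network: one hub connected to three leaves (hypothetical genera)
A = np.zeros((4, 4), dtype=bool)
A[0, 1:] = A[1:, 0] = True
ranks, degrees = degree_ranks(A, ["hub", "g1", "g2", "g3"])
```

Applying `degree_ranks` to each method's estimated network and comparing the ranks of the benchmark hubs mirrors the comparison reported in the table below.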
Overall, compositional graphical lasso performs the best in picking up the benchmark hub genera among the three methods.\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n \\hline\n & Literature & Comp-gLASSO & gLASSO & MB \\\\ \n \\hline\n \\textit{Amoebophrya} & 1 (21) & 31 (19) & 47 (21) & 2 (9) \\\\ \n \\textit{Blastodinium} & 2 (12) & 13 (23) & 26 (24) & 30 (5) \\\\ \n \\textit{Parvilucifera} & 2 (12) & 46 (17) & 67 (19) & 58 (3) \\\\ \n \\textit{Syndinium} & 4 (7) & 14 (23) & 27 (24) & 19 (6) \\\\ \n \\textit{Vampyrophrya} & 4 (7) & 34 (19) & 60 (20) & 6 (8) \\\\ \n \\textit{Phaeocystis} & 6 (6) & 1 (31) & 3 (29) & 17 (6) \\\\ \n \\textit{Pirsonia} & 7 (5) & 64 (15) & 68 (19) & 33 (5) \\\\\n \\hline\n\\end{tabular}\n\\caption{\\label{tab:degree rank of lit hubs} For the hub genera, their ranks in each degree distribution (in descending order) from the literature, compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). The numbers in parentheses are the corresponding degrees of the genera.}\n\\end{table}\n\nWe find that compositional graphical lasso predicts a high degree for genera that are known to act as keystone species or parasitize other taxa, many of which also have a high degree in the literature validated network. For example, \\textit{Phaeocystis} has a high degree in both the literature validated and compositional graphical lasso networks. This genus is a well-described keystone organism in some marine ecosystems \\citep{verity2007current}, where it causes large phytoplankton blooms and plays an important role in the global cycling of carbon and sulfur \\citep{verity1996organism}. Similarly, these two networks predict a high degree for taxa that are known parasites, such as \\textit{Blastodinium} \\citep{skovgaard2012parasitic}, \\textit{Amoebophrya} \\citep{chambouvet2008control}, and \\textit{Syndinium} \\citep{skovgaard2005phylogenetic}. 
In some instances, compositional graphical lasso uniquely reveals high-degree genera even when compared to the literature validated network, such as in the case of the parasite \\textit{Euduboscquella} \\citep{bachvaroff2012molecular}, which is ranked 5th by compositional graphical lasso but only has 1 interaction in the literature. Given that the literature validated network only captures a subset of interactions that exist in nature (i.e., those interactions that have been explicitly tested), we posit that compositional graphical lasso affords an opportunity to resolve novel modulators of community composition, such as keystone taxa and generalist parasites, that follow-up experiments can validate.\n\n\\subsection{Zebrafish Parasite Infection Study}\n\\label{Sec-4.2}\nThe Zebrafish Parasite Infection Study, recently conducted at Oregon State University, used a zebrafish helminth intestinal infection model to resolve how the gut microbiome and parasite burden covary during infection \\citep{gaulke2019longitudinal}. This study quantified the infection burden of an intestinal helminth of zebrafish, \\textit{Pseudocapillaria tomentosa}, and the gut microbiome in 210 4-month-old zebrafish. Half of these fish were exposed to the parasite, \\textit{P.\\ tomentosa}, and the other half were unexposed. Given that not all exposed fish would ultimately be infected, parasite burden was measured more accurately as the total number of worms in their fecal samples. In addition, the same fecal samples were used to measure the abundance of the gut microbiome species via 16S rRNA sequencing, which resulted in gut microbiome data for 207 fish. Among these fish, 81 were infected after being exposed to \\textit{P.\\ tomentosa} and the other 126 fish were not infected, as indicated by the total number of worms. 
Our major goal is to evaluate how the microbial interaction network associates with successful parasite infection in the gut.\n\nSimilar to the \\textit{Tara} Oceans Project, we choose to analyze the genus-level abundance data. We discard those genera that have a nonzero abundance in less than 5\\% of the samples, leaving 42 genera in our analysis. In addition, we follow the same strategy to define a reference genus as in \\citet{jiang2020microbial}, i.e., combining those OTUs that do not have a genus-level taxonomic classification into a pseudo-genus and using it as the reference genus. The analysis is conducted in the same way as in the \\textit{Tara} Oceans Project, except that the methods (compositional graphical lasso, graphical lasso, and neighborhood selection) are applied separately to the uninfected and the infected fish, resulting in three interaction networks for each group of fish. Analyzing the uninfected and infected fish separately allows us to compare the microbial interaction networks between the two groups.\n\nFirst, we compare the overall topologies of the interaction networks with tuning parameters selected by StARS. We find that compositional graphical lasso, graphical lasso, and neighborhood selection identify 262, 312, and 78 edges, respectively, for uninfected fish, and 241, 259, and 60 edges, respectively, for infected fish. Comparing the two groups of fish, the interaction networks for the uninfected fish are slightly denser than those for the infected fish. The comparison among the three methods agrees with our observation in the simulation study and in the \\textit{Tara} Oceans Project that the network from neighborhood selection tends to be much sparser than those from compositional graphical lasso and graphical lasso. The degree distributions based on the three methods are shown in Figure \\ref{fish_degree}, which again suggests high similarity between the two groups of fish. 
Similar to the \\textit{Tara} Oceans Project, neighborhood selection results in the lowest median degree, followed by compositional graphical lasso and graphical lasso, regardless of whether the fish are infected or not.\n\n\\begin{figure}[htbp]\n\\centering\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{not_infected_degree.pdf}\n \\caption{}\n \\label{not_infected_degree}\n\\end{subfigure}%\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{infected_degree.pdf}\n \\caption{}\n \\label{infected_degree}\n\\end{subfigure}\n\\caption{(a): The degree distribution of vertices from the networks selected by StARS for uninfected fish. (b): The degree distribution of vertices from the networks selected by StARS for infected fish. Solid red: compositional graphical lasso; dashed green: graphical lasso; dashed dotted blue: neighborhood selection.}\n\\label{fish_degree}\n\\end{figure}\n\nSecond, we further investigate the degree of each node in the interaction networks. This analysis helps identify those high-degree nodes, i.e., hub genera, which are indicative of keystone taxa in the microbial community. {\\color{blue}Due to the limited space, we refer to Table \\ref{fish_hubs} in the supplementary materials, which presents all 42 genera with their degrees in each network in descending order.} Although the networks are similar in density between the two groups of fish, we have observed a substantial difference between the degrees of individual nodes. To see this, we sort the nodes in each network based on their degrees and compare the top ten nodes between networks. Taking compositional graphical lasso as an example, only three out of the top ten nodes overlap between uninfected and infected fish: \\textit{Paucibacter}, \\textit{Photobacterium}, and \\textit{Fusobacterium}. 
This suggests that the network structures of the uninfected and infected fish are more distinct than their overall topologies might suggest. The hub genera identified by different methods are also quite different from each other. For the group of uninfected fish, only three out of the top ten nodes overlap between the networks from the three methods: \\textit{Aeromonas}, \\textit{Photobacterium}, and \\textit{Rheinheimera}; similarly, for the group of infected fish, only three out of the top ten nodes overlap between the networks from the three methods: \\textit{Fusobacterium}, \\textit{Paucibacter}, and \\textit{Yersinia}.\n\n{\\color{blue}Third, we compare the networks generated from uninfected and infected fish to clarify the relationship between infection and the interactions that exist in the gut microbiome, motivated by the hypothesis that intestinal dysbiosis is defined not only by changes in community composition, but also by how members of the community ecologically interact. In order to understand this relationship, we identify microbes whose interaction set changes between infected and uninfected hosts, and then compare these results across methods. One of the bacterial taxa that compositional graphical lasso identifies as increasing in interaction degree among infected individuals, while other methods estimate a decrease in detected interaction degree, is the genus \\textit{Gemmobacter}. Our earlier work \\citep{gaulke2019longitudinal} showed that the abundance of this taxon positively links to parasite exposure, indicating that it is more ecologically successful in infected versus uninfected intestines. 
The analysis from compositional graphical lasso indicates that this increase in \\textit{Gemmobacter} relative abundance in infected individuals is coincident with an increase in the ecological interactions between \\textit{Gemmobacter} and other members of the gut microbiome, as it interacts with a greater number of other taxa in infected individuals. While \\textit{Gemmobacter} is not a particularly well studied genus, prior work links it to dysbiotic diseases in a variety of hosts \\citep{bates2022microbiome, ni2020gut}. Our observation of an increase in its interaction with other taxa in infected individuals strengthens the hypothesis that \\textit{Gemmobacter} may act as an agent of dysbiosis, as its increase in relative abundance is not only linked to a disease context but also accompanied by a more integrated role in the microbial community (i.e., it potentially impacts a larger number of taxa in the gut).\n\nThe results from compositional graphical lasso also uniquely suggest that the \\textit{Paucibacter} genus in uninfected individuals displays a higher number of identified interactions relative to infected individuals. The other methods we evaluated find that \\textit{Paucibacter}'s interactions are relatively consistent between infected and uninfected individuals. The results from compositional graphical lasso are notable because the genus \\textit{Paucibacter} includes clades that have been identified as being conserved in the healthy zebrafish gut microbiome \\citep{sharpton2021phylogenetic}, presumably because these taxa are critical to the commensal microbial community. 
Based on our observations, namely that disruption to the broader microbiome by parasite infection appears to disrupt the set of interactions between \\textit{Paucibacter} and other taxa in the community, we hypothesize that \\textit{Paucibacter} is a keystone member of the healthy zebrafish gut microbiome and that parasite exposure induced dysbiosis occurs in part by disrupting \\textit{Paucibacter} in the gut. Collectively, our observations reveal patterns of interaction between infected and uninfected individuals that clarify the potential ecological role of \\textit{Gemmobacter} and \\textit{Paucibacter}, which future research should seek to empirically test.}\n\n{\\color{blue}In addition to these observations about \\textit{Gemmobacter} and \\textit{Paucibacter}, compositional graphical lasso also finds insightful patterns about \\textit{Photobacterium}, a genus that contains pathogens of marine fish \\citep{romalde2002photobacterium, osorio2018photobacterium}, though its role as a pathogen in zebrafish is less clear.} However, as prior work has cultured \\textit{Photobacterium} from the zebrafish gut \\citep{cantas2012culturable} and other studies have demonstrated that \\textit{Photobacterium} elicits toxicity to zebrafish cells in culture \\citep{osorio2018photobacterium}, these observations collectively point to \\textit{Photobacterium} as a potential pathogen or pathobiont of zebrafish. Consistent with this speculation, prior work found that \\textit{Photobacterium} statistically explained variation in \\textit{P. tomentosa} worm burden as well as worm infection induced intestinal inflammation \\citep{gaulke2019longitudinal}. 
We sought to determine if our interaction network analysis could clarify the role of \\textit{Photobacterium} in the zebrafish gut microbiome, especially as it relates to parasite infection.\n\nInterestingly, \\textit{Photobacterium} is identified as an interaction hub (i.e., a relatively high degree node) in all networks assembled from all methods we evaluated. However, compositional graphical lasso uniquely reveals that the number of interactions linked to \\textit{Photobacterium} increases in infected fish (19 edges) as compared to uninfected fish (16 edges), whereas the other approaches observe the opposite pattern (graphical lasso: 11 versus 17 edges; neighborhood selection: 3 versus 5 edges). These differences in the \\textit{Photobacterium} subgraph across approaches also includes variation in the specific taxa that \\textit{Photobacterium} is inferred to interact with. {\\color{blue}To better understand changes in the set of \\textit{Photobacterium} interactions, we extracted the first-degree sub-graph connected to \\textit{Photobacterium} that each of the three methods produced for both the infected and uninfected host interaction networks. Compositional graphical lasso alone estimates eight interactions in the \\textit{Photobacterium}-specific infected host network, and six of these eight genera have previously been linked to parasite exposure, disease burden, or histopathology score.} Two of these parasite etiology-linked taxa are \\textit{Aeromonas} and \\textit{Gemmobacter}, genera which are notable because they contain microbes that elicit pathogenic or pathobiotic phenotypes. For example, the \\textit{Aeromonas} genus includes well-characterized and abundant opportunistic pathogens, such as \\textit{A. 
hydrophila} \\citep{saraceni2016establishment}, and the \\textit{Gemmobacter} genus, as noted above, has been shown to opportunistically increase in abundance \\citep{huang2020exposure} in dysbiotic fish gut communities that display impaired overall gut function. {\\color{blue}Given that \\textit{Photobacterium} proliferates in the infected host gut, these observations suggest that \\textit{Photobacterium} may work alongside and even promote the growth of other pathobiotic taxa to disrupt the composition of the gut microbiome and induce dysbiosis upon infection.}\n\nBased on these collective observations, we hypothesize that \\textit{Photobacterium} is an intestinal pathobiont of the zebrafish gut and contributes to dysbiosis in infected fish. \\textit{Photobacterium} is a relatively prevalent genus in the zebrafish intestines, even in uninfected fish. However, \\textit{P. tomentosa} infection is linked to an increase in the relative abundance of \\textit{Photobacterium}, which also positively associates with intestinal hyperplasia \\citep{gaulke2019longitudinal}, possibly due to the cytotoxic effect of \\textit{Photobacterium}. Given that \\textit{Photobacterium} is also an interaction hub whose influence on the community increases in infected fish, at least according to compositional graphical lasso, infection-induced changes in \\textit{Photobacterium} relative abundance may drive additional changes in the success of other taxa in the microbiome, including opportunistic pathogens and pathobionts like members of \\textit{Aeromonas} and \\textit{Gemmobacter}. Future studies should seek to experimentally validate our novel hypothesis about the pathobiotic role of \\textit{Photobacterium} in the zebrafish gut, including its impact on the rest of the microbial community and its role in infection-induced tissue damage. 
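The first-degree (ego) sub-graph extraction and hub-degree comparison described above can be sketched with networkx. The taxa, edges, and weights below are invented for illustration only; they are not the study's estimated networks.

```python
import networkx as nx

def hub_subgraph(edges, hub):
    """Build a weighted graph and return the first-degree (ego) sub-graph of hub."""
    g = nx.Graph()
    g.add_weighted_edges_from(edges)
    return nx.ego_graph(g, hub, radius=1)

# Hypothetical partial-correlation edge lists: (taxon, taxon, weight).
uninfected = [("Photobacterium", "Aeromonas", 0.21),
              ("Photobacterium", "Vibrio", -0.10),
              ("Aeromonas", "Vibrio", 0.05)]
infected = uninfected + [("Photobacterium", "Gemmobacter", 0.17)]

sub_u = hub_subgraph(uninfected, "Photobacterium")
sub_i = hub_subgraph(infected, "Photobacterium")
# Compare the hub's degree across conditions, as done for the inferred networks.
print(sub_u.degree("Photobacterium"), sub_i.degree("Photobacterium"))
```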
If our hypothesis is accurate, \\textit{Photobacterium} may serve as an important model taxon for discerning how the gut microbiome and helminths interact to impact infection outcomes.\n\n{\\color{blue}Collectively, these results provide evidence for our hypothesis that in the case of intestinal helminth infection, dysbiosis may be defined not only by a change in the composition of the gut microbiome, but also by a restructuring of how key members of the microbial community interact with the rest of the microbiota. Notably, these patterns were only observed by compositional graphical lasso; the other methods did not detect such infection-associated inversions for \\textit{Gemmobacter}, \\textit{Paucibacter}, or \\textit{Photobacterium}.\n\nFinally, we visualize the six genus interaction networks, three for infected fish and three for uninfected fish (Figure \\ref{fish_StARS}). For better visualization, we keep only the top $100$ edges for the networks resulting from compositional graphical lasso and graphical lasso, as they are much denser than the ones from neighborhood selection. Their edges are ranked in exactly the same way as in the \\textit{Tara} Oceans Study, first by selection probability and then by edge weight (see Figure \\ref{tara_StARS} in the supplementary materials for details). For all networks, darker blue indicates a larger absolute value of the partial correlation.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.8]{Fish_StARS_genus_level.pdf}\n\\caption{Inferred networks from each method with edges filtered by selection probability and ranked by edge weight separately for the groups of uninfected and infected fish. In each network, darker blue implies stronger (larger in absolute value) edge weight.}\n\\label{fish_StARS}\n\\end{figure}}\n\n\\section{Discussion}\n\\label{Sec-5}\n\nA growing body of work points to the gut microbiome as an agent to treat, diagnose, and prevent human diseases. 
Realizing these applications requires foundational knowledge about the pathobiotic {\\color{blue}or probiotic} role that the gut microbiome plays in the development and progression of disease. While most studies have focused on investigating how the abundance of gut microbes covaries with a disease, such as infection \\citep{gaulke2019longitudinal}, how microbial interactions associate with disease is largely unknown. Understanding how microbial interactions associate with a disease is critical for identifying microbes as pathobionts {\\color{blue}or probiotics}, which are candidates for potential drugs or diagnostic tests.\n\nThis work focuses on gaining a better understanding of the underlying role that microorganisms play in their communities by constructing microbial interaction networks. We propose a novel method called compositional graphical lasso that simultaneously accounts for the following key features of microbiome abundance data: (a) the data are compositional counts and only carry information about relative abundances of the taxa; (b) the observed relative abundance is subject to heteroscedasticity, as its variance depends on the varying sequencing depth across samples; (c) the data are high-dimensional, as the number of taxa is often larger than the number of samples. We have demonstrated the advantages of our approach over previously proposed methods in simulations and on a benchmark dataset.\n\n\nWe apply compositional graphical lasso to the data from the Zebrafish Parasite Infection Study, which used a parasite infection model to identify gut microbiota that positively and negatively associate with infection burden \\citep{gaulke2019longitudinal}. {\\color{blue}Our approach identified method-specific changes in interaction degree between infected and uninfected individuals for three taxa, \\textit{Photobacterium}, \\textit{Gemmobacter}, and \\textit{Paucibacter}. 
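As a rough illustration of the ingredients behind this family of methods: the following sketch applies an ordinary graphical lasso (via scikit-learn) to additive log-ratio transformed counts with a pseudocount. This is emphatically not compositional graphical lasso itself, which models the counts directly; the toy data and tuning value are arbitrary.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Toy compositional count table: 60 samples x 8 taxa; last taxon = reference.
counts = rng.multinomial(n=5000, pvals=np.full(8, 1 / 8), size=60)

# Additive log-ratio (alr) transform w.r.t. the reference taxon; the 0.5
# pseudocount sidesteps zero counts, which the real method handles by modeling.
alr = np.log((counts[:, :-1] + 0.5) / (counts[:, [-1]] + 0.5))
alr = (alr - alr.mean(axis=0)) / alr.std(axis=0)

# Sparse inverse-covariance estimate; nonzero off-diagonal entries are edges.
model = GraphicalLasso(alpha=0.05).fit(alr)
edges = np.count_nonzero(np.triu(model.precision_, k=1))
print(edges)
```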
Further investigation of these method-specific taxa interaction changes reveals their biological plausibility, and provides insight into their relevance in the context of parasite-linked changes in the zebrafish gut microbiome. In particular, we hypothesize that \\textit{Photobacterium} and \\textit{Gemmobacter} are pathobionts of the zebrafish gut during parasitic infection and that \\textit{Paucibacter} is a probiotic. Future studies should seek to experimentally validate their ecological roles in the zebrafish gut, including their impacts on the rest of the microbial community and their roles in infection-induced tissue damage.}\n\n\nIt is noteworthy that compositional graphical lasso requires one to choose a reference taxon. As a general rule of thumb, we recommend choosing a ``common'' taxon that has a high average relative abundance and relatively few zero counts across samples, as its true relative abundance serves as the denominator in the additive log-ratio transformation. In practice, we also suggest that a user choose a taxon that is not of direct interest to investigate, as the reference taxon will not be represented in the resultant network. {\\color{blue}It is also of interest to study how much the choice of the reference taxon may affect the estimated network, and whether some robustness can be guaranteed across different choices of the reference. Because robustness to the choice of the reference is important not just for compositional graphical lasso, but for many other network estimation methods as well, we have chosen to devote a major effort to this issue in a separate ongoing project. In this ongoing work, we have theoretically established the reference-invariance property of the inverse covariance matrix for a class of additive log-ratio transformation-based models, including the logistic normal multinomial model as a special case. 
However, as it is beyond the scope of this paper, this invariance property will be detailed in the future.}\n\n\\bibliographystyle{asa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nBlind decoding, also known as blind detection, requires the receiver of a set of bits to identify if said bits compose a codeword of a particular channel code. In 3GPP LTE\/LTE-Advanced standards blind detection is used by the user equipment (UE) to receive control information related to the downlink shared channel. The UE attempts the decoding of a set of candidates, to identify if one of the candidates holds its control information. \nBlind detection will be required in the $5^{\\text{th}}$ generation wireless communication standard (5G) as well: ongoing discussions are considering a substantial reduction of the time frame allocated to blind detection, from $16\\mu$s to $4\\mu$s. Blind detection must be performed very frequently, and given the high number of decoding attempts that must be performed in a limited time \\cite{3GPP_R8}, it can lead to large implementation costs and high energy consumption. Blind detection solutions for codes adopted in previous generation standards can be found in \\cite{Moosavi_GLOBECOM11,Xia_TSP14,Zhou_ENT13}.\n\nPolar codes are a class of capacity-achieving error correcting codes, introduced by Ar{\\i}kan in \\cite{arikan}. They are characterized by simple encoding and decoding algorithms, and have been selected for use in 5G \\cite{3gpp_polar}. In \\cite{arikan}, the successive-cancellation (SC) decoding algorithm has been proposed as well. It is optimal for infinite code lengths, but its error-correction performance degrades quickly at moderate and short code lengths. In its original formulation, it also suffers from long decoding latency. 
SC list (SCL) decoding has been proposed in \\cite{tal_list} to improve the error-correction performance of SC, at the cost of increased decoding latency. In \\cite{sarkis, hashemi_SSCL, xiong_symbol, hashemi_FSSCL}, a series of techniques has been proposed, aimed at improving the decoding speed of both SC and SCL without sacrificing error-correction performance.\n\nBlind detection of polar codes has recently been addressed in \\cite{Condo_COMML17}, where a blind detection scheme fitting within 3GPP LTE-A and future 5G requirements has been proposed. It is based on a two-step scheme: a first SC decoding phase helps select a set of candidates, subsequently decoded with SCL. An early stopping criterion for SCL is also proposed to reduce average latency. Another recent work on polar code blind detection \\cite{Giard_BD} detaches itself from 4G-5G standard requirements, and proposes a metric on which the outcome of the blind detection can be based.\n\nIn this work, we extend the blind detection scheme presented in \\cite{Condo_COMML17} and its early stopping criterion by considering SCL also in the first decoding phase, and provide improved detection accuracy results. We then propose an architecture to implement the blind detection scheme: it relies on an SCL decoder with tunable list size, which can be used for both the first and second decoding stages. The architecture is synthesized and implementation results are reported for various system parameters.\n\nThe rest of the paper is organized as follows. Section~\\ref{sec:prel} introduces background information on polar codes and blind detection. Section~\\ref{sec:blind} details the proposed blind detection scheme, and provides simulation results to evaluate its performance. The architecture of the blind detection system is detailed in Section~\\ref{sec:HW}, and implementation results are given in Section~\\ref{sec:impl}. 
Finally, Section~\\ref{sec:conc} draws the conclusion.\n\n\n\\section{Preliminaries} \\label{sec:prel}\n\n\\subsection{Polar Codes} \\label{sec:prel:PC}\n\n\nA polar code $\\mathcal{P}(N,K)$ is a linear block code of length $N=2^n$ and rate $K\/N$, and it can be expressed as the concatenation of two polar codes of length $N\/2$. This is because the encoding process is represented by a modulo-$2$ matrix multiplication as\n\\begin{equation}\n\\mathbf{x} = \\mathbf{u} \\mathbf{G}^{\\otimes n}\\text{,}\n\\end{equation}\nwhere $\\mathbf{u} = \\{u_0,u_1,\\ldots,u_{N-1}\\}$ is the input vector, $\\mathbf{x} = \\{x_0,x_1,\\ldots,x_{N-1}\\}$ is the codeword, and the generator matrix $\\mathbf{G}^{\\otimes n}$ is the $n$-th Kronecker product of the polarizing matrix $\\mathbf{G}=\\bigl[\\begin{smallmatrix} 1&0\\\\ 1&1 \\end{smallmatrix} \\bigr]$. The polarization effect brought by polar codes makes it possible to divide the $N$-bit input vector $\\mathbf{u}$ into reliable and unreliable bit-channels.\nThe $K$ information bits are assigned to the most reliable bit-channels of $\\mathbf{u}$, while the remaining $N-K$, called frozen bits, are set to a predefined value, usually $0$.\nCodeword $\\mathbf{x}$ is transmitted through the channel, and the decoder receives the logarithmic likelihood ratio (LLR) vector $\\mathbf{y} = \\{y_0,y_1,\\ldots,y_{N-1}\\}$.\n\nIn the seminal work on polar codes \\cite{arikan}, the SC decoder is proposed. The SC-based decoding process can be represented as a binary tree search, in which the tree is explored depth first, with priority given to the left branches. Fig.~\\ref{fig:tree} shows an example of the SC decoding tree for $\\mathcal{P}(16,8)$, where nodes at stage $s$ contain $2^s$ bits. 
White leaf nodes are frozen bits, while black leaf nodes are information bits.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[scale=1.8, thick]\n\n\\fill (0,0) circle [radius=.05];\n\n\\fill (-1,-.5) circle [radius=.05];\n\\fill (1,-.5) circle [radius=.05];\n\n\\fill (-1.5,-1) circle [radius=.05];\n\\fill (-.5,-1) circle [radius=.05];\n\\fill (.5,-1) circle [radius=.05];\n\\fill (1.5,-1) circle [radius=.05];\n\n\\fill (-1.75,-1.5) circle [radius=.05];\n\\fill (-1.25,-1.5) circle [radius=.05];\n\\fill (-.75,-1.5) circle [radius=.05];\n\\fill (-.25,-1.5) circle [radius=.05];\n\\fill (.25,-1.5) circle [radius=.05];\n\\fill (.75,-1.5) circle [radius=.05];\n\\fill (1.25,-1.5) circle [radius=.05];\n\\fill (1.75,-1.5) circle [radius=.05];\n\n\\draw (-1.875,-2) circle [radius=.05];\n\\draw (-1.625,-2) circle [radius=.05];\n\\draw (-1.375,-2) circle [radius=.05];\n\\draw (-1.125,-2) circle [radius=.05];\n\\draw (-.875,-2) circle [radius=.05];\n\\fill (-.625,-2) circle [radius=.05];\n\\fill (-.375,-2) circle [radius=.05];\n\\fill (-.125,-2) circle [radius=.05];\n\\fill (.125,-2) circle [radius=.05];\n\\fill (.375,-2) circle [radius=.05];\n\\fill (.625,-2) circle [radius=.05];\n\\fill (.875,-2) circle [radius=.05];\n\\draw (1.125,-2) circle [radius=.05];\n\\draw (1.375,-2) circle [radius=.05];\n\\draw (1.625,-2) circle [radius=.05];\n\\fill (1.875,-2) circle [radius=.05];\n\n\\draw (0,-.05) -- (-1,-.45);\n\\draw (0,-.05) -- (1,-.45);\n\n\\draw (-1,-.55) -- (-1.5,-.95);\n\\draw (-1,-.55) -- (-.5,-.95);\n\\draw (1,-.55) -- (.5,-.95);\n\\draw (1,-.55) -- (1.5,-.95);\n\n\\draw (-1.5,-1.05) -- (-1.75,-1.45);\n\\draw (-1.5,-1.05) -- (-1.25,-1.45);\n\\draw (-.5,-1.05) -- (-.75,-1.45);\n\\draw (-.5,-1.05) -- (-.25,-1.45);\n\\draw (.5,-1.05) -- (.25,-1.45);\n\\draw (.5,-1.05) -- (.75,-1.45);\n\\draw (1.5,-1.05) -- (1.25,-1.45);\n\\draw (1.5,-1.05) -- (1.75,-1.45);\n\n\\draw (-1.75,-1.55) -- (-1.875,-1.95);\n\\draw (-1.75,-1.55) -- (-1.625,-1.95);\n\\draw (-1.25,-1.55) -- 
(-1.375,-1.95);\n\\draw (-1.25,-1.55) -- (-1.125,-1.95);\n\\draw (-.75,-1.55) -- (-.875,-1.95);\n\\draw (-.75,-1.55) -- (-.625,-1.95);\n\\draw (-.25,-1.55) -- (-.375,-1.95);\n\\draw (-.25,-1.55) -- (-.125,-1.95);\n\\draw (.25,-1.55) -- (.125,-1.95);\n\\draw (.25,-1.55) -- (.375,-1.95);\n\\draw (.75,-1.55) -- (.625,-1.95);\n\\draw (.75,-1.55) -- (.875,-1.95);\n\\draw (1.25,-1.55) -- (1.125,-1.95);\n\\draw (1.25,-1.55) -- (1.375,-1.95);\n\\draw (1.75,-1.55) -- (1.625,-1.95);\n\\draw (1.75,-1.55) -- (1.875,-1.95);\n\n\\draw [very thin,gray,dashed] (-2,0) -- (2,0);\n\\draw [very thin,gray,dashed] (-2,-.5) -- (2,-.5);\n\\draw [very thin,gray,dashed] (-2,-1) -- (2,-1);\n\\draw [very thin,gray,dashed] (-2,-1.5) -- (2,-1.5);\n\\draw [very thin,gray,dashed] (-2,-2) -- (2,-2);\n\n\\node at (-2.3,0) {$s=4$};\n\\node at (-2.3,-.5) {$s=3$};\n\\node at (-2.3,-1) {$s=2$};\n\\node at (-2.3,-1.5) {$s=1$};\n\\node at (-2.3,-2) {$s=0$};\n\n\\end{tikzpicture}\n\\caption{Binary tree example for $\\mathcal{P}(16,8)$. White circles at $s=0$ are frozen bits, black circles at $s=0$ are information bits.}\n\\label{fig:tree}\n\\end{figure}\n\nFig.~\\ref{fig:MessagePassing} portrays the message passing among SC tree nodes. Parents pass LLR values $\\alpha$ to children, that send in return the hard bit estimates $\\beta$. 
\nThe left and right branch messages $\\alpha^\\text{l}$ and $\\alpha^\\text{r}$, in the hardware-friendly version of \\cite{leroux}, are computed as\n\\begin{align}\n\\alpha^{\\text{l}}_i = & \\text{sgn}(\\alpha_i)\\text{sgn}(\\alpha_{i+2^{s-1}})\\min(|\\alpha_i|,|\\alpha_{i+2^{s-1}}|) \\text{,} \\label{eq1} \\\\\n\\alpha^{\\text{r}}_i =& \\alpha_{i+2^{s-1}} + (1-2\\beta^\\text{l}_i)\\alpha_i \\text{,}\n\\label{eq2}\n\\end{align}\nwhile $\\beta$ is computed as\n\\begin{equation}\n\\beta_i =\n \\begin{cases}\n \\beta^\\text{l}_i\\oplus \\beta^\\text{r}_i, & \\text{if} \\quad i < 2^{s-1} \\text{,}\\\\\n \\beta^\\text{r}_{i-2^{s-1}}, & \\text{otherwise},\n \\end{cases}\n \\label{eq3}\n\\end{equation}\nwhere $\\oplus$ denotes the bitwise XOR. The SC operations are scheduled according to the following order: each node receives $\\alpha$ first, then sends $\\alpha^\\text{l}$, receives $\\beta^\\text{l}$, sends $\\alpha^\\text{r}$, receives $\\beta^\\text{r}$, and finally sends $\\beta$.\nWhen a leaf node is reached, $\\beta_i$ is set as the estimated bit $\\hat{u}_i$:\n\\begin{equation}\n\\hat{u}_i =\n \\begin{cases}\n 0 \\text{,} & \\text{if } i \\in \\mathcal{F} \\text{ or } \\alpha_{i}\\geq 0\\text{,}\\\\\n 1 \\text{,} & \\text{otherwise,}\n \\end{cases} \\label{eq6}\n\\end{equation}\nwhere $\\mathcal{F}$ is the set of frozen bits.\n\nThe SC decoding process requires full tree exploration: however, in \\cite{alamdar, sarkis} it has been shown that it is possible to prune the tree by identifying patterns in the sequence of frozen and information bits, achieving substantial speed increments. This improved SC decoding is called fast simplified SC (Fast-SSC).\n\n\nSC decoding suffers from modest error-correction performance with moderate and short code lengths. To improve it, the SCL algorithm was proposed in \\cite{tal_list}. It is based on the same process as SC, but each time that a bit is estimated at a leaf node, both its possible values $0$ and $1$ are considered. 
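Before moving to list decoding, the plain SC recursion defined by the f/g updates and the leaf decision rule above can be sketched in software (an illustrative floating-point model, not the hardware decoder):

```python
import numpy as np

def sc_decode(llr, frozen):
    """Successive-cancellation decoding sketch: depth-first tree traversal
    using the sign-min (f) and g updates; returns the estimated bits u_hat."""
    u_hat = []

    def rec(alpha):
        if len(alpha) == 1:                      # leaf: apply the decision rule
            i = len(u_hat)
            bit = 0 if (i in frozen or alpha[0] >= 0) else 1
            u_hat.append(bit)
            return np.array([bit])
        half = len(alpha) // 2
        a, b = alpha[:half], alpha[half:]
        # f: sign-min approximation for the left branch
        beta_l = rec(np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b)))
        # g: LLR update for the right branch, given the left hard decisions
        beta_r = rec(b + (1 - 2 * beta_l) * a)
        return np.concatenate([beta_l ^ beta_r, beta_r])  # partial-sum combine

    rec(np.asarray(llr, dtype=float))
    return u_hat

# Noiseless toy run: codeword [1,0,1,0] received as LLRs, bits 0 and 1 frozen.
print(sc_decode([-5.0, 5.0, -5.0, 5.0], frozen={0, 1}))  # -> [0, 0, 1, 0]
```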
A set of $L$ codeword candidates is stored, so that a bit estimation results in $2L$ new candidates, half of which must be discarded. To this purpose, a path metric (PM) is associated to each candidate and updated at every new estimate: the $L$ paths with the lowest PM survive. In the LLR-based SCL proposed in \\cite{balatsoukas}, the hardware-friendly formulation of the PM is\n\\begin{align}\n \\text{PM}_{{i}_l} =& \\begin{cases}\n \\text{PM}_{{i-1}_l}, & \\text{if } \\hat{u}_{i_l} = \\frac{1}{2}\\left(1-\\text{sgn}\\left(\\alpha_{i_l}\\right)\\right)\\text{,}\\\\\n \\text{PM}_{{i-1}_l} + |\\alpha_{i_l}|, & \\text{otherwise,}\n \\end{cases} \\label{eq7}\n\\end{align} \nwhere $l$ is the path index and $\\hat{u}_{i_l}$ is the estimate of bit $i$ at path $l$.\nAs with SC decoding, SCL tree pruning techniques relying on the identification of frozen-information bit patterns have been proposed in \\cite{hashemi_SSCL,hashemi_FSSCL}, called simplified SCL (SSCL) and Fast-SSCL.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[scale=.5]\n\n\\draw [very thin,gray,dashed] (-2,0) -- (2,0);\n\\draw [very thin,gray,dashed] (-2,-2) -- (2,-2);\n\\draw [very thin,gray,dashed] (-2,-4) -- (2,-4);\n\n\\node at (-3,0) {$s+1$};\n\\node at (-3,-2) {$s$};\n\\node at (-3,-4) {$s-1$};\n\n\\fill (0,0) circle [radius=.25];\n\\fill (0,-2) circle [radius=.2];\n\\fill (-1.5,-4) circle [radius=.15];\n\\fill (1.5,-4) circle [radius=.15];\n\n\\draw [->,very thick] (-.1,-.4) -- (-.1,-1.7) node [left,midway,rotate=0] {$\\alpha$};\n\\draw [->,very thick] (.1,-1.7) -- (.1,-.4) node [right,midway,rotate=0] {$\\beta$};\n\n\\draw [->,very thick] (-.25,-2.2) -- (-1.45,-3.75) node [left,midway,rotate=0] {$\\alpha^{\\text{l}}$};\n\\draw [->,very thick] (-1.3,-3.85) -- (-.1,-2.3) node [right,near start,rotate=0] {$\\beta^{\\text{l}}$};\n\\draw [<-,very thick] (.25,-2.2) -- (1.45,-3.75) node [right,midway,rotate=0] {$\\beta^{\\text{r}}$};\n\\draw [<-,very thick] (1.3,-3.85) -- (.1,-2.3) node [left,near 
start,rotate=0] {$\\alpha^{\\text{r}}$};\n\n\\end{tikzpicture}\n\\caption{Message passing in tree graph representation of SC decoding.}\n\\label{fig:MessagePassing}\n\\end{figure}\n\n\n\n\\subsection{Blind Detection}\n\nThe physical downlink control channel (PDCCH) is used in 3GPP LTE\/LTE-Advanced to transmit the downlink control information (DCI) related to the downlink shared channel. The DCI carries information regarding the channel resource allocation, transport format and hybrid automatic repeat request, and allows the UE to receive, demodulate and decode.\n\nA cyclic redundancy check (CRC) is attached to the DCI payload before transmission. The CRC is masked according to an ID, like the radio network temporary identifier (RNTI), of the UE to which the transmission is directed, or according to one of the system-wide IDs. Finally, the DCI is encoded with a convolutional code. The UE is not aware of the format with which the DCI has been transmitted: it thus has to explore a combination of PDCCH locations, PDCCH formats, and DCI formats in the common search space (CSS) and UE-specific search space (UESSS) and attempt decoding to identify useful DCIs. This process is called blind decoding, or blind detection. For each PDCCH candidate in the search space, the UE performs channel decoding, and demasks the CRC with its ID. If no error is found in the CRC, the DCI is considered as carrying the UE control information.\n\n\nBased on LTE standard R8 \\cite{3GPP_R8}, the performance specifications for the blind detection process are the following:\n\\begin{itemize}\n \\item The DCI of PDCCH is from $8$ to $57$ bits plus $16$-bit CRC, masked by $16$-bit ID.\n \\item In UESSS, a maximum of $2$ DCI formats can be sent per transmission time interval (TTI) for $2$ potential frame lengths. Therefore, $16$ candidate locations in UESSS $\\rightarrow$ $32$ candidates.\n \\item In CSS, a maximum of $2$ DCI formats can be sent per TTI for $2$ potential frame lengths. 
Therefore, $6$ candidate locations in CSS $\\rightarrow$ $12$ candidates.\n \\item Code length could be between $72$ and $576$ bits.\n \\item Information length (including $16$-bit CRC) could be between $24$ and $73$ bits.\n \\item Target signal-to-noise ratio (SNR) is dependent on the targeted block error rate (BLER): $10^{-2}$.\n \\item There are two types of false-alarm scenarios: Type-1, when the UE ID is not transmitted but detected, and Type-2, when the UE ID is transmitted but another one is detected. The target false-alarm rate (FAR) is below $1.52\\times 10^{-5}$.\n \\item Missed detection occurs when UE ID is transmitted but not detected. The missed detection rate (MDR) is close to BLER curve.\n \\item The available time frame for blind detection is $16\\mu$s.\n\\end{itemize}\n\n\n\n\\section{Blind Detection Scheme} \\label{sec:blind}\n\nIn \\cite{Condo_COMML17}, polar codes have been considered within a blind detection framework, and a blind detection scheme has been proposed. Frozen bit positions are selected to instead transmit the RNTI. Fig.~\\ref{fig:scheme} shows the block diagram of the devised blind detection scheme. $C_1$ candidates are received at the same time: in this case, $C_1=44$. The $C_1$ candidates are decoded with the simple SC algorithm, and a PM is obtained for each candidate, equivalent to the LLR of the last decoded bit: thanks to the serial nature of SC decoding, the LLR of the last bit can be interpreted as a reliability measure on the decoding process.\nThe PMs are then sorted, to help the selection of the best candidates to forward to the following decoding phase. $C_2$ candidates are in fact selected to be decoded with the more powerful SCL decoding algorithm, that guarantees a better error-correction performance, at a higher implementation complexity. The $C_2$ candidates are chosen as:\n\\begin{enumerate}\n \\item All candidates whose ID, after the first phase, matches the one assigned to the UE. 
If more than $C_2$ are present, the ones with the highest PMs are selected.\n \\item If free slots among the $C_2$ remain, the candidates with the smallest PMs are selected. The candidates with large PMs have a higher probability of being correctly decoded: if their ID does not match the one assigned to the UE, it is probably a different one. On the other hand, candidates with small PMs have a higher chance of being incorrectly decoded, and a transmission to the UE might be hiding among them.\n\\end{enumerate}\nAfter the SCL decoding phase, if one of the $C_2$ candidates matches the UE ID, it is selected; otherwise, no selection is attempted.\n\n\\begin{figure}[t!]\n \\centering\n \\input{.\/Arch.tikz}\n \\caption{Polar codes blind detection scheme.}\n \\label{fig:scheme}\n\\end{figure}\n\nIn \\cite{Condo_COMML17}, an early stopping criterion has been proposed as well, to reduce the latency and energy expenditure of the second phase of the blind detection scheme. The first phase requires the full decoding of each candidate, to identify the $C_2$ codewords that will be sent to the second phase. In the second phase, however, all codewords whose ID does not match the UE ID will be discarded. Thus, as soon as the ID is shown to be different, the decoding can be interrupted. Since SC-based decoding algorithms estimate codeword bits sequentially, the ID evaluation can be performed every time an ID bit is estimated. In case the estimated bit is different from the UE ID bit, the decoding is stopped. \n\nThree methods to choose the bits assigned to the ID have been described in \\cite{Condo_COMML17}:\n\\begin{itemize}\n \\item ID mode 1: the ID bits are the $16$ most reliable bits after the $K$ information bits. 
\n \\item ID mode 2: the ID bits are the $16$ most reliable bits, while the $K$ information bits are the most reliable bits after the $16$ ID bits.\n \\item ID mode 3: considering the order in which bits are decoded in SC-based algorithms, the ID bits are the first $16$ to be decoded among the $K+16$ most reliable bits.\n\\end{itemize}\nThe three techniques yield negligible differences in terms of error-correction performance, while ID mode 3 yields considerable advantages over mode 1 and mode 2 when early stopping is applied. In fact, since the ID bits are decoded earlier, the average percentage of estimated bits decreases, and the reduction in average latency is more substantial.\n\nIn this work, we generalize the blind detection scheme proposed in \\cite{Condo_COMML17}, by considering SCL also for the first decoding phase. In particular, we consider a list size $L_1\\ge1$ for the first decoding phase, and a list size $L_{\\max}>L_1$ for the second decoding phase. It should be noted that when $L_1=1$, the blind detection scheme reverts to that of \\cite{Condo_COMML17}.\n\n\n\n\\subsection{Simulation Results} \\label{sec:simres}\n\n\\begin{figure}[t!]\n \\centering\n \\input{.\/blerHW.tikz}\n \\\\\n \\ref{blerlegend}\n \\caption{BLER curves with SCL when $L=8$.}\n \\label{fig:BLER}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\input{.\/MissedDetectionHW.tikz}\n \\\\\n \\ref{MDRlegend}\n \\caption{Missed detection rates after the second decoding phase with $L_1=2$, $L_{\\max}=8$, and $C_2=5$. Transmissions include $C_1\/2$ cases of $N_1=256$ and $C_1\/2$ cases of $N_2=512$.}\n \\label{fig:MD_SCL}\n\\end{figure}\n\n\n\nTo evaluate the effectiveness of the proposed blind detection scheme, simulations were performed. The BLER, MDR, and FAR have been measured on the additive white Gaussian noise (AWGN) channel, with binary phase-shift keying (BPSK) modulation, for different code parameters. 
We focused on polar codes with block lengths $N=\\{256, 512\\}$, since in \\cite{Condo_COMML17} it has been shown that they constitute the most critical cases in terms of speed. Four information lengths $K=\\{8, 16, 32, 57\\}$ have been considered, while the number of ID bits has been set to $16$. The 3GPP standardization committee has decided that information bits in polar codes must be assigned to the $K$ most reliable bit-channels \\cite{3gpp_polar_AH}: thus, the ID bits have been assigned according to ID mode 1. The ID values assigned to the $C_1$ candidates are randomly selected over $16$ bits. While different numbers of candidates passed to the second phase have been considered in \\cite{Condo_COMML17}, we have focused here on $C_2=5$, for which a good tradeoff between accuracy and latency is found. At the same time, we set $L_{\\max}=8$ and $L_1=2$: it is a representative case for which $L_{\\max}$ guarantees good error-correction performance, and at which SCL decoders can be implemented with reasonable complexity.\n\n\nFig.~\\ref{fig:BLER} plots the BLER curves for all the considered code lengths and rates. As expected, their error-correction performance improves as the code length increases and the code rate decreases. In Fig.~\\ref{fig:MD_SCL}, the first of the metrics specific to the blind detection problem, the MDR, is depicted. The MDR can be defined as the number of missed detections divided by the number of transmissions in which the UE ID was sent. The curves in Fig.~\\ref{fig:MD_SCL} have been obtained considering $C_1\/2$ candidates of length $N_1=256$, and $C_1\/2$ candidates of length $N_2=512$ in each transmission, with $K_1=K_2$ information bits. Together with the MDR, in Fig.~\\ref{fig:MD_SCL} the BLER curves relative to the aggregate transmissions are portrayed. It can be seen that the MDR curve is always lower than the relative BLER curve.\n\n\nThe FAR curves for the considered case study are portrayed in Fig.~\\ref{fig:FAR}. 
The system target FAR is equivalent to the FAR obtained with a $16$-bit CRC: in 5G, a CRC at least $16$ bits long is foreseen. Here, we evaluate the additional contribution that the proposed blind detection scheme can bring in lowering the FAR on top of the CRC. It can be seen that the FAR is kept below the $10^{-4}$ threshold at SNR values for which the BLER is still very high, and decreases as the channel conditions improve. In the blind detection method presented in \\cite{Giard_BD}, the FAR increases as the MDR decreases. On the other hand, the proposed scheme allows both to decrease at the same time, thus avoiding performance limitations that could make it unappealing for 5G standard applications.\n\nThe impact of the devised early stopping criterion on the average number of estimated bits is shown in Fig.~\\ref{fig:avg_est}, for $K=32$ and $K=57$. These results consider each of the $C_2$ candidates separately, since the number of candidates of length $N_1$ and $N_2$ in the second phase depends on the PMs received from the first phase, and thus on the channel SNR. The solid curves have been obtained in cases where the UE ID was sent through the considered code, while the dashed curves refer to cases where it was not sent through the code. \n\n\n\n\\begin{itemize}\n \\item For $N=256$ (curves with a circle marker), it is possible to observe the same behavior noted in \\cite{Condo_COMML17} for $N=128$. In case the UE ID was sent, as the channel conditions improve, the number of estimated bits increases until stabilizing at a maximum average value. This phenomenon can be explained by the fact that when the SNR is low, it is more likely that the codeword carrying the UE ID is not selected to be among the $C_2$ candidates. Thus, the decoders in the second phase easily encounter ID bits different from the UE ID early in the decoding process. As the channel conditions improve, the codeword with the UE ID falls among the $C_2$ candidates with rising probability. 
Consequently, the decoder tasked with its decoding does not interrupt the process, reaching $100\\%$ estimated bits, while the remaining $C_2-1$ decoders stop the decoding early, thus averaging the estimated bit percentage at a stable value ($67\\%$ for $K=32$ and $61\\%$ for $K=57$). The dashed curves show instead a stable value regardless of channel conditions: since among the $C_2$ candidates there is never one carrying the UE ID, all second phase decoders tend to stop the decoding early, at a percentage independent of the SNR, and mostly influenced by the position of bits assigned to the ID.\n\\item For $N=512$ (curves with a cross marker), a similar behavior to the $N=256$ case can be observed when the UE ID is not sent, with the average number of estimated bits stable at all the considered SNR values. On the other hand, when the UE ID is sent, the trend is different: at low SNR values, the percentage of estimated bits is very close to $100\\%$. As the SNR value increases, the average starts to decrease, until it settles on a stable value.\nThis behavior is due to the fact that at low SNR, it is very unlikely that a codeword with $N=512$ is among the $C_2$ second phase candidates if the UE ID is not matching: the longer code length and lower rate contribute to a higher decoding reliability during the first phase, which allows unlikely candidates to be screened out better than in the $N=256$ case.\n\\end{itemize}\n\n\n\n\\begin{figure}[t!]\n \\centering\n \\input{.\/FalseAlarmHW.tikz}\n \\\\\n \\ref{FARlegend}\n \\caption{False alarm rates after the second decoding phase with $L_1=2$, $L_{\\max}=8$, and $C_2=5$. 
Transmissions include $C_1\/2$ cases of $N_1=256$ and $C_1\/2$ cases of $N_2=512$.}\n \\label{fig:FAR}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\input{.\/AvgEstHW.tikz}\n \\\\\n \\ref{ESlegend}\n \\caption{Average percentage of estimated bits during the second decoding phase with early stopping when $L_{\\max}=8$ and $C_2=5$.}\n \\label{fig:avg_est}\n\\end{figure}\n\n\n\n\n\\section{Hardware Architecture} \\label{sec:HW}\nTo evaluate the implementation cost of the devised blind detection scheme, we designed a decoder architecture that supports it, portrayed in Fig.~\\ref{fig:BDarch}. An array of flexible list size SCL decoders handles both the first and second decoding phase. A dedicated module selects the $C_2$ candidates for the second phase according to the criteria described in Section~\\ref{sec:blind}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[scale=0.55]{.\/BDsys.pdf}\n \\caption{Polar codes blind detection system architecture.}\n \\label{fig:BDarch}\n\\end{figure}\n\n\\subsection{Flexible list size SCL decoder}\n\nWe based our SCL decoder architecture on that of \\cite{hashemi_SSCL_TCASI,hashemi_FSSCL}: the decoding process follows the one described in Section~\\ref{sec:prel:PC} for a list size $L_{\\max}$. Most of the datapath and memories are instantiated $L_{\\max}$ times: multiple candidates are stored at the same time, with the best candidate being selected at the end of the decoding. While in \\cite{hashemi_SSCL_TCASI,hashemi_FSSCL} the final candidate is selected according to a CRC check, in the proposed architecture no CRC is considered, and the validity of the final candidate is based on the matching ID and PM value.\n\nThe SC decoding tree is descended by computing (\\ref{eq1}) and (\\ref{eq2}) at each stage $s$, with priority being given to left branches. These calculations are performed by $L_{\\max}$ parallel sets of $P$ processing elements (PEs), with $P$ being a power of $2$. 
In the stages for which $2^s>2P$, the operations in (\\ref{eq1}) and (\\ref{eq2}) are performed over $2^s\/(2P)$ steps, while a single step is needed otherwise. Internal memories store the updated LLR values between stages.\n\nPEs get two LLR values as input, and concurrently compute both $\\alpha^{\\text{l}}$ and $\\alpha^{\\text{r}}$ according to (\\ref{eq1}) and (\\ref{eq2}), respectively. The correct output is selected depending on the index of the leaf node to be estimated. When a leaf node is reached, the decoder controller module identifies the leaf node as either an information bit or a frozen bit. If a frozen bit is found, the paths are not split, and the bit is estimated only as $0$, and the $L$ memories are updated with the same bit or LLR values. Instead, in case of an information bit, both $0$ and $1$ are considered, so that paths are split, and the PMs updated for the $2L$ candidates according to (\\ref{eq7}). Afterwards, the PMs are sorted, identifying the $L$ surviving paths.\n\nAll memories in the decoder are registers, enabling the internal LLR and $\\beta$ values to be read, updated by the PEs, and written back in a single clock cycle. At the same time, the paths are either updated or split and updated, and the new PMs computed. In the following clock cycle, in case the paths were split, the PMs are sorted and the surviving paths selected.\n\nCodes with different code lengths can be decoded by storing the appropriate memory offsets for every considered code in a dedicated memory.\n\nThis baseline decoder has been modified to better fit the needs of the proposed blind detection scheme. In order to maximize resource sharing, the SCL decoder has been sized for $L_{\\max}>L_1$, and the effective list size can be selected through a dedicated input.\nThe $L_{\\max}-L_1$ paths that are not used in the first decoding phase are used to decode up to $\\left\\lfloor(L_{\\max}-L_1)\/L_1\\right\\rfloor$ additional candidates at the same time. 
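As a quick numerical illustration of the per-stage step count and of the path reuse described above, the following minimal sketch can be used (a simplified model with helper names of our own, not the RTL):

```python
def stage_steps(s, P):
    """Clock steps at decoding-tree stage s with P processing elements
    per path: stages with 2^s > 2P take 2^s / (2P) steps, otherwise one."""
    return max(1, 2 ** s // (2 * P))

def extra_candidates(L_max, L1):
    """Additional candidates decodable on the paths left unused when a
    decoder sized for list L_max runs with effective list size L1,
    i.e. floor((L_max - L1) / L1)."""
    return (L_max - L1) // L1
```

With $L_{\max}=8$ and $L_1=2$, three extra candidates can be decoded alongside the first one, i.e. four in total, matching $\left\lfloor L_{\max}/L_1\right\rfloor$.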
In order to exploit the unused paths, additional functional modules are necessary.\n\\begin{itemize}\n \\item The baseline decoder uses a single memory to store the channel LLR values, sharing it among the different paths. If different codewords have to be decoded at the same time, the channel memory needs to be instantiated not once, but $\\left\\lfloor L_{\\max}\/L_1\\right\\rfloor$ times.\n \\item The decoder relies on sorting and selection logic that identifies the $L_{\\max}$ surviving paths after path splitting. To support the parallel decoding of $\\left\\lfloor L_{\\max}\/L_1\\right\\rfloor$ candidates, as many sorting and selection modules targeting the selection of $L_1$ paths out of $2L_1$ are instantiated.\n\\end{itemize}\nIf $L_1=1$ is selected, the path splitting and PM sorting steps are bypassed, reverting decoders to the standard SC case. Since a single set of SCL decoders can handle both decoding phases, the total number of decoders is $N_{\\text{SCL}_{\\max}}$ (see Fig. \\ref{fig:BDarch}). However, the effective number of decoders for the first decoding phase is $N_{\\text{SCL}_1}=N_{\\text{SCL}_{\\max}}\\times \\left\\lfloor L_{\\max}\/L_1\\right\\rfloor$.\n\nThe early stopping technique described in Section~\\ref{sec:blind} has also been implemented. The decoder receives as input the position of the ID bits and the value of the UE ID: every time a bit in an ID position is estimated, the bit value is compared to the expected UE ID bit. All paths whose estimated bit does not match the UE ID bit are deactivated. This operation is performed after the $L$ surviving paths have been selected, so as not to force the survival of unlikely paths and increase the FAR. In case all paths have been deactivated, the decoding is stopped. The early stopping logic can be activated and deactivated by means of a dedicated control signal. 
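The per-bit early stopping check just described can be sketched in software as follows; the path representation is a deliberate simplification of the decoder's actual state, and the function name is ours:

```python
def early_stop_update(paths, bit_index, id_positions, ue_id_bits):
    """After survivor selection, deactivate paths whose newly estimated
    bit in an ID position contradicts the expected UE ID bit. Returns
    True when decoding should stop (all paths deactivated).

    `paths` is a list of dicts with keys 'bits' (estimates so far,
    indexed by bit position) and 'active'."""
    if bit_index not in id_positions:
        return False  # not an ID bit: nothing to check
    expected = ue_id_bits[id_positions.index(bit_index)]
    for p in paths:
        if p['active'] and p['bits'][bit_index] != expected:
            p['active'] = False
    return not any(p['active'] for p in paths)
```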
Since the same hardware is used for both decoding phases, early stopping is enabled only during the second one.\n\n\n\n\\subsection{PM sorting and candidate selection}\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics{.\/C2sel.pdf}\n \\caption{PM sorting and candidate selection architecture.}\n \\label{fig:C2sel}\n\\end{figure*}\n\nFig.~\\ref{fig:C2sel} depicts the architecture of the PM sorting and candidate selection block. It processes the output of the first decoding phase to select the $C_2$ candidates for the second phase, and selects the overall system output based on the results from the second phase. For each of the $N_{\\text{SCL}_1}$ first phase decoders, a PM and a flag signalling a UE ID match are received. They are stored every time the respective \\texttt{Valid} signal is raised by the decoder. The \\texttt{Valid} signal is also used as an enable for the PM and UE ID match register address counter, and for the counter keeping track of how many codewords had a matching UE ID after the first phase. When all the $C_1$ candidates have gone through the first decoding phase, a \\texttt{Valid} signal is issued to the sorter module, which receives as input all the stored PMs. The sorter module returns the $C_2$ minimum PMs in as many clock cycles: each PM is compared to all the others, and a single clock cycle is necessary to identify the minimum one, which is then excluded from the subsequent comparisons. When the $C_2$ minima have been found, the selector module considers how many candidates had a matching UE ID after the first phase, and selects the $C_2$ candidates for the second phase among them and those with the minimum PM values. The $C_2$ candidates are sent to the $N_{\\text{SCL}_{\\max}}$ decoders by means of a dedicated counter. 
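The sorter and selector behavior described above can be sketched as follows; note that the exact priority rule between ID-matching candidates and minimum-PM candidates is our reading of the description, not a detail stated in the text:

```python
def select_candidates(pms, id_match, c2):
    """Iteratively extract the minimum PM (one 'clock cycle' per
    minimum, as in the sorter module), excluding each minimum from the
    subsequent comparisons; then give precedence to candidates whose
    UE ID matched after the first phase (an assumed priority rule)."""
    remaining = set(range(len(pms)))
    minima = []
    for _ in range(min(c2, len(pms))):
        best = min(remaining, key=lambda i: pms[i])
        minima.append(best)
        remaining.discard(best)
    matched = [i for i, m in enumerate(id_match) if m]
    # matched candidates first, then minimum-PM ones, deduplicated
    return list(dict.fromkeys(matched + minima))[:c2]
```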
Returning PMs and UE ID match flags are received and compared by another selector: when all $C_2$ candidates have been decoded, the selected codeword, if any, is output.\n\n\n\\section{Implementation Results} \\label{sec:impl}\n\nThe architecture proposed in Section~\\ref{sec:HW} has been described in VHDL and synthesized in TSMC 65~nm CMOS technology. Table~\\ref{tab:asic} reports the synthesis results for the architecture sized for a maximum code length $N_{\\max}=512$, a maximum list size $L_{\\max}=8$, $C_2=5$, and a target frequency $f=1$~GHz. Various $N_{\\text{SCL}_{\\max}}$ values have been considered, leading to different latencies and area occupations. Since during the first decoding phase $L_1=2$, the effective number of decoders $N_{\\text{SCL}_1}$ is equal to $4N_{\\text{SCL}_{\\max}}$, even if only $N_{\\text{SCL}_{\\max}}$ are physically instantiated.\nRegarding the area, the $N_{\\text{SCL}_{\\max}}$ SCL decoders contribute to the majority of the complexity, ranging from $97.8\\%$ when $N_{\\text{SCL}_{\\max}}=1$ to $99.7\\%$ when $N_{\\text{SCL}_{\\max}}=5$. The logic complexity of the PM sorting and candidate selection module remains almost unchanged at the variation of $N_{\\text{SCL}_{\\max}}$, being mainly affected by $C_1$ and $C_2$. 
Memories have been synthesized with registers only, without the use of RAM, and account for $36\\%$ of the total area occupation.\n\nThe worst case latency of the proposed blind detection system can be found as \n\\begin{align} \\label{eq:lat2}\n\\begin{split}\n T_{\\text{bd}}=&\\left\\lceil \\frac{C_1}{N_{\\text{SCL}_1}} \\right\\rceil \\left(\\frac{T^1_{\\text{SCL}}}{2}+\\frac{T^2_{\\text{SCL}}}{2}\\right)\\\\\n &+ T_{\\text{sort}} + \\left\\lceil \\frac{C_2}{N_{\\text{SCL}_{\\max}}} \\right\\rceil \\max\\left(T^1_{\\text{SCL}},T^2_{\\text{SCL}}\\right)~ \\text{,}\\\\\n\\end{split} \n\\end{align}\nwhere $T^1_{\\text{SCL}}$ and $T^2_{\\text{SCL}}$ are the SCL decoding latencies for codes of length $N_1$ and $N_2$, respectively, while $T_{\\text{sort}}$ is the number of time steps required to sort the PM of the first decoding phase and obtain the $C_2$ candidates out of the $C_1$ candidate locations. Also, it is worth remembering that for the proposed architecture, $N_{\\text{SCL}_1}=\\left\\lfloor L_{\\max}\/L_1\\right\\rfloor \\times N_{\\text{SCL}_{\\max}}$.\nThe SCL decoding latency can be found as \\cite{balatsoukas}\n\\begin{equation*}\nT^x_{\\text{SCL}}=2N_x+K_x+16-2 \\text{,}\n\\end{equation*}\nfor $x\\in\\{1,2\\}$.\nFrom the results presented in Table \\ref{tab:asic}, it is possible to see that even when considering the relatively old 65~nm technology node, the $16\\mu$s worst case latency target can be reached with a single SCL decoder running at a frequency of $1$~GHz, while $N_{\\text{SCL}_{\\max}}=5$ guarantees a worst case latency of $3.6\\mu$s, meeting the $4\\mu$s target as well.\n\nHowever, considering only the worst case latency is indeed an unrealistic scenario. To begin with, while there is no guarantee on how the $C_2$ candidates are distributed among $N_1$ and $N_2$, simulation results have shown that we can expect the $C_2$ candidates either to favor the shorter code length, or to be equally divided between $N_1$ and $N_2$ candidates. 
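The worst-case latency expression in (\ref{eq:lat2}) can be checked numerically. The sketch below assumes $T_{\text{sort}} = C_2$ cycles (one per extracted minimum, as in the sorter description) and an illustrative candidate count $C_1=44$, which is not stated in this text:

```python
import math

def t_scl(n, k):
    """SCL decoding latency in clock cycles: 2N + K + 16 - 2."""
    return 2 * n + k + 16 - 2

def t_bd(c1, c2, n_scl_max, l_max=8, l1=2, n1=256, n2=512, k=57):
    """Worst-case blind detection latency per (eq:lat2), in cycles."""
    n_scl_1 = (l_max // l1) * n_scl_max  # effective first-phase decoders
    t1, t2 = t_scl(n1, k), t_scl(n2, k)
    return (math.ceil(c1 / n_scl_1) * (t1 + t2) / 2   # first phase
            + c2                                      # assumed T_sort
            + math.ceil(c2 / n_scl_max) * max(t1, t2))  # second phase
```

At $f=1$~GHz the cycle counts translate directly into microseconds; under the assumed $C_1$, a single decoder stays below the $16~\mu$s target and five decoders land near the reported $3.6~\mu$s.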
Thus, the factor $$\\left\\lceil\\frac{C_2}{N_{\\text{SCL}_{\\max}}} \\right\\rceil\\max\\left(T^1_{\\text{SCL}},T^2_{\\text{SCL}}\\right)$$ in (\\ref{eq:lat2}), which represents the contribution of the second decoding phase, could be better expressed as: \n\\begin{equation*}\n\\left\\lceil\\frac{\\left\\lceil C_2\/2\\right\\rceil}{N_{\\text{SCL}_{\\max}}} \\right\\rceil T^1_{\\text{SCL}} + \\left\\lceil\\frac{C_2-\\left\\lceil C_2\/2\\right\\rceil}{N_{\\text{SCL}_{\\max}}} \\right\\rceil T^2_{\\text{SCL}}~.\n\\end{equation*}\nNote that this is still a conservative assumption, since it assumes that the $C_2$ candidates are equally divided among the two code lengths. We can refine this assumption by taking into account the effect of early stopping. We can approximate the latency reduction with a multiplicative factor $E^x$ associated to $T^x_{\\text{SCL}}$. Consequently, the average latency of the blind detection system, for $N_{\\text{SCL}_{\\max}} \\epsilon_0$ such that $\\sup_{x\\in M} f(x) = f_M$ and $M$ is a CC of the level set ${\\mathcal{X}^{f_M - \\epsilon_0}} := \\{ x : f(x) \\ge f_M - \\epsilon_0 \\}$.\n\\end{definition}\n\nWe require the following Assumption~\\ref{assumption2} on $\\epsilon_0$-modal sets. Note that under Assumption~\\ref{assumption-main} on modal-sets, Assumption~\\ref{assumption2} on $\\epsilon_0$-modal sets will hold for $\\epsilon_0$ sufficiently small.\n\n\\begin{assumption}\\label{assumption2}\nThe $\\epsilon_0$-modal sets lie in the interior of $\\mathcal{X}$ and $f_M \\ge 2\\epsilon_0$ for all $\\epsilon_0$-modal sets $M$. \n\\end{assumption}\n\n\\begin{remark}\nSince each $\\epsilon_0$-modal set contains a modal-set, it follows that the number of $\\epsilon_0$-modal sets is finite.\n\\end{remark}\n\nThe following extends Proposition~\\ref{prop:main-assumptions} to show the additional properties of the regions around the $\\epsilon_0$-modal sets necessary in our analysis. 
The proof is in Appendix~\\ref{supportinglemmas}.\n\n\\begin{proposition}[Extends Proposition~\\ref{prop:main-assumptions}] \\label{prop:main-assumptions-general}\nFor any $\\epsilon_0$-modal set $M$, there exist $\\lambda_M, A_M, r_M, l_M, u_M, r_s, S_M$ such that the following holds. $A_M$ is a CC of $\\mathcal{X}^{\\lambda_M} := \\{x : f(x) \\ge \\lambda_M\\}$ containing $M$ which satisfies the following.\n\\begin{itemize} \n\\item \\emph{$A_M$ isolates $M$ by a valley}: $A_M$ does not intersect any other $\\epsilon_0$-modal sets and $A_M$ and $\\mathcal{X}^{\\lambda_M} \\backslash A_M$ are $r_s$-separated by $S_M$ with $r_s>0$ where $r_s$ does not depend on $M$. \n\n\\item \\emph{$A_M$ is full-dimensional}: $A_M$ contains an envelope $B(M, r_M)$ of $M$, with $r_M>0$. \n\\item \\emph{$f$ is smooth around some maximum modal-set in $M$}: There exists modal-set $M_0 \\subseteq M$ such that $f$ has density $f_M$ on $M_0$ and $f_M - f(x) \\le u_M(d(x, M_0))$ for $x \\in B(M_0, r_M)$.\n\\item \\emph{$f$ is both \\emph{smooth} and has \\emph{curvature} around $M$}: $u_M$ and $l_M$ are increasing continuous functions on $[0, r_M]$, $u_M(0) = l_M(0) = 0$ and $u_M(r), l_M(r) > 0$ for $r > 0$, and\n\\begin{align*}\nl_M(d(x, M)) \\le f_M - \\epsilon_0 - f(x) \\le u_M(d(x, M)) \\quad \\forall x \\in B(M, r_M).\n\\end{align*}\n\n\\end{itemize}\n\\end{proposition}\n\n\n\n\\begin{algorithm}[tb]\n \\caption{M-cores (estimating $\\epsilon_0$-modal-sets)}\n \\label{alg:epsilonmodalset}\n\\begin{algorithmic}\n \\STATE Initialize $\\widehat{\\mathcal{M}}:= \\emptyset$. Define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \n \\STATE Sort the $X_i$'s in descending order of $f_k$ values. 
\n \\FOR{$i=1$ {\\bfseries to} $n$}\n \\STATE Define $\\lambda := f_k(X_i)$.\n \\STATE Let $A$ be the CC of $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ that contains $X_i$.\n \\IF{$A$ is disjoint from all cluster-cores in $\\widehat{\\mathcal{M}}$}\n \\STATE Add $\\widehat{M} := \\{ x \\in A : f_k(x) > \\lambda - \\beta_k \\lambda - \\epsilon_0 \\}$ to\n $\\widehat{\\mathcal{M}}$. \n \\ENDIF\n \\ENDFOR\n \\STATE \\textbf{return} $\\widehat{\\mathcal{M}}$. \n\n\\end{algorithmic}\n\\end{algorithm}\n\n\nNext we give admissibility conditions for $\\epsilon_0$-modal sets. The only changes (compared to admissibility conditions for modal-sets) are the constant factors. In particular, when $\\epsilon_0=0$ and $\\tilde\\epsilon = 0$ these reduce to the admissibility conditions for modal-sets. As discussed in the main text, a larger $\\tilde\\epsilon$ value will prune more aggressively at the cost of requiring a larger number of samples. Furthermore, it is implicit below that $\\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$. This ensures that we do not prune so aggressively that the estimated $\\epsilon_0$-modal sets merge together. 
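To make Algorithm M-cores concrete, here is a minimal executable sketch of its loop structure. It fixes the data-dependent slacks $9\beta_k\lambda + \tilde\epsilon$ and $\beta_k\lambda$ to constants (`cc_slack`, `core_slack`) and approximates the graph $G(\cdot)$, whose precise definition appears in the main text rather than in this appendix, by connecting level-set points within a fixed radius; all of these are simplifying assumptions:

```python
def connected_component(idx, members, pts, radius):
    """CC of point `idx` among the 1-D points indexed by `members`,
    connecting points within `radius` of each other."""
    comp, frontier = {idx}, [idx]
    while frontier:
        i = frontier.pop()
        for j in members:
            if j not in comp and abs(pts[j] - pts[i]) <= radius:
                comp.add(j)
                frontier.append(j)
    return comp

def m_cores(pts, f, radius, eps0, cc_slack, core_slack):
    """Sketch of Algorithm M-cores: scan points in decreasing order of
    the density estimate f, and start a new cluster-core whenever the
    CC at the slackened level is disjoint from all existing cores."""
    order = sorted(range(len(pts)), key=lambda i: -f[i])
    cores = []
    for i in order:
        lam = f[i]
        members = {j for j in range(len(pts)) if f[j] >= lam - cc_slack - eps0}
        A = connected_component(i, members, pts, radius)
        if all(A.isdisjoint(c) for c in cores):
            cores.append({j for j in A if f[j] > lam - core_slack - eps0})
    return cores
```

On two well-separated 1-D blobs with peaked density estimates, the sketch recovers one core per blob.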
\n\n\n\\begin{definition} $k$ is {\\bf admissible} for an $\\epsilon_0$-modal set $M$ if (letting $u_M^{-1}, l_M^{-1}$ be the inverses of $u_M, l_M$)\n\\begin{align*}\n\\max \\left\\{ \\left(\\frac{24C_{\\delta, n} (\\sup_{x \\in \\mathcal{X}} f(x) + \\epsilon_0)}{l_M(\\min\\{r_M, r_s\\}\/2) - \\tilde\\epsilon} \\right)^2, 2^{7 + d} C_{\\delta, n}^2 \\right\\} \\\\\n\\le k \\le \\frac{v_d \\cdot (f_M - \\epsilon_0)}{2^{2+2d}} \\left(u_M^{-1} \\left ( \\frac{C_{\\delta, n} (f_M - \\epsilon_0)}{2\\sqrt{k}}\\right) \\right)^d \\cdot n.\n\\end{align*}\n\\end{definition}\n\n\n\n\n\n\n\n\n\n\\section{Supporting lemmas and propositions}\\label{supportinglemmas}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:main-assumptions-general}]\nLet $M$ be an $\\epsilon_0$-modal set with maximum density $f_M$ and minimum density $f_M - \\epsilon_0$ (i.e. $f_M - \\epsilon_0 \\le f(x) \\le f_M$ for $x\\in M$). \nDefine ${\\mathcal{X}^{\\lambda}} := \\{ x : f(x) \\ge \\lambda\\}$.\nLet $A_1,...,A_m$ be the CCs of ${\\mathcal{X}^{f_M - \\epsilon_0}}$ (there are a finite number of CCs since each CC contains at least one modal-set and the number of modal-sets is finite). \nDefine $r_{\\text{min}} := \\min_{A_i \\neq A_j} \\inf_{x \\in A_i, x' \\in A_j} |x - x'|$, which is the minimum distance between pairs of points in different CCs.\nNext, define the one-sided Hausdorff distance for closed sets $A, B$: $d_{H'}(A, B) := \\max_{x \\in A} \\min_{y \\in B} |x - y|$. Then consider\n$g(t) := d_{H'}({\\mathcal{X}^{f_M - \\epsilon_0 - t}}, {\\mathcal{X}^{f_M - \\epsilon_0}})$.\n\nSince $f$ is continuous and has a finite number of modal-sets, $g$ has a finite number of points of discontinuity (i.e. when $f_M - \\epsilon_0 - t$ is the density of some modal-set) and we have $g(t) \\rightarrow 0$ as $t \\rightarrow 0$. 
\nThus, there exists $0 < \\lambda_M < f_M - \\epsilon_0$ such that $g(f_M - \\epsilon_0 - \\lambda_M) < \\frac{1}{4}r_{\\text{min}}$ and there are no modal-sets or $\\epsilon_0$-modal sets with minimum density in $[\\lambda_M, f_M - \\epsilon_0)$. \nFor each $A_i$, there exists exactly one CC of $\\mathcal{X}^{\\lambda_M}$, $A_i'$, such that\n$A_i \\subset A_i'$. Since $g(f_M - \\epsilon_0 - \\lambda_M) < \\frac{1}{4}r_{\\text{min}}$, it follows that $A_i' \\subseteq B(A_i, \\frac{1}{4}r_{\\text{min}})$. Thus, the $A_i'$'s are pairwise separated by distance at least $\\frac{1}{2}r_{\\text{min}}$. Moreover, there are no other CCs in $\\mathcal{X}^{\\lambda_M}$ because there are no modal-sets with density in $[\\lambda_M, f_M - \\epsilon_0)$. \n\nThen, let $A_M$ be the CC of $\\mathcal{X}^{\\lambda_M}$ containing $M$. Then $A_M$ contains no other $\\epsilon_0$-modal sets, and $A_M$ and $\\mathcal{X}^{\\lambda_M} \\backslash A_M$ are $\\frac{1}{5}r_{\\text{min}}$-separated by some set $S_M$ (i.e. take $S_M := \\{x : d(x, A_M) = \\frac{1}{5} r_{\\text{min}}\\}$). Since there is a finite number of modal-sets, it suffices to take $r_s$ to be the minimum of the corresponding $\\frac{1}{5}r_{\\text{min}}$ for each $\\epsilon_0$-modal set. This resolves the first part of the proposition.\n\nLet $h(r) := \\inf_{x \\in B(M, r)} f(x)$. Since $f$ is continuous, $h$ is continuous and decreasing with $h(0) = f_M - \\epsilon_0 > \\lambda_M$. Take $r_M > 0$ sufficiently small so that $h(r_M) > \\lambda_M$. This resolves the second part of the proposition. \n\n\nTake $M_0$ to be some modal-set with density $f_M$ in $M$. One must exist since $M$ has local-maxima at level $f_M$.\nFor each $r$, let $u_M(r) := \\max\\{f_M - \\epsilon_0 - \\inf_{x \\in B(M, r)} f(x), f_M - \\inf_{x \\in B(M_0, r)} f(x) \\}$. Then, we have $f_M - f(x) \\le u_M(d(x, M_0))$ and $f_M - \\epsilon_0 - f(x) \\le u_M(d(x, M))$. 
Clearly $u_M$ is increasing on $[0, r_M]$ with $u_M(0) = 0$ and continuous since $f$ is continuous. If $u_M$ is not strictly increasing then we can replace it with a strictly increasing continuous function while still having $u_M(r) \\rightarrow 0$ as $r \\rightarrow 0$ (i.e. by adding an appropriate strictly increasing continuous function). This resolves the third part of the proposition and the upper bound in the fourth part of the proposition. \n\nNow, define $g_M(t) := d({\\mathcal{X}^{f_M - \\epsilon_0 - t}} \\cap {A_M}, M)$ \nfor $t \\in [0, \\frac{1}{2} (f_M - \\epsilon_0 - \\lambda_M)]$. \nThen, $g_M$ is continuous, $g_M(0) = 0$ and is strictly increasing. \nDefine $l_M$ to be the inverse of $g_M$. Clearly $l_M$ is continuous, strictly increasing, and $l_M(r) \\rightarrow 0$ as $r \\rightarrow 0$. From the definition of $g_M$, it follows that for $x \\in B(M, r_M)$, $f_M - \\epsilon_0 - f(x) \\ge l_M(d(x, M))$\nas desired.\n\\end{proof}\n\nWe need the following result giving guarantees on the empirical balls.\n\\begin{lemma}[\\citep{CD10}] \\label{ball_bounds} \nPick $0 < \\delta < 1$. Assume that $k \\ge d \\log n$. Then with probability at least $1 - \\delta$, for every ball $B \\subset \\mathbb{R}^d$ we have\n\\begin{align*}\n\\mathcal{F}(B) \\ge C_{\\delta, n} \\frac{\\sqrt{d \\log n}}{n} &\\Rightarrow \\mathcal{F}_n(B) > 0\\\\\n\\mathcal{F}(B) \\ge \\frac{k}{n} + C_{\\delta, n} \\frac{\\sqrt{k}}{n} &\\Rightarrow \\mathcal{F}_n(B) \\ge \\frac{k}{n} \\\\\n\\mathcal{F}(B) \\le \\frac{k}{n} - C_{\\delta, n}\\frac{\\sqrt{k}}{n} &\\Rightarrow \\mathcal{F}_n(B) < \\frac{k}{n}.\n\\end{align*}\n\\end{lemma}\n\n\n\nLemma~\\ref{fk_bounds} of \\cite{dasgupta2014optimal} establishes convergence rates for $f_k$. 
\n\n\\begin{definition}\\label{rhat} For $x \\in \\mathbb{R}^d$ and $\\epsilon > 0$, define \n$\\hat{r}(\\epsilon, x):=\\sup\\left\\{r : \\sup_{x' \\in B(x, r)} f(x') - f(x) \\le \\epsilon \\right\\}$ and\n$\\check{r}(\\epsilon, x):=\\sup\\left\\{r : \\sup_{x' \\in B(x, r)} f(x) - f(x') \\le \\epsilon \\right\\}$.\n\\end{definition}\n\n\\begin{lemma}[Bounds on $f_k$]\\label{fk_bounds} Suppose that $\\frac{C_{\\delta, n}}{\\sqrt{k}} < \\frac{1}{2}$. Then the following two statements each hold with probability at least $1 - \\delta$: \n\\begin{align*}\nf_k(x) < \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}} \\right)(f(x) + \\epsilon),\n\\end{align*}\nfor all $x\\in \\mathbb{R}^d$ and all $\\epsilon > 0$ provided $k$ satisfies $v_d\\cdot \\hat{r}(\\epsilon, x)^d \\cdot (f(x) + \\epsilon) \\ge \\frac{k}{n} - C_{\\delta, n}\\frac{\\sqrt{k}}{n}$.\n\\begin{align*}\nf_k(x) \\ge \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}} \\right)(f(x) - \\epsilon),\n\\end{align*}\nfor all $x\\in \\mathbb{R}^d$ and all $\\epsilon > 0$ provided $k$ satisfies $v_d\\cdot \\check{r}(\\epsilon, x)^d \\cdot (f(x) - \\epsilon) \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}$. \n\\end{lemma}\n\n\n\\begin{lemma}[Extends Lemma~\\ref{r_n_upper_bound}] \\label{r_n_upper_bound_general} (Upper bound on $r_n$) Let $M$ be an $\\epsilon_0$-modal set with maximum density $f_M$ and suppose that $k$ is admissible for $M$. With probability at least $1 - \\delta$,\n\\begin{align*}\nr_n(M) \\le \\left(\\frac{2C_{\\delta, n} \\sqrt{d \\log n}}{n\\cdot v_d\\cdot (f_M - \\epsilon_0)}\\right)^{1\/d}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{r_n_upper_bound_general}]\nDefine $r_0 := \\left(\\frac{2C_{\\delta, n} \\sqrt{d \\log n}}{nv_d\\cdot (f_M - \\epsilon_0)}\\right)^{1\/d}$ and $r := (4k\/(nv_df_M))^{1\/d}$. Since $k$ is admissible, we have that $u_M(r_0) \\le u_M(r) \\le (f_M - \\epsilon_0)\/ 2$. 
We have, for any $x \\in M$,\n\\begin{align*}\n\\mathcal{F}(B(x, r_0)) &\\ge v_d{r_0}^d(f_M -\\epsilon_0 - u_M(r_0)) \\ge v_d{r_0}^d (f_M - \\epsilon_0)\/2 \n= \\frac{C_{\\delta, n} \\sqrt{d\\log n}}{n}.\n\\end{align*}\nBy Lemma~\\ref{ball_bounds}, this implies that $\\mathcal{F}_n (B(x, r_0)) > 0$ for all $x \\in M$ with probability at least $1 - \\delta$, and therefore $r_n(M) \\le r_0$.\n\\end{proof}\n\n\n\n\n\n\\section{Isolation Results} \\label{appendix:isolation}\n\n\n\nThe following extends Lemma~\\ref{isolation} to handle more general\n$\\epsilon_0$-modal sets and pruning parameter $\\tilde\\epsilon$. \n\n\\begin{lemma}[Extends Lemma~\\ref{isolation}] (Isolation) \\label{isolation_general} \nLet $M$ be an $\\epsilon_0$-modal set and $k$ be admissible for $M$. Suppose $0 \\le \\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$ and let $\\hat{x}_M := \\argmax_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$. Then the following holds\nwith probability at least $1-5\\delta$: when processing sample point $\\hat{x}_M$ in Algorithm~\\ref{alg:epsilonmodalset} we will add $\\widehat{M}$ to $\\widehat{\\mathcal{M}}$ where \n$\\widehat{M}$ does not contain points outside of $\\mathcal{X}_M$.\n\\end{lemma}\n\n\\begin{proof} Define $\\widehat{f}_M := f_k(\\hat{x}_M)$, $\\lambda = \\widehat{f}_M$ and $\\bar{r} := \\min\\{r_M, r_s\\} \/ 2$.\nIt suffices to show that (\\rm{i}) $\\mathcal{X} \\backslash \\mathcal{X}_M$ and $B(M, \\bar{r})$ are disconnected in $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ and (\\rm{ii}) $\\hat{x}_M \\in B(M, \\bar{r})$. \n\nIn order to show (\\rm{i}), we first show that $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ contains no points from $B(S_M, r_s\/2)$ and no points from $\\mathcal{X}_M \\backslash B(M, \\bar{r})$. 
Then, all that will be left is showing that there are no edges between $B(M, \\bar{r})$ and $\\mathcal{X} \\backslash \\mathcal{X}_M$.\n \n\nWe first prove bounds on $f_k$ that will help us show (\\rm{i}) and (\\rm{ii}). Let $\\bar{F} := f_M - \\epsilon_0 - l_M(\\bar{r}\/2)$. Then for all $x \\in \\mathcal{X}_M \\backslash B(M, \\bar{r})$, we have $\\hat{r}(\\bar{F} - f(x), x) \\ge \\bar{r}\/2$. Thus the conditions for Lemma~\\ref{fk_bounds} are satisfied by the admissibility of $k$ and hence $f_k(x) < \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}} \\right) \\bar{F}$. Now,\n\\begin{align*}\n\\sup_{x \\in \\mathcal{X}_M\\backslash B(M, \\bar{r})} f_k(x) &< (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) \\bar{F} = (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) (f_M - \\epsilon_0 - l_M(\\bar{r}\/2))\\\\\n &\\le (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}})^3 \\widehat{f}_M - (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) \\cdot (\\epsilon_0 + l_M(\\bar{r}\/2)) \\le \\lambda- 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon},\n\\end{align*}\nwhere the second inequality holds by using Lemma~\\ref{fk_bounds} as follows. Choose $x \\in M_0$ and $\\epsilon = \\frac{C_{\\delta, n}}{2\\sqrt{k}} f_M$. Then $\\check{r}(\\epsilon, x) \\ge u_M^{-1}(\\epsilon)$. The conditions for Lemma~\\ref{fk_bounds} hold by the admissibility of $k$ and thus $\\widehat{f}_M \\ge f_k(x) \\ge (1 - C_{\\delta, n}\/\\sqrt{k})^2 f_M$. Furthermore, it follows from Lemma~\\ref{fk_bounds} that $\\widehat{f}_M < (1 + 2C_{\\delta, n}\/\\sqrt{k}) f_M$; \ncombining this with the admissibility of $k$ yields the last inequality. 
Finally, from the above, we also have $\\sup_{x \\in \\mathcal{X}_M\\backslash B(M, \\bar{r})} f_k(x) < \\widehat{f}_M$, implying (\\rm{ii}).\n\n\\noindent Next, if $x \\in B(S_M, r_s\/2)$, then $\\hat{r}(\\bar{F} - f(x), x) \\ge \\bar{r}\/2$, and the same argument as above gives:\n\\begin{align*}\n\\sup_{x \\in B(S_M, r_s\/2)} f_k(x) < \n\\lambda- 9\\beta_k \\lambda - \\epsilon_0 -\\tilde{\\epsilon}.\n\\end{align*}\nThus, $G(\\lambda- 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ contains no point from $B(S_M, r_s\/2)$ and no point from $\\mathcal{X}_M \\backslash B(M, \\bar{r})$. \n\n\\noindent All that remains is showing that there is no edge between $B(M, \\bar{r})$ and $\\mathcal{X} \\backslash \\mathcal{X}_M$. It suffices to show that any such edge will have length less than $r_s$ since $B(S_M, r_s\/2)$ separates them by a width of $r_s$. We have for all $x \\in B(M, \\bar{r})$, \n\\begin{align*}\\mathcal{F}(B(x, \\bar{r})) \\ge v_d\\bar{r}^d\\inf_{x' \\in B(x, 2\\bar{r})} f(x') \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}.\n\\end{align*}\nThus by Lemma~\\ref{ball_bounds}, we have $r_k(x) \\le \\bar{r} < r_s$, establishing (\\rm{i}).\n\n\\end{proof}\n\n\n\n\\section{Integrality Results} \\label{appendix:integrality}\n\nThe goal is to show that the $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ referred to above contains $B(M, r_n(M))$. We give a condition under which $B(M, r_n(M)) \\cap X_{[n]}$ would be connected in $G(\\lambda)$ for some $\\lambda$. It is adapted from arguments in Theorem V.2 in \\cite{CDKvL14}.\n\n\n\\begin{lemma}\\label{connectedness} (Connectedness)\nLet $M$ be an $\\epsilon_0$-modal set and $k$ be admissible for $M$. 
Then with probability at least $1 - \\delta$, $B(M, r_n(M)) \\cap X_{[n]}$ is connected in $G(\\lambda)$ if \n\\begin{align*}\n\\lambda \\le \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)^2 (f_M - \\epsilon_0).\n\\end{align*}\n\\end{lemma}\n\n\n\\begin{proof}\nFor simplicity of notation, let $A := B(M, r_n(M))$. It suffices to prove the result for $\\lambda = (1 - \\frac{C_{\\delta, n}}{\\sqrt{k}})^2 (f_M - \\epsilon_0)$.\nDefine $r_{\\lambda} = (k\/(nv_d\\lambda))^{1\/d}$ and $r_o = (k\/(2nv_df_M))^{1\/d}$. \nFirst, we show that for each $x \\in B(A, r_\\lambda)$, there is a sample point in $B(x, r_o)$. \nWe have for $x\\in B(A, r_\\lambda)$,\n\\begin{align*}\n\\mathcal{F}(B(x, r_o)) &\\ge v_d r_o^d \\inf_{x' \\in B(x, r_o + r_\\lambda )} f(x') \\ge v_d r_o^d(f_M - \\epsilon_0 - u_M(r_o + r_\\lambda + r_n(M))) \\\\\n&\\ge v_d r_o^d (f_M - \\epsilon_0) \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right) \\ge C_{\\delta, n} \\frac{\\sqrt{d \\log n}}{n}.\n\\end{align*}\nThus by Lemma~\\ref{ball_bounds} we have that with probability at least $1 - \\delta$, $B(x, r_o)$ contains a sample uniformly over $x \\in B(A, r_\\lambda)$. \n\n \\noindent Now, let $x$ and $x'$ be two points in $A\\cap X_{[n]}$. We now show that there exists $x = x_0,x_1,...,x_p = x'$ such that $||x_i - x_{i+1}|| < r_o$ and $x_i \\in B(A, r_o)$. Note that since $A$ is connected and the density in $B(A, r_o + r_\\lambda)$ is lower bounded by a positive quantity, for arbitrary $\\gamma \\in (0, 1)$ we can choose $x = z_0, z_1,...,z_p = x'$ where $||z_{i+1} - z_i|| \\le \\gamma r_o$. Next, choose $\\gamma$ sufficiently small such that \n\\begin{align*}\nv_d\\left( \\frac{(1-\\gamma)r_o}{2}\\right)^d\\gamma \\ge \\frac{C_{\\delta, n}\\sqrt{d\\log n}}{n},\n\\end{align*}\nthen there exists a sample point $x_i$ in $B(z_i, (1-\\gamma)r_o\/2)$. 
Moreover we obtain that \n\\begin{align*}\n||x_{i+1} - x_i|| &\\le ||x_{i+1} - z_{i+1}|| + ||z_{i+1} - z_i|| + ||z_i -x_i|| \\le r_o.\n\\end{align*} \n\n\\noindent All that remains is to show $(x_i, x_{i+1}) \\in G(\\lambda)$. We see that $x_i \\in B(A, r_o)$. However, for each $x \\in B(A, r_o)$, we have \n\\begin{align*}\n\\mathcal{F}(B(x, r_\\lambda)) &\\ge v_dr_\\lambda^d \\inf_{x' \\in B(x, r_o + r_\\lambda)} f(x') \n\\ge v_d r_\\lambda^d (f_M - \\epsilon_0) \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)\n\\ge \\frac{k}{n} + \\frac{C_{\\delta, n} \\sqrt{k}}{n}.\n\\end{align*}\n Thus $r_k(x_i) \\le r_\\lambda$ for all $i$. Therefore, $x_i \\in G(\\lambda)$ for all $x_i$. Finally, $||x_{i+1} - x_i|| \\le r_o \\le \\min \\{ r_k(x_i), r_k(x_{i+1})\\}$ and thus $(x_i, x_{i+1}) \\in G(\\lambda)$. Therefore, $A \\cap X_{[n]}$ is connected in $G(\\lambda)$, as desired. \n \\end{proof}\n\n\n\n\nThe following extends Lemma~\\ref{integrality} to handle more general $\\epsilon_0$-modal sets. \n\\begin{lemma}[Extends Lemma~\\ref{integrality}] (Integrality) \\label{integrality_general} \nLet $M$ be an $\\epsilon_0$-modal set with maximum density $f_M$, and suppose $k$ is admissible for $M$. Let $\\hat{x}_M := \\argmax_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$. Then the following holds with probability at least $1 - 3\\delta$. When processing sample point $\\hat{x}_M$ in Algorithm~\\ref{alg:modalset}, if we add $\\widehat{M}$ to $\\widehat{\\mathcal{M}}$, then $B(M, r_n(M)) \\cap X_{[n]} \\subseteq \\widehat{M}$.\n\\end{lemma}\n\n\\begin{proof} Define $\\widehat{f}_M := f_k(\\hat{x}_M)$ and $\\lambda := \\widehat{f}_M$. It suffices to show that $B(M, r_n(M)) \\cap X_{[n]}$ is connected in $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$. By Lemma~\\ref{connectedness}, $B(M, r_n(M)) \\cap X_{[n]}$ is connected in $G(\\lambda_0)$ when $\\lambda_0 \\le (1 - \\frac{C_{\\delta, n}}{\\sqrt{k}})^2 (f_M - \\epsilon_0) $. 
Indeed, we have that\n\\begin{align*}\n\\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)^2 (f_M - \\epsilon_0)\n&\\ge \\widehat{f}_M \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)^2 \/ \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right) - \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)^2 \\epsilon_0 \\\\\n&\\ge \\lambda - \\beta_k \\lambda - \\epsilon_0 \\ge \\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon},\n\\end{align*}\nwhere the first inequality follows from Lemma~\\ref{fk_bounds},\nas desired.\n\\end{proof}\n\n\n\\section{Theorem~\\ref{theo:main}} \\label{appendix:theomain}\n\nCombining isolation and integrality, we obtain the following extension of Corollary~\\ref{identification}.\n\n\n\\begin{corollary}[Extends Corollary~\\ref{identification}] (Identification) \\label{identification_general}\nSuppose we have the assumptions of Lemmas~\\ref{isolation_general} and~\\ref{integrality_general} for $\\epsilon_0$-modal set $M$. Define $\\widehat{f}_M := \\max_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$ and $\\lambda := \\widehat{f}_M$. With probability at least $1 - 5\\delta$, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that $B(M, r_n(M)) \\cap X_{[n]} \\subseteq \\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M \\cap X_{[n]} : f_k(x) \\ge \\lambda - \\beta_k \\lambda - \\epsilon_0 \\}$.\n\\end{corollary}\n\n\\begin{proof}\nBy Lemma~\\ref{isolation_general}, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ which contains only points in $\\mathcal{X}_M$, with maximum $f_k$ value $\\widehat{f}_M$. Thus, we have $\\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M \\cap X_{[n]} : f_k(x) \\ge \\widehat{f}_M - \\beta_k \\widehat{f}_M - \\epsilon_0 \\}$. By Lemma~\\ref{integrality_general}, $B(M, r_n(M)) \\cap X_{[n]} \\subseteq \\widehat{M}$.\n\\end{proof}\n\n\nThe following extends Theorem~\\ref{theo:main} to handle more general $\\epsilon_0$-modal sets and pruning parameter $\\tilde{\\epsilon}$. 
\n\n\\begin{theorem}[Extends Theorem~\\ref{theo:main}] \\label{theo:main_general}\nLet $\\delta > 0$ and $M$ be an $\\epsilon_0$-modal set. Suppose $k$ is admissible for $M$ and $0 \\le \\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$. Then with probability at least $1 - 6\\delta$, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that \n \\begin{align*}\n d(M, \\widehat{M}) \\le l_M^{-1}\\left(\\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M\\right), \\end{align*}\n which goes to $0$ as $C_{\\delta, n}\/\\sqrt{k} \\rightarrow 0$.\n\\end{theorem}\n\n\\begin{proof} Define $\\tilde{r} = l_M^{-1}\\left(\\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M\\right)$. There are two directions to show: $\\max_{x \\in \\widehat{M}} d(x, M) \\le \\tilde{r}$ and $\\sup_{x \\in M} d(x, \\widehat{M}) \\le \\tilde{r}$, each with probability at least $1 - \\delta$.\n\nWe first show $\\max_{x \\in \\widehat{M}} d(x, M) \\le \\tilde{r}$.\nBy Corollary~\\ref{identification_general} we have $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that $\\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M : f_k(x) \\ge \\widehat{f}_M - \\beta_k \\widehat{f}_M - \\epsilon_0 \\}$ where $\\widehat{f}_M := \\max_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$. \nHence, it suffices to show \n\\begin{align}\\label{consistency_to_show}\n\\inf_{x \\in B(M_0, r_n(M))} f_k(x) \\ge \\sup_{x \\in \\mathcal{X}_M \\backslash B(M, \\tilde{r})} f_k(x) + \\beta_k \\widehat{f}_M + \\epsilon_0.\n\\end{align}\nDefine $r := (4\/(f_Mv_d))^{1\/d}(k\/n)^{1\/d}$. For any $x \\in B(M_0, r + r_n(M))$, $f(x) \\ge f_M - u_M(r + r_n(M)) := \\check{F}$. Thus, for any $x \\in B(M_0, r_n(M))$ we can let $\\epsilon = f(x) - \\check{F}$, so that $\\check{r}(\\epsilon, x) \\ge r$ and hence the conditions for Lemma~\\ref{fk_bounds} are satisfied. 
Therefore, with probability at least $1-\\delta$,\n\\begin{align}\\label{consistency_bound1}\n \\inf_{x \\in B(M_0, r_n(M))} f_k(x) \\ge \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - u_M(r + r_n(M))).\n\\end{align}\nFor any $x\\in \\mathcal{X}_M \\backslash B(M, \\tilde{r}\/2)$, $f(x) \\le f_M - \\epsilon_0 - l_M(\\tilde{r}\/2) := \\hat{F}$. Now, for any $x \\in \\mathcal{X}_M \\backslash B(M, \\tilde{r})$, let $\\epsilon := \\hat{F} - f(x)$. We have $\\hat{r}(\\epsilon, x) \\ge \\tilde{r}\/2 = l^{-1}_M (8C_{\\delta, n}f_M\/\\sqrt{k})\/2 \\ge l^{-1}_M (u_M(2r)) \/ 2 \\ge r$ (since $l_M$ is increasing and $l_M \\le u_M$) and thus the conditions for Lemma~\\ref{fk_bounds} hold. Hence, with probability at least $1 - \\delta$,\n\\begin{align}\\label{consistency_bound2}\n\\sup_{x \\in \\mathcal{X}_M \\backslash B(M, \\tilde{r})} f_k(x) \\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - \\epsilon_0 - l_M(\\tilde{r})).\n\\end{align}\nThus, by (\\ref{consistency_bound1}) and (\\ref{consistency_bound2}) applied to (\\ref{consistency_to_show}) it suffices to show that\n\\begin{align}\\label{consistency_to_show_2}\n&\\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - u_M(r + r_n(M))) \n\\ge \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - \\epsilon_0 - l_M(\\tilde{r})) + \\beta_k \\widehat{f}_M + \\epsilon_0,\n\\end{align}\nwhich holds when \n\\begin{align}\\label{consistency_to_show_3}\nl_M(\\tilde{r}) \\ge u_M(r + r_n(M)) + \\frac{3C_{\\delta, n}}{\\sqrt{k}}f_M + \\beta_k \\widehat{f}_M.\n\\end{align}\nThe admissibility of $k$ ensures that $r_n(M) \\le r \\le r_M\/2$ so that the regions of $\\mathcal{X}$ we are dealing with in this proof are confined within $B(M_0, r_M)$ and $B(M, r_M) \\backslash M$.\n\nBy the admissibility of $k$, $u_M(2r) \\le \\frac{C_{\\delta, n}}{2\\sqrt{k}}f_M$. 
This gives\n\\begin{align*}\nl_M(\\tilde{r}) &= \\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M \\ge u_M(2r) + \\frac{15C_{\\delta,n }}{2\\sqrt{k}}f_M \n\\ge u_M(r + r_n(M)) + \\frac{3C_{\\delta, n}}{\\sqrt{k}} f_M + \\beta_k \\widehat{f}_M,\n\\end{align*}\nwhere the second inequality holds since $C_{\\delta, n}\/\\sqrt{k} < 1\/16$, $u_M$ is increasing, $r \\ge r_n(M)$, and $\\widehat{f}_M \\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)f_M$ by Lemma~\\ref{fk_bounds}. This shows (\\ref{consistency_to_show_3}), as desired.\n\nThis shows one direction of the Hausdorff bound. We now show the other direction, that $\\sup_{x \\in M} d(x, \\widehat{M}) \\le \\tilde{r}$.\n\nIt suffices to show for each point $x \\in M$ that the distance $r_n(x)$ to the closest sample point satisfies $r_n(x) \\le \\tilde{r}$, since $\\widehat{M}$ contains these sample points by Corollary~\\ref{identification_general}. Indeed, by Lemma~\\ref{r_n_upper_bound_general} and the admissibility of $k$, $r_n(x) \\le \\tilde{r}$ as desired.\n\\end{proof}\n\n\n\n\n\n\\section{Theorem~\\ref{pruning}} \\label{appendix:pruning}\n\nWe need the following Lemma~\\ref{connectedness_pruning}, which guarantees that points in separate CCs of the pruned graph are also in separate CCs of $f$ at a nearby level. \\cite{CDKvL14} gives a result for a different graph; the proof can be adapted to give the same result for our graph (with slightly different assumptions on $k$).\n\n\\begin{lemma}[Separation of level sets under pruning, \\cite{CDKvL14}] \\label{connectedness_pruning} \nFix $\\epsilon > 0$ and let $r(\\epsilon) := \\inf_{x \\in \\mathbb{R}^d} \\min \\{\\hat{r}(\\epsilon, x), \\check{r}(\\epsilon, x) \\}$. 
Define $\\Lambda := \\max_{x \\in \\mathbb{R}^d} f(x)$, assume $\\tilde\\epsilon_0 \\ge 2 \\epsilon + \\beta_k(\\lambda_f + \\epsilon)$ (with $\\lambda_f$ defined below), and let $\\tilde{G}(\\lambda)$ be the graph with vertices in $G(\\lambda)$ and edges between pairs of vertices if they are connected in $G(\\lambda - \\tilde\\epsilon_0)$. Then the following holds with probability at least $1 - \\delta$.\\\\\n\nLet $\\tilde{A}_1$ and $\\tilde{A}_2$ denote two disconnected sets of points in $\\tilde{G}(\\lambda)$. Define $\\lambda_f := \\inf_{x \\in \\tilde{A}_1 \\cup \\tilde{A}_2}f(x)$. Then $\\tilde{A}_1$ and $\\tilde{A}_2$ are disconnected in the level set $\\{ x \\in \\mathcal{X} : f(x) \\ge \\lambda_f\\}$ if $k$ satisfies\n\\begin{align*}\nv_d (r(\\epsilon) \/ 2) ^d (\\lambda_f - \\epsilon) \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}\n\\end{align*}\nand \n\\begin{align*}\nk \\ge \\max \\{ 8 \\Lambda C_{\\delta, n}^2\/(\\lambda_f - \\epsilon), 2^{d + 7} C_{\\delta, n}^2 \\}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nWe prove the contrapositive. Let $A$ be a CC of $\\{x \\in \\mathcal{X} : f(x) \\ge \\lambda_f \\}$ with $\\lambda_f = \\min_{x \\in A \\cap X_{[n]}} f(x)$. Then it suffices to show $A \\cap X_{[n]}$ is connected in $G(\\lambda')$ for $\\lambda' := \\min_{x \\in A \\cap X_{[n]}} f_k(x) - \\tilde\\epsilon_0$. \n\nWe first show $A \\cap X_{[n]}$ is connected in $G(\\lambda)$ for $\\lambda = (\\lambda_f - \\epsilon) \/ (1 + C_{\\delta, n}\/\\sqrt{k})$; all that will then remain is showing $\\lambda' \\le \\lambda$.\n\nDefine $r_o := (k\/(2nv_d\\Lambda))^{1\/d}$ and $r_\\lambda := (k\/(nv_d\\lambda))^{1\/d}$. Then from the first assumption on $k$, it follows that $r_\\lambda \\le r(\\epsilon)\/2$. 
Now for each $x \\in B(A, r_\\lambda)$, we have \n\\begin{align*}\n\\mathcal{F}(B(x, r_o)) \\ge v_d r_o^d \\inf_{x' \\in B(x, r_o + r_\\lambda )} f(x') \n\\ge v_d r_o^d(\\lambda_f - \\epsilon) \\ge C_{\\delta, n} \\frac{\\sqrt{d \\log n}}{n}.\n\\end{align*}\nThus, by Lemma~\\ref{ball_bounds} we have with probability at least $1-\\delta$ that $B(x, r_o)$ contains a sample point.\n\nNow, in the same way as shown in Lemma~\\ref{connectedness}, we have the following. If $x$ and $x'$ are two points in $A\\cap X_{[n]}$ then there exist $x = x_0,x_1,...,x_p = x'$ such that $||x_i - x_{i+1}|| < r_o$ and $x_i \\in B(A, r_o)$. \n\nNext, we show $(x_i, x_{i+1}) \\in G(\\lambda)$. We have $x_i \\in B(A, r_o)$, and for each $x \\in B(A, r_o)$, \n\\begin{align*}\n\\mathcal{F}(B(x, r_\\lambda)) &\\ge v_dr_\\lambda^d \\inf_{x' \\in B(x, r_o + r_\\lambda)} f(x') \n\\ge v_d r_\\lambda^d (\\lambda_f - \\epsilon) \\ge \\frac{k}{n} + \\frac{C_{\\delta, n} \\sqrt{k}}{n}.\n\\end{align*}\n Thus $r_k(x_i) \\le r_\\lambda$ for all $i$, and therefore $x_i \\in G(\\lambda)$ for all $x_i$. Finally, $||x_{i+1} - x_i|| \\le r_o \\le \\min \\{ r_k(x_i), r_k(x_{i+1})\\}$ and thus $(x_i, x_{i+1}) \\in G(\\lambda)$. Therefore, $A \\cap X_{[n]}$ is connected in $G(\\lambda)$.\n\nAll that remains is showing $\\lambda' \\le \\lambda$. We have\n\\begin{align*}\n\\lambda' = \\min_{x \\in A \\cap X_{[n]}} f_k(x) - \\tilde\\epsilon_0 \n\\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}} \\right) (\\lambda_f + \\epsilon) - \\tilde\\epsilon_0\n\\le \\lambda,\n\\end{align*}\nwhere the first inequality holds by Lemma~\\ref{fk_bounds}, and the second inequality holds from the assumption on $\\tilde\\epsilon_0$, as desired. \n\n\\end{proof}\n\n\nWe state the pruning result for more general choices of $\\tilde{\\epsilon}$. Its proof is standard and given here for completeness (see e.g. \\cite{dasgupta2014optimal}). 
\n\n\\begin{theorem}[Extends Theorem~\\ref{pruning}] \\label{pruning_general}\nLet $0< \\delta < 1$ and $\\tilde\\epsilon \\ge 0$. There exists $\\lambda_0 = \\lambda_0(n, k)$ such that the following holds with probability at least $1 - \\delta$. All $\\epsilon_0$-modal set estimates in $\\widehat{\\mathcal{M}}$ chosen at level $\\lambda \\ge \\lambda_0$ can be injectively mapped to $\\epsilon_0$-modal sets $\\braces{M: \\lambda_M \\geq \\min_{\\{x\\in X_{[n]} : f_k(x) \\ge \\lambda - \\beta_k \\lambda \\}} f(x)}$, provided $k$ is admissible for all such $M$. \n\nIn particular, if $f$ is H\\\"older-continuous (i.e. $|f(x) - f(x')| \\le c||x - x'||^\\alpha$ for some $0 < \\alpha\\le 1$, $c > 0$) and $\\tilde\\epsilon = 0$, \nthen $\\lambda_0 \\to 0$ as $n\\to \\infty$, provided \n$C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$, for some $C_1, C_2$ independent of $n$. \n\\end{theorem}\n\n\\begin{proof}\nDefine $r(\\epsilon) := \\inf_{x \\in \\mathbb{R}^d} \\min \\{\\hat{r}(\\epsilon, x), \\check{r}(\\epsilon, x) \\}$. Since $f$ is uniformly continuous, it follows that $r(0) = 0$, $r$ is increasing, and $r(\\epsilon) > 0$ for $\\epsilon > 0$. \n\nThus, there exists $\\tilde{\\lambda}_{n,k,\\tilde\\epsilon} > 0$ such that \n\\begin{align*}\n\\tilde{\\lambda}_{n, k, \\tilde\\epsilon} = \\frac{k}{n\\cdot v_d \\cdot r((8\\beta_k \\tilde{\\lambda}_{n,k,\\tilde\\epsilon} + \\tilde{\\epsilon})\/3)^d}.\n\\end{align*}\nDefine\n\\begin{align*}\n\\lambda_0 := \\max \\{\\tilde\\lambda_{n,k,\\tilde\\epsilon}, 32\\beta_k \\sup_{x \\in \\mathcal{X}} f(x) + 4 \\tilde\\epsilon \\}.\n\\end{align*}\n\nLet us identify each estimated $\\epsilon_0$-modal set $\\widehat{M}$ with the point $\\hat{x}_M := \\argmax_{x \\in \\widehat{M}} f_k(x)$. Let us call these points modal-points. Then it suffices to show that there is an injection from modal-points to the $\\epsilon_0$-modal sets. 
\n\nDefine $G'(\\lambda)$ to be the graph with vertices in $G(\\lambda - \\beta_k \\lambda)$ and edges between vertices if they are in the same CC of $G(\\lambda - 9\\beta_k \\lambda - \\tilde\\epsilon)$, and define $X_{[n]}^\\lambda := \\{ x : f_k(x) \\ge \\lambda \\}$. \nLet $\\tilde{A}_{i, \\lambda} := \\tilde{A}_i \\cap X_{[n]}^\\lambda$, $i = 1,...,m$, be the vertices of the CCs of $G'(\\lambda)$ which do not contain any modal-points chosen thus far as part of estimated modal-sets. \n\nFix level $\\lambda > 0$ such that $\\lambda_f := \\inf_{x \\in X_{[n]}^\\lambda} f(x) \\ge \\lambda_0\/2$. Then the conditions are satisfied for Lemma~\\ref{connectedness_pruning} with $\\epsilon = (8\\beta_k\\lambda + \\tilde\\epsilon)\/3$. Suppose that $\\tilde{A}_{1,\\lambda},...,\\tilde{A}_{m,\\lambda}$ are in ascending order according to $\\lambda_{i, f} := \\min_{x \\in \\tilde{A}_{i, \\lambda}} f(x)$. Starting with $i = 1$, by Lemma~\\ref{connectedness_pruning}, $\\mathcal{X}^{\\lambda_{1, f}}$ can be partitioned into disconnected subsets $A_1$ and $\\mathcal{X}^{\\lambda_{1, f}} \\backslash A_1$ containing respectively $\\tilde{A}_{1, \\lambda}$ and $\\cup_{i=2}^m \\tilde{A}_{i, \\lambda}$. Assign the modal-point $\\argmax_{x \\in \\tilde{A}_{1, \\lambda}} f_k(x)$ to any $\\epsilon_0$-modal set in $A_1$. Repeat the same argument successively for any $\\tilde{A}_{i, \\lambda}$ and $\\cup_{j=i+1}^m \\tilde{A}_{j, \\lambda}$ until all modal-points are assigned to distinct $\\epsilon_0$-modal sets in disjoint sets $A_i$. \n\nNow by Lemma~\\ref{connectedness_pruning}, $\\mathcal{X}^{\\lambda_f}$ can be partitioned into disconnected subsets $A$ and $\\mathcal{X}^{\\lambda_f} \\backslash A$ containing respectively $\\tilde{A}_\\lambda := \\cup_{i=1}^m \\tilde{A}_{i, \\lambda}$ and $X_{[n]}^{\\lambda_f} \\backslash \\tilde{A}_{\\lambda}$. Thus, the modal-points in $\\tilde{A}_\\lambda$ were assigned to $\\epsilon_0$-modal sets in $A$. 
\n\n\\noindent Now we repeat the argument for all $\\lambda' > \\lambda$ to show that the modal-points in $X_{[n]}^\\lambda \\backslash \\tilde{A}_\\lambda$ can be assigned to distinct $\\epsilon_0$-modal sets in $\\mathcal{X}^{\\lambda_f} \\backslash A$. (Note that $\\lambda'_f := \\min_{x \\in X_{[n]}^{\\lambda' - \\beta_k\\lambda'} }f(x) \\ge \\lambda_f$.)\n\n\\noindent Finally, it remains to show that $\\lambda \\ge \\lambda_0$ implies $\\lambda_f \\ge \\lambda_0 \/2$. We have $\\lambda_0 \/ 4 \\ge 8\\beta_k\\lambda + \\tilde\\epsilon$, thus $r(\\lambda_0 \/ 4) \\ge r(8\\beta_k\\lambda + \\tilde\\epsilon)$. It follows that\n\\begin{align*}\nv_d(r(\\lambda_0\/4))^d\\cdot (\\lambda_0\/4) \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}.\n\\end{align*} \nHence, for all $x$ such that $f(x) \\le \\lambda_0 \/ 2$, we have\n\\begin{align*}\nf_k(x) \\le (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) (f(x) + \\lambda_0 \/ 4) \\le \\lambda_0.\n\\end{align*}\n\nTo see the second part, suppose we have $C_1, C_2 > 0$ such that $C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$. This combined with the fact that $r(\\epsilon) \\ge (\\epsilon\/c)^{1\/\\alpha}$ implies $\\lambda_0 \\rightarrow 0$, as desired.\n\n\\end{proof}\n\n\n\n\n\\section{Point-Cloud Density} \\label{appendix:pointcloud}\nHere we formalize the fact that modal-sets can serve as good models for high-density structures in data, for instance \na low-dimensional structure $M$ $+$ noise. \n\n\\begin{lemma} (Point Cloud with Gaussian Noise)\nLet $M \\subseteq \\mathbb{R}^d$ be compact (with possibly multiple connected-components of differing dimension). Then there exists a density $f$ over $\\mathbb{R}^d$ such that the density is uniform in $M$ and has Gaussian decay around $M$, i.e.\n\\begin{align*}\nf(x) = \\frac{1}{Z} \\exp(-d(x, M)^2\/(2\\sigma^2)),\n\\end{align*} \nwhere $\\sigma > 0$ and $Z > 0$ depend on $M, \\sigma$. Thus, the modal-sets of $f$ are the connected-components of $M$. 
\n\\end{lemma}\n\n\\begin{proof}\nSince $M$ is compact in $\\mathbb{R}^d$, it is bounded. Thus there exists $R > 0$ such that $M \\subseteq B(0, R)$. \nIt suffices to show that for any $\\sigma > 0$,\n\\begin{align*}\n\\int_{\\mathbb{R}^d} \\exp(-d(x, M)^2\/(2\\sigma^2)) dx < \\infty.\n\\end{align*}\nBy rescaling $x$ by $\\sigma$, it suffices to show that \n\\begin{align*}\n\\int_{\\mathbb{R}^d} g(x) dx < \\infty,\n\\end{align*}\nwhere $g(x) := \\exp(-\\frac{1}{2} d(x, M)^2)$. Consider level sets $\\mathcal{X}^\\lambda := \\{ x \\in \\mathbb{R}^d : g(x) \\ge \\lambda \\}$. \nNote that $\\mathcal{X}^\\lambda \\subseteq B(M, \\sqrt{2 \\log(1\/\\lambda)})$ based on the decay in $g$ around $M$. Clearly the image of $g$ is $(0, 1]$, so consider partitioning this range into the intervals $[1, 1\/2], [1\/2, 1\/3], ...$. Then it follows that \n\\begin{align*}\n\\int_{\\mathbb{R}^d} g(x) dx \n&\\le \\sum_{n=2}^\\infty \\text{Vol}(\\mathcal{X}^{1\/n}) \\left(\\frac{1}{n-1} - \\frac{1}{n}\\right)\n\\le \\sum_{n=2}^\\infty \\frac{\\text{Vol}(B(M, \\sqrt{2 \\log n})) }{(n-1)n}\\\\\n&\\le \\sum_{n=2}^\\infty \\frac{\\text{Vol}(B(0, R + \\sqrt{2 \\log n})) }{(n-1)n}\n= \\sum_{n=2}^\\infty \\frac{v_d(R + \\sqrt{2 \\log n})^d }{(n-1)n} \\\\\n&\\le v_d\\cdot 2^{d-1} \\sum_{n=2}^\\infty \\frac{R^d + (2 \\log n)^{d\/2}}{(n-1)n} < \\infty,\n\\end{align*}\nwhere the last inequality uses $(a + b)^d \\le 2^{d-1}(a^d + b^d)$ (by convexity of $t \\mapsto t^d$), as desired.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Implementation} \\label{appendix:implementation}\n\nIn this section, we explain how to implement Algorithm~\\ref{alg:epsilonmodalset} (which supersedes Algorithm~\\ref{alg:modalset}) efficiently. Here we assume that for our sample $X_{[n]}$, we have the $k$-nearest neighbors for each sample point. In our implementation, we simply use a kd-tree, although one could replace it with any method that can produce the $k$-nearest neighbors for all the sample points. In particular, one could use approximate $k$-NN methods if scale is an issue. 
\n\nThis section concerns what remains: constructing a data structure that maintains the CCs of the mutual $k$-NN graph as we traverse down the levels. At level $\\lambda$ in Algorithm~\\ref{alg:epsilonmodalset}, we must keep track of the mutual $k$-NN graph for points $x$ such that $f_k(x) \\ge \\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde\\epsilon$. Thus as $\\lambda$ decreases, we add more vertices (and the corresponding edges to the mutual $k$-nearest neighbors). Algorithm~\\ref{alg:epsilonmodalsetinterface} shows what functions this data structure must support: adding nodes and edges, getting the CC of a node, and checking whether a CC intersects with the current estimates of the $\\epsilon_0$-modal sets.\n\nWe implement this data structure as a disjoint-set forest. \nThe CCs are represented as the disjoint sets of the forest. Adding a node corresponds to a make-set operation, while adding an edge corresponds to a union operation. We can identify each CC with the root of its tree, and thus getConnectedComponent and componentSeen can be implemented in a straightforward way.\n\nIn sum, the bulk of the time complexity is in preprocessing the data. This consists of obtaining the initial $k$-NN graph, i.e. distances to nearest neighbors; this one-time operation is of worst-case order $O(n^2)$, similar to usual clustering procedures (e.g. Mean-Shift, K-Means, Spectral Clustering), but average case $O(nk \\log n)$. After this preprocessing step, the estimation procedure itself requires just $O(nk)$ operations, each with amortized cost $O(\\alpha(n))$ where $\\alpha$ is the inverse Ackermann function. Thus, the implementation provided in Algorithm~\\ref{alg:epsilonmodalsetimpl} is near-linear in $k$ and $n$. 
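As an illustration, the disjoint-set forest just described can be sketched as follows in Python. This is a hypothetical minimal implementation (not the released code); the class and method names simply mirror the interface of Algorithm~\ref{alg:epsilonmodalsetinterface}.

```python
# Minimal disjoint-set forest (union-find) maintaining the CCs of the
# mutual k-NN graph: add nodes/edges, retrieve a node's CC, and mark
# CCs as "seen" once they intersect an estimated modal-set.
class DisjointSetGraph:
    def __init__(self):
        self.parent = {}   # node -> parent pointer (roots point to themselves)
        self.members = {}  # root -> list of nodes in its component
        self.seen = set()  # roots of components already claimed by an estimate

    def add_node(self, v):
        # makeSet: a new singleton component.
        if v not in self.parent:
            self.parent[v] = v
            self.members[v] = [v]

    def find(self, v):
        # Root lookup with path halving.
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def add_edge(self, u, v):
        # union by size: merge the two components.
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if len(self.members[ru]) < len(self.members[rv]):
            ru, rv = rv, ru
        self.parent[rv] = ru
        self.members[ru].extend(self.members.pop(rv))
        if rv in self.seen:  # a merged component stays "seen"
            self.seen.discard(rv)
            self.seen.add(ru)

    def get_connected_component(self, v):
        return list(self.members[self.find(v)])

    def component_seen(self, v):
        # True if v's CC was already claimed; otherwise claim it now.
        r = self.find(v)
        if r in self.seen:
            return True
        self.seen.add(r)
        return False
```

With union by size and path compression, each operation runs in amortized near-constant time, matching the $O(\alpha(n))$ cost quoted above.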
\n\n\n\n\n\\begin{algorithm}[tb]\n \\caption{Interface for Mutual $k$-NN graph construction}\n \\label{alg:epsilonmodalsetinterface}\n\\begin{algorithmic}\n\\STATE InitializeGraph() \\hfill \/\/ Creates an empty graph\n\\STATE addNode(G, node) \\hfill \/\/ Adds a node\n\\STATE addEdge(G, node1, node2) \\hfill \/\/ Adds an edge\n\\STATE getConnectedComponent(G, node) \\hfill\/\/ Get the vertices in node's CC\n\\STATE componentSeen(G, node) \\hfill \/\/ checks whether node's CC intersects with the estimates. If not, then marks the component as seen.\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\begin{algorithm}[tb]\n \\caption{Implementation of M-cores (Algorithm~\\ref{alg:epsilonmodalset})}\n \\label{alg:epsilonmodalsetimpl}\n\\begin{algorithmic}\n\n\\STATE Let $\\text{kNNSet}(x)$ be the $k$-nearest neighbors of $x \\in X_{[n]}$. \n\\STATE $\\widehat{\\mathcal{M}} \\leftarrow \\{ \\}$\n\\STATE $G \\leftarrow \\text{InitializeGraph()}$ \n\\STATE Sort points in descending order of $f_k$ values\n\\STATE Let $p \\leftarrow 1$\n\\FOR{$i = 1,...,n$}\n\\STATE $\\lambda \\leftarrow f_k(X_i)$\n\\WHILE {$p < n$ and $f_k(X_p) \\ge \\lambda - 9\\beta_k \\lambda -\\epsilon_0 - \\tilde\\epsilon$}\n\\STATE addNode($G, X_p$)\n\\FOR{$x \\in \\text{kNNSet}(X_p) \\cap G$}\n\\STATE addEdge($G, x, X_p$)\n\\ENDFOR \n\\STATE $p \\leftarrow p + 1$\n\\ENDWHILE\n\\IF{not componentSeen($G, X_i$)}\n\\STATE $\\text{toAdd} \\leftarrow \\text{getConnectedComponent}(G, X_i)$\n\\STATE Delete all $x$ from $\\text{toAdd}$ where $f_k(x) < \\lambda - \\beta_k \\lambda$\n\\STATE $\\widehat{\\mathcal{M}} \\leftarrow \\widehat{\\mathcal{M}} + \\{\\text{toAdd}\\}$\n\\ENDIF\n\\ENDFOR\n\\RETURN{$\\widehat{\\mathcal{M}}$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Practical Setup} \n\nThe analysis prescribes a setting of $\\beta_k = O(1\/\\sqrt{k})$. Throughout the experiments we simply fix $\\beta_k = 2 \/ \\sqrt{k}$, and let our choice of $k$ be the essential parameter. 
\nAs we will see, M-cores yields competitive and stable performance for a wide range of settings of $k$. The implementation can be done efficiently and is described in Appendix~\\ref{appendix:implementation}. \n\n\n\nWe will release an optimized Python\/C++ version of the code at \\cite{url}. \n\n\n\n\\subsection{Qualitative Experiments on General Structures} \n\\begin{figure}[h]\n\\centering \n\\includegraphics[width=0.155\\textwidth]{eye1_original}\n\\includegraphics[width=0.155\\textwidth]{eye1_filtered_new}\n\\includegraphics[width=0.155\\textwidth]{eye1_estimated_new}\n\\includegraphics[width=0.155\\textwidth]{eye2_original}\n\\includegraphics[width=0.155\\textwidth]{eye2_filtered_new}\n\\includegraphics[width=0.155\\textwidth]{eye2_estimated_new}\n\\caption{Diabetic Retinopathy: (Left 3 figures) An unhealthy eye, (Right 3 figures) A healthy eye. \nIn both cases, shown are (1) the original image, (2) a filter applied to the image, (3) modal-sets (structures of capillaries) estimated by M-cores on the corresponding filtered image. The unhealthy eye is characterized by a proliferation of \ndamaged blood capillaries, while a healthy eye has visually fewer capillaries. The analysis task is to automatically discover the higher number of capillary-structures in the unhealthy eye. M-cores discovers $29$ structures for the unhealthy eye vs $6$ for the healthy eye.\n}\n\\label{fig:eye}\n\\end{figure} \n\nWe start with a qualitative experiment highlighting the flexibility of the procedure in fitting \na large variety of high-density structures. For these experiments, we use $k = \\frac{1}{2} \\cdot \\log^2 n$, which is within the\ntheoretical range for admissible values of $k$ (see Theorem \\ref{theo:main} and Remark \\ref{kadmissible}). \n\nWe consider a medical imaging problem. Figure~\\ref{fig:eye} displays the procedure applied to the Diabetic Retinopathy detection problem \\cite{drd}. 
While this is by no means an end-to-end treatment of this detection problem, it gives a sense of M-cores' versatility in fitting \nreal-world patterns. In particular, M-cores automatically estimates a reasonable number of clusters, independent \nof shape, while pruning away (most importantly in the case of the healthy eye) false clusters due to noisy data. As a result, it correctly picks up a much larger number of clusters in the case of the unhealthy eye. \n\n\n\n\\subsection{Clustering applications}\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=1\\textwidth]{real_data_examples_new.png}\n\\caption{Comparison on real datasets (along the rows) across different hyperparameter settings for each algorithm (along the columns). The hyperparameters being tuned are displayed at the bottom of the figure for each clustering algorithm. Scores: the blue line with triangular markers is Adjusted-Mutual-Information, and the dotted red line is Adjusted-Rand-Index. }\n\\label{fig:realworld}\n\\end{figure*} \nWe now evaluate the performance of M-cores on clustering applications, where for\n{\\bf clustering:} we assign every point $x_i\\in X_{[n]}$ to $\\argmin_{\\widehat{M} \\in \\widehat{\\mathcal{M}}} d(x_i, \\widehat M)$, i.e. to the closest estimated modal-set. \n\nWe compare M-cores to two common density-based clustering procedures, DBSCAN and Mean-Shift, as implemented in the \\textit{scikit-learn} package. Mean-Shift clusters data around point-modes, i.e. local-maxima of $f$, and is therefore most similar to M-cores in its objective. \n\n\n\n{\\bf Clustering scores.} We compute two established scores which evaluate a clustering against a labeled ground-truth. The \\emph{rand-index}-score is the $0$-$1$ accuracy in grouping pairs of points (see e.g. 
\\cite{hubert}); the \\emph{mutual information}-score is the (information-theoretic) mutual-information between the distributions induced by the clustering and \nthe ground-truth (each cluster is a mass-point of the distribution, see e.g. \\cite{vinh2010information}). For both scores we report the \\emph{adjusted} version, which adjusts the score so that a random clustering (with the same number of clusters as the ground-truth) scores near $0$ (see e.g. \\cite{hubert}, \\cite{vinh2010information}). \n\n{\\bf Datasets.} Phonemes \\cite{hastie2005elements}, and UCI datasets: Glass, Seeds, Iris, and Wearable Computing. They are described in the table below.\n\\begin{center}\n \\begin{tabular}{ | p{2cm}| p{0.7cm} | p{0.35cm} | p{0.7cm} | p{8cm} |}\n \\hline\n {\\small \\emph{Dataset}} & $n$ & $d$ & {\\small \\emph{Labels}} & {\\small \\emph{Description}} \\\\ \\hline\n {\\small Phonemes} & {\\small 4509} & {\\small 256} & {\\small 5} & {\\small Log-periodograms of spoken phonemes} \\\\ \\hline\n {\\small Glass} & {\\small 214} & {\\small 7} & {\\small 6} & \\small{Properties of different types of glass} \\\\ \\hline\n {\\small Seeds} & {\\small 210} & {\\small 7} & {\\small 3} & \\small{Geometric measurements of wheat kernels} \\\\ \\hline\n {\\small Iris} & {\\small 150} & {\\small 4} & {\\small 3} & \\small{Various measurements over species of flowers} \\\\ \\hline\n {\\small Wearable} & {\\small 10000} & {\\small 12} & {\\small 5} & \n \\small{4 sensors on a human body, recording body posture and activity} \\\\ \\hline\n \\end{tabular}\n\\end{center}\n\n{\\bf Results.} Figure~\\ref{fig:realworld} reports the performance of the procedures for each dataset. 
Rather than reporting the performance of the \nprocedures under \\emph{optimal-tuning}, we report their performance \\emph{over a range} of hyperparameter settings, \nmindful of the fact that optimal tuning is rarely achieved in practice (this is a general problem in clustering given the lack of ground-truth to guide tuning). \n\n\nFor M-cores we vary the parameter $k$. For DBSCAN and Mean-Shift, we vary the main parameters, respectively \\emph{eps} (choice of level-set), and \\emph{bandwidth} (used in density estimation). \nM-cores yields competitive performance across the board, with stable scores over a large range of values of $k$ (relative to sample size). Such stability under large changes in $k$ is quite desirable, considering that proper tuning of hyperparameters remains a largely open problem in clustering. \n\\vspace{0.3cm}\n\n{\\bf Conclusion} \\\\\n\n\\vspace{-0.2cm} \n\nWe presented a theoretically-motivated procedure which can consistently estimate modal-sets, i.e. nontrivial high-density structures in data, under benign distributional conditions. This procedure is easily implemented and yields competitive and stable scores in clustering applications. 
\n\n\n\n\n\n\\subsection*{#1}}{\\qed\\medskip}\n\\theoremstyle{definition}\n\\newtheorem{discussion}{Discussion}\n\n\\newcommand{\\mathcal{G}}{\\mathcal{G}}\n\\renewcommand{\\P}{\\mathbb{P}}\n\\newcommand{\\mathbb{E}}{\\mathbb{E}}\n\n\\newcommand{\\hat{C}}{\\hat{C}}\n\\newcommand{\\check{C}}{\\check{C}}\n\\newcommand{\\mathcal{F}}{\\mathcal{F}}\n\\newcommand{\\mathcal{X}}{\\mathcal{X}}\n\\newcommand{\\mathcal{Y}}{\\mathcal{Y}}\n\\newcommand{X_{[n]}}{X_{[n]}}\n\\newcommand{\\mathbf{Y}}{\\mathbf{Y}}\n\\newcommand{\\mathcal{M}}{\\mathcal{M}}\n\\newcommand{\\Sh}[2]{\\mathcal{S}\\left(#1, #2\\right)}\n\\newcommand{\\mathcal{N}_r}{\\mathcal{N}_r}\n\\newcommand{\\mathcal{P}}{\\mathcal{P}}\n\\newcommand{\\mathcal{H}}{\\mathcal{H}}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathcal{B}}{\\mathcal{B}}\n\\newcommand{\\mathcal{P}}{\\mathcal{P}}\n\\newcommand{\\widetilde{S}}{\\widetilde{S}}\n\\newcommand{\\mathbf{A}}{\\mathbf{A}}\n\\newcommand{\\A^{\\!\\!*}}{\\mathbf{A}^{\\!\\!*}}\n\\newcommand{\\tilde{R}}{\\tilde{R}}\n\\newcommand{\\bar{a}}{\\bar{a}}\n\\newcommand{\\widetilde{f}}{\\widetilde{f}}\n\\newcommand{\\mathcal{S}}{\\mathcal{S}}\n\\newcommand{\\mathcal{L}}{\\mathcal{L}}\n\\newcommand{\\mathcal{O}}{\\mathcal{O}}\n\\newcommand{\\mathbb{R}}{\\mathbb{R}}\n\\newcommand{\\mathbb{N}}{\\mathbb{N}}\n\\newcommand{\\mathbb{E}}{\\mathbb{E}}\n\\newcommand{\\ext}[2]{{\\overline{\\mathbb{E}}_{#1}}\\left[#2\\right]}\n\\newcommand{\\exf}[2]{{\\mathbb{E}_{#1}}\\left[#2\\right]}\n\\newcommand{\\ex}[1]{{\\mathbb{E}}\\left[#1\\right]}\n\\newcommand{{\\expectation} \\,}{{\\mathbb{E}} \\,}\n\\newcommand{\\expecf}[1]{{\\mathbb{E}_{#1}} 
\\,}\n\\newcommand{\\norm}[1]{\\left\\|#1\\right\\|}\n\\newcommand{\\abs}[1]{\\left|#1\\right|}\n\\newcommand{\\paren}[1]{\\left(#1\\right)}\n\\newcommand{\\pr}[1]{\\mathbb{P}\\left(#1\\right)}\n\\newcommand{\\prf}[2]{\\mathbb{P}_{#1}\\left(#2\\right)}\n\\newcommand{\\widetilde{\\sigma}}{\\widetilde{\\sigma}}\n\\newcommand{\\mathbf{\\top}}{\\mathbf{\\top}}\n\\newcommand{\\widetilde{x}}{\\widetilde{x}}\n\\newcommand{\\widetilde{X}}{\\widetilde{X}}\n\\newcommand{\\widetilde{m}}{\\widetilde{m}}\n\\newcommand{\\SigmaT_Z}{\\SigmaT_Z}\n\\newcommand{\\tilde{\\rho}}{\\tilde{\\rho}}\n\\newcommand{\\mathring{\\delta}}{\\mathring{\\delta}}\n\\newcommand{\\text{tr}}{\\text{tr}}\n\\newcommand{\\diameter}[1]{\\Delta_n\\left(#1\\right)}\n\\newcommand{\\diam}[1]{\\Delta_n^2\\left(#1\\right)}\n\\newcommand{\\diamA}[1]{\\Delta_{n,a}^2\\left(#1\\right)}\n\\newcommand{\\Delta^2_{\\X}}{\\Delta^2_{\\mathcal{X}}}\n\\newcommand{\\meanN}[1]{\\bar{#1}_n}\n\\newcommand{\\mean}[1]{\\bar{#1}}\n\\newcommand{\\text{err}}{\\text{err}}\n\\newcommand{\\ind}[1]{\\mathds{1}\\left\\{#1\\right\\}}\n\\newcommand{\\braces}[1]{\\left\\{#1\\right\\}}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\level}{level}\n\\DeclareMathOperator*{\\volume}{vol}\n\\DeclareMathOperator*{\\Expectation}{\\mathbb{E}}\n\\newcommand{\\lev}[1]{\\level\\left(#1\\right)}\n\\newcommand{\\vol}[1]{\\volume\\left(#1\\right)}\n\n\n\\title{Modal-set estimation with an application to clustering}\n\n\n\\author{\n Heinrich Jiang \\thanks{Much of this work was done when this author was at Princeton University Mathematics Department.}\\\\\n \\texttt{heinrich.jiang@gmail.com} \\\\\n \\And\n Samory Kpotufe\\\\\n ORFE, Princeton University\\\\\n \\texttt{samory@princeton.edu}\\\\\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n{\\bf Abstract}. 
We present a first procedure that can estimate -- with statistical consistency guarantees -- any local maxima of a density, under benign distributional conditions. The procedure estimates all such local maxima, or \\emph{modal-sets}, of any bounded shape or dimension, including usual point-modes. In practice, modal-sets can arise as dense low-dimensional structures in noisy data, and more generally serve to better model the rich variety of locally-high-density structures in data. \n\nThe procedure is then shown to be competitive on clustering applications, and moreover is quite stable to a wide range of settings of its tuning parameter. \n\\end{abstract}\n\n\n\n\\section{Introduction}\nMode estimation is a basic problem in data analysis. \n Modes, i.e. points of locally high density, serve as a measure of central tendency and are therefore important in unsupervised problems such as outlier detection, image or audio segmentation, and clustering in particular (as cluster cores). In the present work, we are interested in capturing a wider generality of \\emph{modes}, i.e. general structures (other than single points) of locally high density, that can arise in modern data. \n\nFor example, application data in $\\mathbb{R}^d$ (e.g. speech, vision) are often well modeled as arising from \na lower-dimensional structure $M$ + noise. In other words, such data are densest on $M$, hence \nthe ambient density $f$ is more closely modeled as locally maximal at (or near) $M$, a nontrivial subset of $\\mathbb{R}^d$, \nrather than maximal only at single points in $\\mathbb{R}^d$. Such a situation is illustrated in Figure \\ref{fig:clustercores}. \n\nWe therefore extend the notion of \\emph{mode} to any connected \nsubset of $\\mathbb{R}^d$ where the unknown density $f$ is locally maximal; we refer to these as \\emph{modal-sets} of $f$. 
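For concreteness, a point cloud of the kind shown in the introduction's figure (three low-dimensional rings + ambient noise) can be simulated as follows. This is a minimal illustrative sketch: the helper name, centers, radii, noise scale and sample sizes are our own assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_ring(n, center, radius, noise=0.05):
    """Sample n points near a 1-dimensional ring (circle) embedded in R^3.

    The ring plays the role of the low-dimensional structure M; adding ambient
    Gaussian noise makes the density f locally maximal on (or near) M, so M is
    a modal-set rather than a single point-mode.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    ring = np.stack([center[0] + radius * np.cos(theta),
                     center[1] + radius * np.sin(theta),
                     np.full(n, center[2])], axis=1)
    return ring + noise * rng.normal(size=(n, 3))

# Three 1-dimensional rings + noise; each ring is a connected set on which
# the sampling density is (approximately) locally maximal.
X = np.vstack([noisy_ring(700, (0.0, 0.0, 0.0), 1.0),
               noisy_ring(700, (2.5, 0.0, 0.5), 1.0),
               noisy_ring(600, (1.2, 2.0, 1.0), 0.8)])
```

Each ring is thus a modal-set in the sense just described, rather than a point-mode.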
Modal-sets can be of any bounded shape and dimension, from $0$-dimensional (point modes) to full-dimensional surfaces, and aim to capture the possibly rich variety of dense structures in data. \n\nOur main contribution is a procedure, M(odal)-cores, that consistently estimates all such modal-sets from data, of general shape and dimension, with minimal assumptions on the unknown $f$. The procedure builds on recent developments in topological data analysis \n\\cite{SN10, CD10, KV11, RSNW12, balakrishnan2013cluster, CDKvL14, eldridge2015beyond}, and works by traversing certain $k$-NN graphs which encode level sets of a $k$-NN density estimate. We show that, if $f$ is continuous on compact support, the Hausdorff distance between any modal-set and its estimate vanishes as $n\\to \\infty$ (Theorem \\ref{theo:main}); the estimation rate for point-modes matches (up to $\\log n$) the known minimax rates. Furthermore, under a mild additional smoothness condition on $f$ (H\\\"older continuity), \\emph{false} structures (due to empirical variability) are correctly identified and pruned. We know of no such general statistical guarantees in mode estimation. \n\nWhile there is often a gap between theoretical procedures and practical ones, the present procedure is easy to implement and yields competitive scores on clustering applications; here, as in \\emph{mode-based clustering}, clusters are simply defined as high-density regions of the data, and the estimated modal-sets serve as the centers of these regions, i.e. as \\emph{cluster-cores}. A welcome aspect of the resulting clustering procedure is its stability to tuning \nsettings of the parameter $k$ (from $k$-NN): it maintains high clustering scores (computed with knowledge of the ground-truth) over a wide range of settings of $k$, for various datasets. 
\n Such stability to tuning is of practical importance, since typically the ground-truth is unknown, so clustering procedures come with tuning parameters that are hard to set in practice. Practitioners therefore use various rules of thumb and can thus benefit from procedures that are less sensitive to their hyperparameters. \n \n\n\nIn the next section we put our result in context with respect to previous work on mode estimation and density-based clustering in general. \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure} \n\\centering \n\\includegraphics[width=0.25\\textwidth]{3d_2000_samples.jpg}\n\\includegraphics[width=0.25\\textwidth]{3d_density.jpg}\n\\includegraphics[width=0.25\\textwidth]{3d_estimated.jpg}\n\n\\caption{Main phase of M-cores. (Left) Point cloud generated as three 1-dimensional rings + noise. (Middle) The 3 rings, and (Right) their estimate (as modal-sets) by M-cores.}\n\\label{fig:clustercores}\n\\end{figure}\n\n\\subsection*{Related Work} \n$\\bullet$ Much theoretical work on mode-estimation is concerned with understanding the statistical difficulty of the problem, and as such, often only considers the case of densities with single point-modes \\cite{parzen1962estimation, chernoff1964estimation, eddy1980optimum, devroye1979recursive, tsybakov1990recursive, abraham2004asymptotic}. \nThe more practical case of densities with multiple point-modes has received less attention in the theoretical literature. There exist, however, practical estimators, e.g. the popular \\emph{Mean-Shift} procedure (which doubles as a clustering procedure), but these are harder to analyze. Recently, \\cite{arias2013estimation} shows the consistency of a variant of Mean-Shift. Other recent work of \\cite{genovese2013nonparametric} derives a method for pruning false-modes obtained by mode-seeking procedures. 
The recent work of \\cite{dasgupta2014optimal} shows that point-modes of a $k$-NN density estimate $f_k$ approximate the true modes of the unknown density $f$, assuming $f$ only has point-modes and a bounded Hessian at the modes; their procedure therefore operates on level-sets of $f_k$ (similar to ours), but fails in the presence of more general high-density structures such as modal-sets. To handle such general structures, we have to identify more appropriate level-sets to operate on, the main technical difficulty being that local maxima of $f_k$ can be relatively far (in Hausdorff distance) from those of $f$, for instance single-point modes rather than more general modal-sets, due to data-variability. \nThe present procedure handles general structures, and is consistent under the much weaker condition of continuity (of $f$) on a compact domain.\n\nA related line of work, which seeks more general structures than point-modes, is that of \\emph{ridge} estimation (see e.g. \\cite{ozertem2011locally, genovese2014nonparametric}). A ridge is typically defined as a lower-dimensional structure away from which the density curves (in some but not all directions), and can serve to capture various lower-dimensional patterns apparent in point clouds. In contrast, the modal-sets defined here can be full-dimensional and are always local maxima of the density. Also, unlike in ridge estimation, we do not require local differentiability of the unknown $f$, nor knowledge of the dimension of the structure, thus allowing a different but rich set of practical structures. \n\n$\\bullet$ A main application of the present work, and of mode-estimation in general, is \\emph{density-based clustering}. Such clustering was formalized in early work of \\cite{carmichael1968finding, hartigan1975clustering, H81}, and can take various forms, each with its advantages. 
\n\nIn its hierarchical version, one is interested in estimating the connected components (CCs) of \\emph{all} level sets \n$\\braces{f \\ge \\lambda}_{\\lambda>0}$ of the unknown density $f$. Many recent works analyze approaches that consistently estimate such a hierarchy under quite general conditions, e.g. \\cite{SN10, CD10, KV11, RSNW12, balakrishnan2013cluster, CDKvL14, eldridge2015beyond}. \n\nIn the \\emph{flat} clustering version, one is interested in estimating the CCs of $\\braces{f \\ge \\lambda}$ for a single, appropriately chosen $\\lambda$ \n\\citep{RV09, SSN09, MHL09, RW10, S11, sriperumbudur2012consistency}. The popular DBSCAN procedure \\citep{ester1996density} can be viewed as estimating such a single level set. The main disadvantage here is the ambiguity in the choice of $\\lambda$, especially when the levels $\\lambda$ of $f$ have different numbers of clusters (CCs). \n\n\nAnother common flat clustering approach, most related to the present work, is \\emph{mode-based} clustering. The approach clusters points to estimated modes of $f$, a fixed target, and therefore does away with the ambiguity in choosing an appropriate level $\\lambda$ of $f$ \\citep{fukunaga1975estimation, cheng1995mean, comaniciu2002mean, li2007nonparametric, chazal2013persistence}. As previously discussed, \nthese approaches are, however, hard to analyze, since mode-estimation is itself not an easy problem. Popular examples are extensions of $k$-Means to categorical data \\cite{chaturvedi2001k}, and the many variants of Mean-Shift, which cluster points by gradient ascent to the closest mode. \nNotably, the recent work \\cite{wasserman2014feature} analyzes the clustering error of Mean-Shift in a general high-dimensional setting with potentially irrelevant features. The main assumption is that $f$ only has point-modes. 
\n\n\n\n\n\n\\section{Overview of Results}\n\\label{sec:Overview}\n\\input{Overview.tex}\n\n\n\\section{Analysis Overview}\n\\label{sec:analysis}\n\\input{Analysis.tex}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\input{Experiments.tex}\n\n\n{\n\n\\subsection{Basic Setup and Definitions}\nWe have samples $X_{[n]} = \\{X_1,...,X_n\\}$ drawn i.i.d. from \na distribution $\\mathcal{F}$ over $\\mathbb{R}^d$ with density $f$. We let $\\mathcal{X}$ denote the support of $f$. \nOur main aim is to estimate all local maxima of $f$, or \\emph{modal-sets} of $f$, as we will soon define. \n\nWe first require the following notions of distance between sets. \n\\begin{definition}\\label{ball} \nFor $M\\subset \\mathcal{X}$, $x\\in \\mathcal{X}$, let $d(x, M) := \\inf_{x'\\in M} \\norm{x - x'}$. \nThe {\\bf Hausdorff} distance between $A, B \\subset \\mathcal{X}$ is defined as \n$d(A,B) := \\max \\{ \\sup_{x \\in A} d(x, B), \\sup_{y \\in B} d(y, A) \\}.$\n\\end{definition}\n\nA modal set, defined below, extends the notion of a point-mode to general subsets of $\\mathcal{X}$ \nwhere $f$ is locally maximal. These can arise, for instance, as discussed earlier, in applications where high-dimensional \ndata might be modeled as a (disconnected) manifold $\\mathcal{M}$ + ambient noise, each connected component of which \ninduces a modal set of $f$ in ambient space $\\mathbb{R}^d$ (see e.g. Figure \\ref{fig:clustercores}). \n\n\\begin{definition}\\label{modal-set} For any $M \\subset \\mathcal{X}$ and $r>0$, define the \\emph{envelope} $B(M, r) := \\{x : d(x, M) \\leq r\\}$. A connected set $M$ is a {\\bf modal-set} of $f$ if $\\forall x \\in M$, $f(x) = f_M$ for some fixed $f_M$, and there exists $r>0$ such that $f(x) < f_M$ for all $x \\in B(M, r) \\backslash M$. \n\\end{definition}\n\n\\begin{remark} \nThe above definition can be relaxed to \\emph{$\\epsilon_0$-modal sets}, i.e., to allow $f$ to vary by a small $\\epsilon_0$ on $M$. 
Our results extend easily to this more relaxed definition, with minimal changes to some constants. This is because the procedure operates on $f_k$, and therefore already needs to account for variations in $f_k$ \non $M$. This is described in Appendix~\\ref{appendix:eps-modal-set}. \n\\end{remark}\n\n\n\n\n\n\\subsection{Estimating Modal-sets}\nThe algorithm relies on the $k$-nearest-neighbor density estimate $f_k$, defined as follows. \n\n\\begin{definition} \\label{kNNdensity} Let $r_k(x) := \\min \\{ r : |B(x, r) \\cap X_{[n]} | \\ge k \\}$. \nDefine the {\\bf $k$-NN density estimate} as\n$$f_k(x) := \\frac{k}{n\\cdot v_d\\cdot r_k(x)^d},\n\\text{where } v_d \\text{ is the volume of the unit ball in } \\mathbb{R}^d.$$\n\\end{definition}\n\nFurthermore, we need an estimate of the level-sets of $f$; various recent works on cluster-tree estimation (see e.g. \\cite{CDKvL14}) have shown that such level sets are encoded by subgraphs of certain \\emph{modified} $k$-NN graphs. \nHere, however, we directly use $k$-NN graphs, simplifying implementation details, but requiring a bit of side analysis.\n\n\\begin{definition}\n Let $G(\\lambda)$ denote the (mutual) {\\bf $k$-NN graph} with vertices $\\{ x \\in X_{[n]} : f_k(x) \\ge \\lambda\\}$ and an edge between $x$ and $x'$ iff $||x - x'|| \\le \\min \\{r_k(x), r_k(x') \\}$. \n \\end{definition}\n $G(\\lambda)$ can be viewed as approximating the $\\lambda$-level set of $f_k$, hence approximates the $\\lambda$-level set of $f$ (implicit in the connectedness result in Appendix~\\ref{appendix:integrality}). \n \nAlgorithm \\ref{alg:modalset} (M-cores) estimates the modal-sets of the unknown $f$.\nIt is based on various insights described below. \nA basic idea, used for instance in point-mode estimation \\cite{dasgupta2014optimal}, is to proceed top-down on the level sets of $f_k$ (i.e. on $G(\\lambda), \\, \\lambda \\to 0$), and identify new modal-sets as they appear in separate CCs at a level $\\lambda$. 
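To make these two definitions concrete, here is a small brute-force sketch of the $k$-NN density estimate and of the mutual $k$-NN graph $G(\lambda)$. The helper names are our own, and the $O(n^2)$ pairwise-distance computation is for illustration only, not an efficient implementation.

```python
import numpy as np
from math import gamma, pi

def knn_density(X, k):
    """f_k(x) = k / (n * v_d * r_k(x)^d), where r_k(x) is the distance from x
    to its k-th nearest sample point (a sample point is its own 0th neighbor)."""
    n, d = X.shape
    v_d = pi ** (d / 2.0) / gamma(d / 2.0 + 1.0)  # volume of the unit ball in R^d
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    r_k = np.sort(D, axis=1)[:, k - 1]            # k-th smallest distance, incl. self
    return k / (n * v_d * r_k ** d), r_k

def knn_graph_at_level(X, f_k, r_k, lam):
    """Adjacency matrix of G(lam): vertices {x : f_k(x) >= lam}, with an edge
    between x and x' iff ||x - x'|| <= min(r_k(x), r_k(x'))."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = D <= np.minimum(r_k[:, None], r_k[None, :])
    keep = f_k >= lam
    A &= keep[:, None] & keep[None, :]
    np.fill_diagonal(A, False)
    return A
```

Connected components of $G(\lambda)$ can then be extracted with a breadth-first search or union-find; the M-cores procedure scans the levels $\lambda = f_k(X_i)$ in decreasing order of $f_k$.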
\n\nHere, however, we have to be careful: the CCs of $G(\\lambda)$ (essentially modes of $f_k$) might be \nsingleton points (since $f_k$ might take unique values over samples $x\\in X_{[n]}$) while the modal-sets to be estimated might be of any dimension and shape. Fortunately, if a datapoint $x$ locally maximizes $f_k$ and belongs to some modal-set $M$ of $f$, then the rest of $M\\cap X_{[n]}$ must be at a nearby level; Algorithm \\ref{alg:modalset} therefore proceeds by checking a nearby level ($\\lambda - 9\\beta_k \\lambda$) from which it picks a specific set of points as an estimate of $M$. The main parameter here is $\\beta_k$, which is worked out explicitly in terms of $k$ and requires no a priori knowledge of distributional parameters. The confidence level $\\delta$ can be viewed in practice as fixed (e.g. $\\delta = 0.05$). The essential algorithmic parameter is therefore just $k$, which, as we will show, can be chosen over a wide range (w.r.t. $n$) while ensuring statistical consistency. \n\n\\begin{definition}Let $0< \\delta < 1$. Define $C_{\\delta, n} := 16\\log(2\/\\delta)\\sqrt{d \\log n}$, and define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \\end{definition}\n\nWe note that the above definition of $\\beta_k$ is somewhat conservative (needed towards theoretical guarantees), since the exact constants $C_{\\delta, n}$ turn out to have little effect in implementation. \n\nA further algorithmic difficulty is that a level $G(\\lambda)$ might have too many CCs w.r.t. the ground truth. For example, due to variability in the data, $f_k$ might have more modal-sets than $f$, inducing too many CCs at some level $G(\\lambda)$. Fortunately, it can be shown that the nearby level $\\lambda - 9\\beta_k \\lambda$ will likely have the right number of CCs. Such lookups down to a lower level act as a way of \\emph{pruning false modal-sets}, and trace back to earlier work \\citep{KV11} on pruning cluster-trees. 
Here, we need further care: \nwe run the risk of over-estimating a given $M$ if we look too far down (aggressive pruning), since \na CC at lower level might contain points \\emph{far outside} of a modal-set $M$. \nTherefore, the main difficulty here is in figuring out how far down to look and yet not over-estimate \\emph{any} $M$ (to ensure consistency). In particular our lookup \\emph{distance} of $9\\beta_k \\lambda$ is adapted to the level $\\lambda$ unlike in aggressive pruning. \n\nFinally, for clustering with M-cores, we can simply assign every data-point to the closest estimated modal-set (acting as cluster-cores).\n\n\\begin{algorithm}[tb]\n \\caption{M-cores (estimating modal-sets).}\n \\label{alg:modalset}\n\\begin{algorithmic}\n \\STATE Initialize $\\widehat{\\mathcal{M}}:= \\emptyset$. Define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \n \\STATE Sort the $X_i$'s in decreasing order of $f_k$ values (i.e. $f_k(X_i) \\geq f_k(X_{i+1})$). \n \\FOR{$i=1$ {\\bfseries to} $n$}\n \\STATE Define $\\lambda := f_k(X_i)$.\n \\STATE Let $A$ be the CC of $G(\\lambda - 9\\beta_k \\lambda)$ that contains $X_i$. \\hfill (\\rm{i})\n \\IF{$A$ is disjoint from all cluster-cores in $\\widehat{\\mathcal{M}}$}\n \\STATE Add $\\widehat{M} := \\{ x \\in A : f_k(x) > \\lambda - \\beta_k \\lambda \\}$ to\n $\\widehat{\\mathcal{M}}$. \n \\ENDIF\n \\ENDFOR\n \\STATE \\textbf{return} $\\widehat{\\mathcal{M}}$. \\textit{ \/\/ Each $\\widehat M\\in \\widehat{\\mathcal{M}}$ is a cluster-core estimating \n a modal-set of the unknown $f$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Consistency Results}\nOur consistency results rely on the following mild assumptions. \n\\begin{assumption} $f$ is continuous with compact support $\\mathcal{X}$. Furthermore $f$ has a finite number of modal-sets \nall in the interior of its support $\\mathcal{X}$. 
\n\\label{assumption-main}\n\\end{assumption}\n\n\nWe will express the convergence of the procedure explicitly in terms of quantities that characterize the behavior of $f$ at the boundary of every modal set. The first quantity has to do with how \\emph{salient} a modal-set is, i.e. whether it is sufficiently \\emph{separated} from other modal sets. We start with the following definition of \\emph{separation}. \n\n\\begin{definition}\\label{rsalient} \nTwo sets $A, A' \\subset \\mathcal{X}$ are {\\bf $r$-separated} if there exists a set $S$ such that every path from $A$\n to $A'$ crosses $S$ and $\\sup_{x \\in B(S, r)} f(x) < \\inf_{x \\in A\\cup A'} f(x)$.\n\\end{definition}\nThe next quantities characterize the \\emph{change} in $f$ in a neighborhood of a modal set $M$. \nThe existence of a proper such neighborhood $A_M$, and appropriate functions $u_M$ and $l_M$ capturing smoothness and curvature, follow from the above assumptions on $f$. This is captured in the proposition below. \n\n\\begin{proposition}\n\\label{prop:main-assumptions}\nLet $M$ be a modal-set of $f$. Then there exists a CC $A_M$ of some level-set $\\mathcal{X}^{\\lambda_M} := \\{x : f(x) \\ge \\lambda_M\\}$, containing $M$, such that the following holds. \n\\begin{itemize} \n\\item \\emph{$A_M$ isolates $M$ by a valley}: $A_M$ does not intersect any other modal-set; and $A_M$ and $\\mathcal{X}^{\\lambda_M} \\backslash A_M$ are $r_s$-separated (by some $S_M$) for some $r_s>0$ independent of $M$. \n\\item \\emph{$A_M$ is full-dimensional}: $A_M$ contains an envelope $B(M, r_M)$ of $M$, for some $r_M>0$. \n\\item \\emph{$f$ is both \\emph{smooth} and has \\emph{curvature} around $M$}: there exist functions $u_M$ and $l_M$, increasing and continuous on $[0, r_M]$, $u_M(0) = l_M(0) = 0$, such that $\\forall x \\in B(M, r_M)$, \n\\begin{align*}\nl_M(d(x, M)) \\le f_M - f(x) \\le u_M(d(x, M)). 
\n\\end{align*}\n\n\\end{itemize}\n\\end{proposition}\n\n\n\nFinally, our consistency guarantees require the following admissibility condition on $k = k(n)$. \nThis condition results, roughly, from needing the density estimate $f_k$ to properly approximate the behavior \nof $f$ in the neighborhood of a modal-set $M$. In particular, we intuitively need $f_k$ values to be smaller for \npoints far from $M$ than for points close to $M$, and this should depend on the smoothness and curvature of $f$ \naround $M$ (as captured by $u_M$ and $l_M$). \n\n\\begin{definition} $k$ is {\\bf admissible} for a modal-set $M$ if (we let $u_M^{-1}$ denote the inverse of $u_M$):\n$$\\max \\left\\{ \\left(\\frac{24 \\sup_{x \\in \\mathcal{X}} f(x)}{l_M(\\min\\{r_M, r_s\\}\/2)} \\right)^2, 2^{7 + d} \\right\\}\\cdot C_{\\delta, n}^2 \\le k \\le \\frac{v_d \\cdot f_M}{2^{2+2d}} \\left(u_M^{-1} \\left ( \nf_M\\frac{C_{\\delta, n}}{2\\sqrt{k}}\\right) \\right)^d \\cdot n.$$\n\\end{definition}\n\n\\begin{remark}\\label{kadmissible} The admissibility condition on $k$, although seemingly opaque, allows for a wide range of settings of $k$. For example, suppose $u_M(t) = c t^{\\alpha}$ for some $c, \\alpha > 0$. These are polynomial tail conditions common in mode estimation, following e.g. from H\\\"older assumptions on $f$. \nAdmissibility then (ignoring $\\log(1\/\\delta)$) is immediately seen to correspond to the wide range\n$$C_1\\cdot \\log n \\leq k \\leq C_2\\cdot n^{2\\alpha\/(2\\alpha + d)},$$ where $C_1, C_2$ are constants depending on $M$, but independent of $k$ and $n$. It is then clear that even the simple choice $k = \\Theta(\\log^2 n)$ is always admissible \\emph{for any} $M$ for $n$ sufficiently large.\n\\end{remark}\n\n\n{\\bf Main theorems.} We then have the following two main consistency results for Algorithm \\ref{alg:modalset}. 
Theorem \\ref{theo:main} states a rate (in terms of $l_M$ and $u_M$) at which \nany modal-set $M$ is approximated by some estimate in $\\widehat{\\mathcal{M}}$; Theorem \\ref{pruning} establishes \\emph{pruning} guarantees. \n\\begin{theorem} \n\\label{theo:main}\nLet $0< \\delta < 1$. The following holds with probability at least $1- 6\\delta$, simultaneously for all modal-sets $M$ of $f$. Suppose $k$ is admissible for $M$. Then there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that the following holds. Let $l_M^{-1}$ denote the inverse of $l_M$. \n \\begin{align*}\n d(M, \\widehat{M}) \\le l_M^{-1}\\left(\\frac{8C_{\\delta,n}}{\\sqrt{k}}f_M\\right), \n \\text{ which goes to } 0 \\text{ as } C_{\\delta, n}\/\\sqrt{k} \\rightarrow 0.\n \\end{align*}\n\\end{theorem}\nIf $k$ is admissible for all modal-sets $M$ of $f$, then $\\widehat{\\mathcal{M}}$ estimates all modal-sets of $f$ \nat the above rates. These rates can be instantiated under the settings in Remark~\\ref{kadmissible}: \nsuppose $l_M(t) = c_1 t^{\\alpha_1}$, $u_M(t) = c t^\\alpha$, $\\alpha_1 \\geq \\alpha$; then \nthe above bound becomes $d(M, \\widehat{M}) \\lesssim k^{-1\/(2\\alpha_1)}$ for admissible $k$. As in the remark, $k = \\Theta (\\log^2 n)$ is admissible, simultaneously for all $M$ (for $n$ sufficiently large), and therefore all modal-sets of $f$ are recovered at the above rate. \nIn particular, taking large $k = O(n^{2\\alpha\/(2\\alpha + d)})$ optimizes the rate to $O(n^{-\\alpha\/(2\\alpha_1\\alpha + \\alpha_1 d)})$. Note that for $\\alpha_1 = \\alpha = 2$, the resulting rate ($n^{-1\/(4+d)}$) is tight (see e.g. \\cite{tsybakov1990recursive} for matching lower-bounds in the case of point-modes $M = \\braces{x}$). \n\nFinally, Theorem~\\ref{pruning} (pruning guarantees) states that any estimated modal-set in $\\widehat{\\mathcal{M}}$, at a sufficiently high level (w.r.t. $k$), corresponds to a \\emph{true} modal-set of $f$ at a similar level. 
Its proof consists of showing that if two sets of points are wrongly disconnected at level $\\lambda$, they remain connected at the nearby level $\\lambda - 9\\beta_k \\lambda$ (so are reconnected by the procedure). The main technicality is the dependence of the nearby level on the empirical $\\lambda$; the proof is less involved and given in Appendix~\\ref{appendix:pruning}. \n\n\\begin{theorem} \\label{pruning}\nLet $0< \\delta < 1$. There exists $\\lambda_0 = \\lambda_0(n, k)$ such that the following holds with probability at least $1 - \\delta$. All modal-set estimates in $\\widehat{\\mathcal{M}}$ chosen at level $\\lambda \\ge \\lambda_0$ can be injectively mapped to modal-sets $\\braces{M: \\lambda_M \\geq \\min_{\\{x\\in X_{[n]} : f_k(x) \\ge \\lambda - \\beta_k \\lambda \\}} f(x)}$, provided $k$ is admissible for all such $M$. \n\nIn particular, if $f$ is H\\\"older-continuous (i.e. $|f(x) - f(x')| \\le c||x - x'||^\\alpha $ for some $0 < \\alpha\\le 1$, $c > 0$),\nthen $\\lambda_0 \\xrightarrow{n \\to \\infty} 0$, provided \n$C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$, for some $C_1, C_2$ independent of $n$. \n\n\n\\end{theorem}\n\\begin{remark}\nThus with little additional smoothness ($\\alpha \\approx 0$) over uniform continuity of $f$, any estimate above level $\\lambda_0 \\to 0$ corresponds to a true modal-set of $f$. We note that these pruning guarantees can be strengthened as needed by implementing more aggressive pruning: simply replace $G(\\lambda - 9\\beta_k \\lambda)$ in the procedure (on line (\\rm{i})) with $G(\\lambda - 9\\beta_k \\lambda - \\tilde\\epsilon)$ using a \\emph{pruning parameter} $\\tilde \\epsilon \\ge 0$. This allows $\\lambda_0 \\rightarrow 0$ faster. However, the rates of Theorem \\ref{theo:main} (while maintained) then require a larger initial sample size $n$. \nThis is discussed in Appendix~\\ref{appendix:pruning}. 
\n\\end{remark}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nIn the framework of research and development of novel acceleration schemes and technology, the upgrade of the SPARC\\_LAB test facility \\cite{ferrario2013sparc_lab} at INFN-LNF is foreseen, based on a high gradient linac. High brightness electron bunches are fundamental for the successful development of plasma-based accelerators, for instance when external injection schemes are considered, i.e. particle beam driven and laser driven plasma wakefield accelerators (PWFA and LWFA, respectively). Indeed, the ultimate beam brightness and its stability and reproducibility are strongly influenced by the RF-generated electron beam.\n\nIn this scenario the SPARC\\_LAB upgrade, named EuPRAXIA@SPARC\\_LAB \\cite{ferrario_eaac2017}, might be one of the possible candidates to host EuPRAXIA (European Plasma Research Accelerator with eXcellence In Applications) \\cite{Walker:IPAC2017-TUOBB3}. EuPRAXIA is a design study in the framework of Horizon 2020 (INFRADEV-1-2014), funded to bring together for the first time novel acceleration schemes, based for instance on plasmas, modern lasers, the latest correction\/feedback technologies and large-scale user areas. Such a research infrastructure would achieve the required quantum leap in accelerator technology towards more compact and more cost-effective accelerators, opening new horizons for applications and research.\n\nThe preliminary EuPRAXIA@SPARC\\_LAB linac layout is based on an S-band Gun, three S-band TW structures and an X-band booster with a bunch compressor \\cite{ferrario_eaac2017}. The booster design has been driven by the need for a high accelerating gradient required to achieve high facility compactness, which is one of the main goals of the EuPRAXIA project. The baseline technology chosen for the EuPRAXIA@SPARC\\_LAB booster is X-band. 
The total space allocated for the linac accelerating sections is $\\approx$25 m, corresponding to an active length of $\\approx$16 m, taking into account the space required to accommodate beam diagnostics, magnetic elements, vacuum equipment and flanges. Two average accelerating gradient options are foreseen for the X-band linac: high gradient (HG) of 57 MV\/m and very high gradient (VHG) of 80 MV\/m, corresponding to double the power of the HG case. The RF linac layout is based on klystrons with SLEDs \\cite{farkas} that feed several TW accelerating structures. The operating mode is the $2\\pi\/3$ mode at 11.9942 GHz. The preliminary RF system parameters are summarized in Table \\ref{tab:RF_parameters} \\cite{cpi_website}.\n\n\\begin{table}[hbt]\n\\center{\n\\caption{RF system parameters.}\n \\label{tab:RF_parameters}\n\\begin{tabular}{lc}\n\\toprule\nFrequency & 11.9942 GHz\\\\\nPeak RF power & \\SI{50}{\\mega\\watt}\\\\\nRF pulse length $t_k$ & \\SI{1.5}{\\micro\\second}\\\\\nUnloaded Q-factor $Q_0$ of SLED & 180000\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\nIn this paper we illustrate the preliminary RF design of the X-band booster. The single cell parameters have been calculated by electromagnetic (e.m.) simulations. On the basis of these results, the accelerating structure length and geometry have been optimized by numerical studies. Finally, the basic RF power distribution layout has been designed. \n\n\\section{Single cell study}\\label{sec:single_cell}\n\nThe main single cell parameters (shunt impedance per unit length $R$, normalized group velocity $v_g\/c$, Q-factor $Q$, peak value of the modified Poynting vector \\cite{poynting,poynting4} normalized to the average accelerating field $S_{c\\;max}\/E_{acc}^2$) as a function of the iris radius $a$ have been calculated with ANSYS Electronics Desktop \\cite{hfss_website}. The results are reported in Fig. 
\\ref{fig:cell_parameters}.\n\n\n\nAccording to beam dynamics calculations and single-bunch beam break-up limits, an average iris radius $\\langle a \\rangle$=3.2 mm has been taken into account (the corresponding parameters are given in Tab. \\ref{tab:a_3.2}) \\cite{vaccarezza_eaac2017}.\n\n\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{cell_parameters.pdf}\n\\caption{Single cell parameters as a function of the iris radius.}\n\\label{fig:cell_parameters}\n\\end{figure}\n\n\\begin{table}[hbt]\n\\center{\n\\caption{Single cell parameters for an iris radius of 3.2 mm.}\n \\label{tab:a_3.2}\n\\begin{tabular}{lc}\n\\toprule\niris radius $a$ [mm] & 3.2\\\\\niris thickness $t$ [mm] & 2.5\\\\\ncell radius $b$ [mm] & 10.139\\\\\ncell length $d$ [mm] & 8.332\\\\\n$R$ [M$\\Omega$\/m] & 93\\\\\n$v_g\/c$ [$\\%$] & 1.382\\\\\n$Q$ & 6396\\\\\n$S_{c\\;max}\/E_{acc}^2$ [A\/V] & $3.9 \\cdot 10^{-4}$\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\section{Analytical optimization of structure effective shunt impedance}\n\nThe accelerating gradient distribution along the structure after one filling time $t_f$ is given by the formula \\cite{anal_grudiev}:\n\\begin{linenomath*}\n\\begin{equation}\\label{eq:G}\nG(z,t_f) = G_0[t_f-\\tau(z)] g(z),\n\\end{equation}\n\\end{linenomath*}\nwhere $z$ is the longitudinal position, $\\tau(z) = \\int_{0}^{z} \\frac{dz^\\prime}{v_g(z^\\prime)}$ is the signal time delay and $g(z)$ is defined as:\n\\begin{linenomath*}\n\\begin{equation}\ng(z)=\\sqrt{\\frac{v_g(0)}{v_g(z)}} \\sqrt{\\frac{R(z)Q(0)}{R(0)Q(z)}} e^{-\\frac{1}{2}\\int_{0}^{z} \\frac{\\omega}{v_g(z^\\prime)Q(z^\\prime)} dz^\\prime}.\n\\end{equation}\n\\end{linenomath*}\n$G_0$ is the gradient at the beginning of the structure, given by:\n\\begin{linenomath*}\n\\begin{equation}\nG_0(t)=\\sqrt{\\frac{\\omega R(0) P_0(t)}{v_g(0) Q(0)}}.\n\\end{equation}\n\\end{linenomath*}\n$P_0(t)$ is the input RF power and, due to the SLED, is given 
by:\n\\begin{linenomath*}\n\\begin{equation}\nP_0(t)=P_k \\cdot k_{SLED}^2(t),\n\\end{equation}\n\\end{linenomath*}\nwhere $P_k$ is the power from the klystron and $k_{SLED}(t)$ is the SLED electric field gain factor \\cite{farkas}.\n\nIntegrating Eq. \\eqref{eq:G} along the structure length $L_s$, we obtain the accelerating voltage $V_a$.\n\nThe efficiency of the structure is given by the effective shunt impedance per unit length $R_s$ defined as \\cite{neal}:\n\\begin{linenomath*}\n\\begin{equation}\\label{eq:R_s}\nR_s = \\frac{V_a^2}{P_k L_s} = \\frac{V_a \\langle G \\rangle}{P_k} = \\frac{\\langle G \\rangle^2 L_s}{P_k},\n\\end{equation}\n\\end{linenomath*}\n where $\\langle G \\rangle = V_a\/L_s$ is the average accelerating gradient. \n\n\n\nFor a constant impedance (CI) structure $R_s$, as a function of the section attenuation $\\tau_s$ ($= \\frac{\\omega}{2 v_g Q}L_s$), is given by \\cite{leduff}:\n\\begin{linenomath*}\n\\begin{align}\\label{eq:Rs_CI}\nR_s =\\;&2 \\tau_s R \\left\\{ \\frac{1 - \\frac{2 Q_l}{Q_e}}{\\tau_s} \\left( 1 - e^{-\\tau_s} \\right) + \\right. \\nonumber \\\\ &\\left. + \\frac{\\frac{2 Q_l}{Q_e} \\left[ 2 - e^{- \\left( \\frac{\\omega t_k}{2 Q_l} - \\tau_s \\frac{Q}{Q_l} \\right)} \\right]}{\\tau_s \\left( 1 - \\frac{Q_l}{Q_e} \\right)} \\left( e^{-\\tau_s \\frac{Q_l}{Q_e}} - e^{-\\tau_s} \\right) \\right\\}^2,\n\\end{align}\n\\end{linenomath*}\nwhere $Q_l = \\frac{Q_0 Q_e}{Q_0 + Q_e}$ is the loaded Q-factor of SLED (being $Q_e$ the external quality factor).\nFor a constant gradient (CG) structure $R_s$ is given by \\cite{farkas}:\n\\begin{linenomath*}\n\\begin{align}\\label{eq:Rs_CG}\nR_s =\\;&R \\frac{2 \\tau_s}{1+\\tau_s} \\left\\{ 1 - \\frac{2 Q_l}{Q_e} + \\frac{2 Q_l}{Q_e} \\left[ 2 - e^{-\\frac{\\omega t_k}{2 Q_l}} \\left( \\frac{1 + \\tau_s}{1 - \\tau_s} \\right)^{\\frac{Q}{2 Q_l}} \\right] \\cdot \\right. \\nonumber \\\\ &\\left. 
\\cdot \\frac{1 - \\tau_s}{2 \\tau_s} \\frac{1}{1-Q\/2Q_l} \\left[ \\left( \\frac{1 + \\tau_s}{1 - \\tau_s} \\right)^{1 - \\frac{Q}{2 Q_l}} - 1 \\right] \\right\\}^2.\n\\end{align}\n\\end{linenomath*}\nFigure \\ref{fig:Rs_CI_CG} shows $R_s$ as a function of $\\tau_s$ for both structures, with the parameters of Tabs. \\ref{tab:RF_parameters} and \\ref{tab:a_3.2}. In both cases the value of the external Q-factor $Q_e$ of the SLED has been chosen in order to maximize $R_s$.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{Rs_CI_CG.pdf}\n\\caption{Effective shunt impedance per unit length for the CI and CG structure.}\n\\label{fig:Rs_CI_CG}\n\\end{figure}\n\nThe accelerating gradient for a CI structure is given by \\cite{leduff}:\n\\begin{linenomath*}\n\\begin{align}\nG(z,t_f) = &\\sqrt{\\frac{\\omega}{v_g} \\frac{R}{Q} P_k}\\;e^{-\\tau_s\\frac{z}{L_s}} \\cdot \\nonumber \\\\ &\\cdot \\left\\{ 1 + \\frac{2 Q_l}{Q_e} \\left[ e^{- \\frac{\\omega (L_s - z)}{2 v_g Q_l}} \\left( 2 - e^{- \\frac{\\omega \\left(t_k - t_f\\right)}{2 Q_l}} \\right) - 1 \\right] \\right\\},\n\\end{align}\n\\end{linenomath*}\nwhile for a CG structure it is given by \\cite{farkas}:\n\\begin{linenomath*}\n\\begin{align}\nG(z,t_f) = &\\sqrt{\\frac{2 \\tau_s}{1 + \\tau_s} \\frac{R}{L_s} P_k} \\left\\{ 1 + \\frac{2 Q_l}{Q_e} \\cdot \\right. \\nonumber \n\\\\ &\\left. \\cdot \\left[ \\left( \\frac{1 + \\tau_s \\left(1 - \\frac{2z}{L_s}\\right)}{1 - \\tau_s} \\right)^{- \\frac{Q}{2 Q_l}} \\left( 2 - e^{-\\frac{\\omega \\left(t_k - t_f\\right)}{2 Q_l}} \\right) - 1 \\right] \\right\\}.\n\\end{align}\n\\end{linenomath*}\nFormulas \\eqref{eq:Rs_CI} and \\eqref{eq:Rs_CG} allow calculating the optimum $\\tau_s$ (=$\\tau_{s0}$) that maximizes $R_s$. This also fixes the filling time of the structure, i.e. 
the compressed pulse length, and allows us to calculate the corresponding gradient profiles given in Figure \\ref{fig:G_CI_CG}.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{G_CI_CG.pdf}\n\\caption{Normalized gradient distribution along the structure for the CI and CG structure.}\n\\label{fig:G_CI_CG}\n\\end{figure}\n\nThe final optimized structure parameters are summarized in Table \\ref{tab:CI_vs_CG}.\n\n\\begin{table}[hbt]\n\\center{\n\\caption{CI and CG structure parameters (analytical study).}\n \\label{tab:CI_vs_CG}\n\\begin{tabular}{lcr}\n\\toprule\n\\textbf{Parameter} & \\textbf{CI} & \\textbf{CG}\\\\\n\\midrule\n$R_s$ \\SI {}{[M\\ohm\/m]} & 343 & 344\\\\\nOptimal structure length $L_s$ [m] & 0.474 & 0.432\\\\\nFilling time $t_f$ [ns] & 114 & 118\\\\\nExternal Q-factor $Q_e$ of SLED & 20030 & 21170\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Numerical optimization}\n\nIn the analytical study, the CG solution is approximated because of the assumption of constant $R\/Q$ \\cite{lapostolle} along the structure. On the other hand, in the CI case it is easy to verify that, in the VHG case, one exceeds the maximum value of $S_{c\\;max}$ that allows a breakdown rate (BDR) lower than $10^{-6}$ bpp\/m \\cite{poynting,poynting4}. For these reasons we also performed a numerical study.\n\nFor this purpose we have considered a linear tapering of the irises as sketched in Figure \\ref{fig:tapering}, defined by the modulation angle $\\theta$. We have then calculated (by Eq. \\eqref{eq:G}) the gradient profile along the structure for different $\\theta$ and $L_s$. In the calculation we have used the polynomial fits of the single cell parameters illustrated in Section \\ref{sec:single_cell}. 
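As a rough illustration of this cell-by-cell evaluation of the gradient formula, the sketch below discretizes a linearly tapered structure and accumulates the attenuation integral numerically. The single-cell fits (`vg_over_c`, `R_shunt`, `Q_fact`) are illustrative stand-ins anchored to the $a = 3.2$ mm values of Tab. \ref{tab:a_3.2}, not the actual polynomial fits; the SLED transient factor is absorbed into a constant input power `P0`.

```python
import numpy as np

C = 299792458.0                      # speed of light [m/s]
OMEGA = 2 * np.pi * 11.9942e9        # X-band angular frequency [rad/s]

# Illustrative single-cell fits vs iris radius a [mm], anchored to the
# a = 3.2 mm values (vg/c = 1.382 %, R = 93 MOhm/m, Q = 6396); the real
# calculation uses the polynomial fits of the single-cell study.
def vg_over_c(a): return 0.01382 * (a / 3.2) ** 3
def R_shunt(a):   return 93e6 * (3.2 / a) ** 0.5   # [Ohm/m]
def Q_fact(a):    return np.full_like(a, 6396.0)

def gradient_profile(a0_mm=3.636, theta_deg=0.1, L_s=0.5, n_cells=60, P0=40e6):
    """Accelerating gradient G(z, t_f) along a linearly tapered structure."""
    d = L_s / n_cells
    z = (np.arange(n_cells) + 0.5) * d                    # cell centres [m]
    a = a0_mm - 1e3 * z * np.tan(np.radians(theta_deg))   # tapered iris [mm]
    vg, R, Q = vg_over_c(a) * C, R_shunt(a), Q_fact(a)
    atten = np.cumsum(OMEGA / (vg * Q)) * d               # int omega/(vg Q) dz'
    g = np.sqrt(vg[0] / vg) * np.sqrt(R * Q[0] / (R[0] * Q)) * np.exp(-0.5 * atten)
    G0 = np.sqrt(OMEGA * R[0] * P0 / (vg[0] * Q[0]))      # entrance gradient
    return z, G0 * g

z, G = gradient_profile()
G_avg = G.mean()   # <G> = V_a / L_s; dividing G_avg^2 L_s by the klystron
                   # power P_k would fold in the SLED gain as in the R_s formula
```

With these stand-in fits and a first-cell iris radius of 3.636 mm, the entrance gradient $G_0$ evaluates to roughly 80 MV/m, of the order of the VHG working point, and the tapering partially compensates the RF attenuation along the structure.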
From the gradient profiles we have finally calculated the effective shunt impedance per unit length and the peak value of the modified Poynting vector.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics*[width=180pt]{linear_tapering_2.png}\n\\caption{Sketch of the linear iris tapering.}\n\\label{fig:tapering}\n\\end{figure}\n\nFigure \\ref{fig:BFL_G} shows, as an example, the normalized gradient profile as a function of $\\theta$ for $L_s$ equal to 0.5 m. $R_s$ and $S_{c\\;max}$ (VHG case), as a function of $\\theta$ and for different $L_s$, are given in Figures \\ref{fig:BFL_Rs} and \\ref{fig:BFL_Sc}. In Figure \\ref{fig:BFL_Sc} we have also reported the maximum value of $S_{c\\;max}$ that, according to the scaling law given in \\cite{poynting}, allows a BDR lower than $10^{-6}$ bpp\/m. The plot shows that the 0.4 m and 0.5 m long structures have the same efficiency, while the 0.667 m case is worse. Concerning $S_{c\\;max}$, the 0.4 m solution is better but requires, on the other hand, a larger number of structures per unit length. 
In conclusion, the 0.5 m case with $\\theta$=\\SI{0.1}{\\degree} has been chosen as the design baseline for the X-band linac.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{BFL_G_theta_60cells_fl.pdf}\n\\caption{Normalized gradient after one filling time as a function of the modulation angle (0.5 m case).}\n\\label{fig:BFL_G}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{BFL_Rs_48_60_80cells.pdf}\n\\caption{Effective shunt impedance per unit length as a function of the modulation angle for three structure lengths.}\n\\label{fig:BFL_Rs}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics*[scale=1]{BFL_Sc_max_48_60_80cells_80MVm.pdf}\n\\caption{Peak value of modified Poynting vector as a function of the modulation angle for three structure lengths (VHG case).}\n\\label{fig:BFL_Sc}\n\\end{figure}\n\n\n\\section{Linac basic layout}\n\nAccording to the results of the numerical study, the TW X-band accelerating sections optimized for the EuPRAXIA@SPARC\\_LAB application are 0.5 m long and show an effective shunt impedance per unit length of \\SI {346}{\\mega\\ohm\/m}. Commercially available X-band klystrons \\cite{cpi_website} provide up to 50 MW peak power with \\SI {1.5}{\\micro s} long RF pulses. We have estimated RF losses of $\\approx$-7 dB in the waveguide distribution system and, as a consequence, $\\approx$40 MW of available input power. The basic RF module of the EuPRAXIA@SPARC\\_LAB X-band linac can be conveniently composed of a group of 8 TW sections assembled on a single girder and powered by one (for HG) or two (for VHG) klystrons, by means of one pulse compressor system and a waveguide network splitting and transporting the RF power to the input couplers of the sections. The sketch of the basic module is given in Fig. \\ref{fig:RF_layout} while the final main linac parameters are shown in Tab. 
\\ref{tab:final_parameters}.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=180pt]{RF_layout.png}\n\\caption{RF power distribution layout of a single module for the HG and VHG cases.}\n\\label{fig:RF_layout}\n\\end{figure}\n\n\\begin{table}[hbt]\n\\center{\n\\caption{X-band linac parameters.}\n \\label{tab:final_parameters}\n\\begin{tabular}{lcr}\n\\toprule\nFrequency of operation [GHz] & \\multicolumn{2}{c}{11.9942}\\\\\nRF pulse length $t_f$ [ns] & \\multicolumn{2}{c}{129}\\\\\nUnloaded Q-factor $Q_0$ of SLED & \\multicolumn{2}{c}{180000}\\\\\nExternal Q-factor $Q_e$ of SLED & \\multicolumn{2}{c}{21800}\\\\\n$a$ first-last cell [mm] & \\multicolumn{2}{c}{$3.636 - 2.764$}\\\\\nStructure length $L_s$ [m] & \\multicolumn{2}{c}{0.5}\\\\\nActive length $L_t$ [m] & \\multicolumn{2}{c}{16}\\\\\nNo. of structures $N_s$ & \\multicolumn{2}{c}{32}\\\\\n$v_g\/c$ first-last cell [\\%] & \\multicolumn{2}{c}{$2.23 - 0.77$}\\\\\n$R_s$ \\SI {}{[\\mega\\ohm\/m]} & \\multicolumn{2}{c}{346}\\\\\n& HG & VHG\\\\\nAverage gradient $\\langle G \\rangle$ [MV\/m] & 57 & 80\\\\\nEnergy gain $W_{gain}$ [MeV] & 912 & 1280\\\\\nTotal required RF power $P_{RF}$ [MW] & 150 & 296\\\\\nNo. of klystrons $N_k$ & 4 & 8\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\section{Conclusions}\nIn this paper we have illustrated the preliminary RF design of the EuPRAXIA@SPARC\\_LAB X-band linac. This has been done by performing e.m. simulations of the single cell, together with analytical and numerical optimization of the structure efficiency, taking into account the available space, the minimization of the local field quantity (modified Poynting vector) and the available commercial klystrons. The final linac\nlayout and main parameters have then been presented.\n\n\\section*{Acknowledgments}\n\nThis work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 
653782.\n\n\n\n\n\n\\bibliographystyle{elsarticle-num}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeeiy b/data_all_eng_slimpj/shuffled/split2/finalzzeeiy new file mode 100644 index 0000000000000000000000000000000000000000..5a287b81bf093f0e9d0370c6f1698e73593239d6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeeiy @@ -0,0 +1,5 @@ +{"text":"\\section*{Acknowledgements}\nThis manuscript has been authored under contract number\nDE-AC02-98CH10886 with the U.S.~Department of Energy. Accordingly,\nthe U.S. Government retains a non-exclusive, royalty-free license to\npublish or reproduce the published form of this contribution, or allow\nothers to do so, for U.S.~Government purposes.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n\nIntermediate band solar cells (IBSCs) \\cite{Luque_PRL_1997,Okasa_APR_2015} constitute an attractive approach to develop next-generation solar cells, since they offer the potential to significantly exceed the single-junction Shockley-Queisser (detailed balance) photovoltaic efficiency limit. \\cite{Shockley_JAP_1961} This is achieved via introduction of an intermediate band (IB) lying energetically within the band gap of a host matrix semiconductor. In the case that the IB is electrically isolated from both the valence band (VB) and conduction band (CB) of the host matrix semiconductor -- via gaps in the density of states -- carrier generation due to absorption of photons having energy less than the band gap $E_{g} = E_{\\scalebox{0.7}{\\textrm{CB}}} - E_{\\scalebox{0.7}{\\textrm{VB}}}$ of the host matrix semiconductor can occur via two-step photon absorption (TSPA). 
In a hole-based IBSC, TSPA proceeds via (i) an electron in the IB being promoted to the CB via absorption of a sub-band gap photon having energy $E_{\\scalebox{0.7}{\\textrm{H}}} = E_{\\scalebox{0.7}{\\textrm{CB}}} - E_{\\scalebox{0.7}{\\textrm{IB}}}$, and (ii) subsequent promotion of the resulting IB hole to the VB via the absorption of a second sub-band gap photon having energy $E_{\\scalebox{0.7}{\\textrm{L}}} = E_{\\scalebox{0.7}{\\textrm{IB}}} - E_{\\scalebox{0.7}{\\textrm{VB}}}$. The presence of the IB therefore enhances the photocurrent generated by a single-junction solar cell at fixed illumination, by allowing absorption of photons having energy $< E_{g}$, while the electrical isolation of the IB from the host matrix CB and VB ensures that the open-circuit voltage $V_{\\scalebox{0.7}{\\textrm{OC}}}$ is determined by $E_{g}$ rather than being limited by the sub-band gap energies $E_{\\scalebox{0.7}{\\textrm{L}}}$ and $E_{\\scalebox{0.7}{\\textrm{H}}}$. \\cite{Luque_PRL_1997,Luque_AM_2010}\n\nDue to their promise of high efficiency -- with a predicted detailed balance limit of 63.8\\% under concentrated illumination \\cite{Luque_AM_2010,Luque_PRL_1997,Okasa_APR_2015} -- IBSCs have attracted significant research interest. However, despite the simplicity of the underlying concepts, practical realisation of IBSCs has proved extremely challenging. \\cite{Marti_JPE_2013,Okasa_APR_2015} Practical efforts to realise IBSCs have to date centred primarily on two approaches to introduce an IB into the band gap of a host matrix semiconductor, relying on (i) a bound electron or hole ground state in a three-dimensional quantum confined heterostructure possessing a discrete density of states, e.g.~in a quantum dot (QD), or (ii) an IB formed via incorporation of a small concentration of a substitutional impurity, e.g.~in a highly-mismatched alloy (HMA) such as dilute nitride GaN$_{x}$As$_{y}$P$_{1-x-y}$. 
\\cite{Kudraweic_PRA_2014} The performance of QD-IBSCs has been limited by a combination of poor sub-band gap absorption, which limits short-circuit current density $J_{\\scalebox{0.7}{\\textrm{SC}}}$, and short radiative lifetimes $\\tau_{\\scalebox{0.7}{\\textrm{rad}}}$ for carriers occupying IB states, resulting in loss of carriers from the IB via radiative recombination and so reducing $V_{\\scalebox{0.7}{\\textrm{OC}}}$. Also, the presence of insufficiently large band offsets in QDs can lead to loss of carriers via thermionic emission (e.g.~from the IB to CB in an electron-based QD-IBSC). \\cite{Ramiro_IEEEJP_2015} The use of localised impurity states in HMA-IBSCs also presents fundamental issues, increasing losses associated with non-radiative (Shockley-Read-Hall) recombination of carriers occupying IB states at defect sites, thereby limiting carrier extraction and degrading overall efficiency.\n\n\n\nAs a result, research efforts to realise IBSCs have increasingly shifted away from conventional platforms such as type-I QDs, and towards novel materials and heterostructures whose electronic and optical properties offer the potential to overcome the aforementioned limitations. In particular, there is increasing interest in heterostructures having type-II band alignment, \\cite{Kechiantz_N_2007,Tayagaki_APL_2012,Kechiantz_PP_2015,Takeshi_APL_2016} due to their potential to suppress radiative losses as a consequence of the intrinsically large radiative lifetimes associated with their bound eigenstates (resulting from reduced electron-hole spatial overlaps), \\cite{Cuadra_PE_2002,Nishikawa_APL_2012} as well as reduced intra-band carrier relaxation via non-radiative (Auger) recombination. 
\\cite{Tomic_APL_2011,Tomic_APL_2013} Much of the work to date on type-II QD-IBSC systems has centred on InAs\/GaAs$_{1-x}$Sb$_{x}$ QDs -- an electron-based IBSC, in which the IB is formed by the lowest energy bound electron states in the InAs dot, and holes are localised in a GaAs$_{1-x}$Sb$_{x}$ quantum well buffer layer -- where increases in QD uniformity and number density have been reported in epitaxial growth of vertical QD stacks. \\cite{Ban_APL_2010,Liu_SEMSC_2012,Hatch_OE_2014,Cheng_SEMSC_2016} Experimental investigations have revealed increased $J_{\\scalebox{0.7}{\\textrm{SC}}}$ compared to conventional type-I InAs\/GaAs QDs, \\cite{Liu_SEMSC_2012} in line with the theoretically predicted increase in the radiative lifetime of IB states in these structures. \\cite{Tomic_APL_2013} While these promising results highlight the potential of type-II heterostructures as candidate IBSCs, the InAs\/GaAs(Sb) system suffers from a relatively low CB offset, leading to carrier leakage via thermionic emission from the IB at room temperature. \\cite{Cheng_SEMSC_2016} Furthermore, the sub-band gaps in InAs\/GaAs(Sb) QDs are far from the optimum values $E_{\\scalebox{0.7}{\\textrm{H}}} = 0.97$ eV and $E_{\\scalebox{0.7}{\\textrm{L}}} = 0.45$ eV required to maximise the calculated detailed balance efficiency for an IBSC based on a GaAs host matrix semiconductor. \\cite{Wang_IETO_2014}\n\nWhat is therefore required is to broaden investigations to additional type-II heterostructure systems, focusing on the ability to engineer the electronic properties so as to reliably tune the energy and character of the IB states in order to maximise overall efficiency. One such class of novel type-II heterostructures are GaAs$_{1-x}$Sb$_{x}$\/GaAs quantum rings (QRs). 
These hole-based IBSCs -- in which the IB is formed by the highest energy bound hole eigenstate in the QR -- have attracted increasing attention due not only to their type-II band alignment, but also to their large band offsets, which are expected to mitigate carrier loss via thermionic emission from the IB. Experimental analysis of prototype IBSCs based on type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs vertical QR stacks has revealed several promising properties compared to conventional QD-IBSCs, including (i) enhanced TSPA and external quantum efficiency, \\cite{Wagener_JAP_2014_2,Shoji_AIPA_2017} (ii) reduced losses via radiative recombination, leading to improved carrier extraction and overall efficiency, \\cite{Hwang_JAP_2012} and (iii) recovery of $V_{\\scalebox{0.7}{\\textrm{OC}}}$ under concentrated illumination. \\cite{Tsai_OE_2014,Fujita_PP_2015,Montesdeoca_SEMSC_2018}\n\n\n\nDespite numerous and ongoing experimental investigations of type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs for IBSC applications, there is little detailed information available in the literature regarding their electronic properties from a theoretical perspective. Here, we present a combined analytical and numerical analysis of the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs and demonstrate that minor changes in morphology, compatible with established epitaxial growth, can be exploited to tune the QR hole ground state (IB) to an optimum energy to maximise IBSC efficiency. Using an analytical analysis -- based on a solution of the time-independent Schr\\\"{o}dinger equation for a cylindrical QR of infinite potential depth -- we highlight that the QR geometry offers significant flexibility, compared to the conventional QD geometry, to engineer the valence band (VB) structure of GaAs$_{1-x}$Sb$_{x}$\/GaAs structures for IBSC applications. 
Our numerical calculations -- based on a multi-band \\textbf{k}$\\cdot$\\textbf{p} Hamiltonian, and including full strain and piezoelectric effects -- corroborate this finding. We highlight that type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs are ideally suited to IBSC applications, due not only to their intrinsically large radiative lifetimes, but also to their large VB offsets, which can be expected to mitigate thermionic emission of holes from the IB. Furthermore, the strong confinement of the highest energy hole (IB) states in these QRs is expected to mitigate the impact of miniband formation via electronic coupling between QRs in high-density stacks, which has been demonstrated to lead to a closing of the gap in the density of states between the IB and CB in electron-based InAs\/GaAs QD stacks. \\cite{Tomic_PRB_2010}\n\nWe undertake a numerical optimisation of the QR morphology, by varying the QR dimensions and alloy composition, to identify structures which allow an optimum IB energy to be achieved so as to maximise IBSC efficiency. Our results confirm the potential of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs for IBSC applications, and provide guidelines for the growth of suitable structures for prototype IBSCs.\n\n\n\nThe remainder of this paper is organised as follows. In Sec.~\\ref{sec:theoretical_model} we outline the theoretical models we have developed to calculate the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs, describing analytical and numerical approaches in Secs.~\\ref{sec:theoretical_model_analytical} and~\\ref{sec:theoretical_model_numerical} respectively. We present our results in Sec.~\\ref{sec:results}, beginning in Sec.~\\ref{sec:results_analytical} with an analysis of the QR ground state obtained from the analytical model. In Sec.~\\ref{sec:results_numerical} we describe the electronic structure of real GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs, based on numerical multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations. 
Finally, in Sec.~\\ref{sec:conclusions} we summarise and conclude.\n\n\n\n\\section{Theoretical model}\n\\label{sec:theoretical_model}\n\nIn this section we describe the theoretical approaches we have applied to investigate the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs. We begin in Sec.~\\ref{sec:theoretical_model_analytical} with a description of the analytical solution of Schr\\\"{o}dinger's equation for a QR of infinite potential depth, and in Sec.~\\ref{sec:theoretical_model_numerical} describe numerical strain relaxation and multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations for realistic QRs.\n\n\n\n\\subsection{Analytical: solution of Schr\\\"{o}dinger's equation}\n\\label{sec:theoretical_model_analytical}\n\n\nOur analytical analysis starts from the solution of the time-independent Schr\\\"{o}dinger equation for the eigenstates of an idealised, [001]-oriented cylindrical QR of infinite potential depth. The QR is taken to have inner and outer radii $a_{1}$ and $a_{2}$ in the plane perpendicular to the [001] direction, and height $h$ along [001]. A schematic illustration of the QR geometry is shown in Fig.~\\ref{fig:analytical}(a). Choosing the origin of a cylindrical polar coordinate system $( r, \\phi, z )$ to lie at the centre of the base of the QR, the potential -- which is axially symmetric about the [001], or $z$, direction and hence independent of $\\phi$ -- is\n\n\\begin{equation}\n\tV (r, z) = \\left\\{\n\t\t\\begin{array}{lr}\n\t\t\t0 \\; , & a_{1} < r < a_{2}~\\textrm{and}~0 < z < h \\\\\n\t\t\t\\infty, & \\textrm{otherwise}\n\t\t\\end{array}\n\t\t\\right. \\, .\n\t\\label{eq:quantum_ring_potential}\n\\end{equation}\n\n\n\nWe proceed via separation of variables by writing the QR eigenstates $\\psi_{lmn} ( r, \\phi, z ) = R_{lm} ( r ) \\Phi_{m} ( \\phi ) Z_{n} ( z )$, yielding three separable differential equations (one in each of the radial, polar and longitudinal coordinates $r$, $\\phi$ and $z$). 
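For completeness, the radial equation produced by this separation can be written out explicitly; it is the step between the ansatz above and the Bessel-function solutions used in the following. Inside the ring, where $V = 0$, and writing $\hbar^{2} k^{2} / 2 m_{0} m_{\perp}^{\ast}$ for the in-plane part of the energy, the radial factor obeys

```latex
\begin{equation*}
	\frac{ d^{2} R_{lm} }{ d r^{2} } + \frac{ 1 }{ r } \frac{ d R_{lm} }{ d r } + \left( k^{2} - \frac{ m^{2} }{ r^{2} } \right) R_{lm} = 0 \, ,
\end{equation*}
```

which is Bessel's equation of order $m$, with linearly independent solutions $J_{m} ( kr )$ and $Y_{m} ( kr )$.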
The solutions of the $\\phi$ and $z$ equations are as in the conventional cylindrical QD of infinite potential depth (corresponding here to $a_{1} = 0$). The solution in the polar ($\\phi$) direction is trivial due to the axial symmetry of the potential about the $z$ direction, with $\\Phi_{m} ( \\phi ) = \\frac{ 1 }{ \\sqrt{ 2 \\pi } } \\, e^{ i m \\phi }$ for integer $m$. Along the $z$ direction the eigenstates are those of the infinite square well\n\n\\begin{equation}\n Z_{n} ( z ) = \\sqrt{ \\frac{ 2 }{ h } } \\sin \\left( \\frac{ n \\pi z }{ h } \\right) \\, .\n \\label{eq:general_solution_longitudinal}\n\\end{equation}\n\nWithin the QR $V ( r, z ) = 0$ and the radial equation reduces to the (cylindrical) Bessel equation, the general solution of which is $R_{lm} ( r ) = A_{lm} J_{m} ( kr ) + B_{lm} Y_{m} ( kr )$, where $J_{m} ( kr )$ and $Y_{m} ( kr )$ are respectively the Bessel functions of the first and second kind. Here, $l$ is a positive integer which indexes the (discrete) allowed values $k_{lm}$ of the radial wave vector $k$. To this point, the general solution is identical to that of a cylindrical QD of infinite potential depth. The difference in the QR case arises due to the presence of the central barrier for $r \\leq a_{1}$. Due to this barrier we seek radial wave functions satisfying the boundary conditions $R_{lm} ( a_{1} ) = 0$ and $R_{lm} ( a_{2} ) = 0$. Applying the first of these conditions allows us to solve for $B_{lm}$ in terms of $A_{lm}$, giving\n\n\\begin{equation}\n R_{lm} ( r ) = A_{lm} \\left( J_{m} ( kr ) - \\frac{ J_{m} ( k a_{1} ) }{ Y_{m} ( k a_{1} ) } \\, Y_{m} ( kr ) \\right) \\, ,\n \\label{eq:radial_wave_function}\n\\end{equation}\n\n\\noindent\nwhere the constant $A_{lm}$ can be determined via normalisation. 
Applying the second boundary condition then yields the transcendental equation\n\n\\begin{equation}\n J_{m} ( k a_{1} ) \\, Y_{m} ( \\rho k a_{1} ) - J_{m} ( \\rho k a_{1} ) \\, Y_{m} ( k a_{1} ) = 0 \\, ,\n \\label{eq:quantum_ring_transcendental}\n\\end{equation}\n\n\\noindent\nwhere we have defined $\\rho = \\frac{ a_{2} }{ a_{1} }$ as the ratio of the outer and inner QR radii (so that $\\rho k a_{1} = k a_{2}$). The wave vector $k_{lm}$ associated with the radial wave function $R_{lm} ( r )$ is then, for a given value of $m$, determined via the $l^{\\scalebox{0.7}{\\textrm{th}}}$ root $k_{lm} a_{1}$ of Eq.~\\eqref{eq:quantum_ring_transcendental}. Defining $a = a_{2} - a_{1}$ as the radial thickness of the QR, we note that Eq.~\\eqref{eq:radial_wave_function} reduces to $R_{lm} ( r ) = A_{lm} J_{m} ( kr )$ in the limit $a_{1} \\to 0$ ($a_{2} \\to a$), with Eq.~\\eqref{eq:quantum_ring_transcendental} correspondingly reducing to $J_{m} ( ka ) = 0$, yielding the well-known solution of Schr\\\"{o}dinger's equation for a cylindrical QD of radius $a$ as the QR is transformed into a QD via the removal of the central potential barrier.\n\nThe energies of the QR eigenstates are given by the sum of the in- and out-of-plane contributions\n\n\\begin{equation}\n E_{lmn} = \\frac{ \\hbar^{2} }{ 2 m_{0} } \\left( \\frac{ k_{lm}^{2} }{ m_{\\perp}^{\\ast} } + \\frac{ \\pi^{2} n^{2} }{ m_{\\parallel}^{\\ast} h^{2} } \\right) \\, ,\n \\label{eq:quantum_ring_ground_state_energy}\n\\end{equation}\n\n\\noindent\nwhere $m_{0}$ is the free electron mass, and where $m_{\\parallel}^{\\ast}$ and $m_{\\perp}^{\\ast}$ are respectively the (relative) effective mass parallel and perpendicular to the [001] growth direction. 
The QR ground state is obtained for quantum numbers $( l, m, n ) = ( 1, 0, 1 )$.\n\n\n\n\\begin{figure*}[ht!]\n\t\\includegraphics[width=1.00\\textwidth]{.\/figure_1.pdf}\n\t\\caption{(a) Schematic illustration of the QR geometry considered in this work, viewed side-on from the [100] direction (top) and top-down from the [001] direction (bottom). The cylindrical QR has inner and outer radii $a_{1}$ and $a_{2}$, radial thickness $a = a_{2} - a_{1}$, and height $h$ along the growth direction. (b) Variation of the ground state radial wave vector $k_{10}$, computed as the first root of Eq.~\\eqref{eq:quantum_ring_transcendental} for $m = 0$, as a function of the inner radius $a_{1}$ for a QR of fixed radial thickness $a$. As $a_{1}$ increases as a proportion of $a$, $k_{10}$ rapidly converges to a value of $\\frac{ \\pi }{ a }$ (dashed black line). The inset shows the Bessel functions $J_{0} (x)$ (solid blue line) and $Y_{0} (x)$ (dashed blue line), as well as the left-hand side of Eq.~\\eqref{eq:quantum_ring_transcendental} for $\\rho = 2$ (i.e.~$a_{1} = a$). (c) Contour map showing the confinement energy associated with the HH ground state in a GaSb\/GaAs QR having outer radius $a_{2} = 12$ nm, calculated via Eqs.~\\eqref{eq:quantum_ring_transcendental} and~\\eqref{eq:quantum_ring_ground_state_energy}, as a function of the QR inner radius $a_{1}$ and height $h$.}\n \\label{fig:analytical}\n\\end{figure*}\n\n\n\n\\subsection{Numerical: multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations}\n\\label{sec:theoretical_model_numerical}\n\nOur numerical analysis of the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs is based on multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations. 
\\cite{Arkani_NUSOD_2019} We employ a supercell approach, by embedding a cylindrical GaAs$_{1-x}$Sb$_{x}$\/GaAs QR in a GaAs matrix, and relax the supercell by minimising the total elastic energy with respect to the components $\\epsilon_{ij}$ of the strain tensor to obtain the strain fields $\\epsilon_{ij} ( \\textbf{r} )$. The QR VB eigenstates are computed using a strain-dependent 6-band \\textbf{k}$\\cdot$\\textbf{p} Hamiltonian -- i.e.~the Luttinger-Kohn VB Hamiltonian, including Bir-Pikus strain-related terms -- which explicitly treats heavy-hole (HH), light-hole (LH) and spin-split-off (SO) VB states. The QR CB eigenstates are computed using a strain-dependent 1-band (effective mass) Hamiltonian. We note that the separate treatment of the CB and VB eigenstates has been chosen to circumvent the emergence of spurious solutions in full 8-band \\textbf{k}$\\cdot$\\textbf{p} calculations, which arise in the presence of large plane wave cut-off energies due to the strong inter-band coupling present in narrow-gap GaSb. Both the 6-band VB and 1-band CB calculations explicitly include the strain-induced piezoelectric potential, computed to second order for a given structure using the relaxed strain fields. The QR band offsets are first computed for an unstrained structure using the model solid theory -- assuming a natural (unstrained) VB offset of 0.58 eV between GaSb and GaAs \\cite{Wei_APL_1998,Hinuma_PRB_2014} -- with the position-dependent band edge energy (confining potential) profiles then computed via direct diagonalisation of the strain-dependent bulk 6- and 1-band \\textbf{k}$\\cdot$\\textbf{p} Hamiltonians at each real space grid point in the supercell.\n\nOur numerical calculations have been implemented using a plane wave (reciprocal space) approach, via the S\/Phi\/nX software library. 
\\cite{Marquardt_CPC_2010} We employ [001]-oriented supercells of size 50 nm $\\times$ 50 nm $\\times$ 14 nm, and a plane wave cut-off energy equivalent to a real space resolution of 0.2 nm in each of the $x$, $y$ and $z$ directions. This choice of plane wave cut-off was validated by examining the convergence of the bound carrier eigenstate energies, which were found to vary by $<1$ meV with respect to further increases in the size of the plane wave basis set. All material parameters used in our calculations -- including lattice and elastic constants, band gaps, VB spin-orbit splitting energies and Luttinger parameters, electron effective masses, and CB and VB edge deformation potentials -- are taken from Ref.~\\onlinecite{Vurgaftman_JAP_2001}, with the exception of the first and second order piezoelectric coefficients, which are taken from Ref.~\\onlinecite{Bester_PRB_2006}. All calculations are performed at temperature $T = 300$ K.\n\n\n\n\\section{Results}\n\\label{sec:results}\n\nIn this section we present the results of our theoretical analysis, beginning in Sec.~\\ref{sec:results_analytical} with a description of trends in the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs based on the analytical treatment described in Sec.~\\ref{sec:theoretical_model_analytical}. Specifically, we describe (i) the rapid convergence of the ground state radial wave vector $k_{10}$ in a QR having fixed radial thickness $a = a_{2} - a_{1}$, allowing the ground state energy in a realistic QR to be easily and accurately estimated, and (ii) trends in the HH confinement energy in realistic GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs. 
In Sec.~\\ref{sec:results_numerical} we present the results of our numerical calculations, including the strain fields and band offsets (confining potentials) in relaxed QRs, as well as the electronic properties obtained from multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations, allowing morphologies supporting optimised IBSC sub-band gaps to be identified.\n\n\n\n\\subsection{Analytical: ground state radial wave vector convergence and QR HH confinement energy}\n\\label{sec:results_analytical}\n\n\nWe begin by demonstrating a useful convergence property of the ground state radial wave vector $k_{10}$, obtained from the first ($l = 1$) root of Eq.~\\eqref{eq:quantum_ring_transcendental} for $m = 0$. Specifically, for a QR having radial thickness $a = a_{2} - a_{1}$, $k_{10}$ converges rapidly to $\\frac{ \\pi }{ a }$ with increasing inner radius $a_{1}$. This provides a useful approximation, $k_{10} \\approx \\frac{ \\pi }{ a }$, which can be used to estimate the radial contribution to the QR ground state energy (cf.~Eq.~\\eqref{eq:quantum_ring_ground_state_energy}).\n\nTo see that this is the case we note that, for large $x$, $J_{0} (x) \\approx \\sqrt{ \\frac{ 2 }{ \\pi x } } \\cos \\left( x - \\frac{ \\pi }{ 4 } \\right)$ and $Y_{0} (x) \\approx \\sqrt{ \\frac{ 2 }{ \\pi x } } \\sin \\left( x - \\frac{ \\pi }{ 4 } \\right)$, so that the $m = 0$ radial wave function can be written in the form\n\n\\begin{equation}\n R_{l0} (r) \\approx C_{l0} \\sqrt{ \\frac{ 2 }{ \\pi k r } } \\cos \\left( kr + \\theta \\right) \\, ,\n \\label{eq:radial_wave_function_limit}\n\\end{equation}\n\n\\noindent\nfor some phase $\\theta$ and normalisation constant $C_{l0}$. This approximate expression for $R_{l0} (r)$ satisfies the boundary conditions $R_{l0} ( a_{1} ) = 0$ and $R_{l0} ( a_{2} ) = 0$ for radial wave vectors $k_{l0} \\sim \\frac{ l \\pi }{ a }$. 
We therefore deduce that, for a QR of fixed radial thickness $a$, as the inner radius $a_{1}$ increases, the first root $k_{10}$ of Eq.~\\eqref{eq:quantum_ring_transcendental} should converge to $\\frac{ \\pi }{ a }$.\n\nTo verify that this is the case we have computed the first root $k_{10} a_{1}$ of Eq.~\\eqref{eq:quantum_ring_transcendental} as a function of $a_{1}$ for a QR having fixed thickness $a$ (= 11.5 nm). The results of this analysis are summarised in Fig.~\\ref{fig:analytical}(b), where the solid green line shows the calculated variation of $k_{10}$ (in units of $\\frac{ \\pi }{ a }$) as a function of $\\frac{ a_{1} }{ a }$ ($= \\frac{ 1 }{ \\rho - 1 }$). The inset to Fig.~\\ref{fig:analytical}(b) shows the functions $J_{0} (x)$ (solid blue line) and $Y_{0} (x)$ (dashed blue line), as well as the left-hand side of Eq.~\\eqref{eq:quantum_ring_transcendental} for $m = 0$ and $\\rho = 2$ ($a_{2} = 2 a_{1}$, solid red line). Examining Fig.~\\ref{fig:analytical}(b), we in fact note \\textit{rapid} convergence of $k_{10}$ to $\\frac{ \\pi }{ a }$ with increasing $\\frac{ a_{1} }{ a }$. For $a_{1} = 0$ -- i.e.~for a cylindrical QD of radius $a$ -- $k_{10}$ is given by the first root of $J_{0} ( ka ) = 0$. This solution, highlighted by the open green circles in Fig.~\\ref{fig:analytical}(b) and its inset, is $k_{10} a \\approx 2.4048$ (the first zero of $J_{0} (ka)$), so that $k_{10} \\approx 0.7655 \\times \\frac{ \\pi }{ a }$. As $\\frac{ a_{1} }{ a }$ increases due to the inclusion of the central potential barrier to form a QR, $k_{10}$ then rapidly approaches $\\frac{ \\pi }{ a }$. 
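This convergence can be checked directly with a short numerical sketch (an illustration added here, not part of the paper's own toolchain; the series evaluations of $J_0$ and $Y_0$ and the bisection root finder are our own stdlib-only implementation choices):

```python
# Cross-check of the QR transcendental equation
# J0(k a1) Y0(k a2) - J0(k a2) Y0(k a1) = 0: the first root k10
# approaches pi/a (a = a2 - a1) as the inner radius a1 grows.
# J0 and Y0 are evaluated from their power series (adequate for x < ~10).
import math

EULER_GAMMA = 0.5772156649015329

def j0(x):
    # J0(x) = sum_{k>=0} (-1)^k (x/2)^(2k) / (k!)^2
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-17:
        k += 1
        term *= -(x / 2.0) ** 2 / k ** 2
        total += term
    return total

def y0(x):
    # Y0(x) = (2/pi) [ (ln(x/2) + gamma) J0(x)
    #                  + sum_{k>=1} (-1)^(k+1) H_k (x/2)^(2k) / (k!)^2 ]
    term, total, harmonic = 1.0, 0.0, 0.0
    for k in range(1, 80):
        term *= -(x / 2.0) ** 2 / k ** 2   # (-1)^k (x/2)^(2k) / (k!)^2
        harmonic += 1.0 / k                # harmonic number H_k
        total -= term * harmonic           # minus sign gives (-1)^(k+1)
    return (2.0 / math.pi) * ((math.log(x / 2.0) + EULER_GAMMA) * j0(x) + total)

def bisect(f, lo, hi, tol=1e-12):
    # Simple bisection; assumes f changes sign on [lo, hi].
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) > 0.0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# a1 = 0 (cylindrical QD limit): first zero of J0(ka), so k10 a ~ 2.4048.
qd_root = bisect(j0, 2.0, 3.0)

# rho = 2 (a1 = a, a2 = 2 a1): first root of the cross-product equation.
a1, a2 = 1.0, 2.0
a = a2 - a1
k10 = bisect(lambda k: j0(k * a1) * y0(k * a2) - j0(k * a2) * y0(k * a1),
             2.5, 3.5)
ratio = k10 * a / math.pi   # close to 1, i.e. k10 ~ pi/a
```

Running this reproduces both limits quoted in the text: $k_{10} a \approx 2.4048$ for the QD limit, and $k_{10}$ within about $0.6\%$ of $\pi/a$ already at $\rho = 2$.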
For example, for $\\frac{ a_{1} }{ a } = 1$ ($\\rho = 2$), which corresponds well to real QR dimensions, we compute $k_{10} \\approx 0.9941 \\times \\frac{ \\pi }{ a }$.\n\nGenerally, we find $k_{10} = \\frac{ \\pi }{ a }$ to be an excellent approximation to the ground state radial wave vector for $\\frac{ a_{1} }{ a } \\gtrsim 1$ -- i.e.~for $a_{1} \\gtrsim \\frac{ a_{2} }{ 2 }$ or, equivalently, $\\rho \\lesssim 2$. For $a = 11.5$ nm we note that $\\rho = 2$ corresponds to an outer QR radius $a_{2} = 2 a_{1} = 23$ nm, dimensions typical of epitaxially grown GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs, which will be analysed in further detail in Sec.~\\ref{sec:results_numerical}. This rapid convergence of $k_{10}$ then provides a simple and reliable approach to estimate the radial contribution to the QR ground state energy, circumventing the requirement to numerically compute the roots of Eq.~\\eqref{eq:quantum_ring_transcendental}, with the overall accuracy of the corresponding estimate of the ground state energy $E_{101}$ being better for small QR aspect ratios $\\frac{ h }{ 2 a_{2} }$ (where the radial component contributes a small proportion of the total confinement energy, cf.~Eq.~\\eqref{eq:quantum_ring_ground_state_energy}).\n\n\n\nWe have used Eq.~\\eqref{eq:quantum_ring_ground_state_energy} to estimate the confinement energy associated with the HH ground state in a GaSb\/GaAs QR. To do so we set $m_{\\parallel}^{\\ast} = ( \\gamma_{1} - 2 \\gamma_{2} )^{-1}$ and $m_{\\perp}^{\\ast} = ( \\gamma_{1} + 2 \\gamma_{2} )^{-1}$, which are the bulk HH VB edge effective masses admitted by the 6-band Luttinger-Kohn Hamiltonian, \\cite{Luttinger_PR_1955,Vurgaftman_JAP_2001} where $\\gamma_{1}$ and $\\gamma_{2}$ are the VB Luttinger parameters. 
Following Ref.~\\onlinecite{Vurgaftman_JAP_2001} we set $\\gamma_{1} = 13.4$ and $\\gamma_{2} = 4.7$ and respectively obtain $m_{\\parallel}^{\\ast} = 0.250$ and $m_{\\perp}^{\\ast} = 0.044$ for the (relative) HH effective masses parallel and perpendicular to the [001] direction in GaSb. The results of our calculations are summarised in Fig.~\\ref{fig:analytical}(c), in which the solid red lines are contours of constant HH ground state (confinement) energy as a function of inner radius $a_{1}$ and height $h$, for a QR having fixed outer radius $a_{2} = 12$ nm. We note from these results the flexibility offered by the QR geometry from the perspective of band structure engineering for IBSC applications: the HH confinement energy can readily be tuned across a broad range via relatively minor adjustments in QR morphology, allowing the IB energy to be tuned in a hole-based QR-IBSC. This provides distinct advantages compared to, e.g., equivalent GaAs$_{1-x}$Sb$_{x}$\/GaAs QDs, since the QR inner radius $a_{1}$ provides an additional parameter by which the electronic properties can be tuned. The realities of epitaxial growth do not in general allow $a_{1}$ to be fine tuned independently of the other QR dimensions $a_{2}$ and $h$, the relationships between which are in practice determined in large part by the (Stranski-Krastanov) strain relaxation mechanism that drives QR formation. However, it is generally observed that Stranski-Krastanov QR formation tends to fix $a_{2}$, with the inner radius $a_{1}$ then depending largely on growth rate. Correspondingly, QR heights are generally found to be in the range $h = 3 \\pm 2$ nm. \\cite{Khan_EMC_2016} Despite incomplete control over the precise morphology of individual QRs during epitaxial growth, we emphasise that GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs offer additional benefits for IBSC applications due to the nature of carrier localisation within these structures. 
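Using the masses just quoted, the infinite-well estimate of the HH ground state energy can be evaluated directly. The sketch below is illustrative only: the geometry ($a_1 = 5$ nm, $h = 3$ nm, with the fixed outer radius $a_2 = 12$ nm of the contour plot) is an assumed sample point, not a value singled out in the text.

```python
# Infinite-well estimate of the HH ground state confinement energy,
# E ~ (hbar^2 / 2 m0) [ (pi/a)^2 / m_perp + (pi/h)^2 / m_par ],
# using the k10 ~ pi/a approximation for the radial wave vector.
import math

HBAR2_OVER_2M0 = 0.0381  # hbar^2 / (2 m0) in eV nm^2

# HH masses from the GaSb Luttinger parameters (Vurgaftman et al. 2001).
gamma1, gamma2 = 13.4, 4.7
m_par = 1.0 / (gamma1 - 2.0 * gamma2)    # 0.250, parallel to [001]
m_perp = 1.0 / (gamma1 + 2.0 * gamma2)   # ~0.044, in-plane

def hh_confinement(a1, a2, h):
    # Radial (in-plane) plus vertical ([001]) infinite-well contributions.
    a = a2 - a1                                            # radial thickness, nm
    radial = HBAR2_OVER_2M0 * (math.pi / a) ** 2 / m_perp
    vertical = HBAR2_OVER_2M0 * (math.pi / h) ** 2 / m_par
    return radial + vertical                               # eV

e101 = hh_confinement(5.0, 12.0, 3.0)  # ~0.34 eV for this sample geometry
```

Increasing $h$ at fixed radii lowers the confinement energy, which is the tunability the contour plot in Fig.~\ref{fig:analytical}(c) illustrates.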
To elucidate these properties requires a more detailed, quantitative analysis of the electronic properties, to which we now turn our attention.\n\n\n\n\\subsection{Numerical: strain, band offsets, carrier localisation and IBSC sub-band gaps}\n\\label{sec:results_numerical}\n\n\nWhile the analytical treatment of Secs.~\\ref{sec:theoretical_model_analytical} and~\\ref{sec:results_analytical} provides useful insight into some of the general features of the electronic properties of QRs, this approach neglects several key factors which play an important role in determining the nature of the electronic properties in a real semiconductor QR. Firstly, the large lattice mismatch which drives QR formation -- 7.2\\% in the case of GaSb\/GaAs -- produces large, strongly position dependent strain fields, which impact the electronic properties directly (as well as via the associated strain-induced piezoelectric potential). Secondly, the natural type-II band offsets between GaSb and GaAs produce confining potentials which are markedly different in nature for electrons and holes. Thirdly, the analytical treatment described above neglects band hybridisation effects, which can be expected to play a role in determining the precise nature of QR eigenstates in the presence of strain, quantum confinement, spin-orbit coupling and a narrow band gap, all of which are present in real GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs. In order to quantitatively understand the QR electronic structure we have therefore undertaken multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations, based on the formalism described in Sec.~\\ref{sec:theoretical_model_numerical}. 
We provide here an overview of the results of these calculations, elucidating the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs and identifying optimised QR morphologies providing electronic properties well-suited to IBSC applications.\n\n\n\nFor our \\textbf{k}$\\cdot$\\textbf{p} calculations we begin by considering an exemplar GaSb\/GaAs QR having inner radius $a_{1} = 5$ nm, outer radius $a_{2} = 11.5$ nm and height $h = 3$ nm (dimensions typical of epitaxially grown QRs \\cite{Khan_EMC_2016}). The solid blue and red lines in Fig.~\\ref{fig:numerical}(a) respectively show linescans -- taken through the centre of the QR along the [100] direction -- of the calculated hydrostatic and biaxial components $\\epsilon_{\\scalebox{0.7}{\\textrm{hyd}}} = \\textrm{Tr}( \\epsilon )$ and $\\epsilon_{\\scalebox{0.7}{\\textrm{bia}}} = \\epsilon_{zz} - \\frac{1}{2} ( \\epsilon_{xx} + \\epsilon_{yy} )$ of the strain fields in the structure. The hydrostatic strain reaches values as low as $-10.5$\\% within the GaSb regions, reflecting that the QR is under significant compressive strain when grown epitaxially on GaAs. We note that strain relaxation in the structure acts to place the central GaAs region of the QR under a minor amount ($\\lesssim 1$\\%) of tensile strain, due to its being surrounded in the plane perpendicular to [001] by GaSb material having a larger lattice constant. The calculated biaxial strain resembles that associated with a cylindrical QD, \\cite{Andreev_JAP_1999} varying strongly at the interfaces with the surrounding barrier material, albeit with additional features due to the presence of the central GaAs barrier. 
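A rough flat-layer benchmark for the strain components just defined can be sketched as follows. This treats a pseudomorphic GaSb film on a GaAs substrate rather than the fully relaxed 3D QR (which is why it does not reproduce the $-10.5$\% hydrostatic value quoted above); the specific lattice and elastic constants are assumptions here, taken from the same Vurgaftman et al. (2001) compilation the paper cites.

```python
# Flat-layer (pseudomorphic) strain estimate for GaSb grown on GaAs,
# as a rough benchmark for the hydrostatic and biaxial components
# defined in the text. Constants follow Vurgaftman et al. (2001).
a_gaas = 5.6533            # GaAs lattice constant (Angstrom)
a_gasb = 6.0959            # GaSb lattice constant (Angstrom)
c11, c12 = 884.2, 402.6    # GaSb elastic constants (GPa)

e_xx = (a_gaas - a_gasb) / a_gasb     # in-plane strain, ~ -7.3% (compressive)
e_yy = e_xx
e_zz = -2.0 * (c12 / c11) * e_xx      # tetragonal out-of-plane response, > 0

e_hyd = e_xx + e_yy + e_zz            # Tr(eps): negative, i.e. compressive
e_bia = e_zz - 0.5 * (e_xx + e_yy)    # positive for compressive in-plane strain
```

This simple estimate already gives compressive hydrostatic strain of order $-8$\%, with the additional magnitude in the QR arising from the 3D relaxation captured only by the full calculation.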
We note that the strain-induced piezoelectric potential associated with this structure attains a maximum value of $\\approx 17$ meV throughout the calculational supercell, and has minimal impact on the electronic properties.\n\n\n\n\\begin{figure*}[ht!]\n\t\\includegraphics[width=1.00\\textwidth]{.\/figure_2.pdf}\n\t\\caption{(a) Linescan along the [100] in-plane direction, through the centre of the QR, showing the calculated hydrostatic (solid blue line) and biaxial (solid red line) components of the strain field for a GaSb\/GaAs QR having $a_{1} = 5$ nm, $a_{2} = 11.5$ nm and $h = 3$ nm. (b) Linescan along the [100] in-plane direction, through the centre of the QR, showing the calculated bulk band edge energies (band offsets) for the CB (solid blue line), and light-hole (LH; dashed green line) and heavy-hole (HH; solid red line) VBs, for the same QR as in (a). The dashed red line shows the energy of the highest energy bound hole state $h1$ -- i.e.~the intermediate band -- at energy $E_{\\protect\\scalebox{0.7}{\\textrm{IB}}}$, separated from the CB and VB edges by the sub-band gaps $E_{\\protect\\scalebox{0.7}{\\textrm{H}}}$ and $E_{\\protect\\scalebox{0.7}{\\textrm{L}}}$ respectively. (c) Side-on view, along the [010] direction, of the probability density associated with the lowest energy quasi-bound electron (green) and highest energy bound hole (orange) states in the same QR as in (a). (d) Top-down view, along the [001] direction, of the probability density associated with the same two states as in (c). (e) Lower energy sub-band gap $E_{\\protect\\scalebox{0.7}{\\textrm{L}}}$ as a function of QR height $h$ for GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs having inner and outer radii $a_{1} = 5$ nm and $a_{2} = 11.5$ nm, estimated based on Eqs.~\\eqref{eq:quantum_ring_transcendental} and~\\eqref{eq:quantum_ring_ground_state_energy}. 
(f) As in (e), but with the values of $E_{\\protect\\scalebox{0.7}{\\textrm{L}}}$ obtained from full numerical multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations.}\n \\label{fig:numerical}\n\\end{figure*}\n\n\n\nFigure~\\ref{fig:numerical}(b) shows a linescan of the band edge energies for the same GaSb\/GaAs QR, calculated using the strain fields shown (in part) in Fig.~\\ref{fig:numerical}(a). The band edge energy (confining potential) profiles are shown -- using solid blue, solid red and dashed green lines for the CB, HH VB and LH VB edges, respectively -- as a linescan along the [100] direction, through the centre of the QR. We firstly note the presence of large type-II band offsets in the QR. There exist strong confining potentials for holes within the GaSb region, reaching a maximum depth of $\\approx 820$ meV for HH states, while electrons are excluded from the GaSb region by a CB potential barrier having a maximum height of $\\approx 670$ meV. Due to the compressive strain in the GaSb region, the HH VB edge is pushed higher in energy than the LH VB edge, resulting in a larger confining potential for HH states and ensuring that the highest energy hole bound state -- i.e.~the IB in a hole-based GaAs$_{1-x}$Sb$_{x}$\/GaAs QR-IBSC -- is primarily HH-like. We therefore expect that hole states will be strongly confined within the GaSb region, experiencing strong annular localisation in the plane perpendicular to the [001] direction, as well as strong localisation along the [001] direction. We also note that the aforementioned minor tensile strain in the central barrier of the QR tends to reduce slightly the CB edge energy in the centre of the QR. 
This, combined with the presence of a confining potential for electrons in the plane perpendicular to the [001] direction, suggests the possibility of quasi-localised electron states residing in the central barrier region.\n\nThese expected trends in carrier localisation are verified by our calculated probability densities, shown in Figs.~\\ref{fig:numerical}(c) and~\\ref{fig:numerical}(d), which respectively show cross-sections through the centre of the QR of the calculated electron (green) and hole (orange) probability densities in the plane perpendicular to the [010] and [001] directions. We observe very strong localisation of the highest energy hole bound state within the QR, with only minimal penetration of the hole probability density into the surrounding GaAs material. We also note the presence of a resonant electron state within the central GaAs barrier of the QR, which our calculations identify as a consequence of the type-II band offsets (cf.~Fig.~\\ref{fig:numerical}(b)). While this electron state is strongly localised in the plane of the QR, it is less strongly localised along [001], due to the absence of a strongly confining CB potential along that direction.\n\nWe note that the quasi-localised electron state for which the probability density is shown in Figs.~\\ref{fig:numerical}(c) and~\\ref{fig:numerical}(d) was not the lowest energy electron state identified in our calculation: it was in fact the sixth lowest energy electron state. The type-II band alignment in this GaSb\/GaAs QR results in a large range of delocalised electron states lying at and above the GaAs CB edge in energy, which are characterised by their probability density being excluded from the central region of the QR in the plane of the QR. (The precise number of delocalised states appearing in a given energy range in a numerical calculation depends on a combination of the supercell dimensions and the size of the plane wave basis set employed.) 
The quasi-bound electron state identified in our analysis lies only 31 meV above the GaAs CB edge in energy and, given its higher spatial overlap with the highest energy bound hole state compared to the delocalised CB edge states described above, transitions involving this state can be expected to contribute significantly to the band edge optical absorption. From the perspective of IBSC design and optimisation, the presence of this unusual quasi-bound electron state is particularly appealing. By varying the QR morphology the in-plane confinement of this electron state can be engineered so as to control the spatial overlap with the highest energy bound hole state. This overlap governs a key trade-off in an IBSC, between (i) the generation rate associated with carriers occupying IB states (i.e.~the generation of holes in the VB via absorption of photons having energy $\\geq E_{\\scalebox{0.7}{\\textrm{L}}}$), and (ii) the radiative lifetime for recombination involving electron and hole states (i.e.~recombination of quasi-bound CB electrons with bound IB holes via emission of photons having energy $> E_{\\scalebox{0.7}{\\textrm{H}}}$). Furthermore, the weak electron localisation along [001] suggests that photo-generated electrons can be readily collected from the QRs in a real IBSC structure, with hole extraction proceeding via excitation of holes from the IB to the GaAs VB via TSPA. Finally, the large VB offsets in these QRs result in very large ionisation energies of $\\approx 0.5$ eV for IB hole states. 
This ionisation energy is far in excess of the thermal energy in the range of temperatures relevant to IBSC operation, and so electrical leakage of carriers via thermionic emission from the IB should be effectively suppressed in these heterostructures.\n\n\n\nFor this GaSb\/GaAs QR we calculate that the highest energy bound hole state -- the energy of which is denoted by a horizontal dashed red line in Fig.~\\ref{fig:numerical}(b) -- lies 502 meV above the GaAs barrier VB edge, corresponding to IBSC sub-band gaps $E_{\\scalebox{0.7}{\\textrm{L}}} = 0.502$ eV and $E_{\\scalebox{0.7}{\\textrm{H}}} = E_{g} (\\textrm{GaAs}) - E_{\\scalebox{0.7}{\\textrm{L}}} = 0.922$ eV at temperature $T = 300$ K. We note that these sub-band gaps are -- in addition to being in good quantitative agreement with experimental measurements \\cite{Wagener_JAP_2014} -- close to the optimum values associated with an IBSC implemented using a GaAs host matrix. Specifically, for GaAs (which has $E_{g} = 1.42$ eV at $T = 300$ K) the optimum sub-band gaps $E_{\\scalebox{0.7}{\\textrm{L}}} = 0.45$ eV and $E_{\\scalebox{0.7}{\\textrm{H}}} = 0.97$ eV correspond to a detailed balance efficiency limit of $\\approx 58$\\% under concentrated illumination, close to the overall efficiency limit for an ideal IBSC. However, for a given host matrix band gap $E_{g}$ the detailed balance efficiency is a sensitive function of the sub-band gaps $E_{\\scalebox{0.7}{\\textrm{L}}}$ and $E_{\\scalebox{0.7}{\\textrm{H}}}$, reducing rapidly as the sub-band gaps are detuned from their optimum values. \\cite{Wang_IETO_2014} It is therefore desirable to engineer the band structure such that the sub-band gaps are as close as possible to their optimum energies. 
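The complementarity of the two sub-band gaps, and their detuning from the optimum pair, amount to simple arithmetic, sketched below for transparency. The value $E_g = 1.424$ eV (GaAs at 300 K) is an assumption here, consistent with the quoted $1.42$ eV and with the quoted $E_{\textrm{H}} = 0.922$ eV.

```python
# Arithmetic check of the sub-band gap complementarity E_H = Eg - E_L
# for the exemplar GaSb/GaAs QR, and the detuning of (E_L, E_H) from
# the optimum pair (0.45, 0.97) eV for a GaAs host matrix.
e_g = 1.424   # assumed GaAs band gap at 300 K (eV)
e_l = 0.502   # calculated lower sub-band gap (eV)
e_h = e_g - e_l            # 0.922 eV, as quoted in the text

detune_l = e_l - 0.45      # +0.052 eV above the optimum E_L
detune_h = e_h - 0.97      # -0.048 eV below the optimum E_H
```

Both detunings are of order 50 meV, which motivates the morphology tuning discussed next.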
The calculated value of $E_{\\scalebox{0.6}{\\textrm{L}}}$ ($E_{\\scalebox{0.6}{\\textrm{H}}}$) for the exemplar GaSb\/GaAs QR considered above is only $\\approx 50$ meV higher (lower) than the optimum value, suggesting that minor changes in morphology will be sufficient to realise optimum sub-band gaps.\n\nIn order to identify optimised QR morphologies we therefore proceed by repeating our analysis as a function of (i) QR height $h$, and (ii) QR Sb composition $x$, for QRs having the same inner and outer radii $a_{1} = 5$ nm and $a_{2} = 11.5$ nm considered above. We note that we have chosen to vary the QR height rather than in-plane dimensions since (i) real GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs typically have low aspect ratios $\\frac{ h }{ 2a_{2} } \\approx 0.1$, (ii) the confinement energies in such low aspect ratio structures, and hence the sub-band gaps, are dominated by confinement along the [001] direction (cf.~Eq.~\\eqref{eq:quantum_ring_ground_state_energy}), and (iii) characterisation of epitaxially grown GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs has revealed significantly larger \\textit{relative} variations in $h$ than in $a_{1}$ or $a_{2}$. The results of these calculations are summarised in Figs.~\\ref{fig:numerical}(e) and~\\ref{fig:numerical}(f), which respectively show the sub-band gap energy $E_{\\scalebox{0.7}{\\textrm{L}}}$ calculated based on the analytical and numerical models. Results are shown for GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs having Sb compositions $x = 100$, 90, 80 and 70\\% using red, green, blue and orange closed circles, respectively.\n\nNote that since the analytical model assumes a confining potential of infinite depth, it is not capable of directly predicting $E_{\\scalebox{0.7}{\\textrm{L}}}$ (which is obtained from the full numerical calculations as the difference $E_{h1} - E_{\\scalebox{0.7}{\\textrm{VB}}}$ in energy between the highest energy bound hole state and host matrix VB edge). 
In order to estimate $E_{\\scalebox{0.7}{\\textrm{L}}}$ using the analytical model we have therefore (i) used Eqs.~\\eqref{eq:quantum_ring_transcendental} and~\\eqref{eq:quantum_ring_ground_state_energy} to calculate the confinement energy $\\Delta E_{h1}$ associated with the HH ground state in a QR of infinite potential depth, and (ii) used the (maximum) HH band offset $\\Delta E_{\\scalebox{0.7}{\\textrm{HH}}}$ extracted from a full numerical calculation to compute $E_{\\scalebox{0.7}{\\textrm{L}}} = \\Delta E_{\\scalebox{0.7}{\\textrm{HH}}} - \\Delta E_{h1}$. Due to its assumption of an infinitely deep confining potential, the analytical approach naturally overestimates $\\Delta E_{h1}$ -- in particular the contribution associated with confinement along [001] -- and hence underestimates $E_{\\scalebox{0.7}{\\textrm{L}}}$ for short QRs having $h \\lesssim 3$ nm. Nonetheless, we observe that the analytical model accurately captures the key trends observed in the results of the full numerical calculations, and provides reasonably good quantitative agreement with the results of the numerical calculations for $h \\gtrsim 3$ nm, across the full range of Sb compositions considered.\n\nExamining Fig.~\\ref{fig:numerical}(f), the results of our numerical calculations suggest that optimum IBSC sub-band gaps are obtained in short GaSb\/GaAs QRs having $h \\approx 2$ nm, which lies well within the range of heights observed in real structures. \\cite{Khan_EMC_2016} By reducing the QR Sb composition $x$ at fixed $h$ to form alloyed GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs, we find that $E_{\\scalebox{0.7}{\\textrm{L}}}$ increases by $\\approx 45$ -- 50 meV for each 10\\% reduction in $x$. Therefore, in QRs having reduced Sb composition -- due, e.g., to interfacial Sb-As intermixing \\cite{Timm_JVSTB_2008,Carrington_PB_2012} -- the QR height should be slightly increased in order to restore $E_{\\scalebox{0.7}{\\textrm{L}}}$ to its optimum energy. 
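For the pure-GaSb case the two-step estimate just described can be written out explicitly. In the sketch below (illustrative only), $\Delta E_{\textrm{HH}} = 0.82$ eV is the maximum HH offset quoted earlier for GaSb\/GaAs, the masses are those quoted from the Luttinger parameters, and $k_{10} \approx \pi/a$ is used for the radial term; solving $E_{\textrm{L}}(h) = 0.45$ eV for $h$ gives the analytical-model optimum height, which (consistent with the overestimated confinement for short QRs noted above) lies somewhat above the $h \approx 2$ nm favoured by the full numerical calculations.

```python
# Two-step analytical estimate E_L(h) = dE_HH - dE_h1(h) for a GaSb/GaAs
# QR with a1 = 5 nm, a2 = 11.5 nm, using the infinite-well confinement
# energy with k10 ~ pi/a. dE_HH = 0.82 eV is the quoted maximum HH offset.
import math

HBAR2_OVER_2M0 = 0.0381       # hbar^2 / (2 m0) in eV nm^2
m_perp, m_par = 0.044, 0.250  # in-plane and [001] HH masses (relative)
a = 11.5 - 5.0                # radial thickness (nm)
dE_HH = 0.82                  # maximum HH band offset (eV)

radial = HBAR2_OVER_2M0 * (math.pi / a) ** 2 / m_perp  # radial confinement (eV)

def e_l(h):
    # Analytical estimate of the lower sub-band gap at QR height h (nm).
    vertical = HBAR2_OVER_2M0 * (math.pi / h) ** 2 / m_par
    return dE_HH - radial - vertical

# Height at which the analytical E_L reaches the optimum 0.45 eV.
h_opt = math.pi * math.sqrt(HBAR2_OVER_2M0 / (m_par * (dE_HH - 0.45 - radial)))
```

Within this model $E_{\textrm{L}}$ increases monotonically with $h$, so a reduction in $\Delta E_{\textrm{HH}}$ (lower Sb composition) is compensated by a taller ring, exactly the trade-off described in the text.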
In practice, this constitutes reducing the confinement energy $\\Delta E_{h1}$ so as to maintain fixed hole ionisation energy in response to the reduction in $\\Delta E_{\\scalebox{0.7}{\\textrm{HH}}}$ associated with a reduction in $x$. For Sb compositions $x \\gtrsim 75$\\% our calculations indicate that an optimum sub-band gap $E_{\\scalebox{0.7}{\\textrm{L}}} = 0.45$ eV is maintained for an $\\approx 1$ nm increase in $h$ in response to each 10\\% reduction in $x$. For Sb compositions $x \\lesssim 75$\\% our calculations indicate that QR heights $h \\gtrsim 5$ nm are required to obtain optimised electronic properties. Such heights are outside the range typically obtained via epitaxial growth, suggesting that only epitaxial growth of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs having $x \\gtrsim 75$\\% is likely to produce heterostructures possessing electronic properties which are well-suited to IBSC applications.\n\nWe note, however, that our analysis of the electronic properties here has focused solely on idealised QR structures, possessing exact cylindrical shape and uniform alloy composition throughout the GaAs$_{1-x}$Sb$_{x}$ region. The presence of Sb-As intermixing at the ring-barrier interface in real GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs \\cite{Timm_JVSTB_2008,Carrington_PB_2012} results in a non-uniform alloy composition, which may modify the strain fields and confining potentials compared to those considered in our exploratory calculations here. Overall, we expect the resulting modifications of the electronic properties to be quantitative rather than qualitative in nature compared to those described above for ideal QRs. 
Of more importance from the perspective of designing real IBSC devices is to build upon our initial analysis here by undertaking theoretical investigations of the optical properties of, and radiative and non-radiative losses in, GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs and vertical QR stacks.\n\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\n\nIn summary, we have presented a theoretical analysis of the electronic properties of type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs based on a combined analytical and numerical approach, and identified optimised combinations of QR morphology and alloy composition for the realisation of hole-based IBSCs -- in which the IB is formed by the highest energy bound hole state in the QR -- offering the maximum theoretical efficiency available via inclusion of an IB in a GaAs matrix.\n\n\n\nAnalytically, we presented the solution of the time-independent Schr\\\"{o}dinger equation for a cylindrical QR of infinite potential depth and derived a transcendental equation which must be satisfied by bound QR eigenstates. The relationship to the solution of the well-known problem of the cylindrical QD was described, and it was demonstrated that (i) the QR eigenstates evolve smoothly from those of the QD, and (ii) the convergence properties of the QR ground state allow the confinement energy to be estimated straightforwardly, and to high accuracy, for realistic QRs having dimensions typical of those achieved via epitaxial growth. 
Our analytical analysis demonstrated that type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs offer significant benefits from the perspective of band structure engineering for IBSC applications, allowing the IB energy in a hole-based IBSC to be tuned across a broad range via changes in QR morphology.\n\nNumerically, we used multi-band \\textbf{k}$\\cdot$\\textbf{p} calculations -- including full strain and piezoelectric effects -- to analyse the electronic properties of GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs, both as a function of QR dimensions and Sb composition. We further demonstrated that the nature of the carrier confinement in these heterostructures is ideally suited to IBSC applications. Strong hole localisation, with large ionisation energies in excess of 0.4 eV, can be expected to mitigate carrier leakage from the IB via thermionic emission. Additionally, the interplay between strain relaxation and type-II band alignment in these QRs was demonstrated to give rise to electron states which, in the plane of the QR, are strongly localised in the central barrier of the QR. The unusual nature of the carrier localisation in these heterostructures suggests the potential to engineer the trade-off between the electron-hole overlap (which mediates carrier generation via optical absorption) and the radiative lifetime for photo-generated electron-hole pairs (which mediates carrier loss via radiative recombination).\n\nFor inner and outer radii $a_{1} = 5$ nm and $a_{2} = 11.5$ nm, typical of epitaxially grown QRs, our calculations indicate that an optimum IB sub-band gap $E_{\\scalebox{0.7}{\\textrm{L}}} = E_{\\scalebox{0.7}{\\textrm{IB}}} - E_{\\scalebox{0.7}{\\textrm{VB}}}$ ($E_{\\scalebox{0.7}{\\textrm{H}}} = E_{\\scalebox{0.7}{\\textrm{CB}}} - E_{\\scalebox{0.7}{\\textrm{IB}}}$) of 0.45 eV (0.97 eV) can be obtained in GaSb\/GaAs QRs having height $h \\approx 2$ nm. 
For GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs having reduced Sb compositions $x$, our calculations indicate that decreases in $x$ can be compensated by slight increases in $h$ -- by $\\approx 1$ nm for each 10\\% reduction in $x$ -- in order to maintain optimum sub-band gap energies. QRs grown to these specifications have a detailed balance efficiency limit of $\\approx 58$\\% under concentrated illumination, close to the overall limit of 63.8\\% for ideal IBSCs. Given the sensitivity of the theoretical IBSC efficiency to the IB energy, our analysis suggests that careful control of QR morphology provides a viable route to realising optimised heterostructures suitable for IBSC applications.\n\nOur initial calculations here, however, were guided by detailed balance efficiency limits which include a number of assumptions -- e.g.~optimum absorption spectra, infinite carrier mobilities, absence of non-radiative carrier losses, etc.~-- which are not reflective of the conditions in real quantum-confined heterostructures. 
Further theoretical work is therefore required to quantify, and identify pathways towards simultaneously optimising the optical properties and mitigating losses (both radiative and non-radiative) in real GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs and vertical QR stacks, in order to identify rigorously optimised heterostructures for IBSC applications and to quantify photovoltaic efficiencies that can be realistically achieved using this novel platform.\n\n\n\nOverall, our calculations highlight the suitability of type-II GaAs$_{1-x}$Sb$_{x}$\/GaAs QRs for applications in hole-based IBSCs, and provide initial information regarding optimised combinations of QR alloy composition and morphology to guide the growth and fabrication of prototype QR-IBSC devices.\n\n\n\n\\section*{Acknowledgements}\n\nThis work was supported by the European Commission via the Marie Sk\\l{}odowska-Curie Innovative Training Network PROMIS (project no.~641899), by the National University of Ireland (NUI; via the Post-Doctoral Fellowship in the Sciences, held by C.A.B.), and by Science Foundation Ireland (SFI; project no.~15\/IA\/3082). The authors thank Prof.~Anthony Krier and Dr.~Denise Montesdeoca (Lancaster University, U.K.) for useful discussions.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:intro}}\n\nIt is a classical result in Newtonian mechanics that (uncharged) $n$-body configurations cannot be in equilibrium for $n>1$ if the bodies are separated by a plane. Indeed, since the Newtonian gravitational force is always attractive, it is clear that separated bodies cannot be in balance. Interestingly, the situation might be rather different in the nonlinear theory of general relativity. If we consider \\emph{rotating} objects, then the effect of spin--spin repulsion might be able to compensate for the gravitational attraction. 
Therefore, in order to better understand the nature of the gravitational interaction in general relativity, it is an important question as to whether an equilibrium of (physically reasonable) rotating bodies is possible.\n\nProbably the simplest type of such an equilibrium configuration might be a configuration consisting of two aligned rotating black holes in vacuum, i.e.\\ a nonlinear superposition of two Kerr black holes. Especially since the discovery of a family of exact candidate solutions --- the \\emph{double-Kerr-NUT solution} \\cite{KramerNeugebauer1980,Neugebauer1980} --- such two-black-hole configurations have attracted great interest. It remained unclear, however, whether there is any choice of the parameters for which the double-Kerr-NUT solution does indeed describe regular spacetimes containing two black holes in equilibrium. Moreover, it was not a priori guaranteed that there could not be other candidate solutions that were not contained in this family. These questions were addressed in a series of papers \\cite{NeugebauerHennig2009,HennigNeugebauer2011,NeugebauerHennig2012,Chrusciel2011} with the following results. Firstly, by studying a boundary value problem for an asymptotically flat spacetime with two black hole event horizons, it turned out that any regular solution would necessarily be a member of the double-Kerr-NUT family. Secondly, it was shown that none of the candidate solutions can describe a regular equilibrium configuration, for at least one of the two black holes would necessarily violate a geometric inequality between horizon area and angular momentum \\cite{HennigAnsorgCederbaum2008}, which needs to be satisfied by physically reasonable black holes. 
Hence it turned out that stationary two-black-hole configurations in vacuum do not exist --- the spin--spin repulsion is not strong enough to compensate for the gravitational attraction.\n\nWhat happens if we study the more general situation of \\emph{electrovacuum} rather than vacuum solutions, i.e.\\ if we allow for electromagnetic fields and consider possible equilibrium configurations with charged bodies? In non-relativistic physics, it is easily possible to construct charged $n$-body configurations. One simply needs to choose sufficiently large charges such that the electromagnetic forces cancel the gravitational forces. In the context of \\emph{relativistic} two-black-hole configurations, however, there are upper limits for the charges (and rotation rates), since we otherwise obtain naked singularities instead of black holes. Hence it was not a priori guaranteed that there are any relativistic configurations where the combined spin--spin and charge--charge repulsions lead to stationary equilibrium.\n\nNevertheless, a class of static configurations is given by the well-known \\emph{Majumdar--Papapetrou solution} \\cite{Majumdar1947,Papapetrou1947}, which describes the superposition of an arbitrary number of extremal Reissner--Nordstr\\\"om black holes at arbitrary positions\\footnote{A remarkable extension of the Majumdar--Papapetrou solution in the cosmological setting (with cosmological constant $\\Lambda>0$) was constructed in \\cite{KastorTraschen1993}. This solution contains an arbitrary number of black holes at fixed coordinate positions. However, due to the cosmological expansion, the proper distances between the black holes vary, i.e.\\ it is not an equilibrium configuration in the context discussed here. \nAnother interesting configuration is the superpartner of the Majumdar--Papapetrou solution in $N=2$ supergravity, which was derived in \\cite{AichelburgEmbacher1986}. 
The black holes in this family of solutions carry angular momenta of a quantum mechanical nature, and the analysis of forces in \cite{KastorTraschen1999} shows that static configurations can exist due to a balance of the gauge and gravitational spin--spin forces.}.\n\nHence general relativity does evidently permit equilibrium states with charged black holes. \nHowever, this example requires \emph{extremal} black holes, i.e.\ degenerate horizons with vanishing surface gravity $\kappa$. According to the third law of black hole thermodynamics, however, it should not be possible to reduce $\kappa$ to zero by any finite sequence of operations. In line with this principle is a result by Thorne \cite{Thorne1974}, who studied a black hole that swallows matter and radiation from an accretion disk. It turned out that the accreting matter can spin the black hole up to a limiting state in which the ratio of the black hole's rotation parameter to its mass is about $0.998$ --- close to, but not quite at, the extremal limit of $1$. Hence, while extremal black holes are certainly mathematically valid solutions to Einstein's field equations, they should rather be considered as limiting configurations that cannot exactly be realised in nature. Therefore, the above example of an equilibrium configuration of charged extremal black holes is most likely an unphysical idealisation, and the question remains as to whether equilibrium states with more realistic \emph{non-extremal} black holes are possible.\n\nAs far as \emph{static} solutions are concerned, this was answered in the negative. It was shown by Chru\'sciel and Tod \cite{ChruscielTod2007} that every static solution to the electrovacuum Einstein--Maxwell equations with disconnected horizons (i.e.\ multiple black holes) can only contain degenerate horizons.
Moreover, such solutions are necessarily locally diffeomorphic to an open subset of a Majumdar--Papapetrou spacetime.\n\nThe problem is considerably more complicated in the case of \\emph{rotating} black holes, i.e.\\ non-static solutions, and it is currently not known whether physically reasonable equilibrium configurations do exist. Nevertheless, some families of exact candidate solutions were constructed \\cite{ChamorroMankoSibgatullin1993,MankoMartinRuiz1994}. These were obtained by assuming that the solution (in terms of the Ernst potentials $\\mathcal E$ and $\\Phi$, see below) does have particular boundary values $\\mathcal E_+(\\zeta)$ and $\\Phi_+(\\zeta)$ on the upper part of the symmetry axis (above both black holes) in terms of a cylindrical coordinate $\\zeta$. The chosen boundary values already determine the solution uniquely \\cite{HauserErnst1981}, and the explicit solution in the entire spacetime was calculated by applying a particular technique from soliton theory (`Sibgatullin's integral method' \\cite{Sibgatullin1984,MankoSibgatullin1993}).\nA plausible form of the axis data $\\mathcal E_+$ and $\\Phi_+$ was obtained by starting from the Kerr--Newman data of a single black hole with mass $M$, rotation parameter $a$, and charge $Q$,\n\\begin{equation}\n \\mathcal E_+(\\zeta)=1-\\frac{2M}{\\zeta+M-\\mathrm{i} a}\\equiv\\frac{\\zeta-M-\\mathrm{i} a}{\\zeta+M-\\mathrm{i} a},\\quad\n \\Phi_+(\\zeta)=\\frac{Q}{\\zeta+M-\\mathrm{i} a}\n\\end{equation}\nand including additional terms to describe a second black hole. In \\cite{ChamorroMankoSibgatullin1993}, boundary data of the form\n\\begin{equation}\\fl\\label{eq:bdata1}\n \\mathcal E_+(\\zeta)=1-\\frac{2M_1}{\\zeta+\\zeta_1-\\mathrm{i} a_1}-\\frac{2M_2}{\\zeta+\\zeta_2-\\mathrm{i} a_2},\\quad\n \\Phi_+(\\zeta)=\\frac{Q_1}{\\zeta+\\zeta_1-\\mathrm{i} a_1}+\\frac{Q_2}{\\zeta+\\zeta_2-\\mathrm{i} a_2}\n\\end{equation}\nwere considered. 
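As a quick consistency check of the Kerr--Newman building block quoted above, the two representations of $\mathcal E_+(\zeta)$ can be compared numerically; the plain-Python sketch below also verifies that $f=\Re(\mathcal E)+|\Phi|^2$ vanishes at the horizon endpoints $\zeta=\pm\sqrt{M^2-a^2-Q^2}$ (the standard Kerr--Newman endpoint location, an input of this sketch rather than a result of the paper; the parameter values are arbitrary test numbers).

```python
# Spot checks of the Kerr--Newman axis data: the two quoted forms of
# E_+(zeta) agree, and f = Re(E_+) + |Phi_+|^2 vanishes at the horizon
# endpoints zeta = +/- sqrt(M^2 - a^2 - Q^2). M, a, Q are sample values.
import math

M, a, Q = 2.0, 1.0, 0.5

def E_plus(zeta):
    return 1 - 2*M/(zeta + M - 1j*a)

def E_plus_alt(zeta):
    return (zeta - M - 1j*a)/(zeta + M - 1j*a)

def Phi_plus(zeta):
    return Q/(zeta + M - 1j*a)

# the two representations of E_+ coincide for generic zeta
for zeta in (-1.7, 0.3, 2.9):
    assert abs(E_plus(zeta) - E_plus_alt(zeta)) < 1e-14

# f = Re(E) + |Phi|^2 vanishes at the endpoints of the horizon interval
zeta_h = math.sqrt(M**2 - a**2 - Q**2)
for zeta in (zeta_h, -zeta_h):
    f = E_plus(zeta).real + abs(Phi_plus(zeta))**2
    assert abs(f) < 1e-14
```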
On the other hand, the solution in \\cite{MankoMartinRuiz1994} was constructed from the data\n\\begin{eqnarray}\n \\label{eq:bdata2a}\n \\mathcal E_+(\\zeta) &=&\\frac{(\\zeta+\\zeta_1-M_1-\\mathrm{i} a_1)(\\zeta+\\zeta_2-M_2-\\mathrm{i} a_2)}\n {(\\zeta+\\zeta_1+M_1-\\mathrm{i} a_1)(\\zeta+\\zeta_2+M_2-\\mathrm{i} a_2)},\\\\\n \\label{eq:bdata2b}\n \\Phi_+(\\zeta)&=&\\frac{Q_1(\\zeta+\\zeta_2-\\mathrm{i} a_2)+Q_2(\\zeta+\\zeta_1-\\mathrm{i} a_1)}\n {(\\zeta+\\zeta_1+M_1-\\mathrm{i} a_1)(\\zeta+\\zeta_2+M_2-\\mathrm{i} a_2)}.\n\\end{eqnarray}\nIn both cases, the boundary values of the potentials are rational functions of $\\zeta$ that depend on a number of free parameters.\n\nIn order to decide whether the candidate solutions \\cite{ChamorroMankoSibgatullin1993,MankoMartinRuiz1994} contain any physically reasonable equilibrium configurations, it needs to be studied whether the parameters can be chosen such that all of the following requirements are satisfied:\n\\begin{enumerate}\n \\item the solutions have a vanishing NUT parameter (corresponding to the appropriate behaviour at infinity in an asymptotically flat spacetime),\n \\item there are no conical singularities on the symmetry axes, in particular, between the two black holes (which would correspond to `struts' that keep the two black holes apart),\n \\item the norm of the axial Killing vector vanishes on the symmetry axis, \n \\item there is no global magnetic charge,\n \\item the solutions are free of singularities off the symmetry axis.\n\\end{enumerate}\nUsing the conditions (i)--(iv), one can write down an algebraic system of equations for the parameters that ensures the correct behaviour at infinity and on the axis. Unfortunately, the equations are rather involved, which makes it very difficult to decide whether there are parameter values satisfying those conditions. 
However, even if the correct behaviour on the axis is obtained in a subset of the parameter space, the solutions would likely violate condition (v), and it is probably even more difficult to check regularity off the axis. Hence it is not clear whether the solution families \\cite{ChamorroMankoSibgatullin1993} and \\cite{MankoMartinRuiz1994} contain any physically acceptable equilibrium configurations.\n\nMoreover, the question remains as to whether the boundary data \\eref{eq:bdata1} or \\eref{eq:bdata2a} and \\eref{eq:bdata2b} do contain the axis potentials for actual equilibrium configurations with non-extremal rotating and charged black holes, if any exist, or whether data of some other form need to be considered. This is exactly the problem that we address in this paper. Generalising the considerations for one black hole in vacuum \\cite{Neugebauer2000} (which leads to a constructive uniqueness proof of the Kerr solution), two black holes in vacuum \\cite{NeugebauerMeinel,NeugebauerHennig2009,HennigNeugebauer2011,NeugebauerHennig2012}, or a single black hole in electrovacuum \\cite{Meinel2012} (which extends the constructive uniqueness proof to the Kerr--Newman solution), we study a boundary value problem for two aligned charged and rotating black holes. As a result, we will obtain the most general form of the axis potentials $\\mathcal E_+$ and $\\Phi_+$.\n\nThese considerations crucially rely on the fact that the Einstein--Maxwell equations in electrovacuum for axisymmetric and stationary spacetimes can be reformulated in terms of a linear matrix problem, as a consequence of which methods from soliton theory are applicable. Note that closely related techniques also work in the context of Gowdy-symmetric cosmological models (which have two spacelike Killing vectors rather than a spacelike and a timelike Killing vector) \\cite{HennigAnsorg2010,BeyerHennig2012,Hennig2016b,Hennig2019}. 
Yet another type of application is the investigation of the interior region of axisymmetric and stationary black holes \cite{AnsorgHennig2009,HennigAnsorg2009}.\n\nThis paper is organised as follows. In Sec.~\ref{sec:fieldeqns}, we recapitulate the Ernst formulation of the Einstein--Maxwell equations and the associated linear problem. Then, in Sec.~\ref{sec:integration}, we integrate the linear problem along the black hole horizons, along the symmetry axis and at infinity. This will eventually allow us to obtain the general form of the axis data. Finally, in Sec.~\ref{sec:discussion}, we summarise our results.\n\n\section{Field equations\label{sec:fieldeqns}}\n\subsection{Ernst formulation}\n\nWe describe the exterior electrovacuum region of an axisymmetric and stationary spacetime containing two aligned rotating and charged black holes with Weyl--Lewis--Papapetrou coordinates $(\rho,\zeta,\varphi,t)$. The line element can be written in the standard form\n\begin{equation}\label{eq:metric}\n \mathrm{d} s^2=f^{-1}\n \left[\mathrm{e}^{2k}(\mathrm{d}\rho^2+\mathrm{d}\zeta^2)+\rho^2\,\mathrm{d}\varphi^2\right]\n -f(\mathrm{d} t+a\,\mathrm{d}\varphi)^2,\n\end{equation}\nwhere the three metric functions $f$, $k$ and $a$ are functions of $\rho$ and $\zeta$ alone. The electromagnetic field can be given in terms of an electromagnetic 4-potential of the form $(A_\mu)=[0,0,A_\varphi(\rho,\zeta),A_t(\rho,\zeta)]$.\n\nIt is well-known that the corresponding Einstein--Maxwell equations can be written in a very elegant and concise form if we express the metric functions and electromagnetic potential in terms of the two corresponding complex Ernst potentials $\mathcal E(\rho,\zeta)$ and $\Phi(\rho,\zeta)$ \cite{Ernst1968b}.
The resulting \\emph{Ernst equations} read\n\\begin{equation}\\label{eq:Ernst}\n f\\Delta\\mathcal E = (\\nabla\\mathcal E+2\\bar\\Phi\\nabla\\Phi)\\cdot\\nabla\\mathcal E,\\quad\n f\\Delta\\Phi = (\\nabla\\mathcal E+2\\bar\\Phi\\nabla\\Phi)\\cdot\\nabla\\Phi,\n\\end{equation}\nwhere $\\Delta$ and $\\nabla$ refer to the Laplace and nabla operators in flat cylindrical coordinates $(\\rho,\\zeta,\\varphi)$, respectively, and a bar denotes complex conjugation. Note that the metric function $f$ is related to the Ernst potentials as follows,\n\\begin{equation}\n f=\\Re(\\mathcal E)+|\\Phi|^2.\n\\end{equation}\nHence, if we define $b=\\Im(\\mathcal E)$, then the Ernst potential $\\mathcal E$ can be expressed as\n\\begin{equation}\\label{eq:b}\n \\mathcal E=f-|\\Phi|^2+\\mathrm{i} b.\n\\end{equation}\nMore details about the Ernst formulation of the field equations and the relation to the metric and electromagnetic functions can be found in \\cite{Ernst1968b,Stephani}.\n\nIn our coordinates, the event horizon of a black hole is necessarily located at $\\rho=0$ and corresponds to an interval on the $\\zeta$-axis.\n\\begin{figure}\\centering\n \\includegraphics[width=4.5cm]{IntegrationPathX.pdf}\n \\caption{In Weyl--Lewis--Papapetrou coordinates, the event horizons ${\\mathcal H_1}$ and ${\\mathcal H_2}$ are located on the $\\zeta$-axis. The symmetry axis has the three parts $\\mathcal A_+$, $\\mathcal A_0$ and $\\mathcal A_-$. In Sec.~\\ref{sec:integration} below we will integrate the linear problem, which is equivalent to the field equations, along the dashed path. The part ${\\mathcal C}$ of this path is an infinitely large semicircle.}\n \\label{fig:IntegrationPath}\n\\end{figure}\nThe present situation of a (candidate) spacetime with two black holes is sketched in Fig.~\\ref{fig:IntegrationPath}. 
We denote the endpoints of the horizons by $A$, $B$, and $C$, $D$, and the corresponding $\\zeta$-intervals by $[K_B,K_A]$ and $[K_D,K_C]$, respectively.\n\n\\subsection{The linear problem}\nIt is a most remarkable property of the Ernst equations \\eref{eq:Ernst} that they belong to the class of integrable partial differential equations. They are equivalent to an associated \\emph{linear} matrix problem, and techniques from soliton theory, like the inverse scattering method, can be used to study properties of the solutions and to construct exact solutions.\n\nA linear problem (LP) for the electrovacuum Ernst equations was first found by Belinski \\cite{Belinski1979}, and a modified version was constructed by Neugebauer and Kramer \\cite{NeugebauerKramer}. Here we will use a minor reformulation of Neugebauer and Kramer's LP, which is due to Meinel \\cite{Meinel2012}. In order to state the LP, we first define the complex coordinates\n\\begin{equation}\n z=\\rho+\\mathrm{i}\\zeta,\\quad\\textrm{and}\\quad \\bar z=\\rho-\\mathrm{i}\\zeta,\n\\end{equation}\nand the function\n\\begin{equation}\\label{eq:lambda}\n \\lambda=\\sqrt{\\frac{K-\\mathrm{i}\\bar z}{K+\\mathrm{i} z}},\n\\end{equation}\nwhich depends on the complex coordinates and on an important additional degree of freedom, the \\emph{spectral parameter} $K$. 
\nDue to the square root, the complex function $\lambda$ is defined on a two-sheeted Riemannian $K$-surface with branch points at $K_1=\mathrm{i}\bar z$ and $K_2=-\mathrm{i} z$.\n\nThe LP is a system of equations for a $3\times 3$ matrix function ${\mathbf Y}={\mathbf Y}(\rho,\zeta;K)$ and reads\n\begin{eqnarray}\n\label{eq:LP1}\n {\mathbf Y}_{,z} &=& \left[\n \left(\begin{array}{ccc}\n B_1 & 0 & C_1\\ 0 & A_1 & 0\\ D_1 & 0 & 0\n \end{array}\right)\n +\lambda\n \left(\begin{array}{ccc}\n 0 & B_1 & 0\\ A_1 & 0 & -C_1\\ 0 & D_1 & 0\n \end{array}\right)\right]{\mathbf Y},\\\n \label{eq:LP2}\n {\mathbf Y}_{,\bar z} &=& \left[\n \left(\begin{array}{ccc}\n B_2 & 0 & C_2\\ 0 & A_2 & 0\\ D_2 & 0 & 0\n \end{array}\right)\n +\frac{1}{\lambda}\n \left(\begin{array}{ccc}\n 0 & B_2 & 0\\ A_2 & 0 & -C_2\\ 0 & D_2 & 0\n \end{array}\right)\right]{\mathbf Y}.\n\end{eqnarray}\nThe matrix elements are given in terms of the Ernst potentials by\n\begin{eqnarray}\n A_1 &= \bar B_2 = \frac{1}{2f}(\mathcal E_{,z}+2\bar\Phi\Phi_{,z}),\quad\n C_1 &= f\bar D_2 = \Phi_{,z},\\\n A_2 &= \bar B_1 = \frac{1}{2f}(\mathcal E_{,\bar z}+2\bar\Phi\Phi_{,\bar z}),\quad\n C_2 &= f\bar D_1 = \Phi_{,\bar z}.\n\end{eqnarray}\nNote that integrability of the LP \eref{eq:LP1}, \eref{eq:LP2} is ensured by virtue of the Ernst equations \eref{eq:Ernst}, since the integrability condition ${\mathbf Y}_{,z\bar z}={\mathbf Y}_{,\bar z z}$ turns out to be equivalent to the Ernst equations.\n\nSince the LP contains $\lambda$, the matrix function ${\mathbf Y}$ will in general also take on different values on the two Riemannian $K$-sheets. Only at the branch points are the function values unique.
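The behaviour of $\lambda$ just described can be illustrated numerically; the following plain-Python sketch (all sample values are arbitrary) checks that $\lambda=1$ on one sheet for $\rho=0$, that $\lambda$ takes the unique value $0$ at the branch point $K_1=\mathrm{i}\bar z$, and that $\lambda\to\mathrm{e}^{\mathrm{i}\alpha}$ on a large semicircle, a limit that is also used later for the path ${\mathcal C}$.

```python
# Numerical illustration of lambda(K) = sqrt((K - i*conj(z))/(K + i*z)),
# z = rho + i*zeta, from the text. Sample values are arbitrary.
import cmath
import math

def lam(rho, zeta, K):
    z = rho + 1j*zeta
    return cmath.sqrt((K - 1j*z.conjugate())/(K + 1j*z))

# on the axis rho = 0 the square root's argument is 1, so lambda = +1
# on one sheet (and -1 on the other), for every K:
assert abs(lam(0.0, 0.7, 2.3 + 0.4j) - 1.0) < 1e-14

# at the branch point K_1 = i*conj(z) both sheets give the unique value 0:
z = 1.2 + 0.5j
assert abs(lam(1.2, 0.5, 1j*z.conjugate())) < 1e-14

# on a large semicircle rho = R*sin(alpha), zeta = R*cos(alpha),
# lambda approaches exp(i*alpha) as R -> infinity:
R, alpha = 1e9, 0.9
assert abs(lam(R*math.sin(alpha), R*math.cos(alpha), 1.3) - cmath.exp(1j*alpha)) < 1e-6
```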
If some function ${\mathbf Y}$ solves the LP on one sheet, then one can show that a particular solution on the other sheet is given by ${\mathbf J}{\mathbf Y}$ with\n\begin{equation}\n {\mathbf J}=\mathrm{diag}(1,-1,1).\n\end{equation}\nThe general solution on the other sheet can be obtained by multiplying ${\mathbf J}{\mathbf Y}$ on the right by a matrix that depends on $K$ only. However, only for a particular choice of this matrix do we obtain a solution that correctly connects to the solution on the first sheet through the branch cut. Hence the solutions on the two sheets are related via\n\begin{equation}\label{eq:sheets}\n {\mathbf Y}|_{-\lambda}={\mathbf J}{\mathbf Y}|_{\lambda}{\mathbf B}(K)\n\end{equation}\nfor some $3\times 3$ matrix ${\mathbf B}$. It is possible to impose certain gauge conditions which enforce that ${\mathbf B}$ takes on a particular form. For example, the matrix obtained with conditions used in \cite{Meinel2012} is given by ${\mathbf B}=\small\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)$. Here, however, we will demonstrate that the discussion can easily be done in full generality, i.e.\ without referring to any particular gauge. Indeed, the final physical results will, of course, be independent of any gauge choice. The only property of the matrix ${\mathbf B}$ that we will later use is that\n\begin{equation}\label{eq:B2}\n {\mathbf B}^2={\mathbf 1},\n\end{equation}\nwhere ${\mathbf 1}$ is the $3\times 3$ identity matrix. This immediately follows by applying the transformation \eref{eq:sheets} twice and using the fact that this must lead back to the original solution ${\mathbf Y}$.\n\nAn important ingredient of our construction of two-black-hole solutions is to study the LP not only in the coordinates introduced above, but also in certain rotating frames of reference.
They can be introduced by a simple transformation of the coordinate $\varphi$,\n\begin{equation}\n \tilde\varphi=\varphi-\Omega t,\n\end{equation}\nwhere $\Omega$ is the angular velocity of the rotating frame. The other coordinates $\rho$, $\zeta$ and $t$ remain unchanged. In the following, we will consider the two particular frames that are co-rotating with either the first or the second black hole. If we denote the angular velocities of the two black holes by $\Omega_1$ and $\Omega_2$, then these frames correspond to choosing $\Omega=\Omega_{1/2}$. Note that we always assume rotating black holes with $\Omega_1\neq0$ and $\Omega_2\neq0$.\n\nFortunately, we do not need to solve the LP in both the original and the rotating frame in order to obtain the two solutions ${\mathbf Y}$ and $\tilde{\mathbf Y}$. Instead, there is a simple relation between the two solutions. The corresponding transformation in the vacuum case was given in \cite{NeugebauerMeinel}, and the generalisation to electrovacuum was presented in \cite{AnsorgHennig2009,HennigAnsorg2009}. In the present formulation of the LP, it reads \cite{Meinel2012}\n\begin{equation}\fl\label{eq:rotframe}\n \tilde{\mathbf Y}(\rho,\zeta; K)=\left[\n \left(\begin{array}{ccc}\n c_- & 0 & 0\\\n 0 & c_+ & 0\\\n 0 & 0 & 1\n \end{array}\right)\n +\mathrm{i}(K+\mathrm{i} z)\frac{\Omega}{f}\n \left(\begin{array}{ccc}\n -1 & -\lambda & 0\\\n \lambda & 1 & 0\\\n 0 & 0 & 0\n \end{array}\right)\n\right]{\mathbf Y}(\rho,\zeta;K),\n\end{equation}\nwhere\n\begin{equation}\n c_{\pm}=1+\Omega\left(a\pm\frac{\rho}{f}\right).\n\end{equation}\nThe reason why additional coordinate systems actually add new information is that the above transformation formula depends on the metric function $a$, which takes on specific boundary values at the symmetry axis and horizons (see next section).
Considering not only ${\mathbf Y}$ but also $\tilde{\mathbf Y}$ does therefore incorporate these boundary conditions into our calculations.\n\n\n\section{Integration of the linear problem\label{sec:integration}}\n\n\subsection{Solutions on the axis parts and horizons}\n\nSimilarly to the study of two-black-hole configurations in vacuum and the other applications of the LP mentioned in the introduction, we intend to integrate the LP along the boundaries of the physical domain. The integration path consists of the event horizons ${\mathcal H_1}$ and ${\mathcal H_2}$ of the two black holes, the three parts $\mathcal A_+$, $\mathcal A_0$ and $\mathcal A_-$ of the symmetry axis, and a semicircle ${\mathcal C}$ in the limit of an infinite radius, cf.\ Fig.~\ref{fig:IntegrationPath}.\n\nIn the following discussion, we will make use of the well-known boundary values for the metric and Ernst potentials at symmetry axes and black hole (Killing) horizons in Weyl--Lewis--Papapetrou coordinates, as well as the behaviour at infinity,\n\begin{eqnarray}\n \mathcal A_+,\ \mathcal A_0,\ \mathcal A_-:\quad && a=0,\label{eq:condA}\\\n {\mathcal H_1}: && a=-\frac{1}{\Omega_1},\label{eq:condH1}\\\n {\mathcal H_2}: && a=-\frac{1}{\Omega_2},\label{eq:condH2}\\\n A,\ B,\ C,\ D: && f=0,\label{eq:condf}\\\n {\mathcal C}: && \mathcal E\to 1,\quad \Phi\to 0.\label{eq:condC}\n\end{eqnarray}\nHere, $A$, $B$, $C$, $D$ refer to the endpoints of the horizons, see Fig.~\ref{fig:IntegrationPath}.\n\nFirstly, we consider the LP anywhere on the $\zeta$-axis, i.e.\ at $\rho=0$. According to \eref{eq:lambda}, the function $\lambda$ simplifies to $\lambda=\pm1$ on the two Riemannian sheets. As a consequence, the LP \eref{eq:LP1}, \eref{eq:LP2} also becomes particularly simple and reduces to an ODE. The general solution can easily be derived.
In the sheet with $\\lambda=1$ it reads\n\\begin{equation}\n {\\mathbf Y}(0,\\zeta; K)={\\mathbf E}(\\zeta){\\mathbf C}(K),\\quad\n {\\mathbf E}:=\\left(\\begin{array}{ccc}\n \\bar\\mathcal E+2|\\Phi|^2 & 1 & \\Phi\\\\\n \\mathcal E & -1 & -\\Phi\\\\\n 2\\bar\\Phi & 0 & 1\n \\end{array}\\right).\n\\end{equation}\nHence ${\\mathbf Y}$ depends on the boundary values of the Ernst potentials and on a $K$-dependent `integration constant', a $3\\times 3$ matrix ${\\mathbf C}$. The solution in the other sheet with $\\lambda=-1$ is readily obtained from \\eref{eq:sheets}.\n\nUsing \\eref{eq:rotframe}, we can also construct the solution in the frame that rotates with angular velocity $\\Omega$. The result is\n\\begin{equation}\\fl\n \\tilde{\\mathbf Y} = \\left[\\left(\\begin{array}{ccc}\n 1+\\Omega a & 0 & 0\\\\\n 0 & 1+\\Omega a & 0\\\\\n 0 & 0 & 1\n \\end{array}\\right){\\mathbf E}\n +2\\mathrm{i}\\Omega(K-\\zeta)\\left(\\begin{array}{ccc}\n -1 & 0 & 0\\\\\n 1 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{array}\\right)\\right]{\\mathbf C}.\n\\end{equation}\nThis expression simplifies further if we specialise to the symmetry axis or horizons, using the boundary conditions \\eref{eq:condA}, \\eref{eq:condH1}, \\eref{eq:condH2}, where we consider the co-rotating frames with $\\Omega=\\Omega_1$ or $\\Omega=\\Omega_2$.\n\nNow we can write down the expressions for ${\\mathbf Y}$ and $\\tilde{\\mathbf Y}$ on the three parts of the symmetry axis and on the two horizons. 
In terms of $K$-dependent $3\times 3$ matrices ${\mathbf C}_+$, ${\mathbf C}_-$, ${\mathbf C}_0$, ${\mathbf C}_1$ and ${\mathbf C}_2$, we have for $\lambda=+1$,\n\begin{eqnarray}\n \label{eq:ApY}\n \mathcal A_+:\quad && {\mathbf Y}={\mathbf E}{\mathbf C}_+,\\\n \label{eq:ApYs}\n && \tilde{\mathbf Y}=\left[{\mathbf E}+2\mathrm{i}\Omega_{1}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_+,\\[1.5ex]\n \mathcal A_0:\quad && {\mathbf Y}={\mathbf E}{\mathbf C}_0,\\\n && \tilde{\mathbf Y}=\left[{\mathbf E}+2\mathrm{i}\Omega_{1/2}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_0,\\[1.5ex]\n \label{eq:AmY}\n \mathcal A_-:\quad && {\mathbf Y}={\mathbf E}{\mathbf C}_-,\\\n && \tilde{\mathbf Y}=\left[{\mathbf E}+2\mathrm{i}\Omega_{2}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_-,\\[1.5ex]\n \label{eq:H1Y}\n {\mathcal H_1}:\quad && {\mathbf Y}={\mathbf E}{\mathbf C}_1,\\\n \label{eq:H1Ys}\n && \tilde{\mathbf Y}=\left[\left(\begin{array}{ccc}\n 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\n \end{array}\right)\n{\mathbf E}+2\mathrm{i}\Omega_{1}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_1,\\[1.5ex]\n{\mathcal H_2}:\quad && {\mathbf Y}={\mathbf E}{\mathbf C}_2,\\\n && \tilde{\mathbf Y}=\left[\left(\begin{array}{ccc}\n 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\n \end{array}\right)\n{\mathbf E}+2\mathrm{i}\Omega_{2}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_2.
\n\\end{eqnarray}\nNote that we consider \\emph{both} co-rotating frames with $\\Omega=\\Omega_1$ and $\\Omega=\\Omega_2$ on the axis part $\\mathcal A_0$, but otherwise only that co-rotating frame with the angular velocity of the nearest horizon. Again, the expressions in the Riemannian sheet with $\\lambda=-1$ can be obtained from the above equations using \\eref{eq:sheets}.\n\nSecondly, we consider the LP on the infinitely large semicircle ${\\mathcal C}$. A semicircle with finite radius $R$ can be parametrised by $\\rho=R\\sin\\alpha$, $\\zeta=R\\cos\\alpha$, $0\\le\\alpha\\le\\pi$. On this semicircle, the function $\\lambda$ becomes\n\\begin{equation}\n \\lambda=\\sqrt{\\frac{K-\\mathrm{i} R(\\sin\\alpha-\\mathrm{i}\\cos\\alpha)}{K+\\mathrm{i} R(\\sin\\alpha+\\mathrm{i}\\cos\\alpha)}},\n\\end{equation}\nwhich simplifies to $\\lambda=\\pm\\mathrm{e}^{\\mathrm{i}\\alpha}$ in the limit $R\\to\\infty$. Hence, if we start on $\\mathcal A_+$ with $\\lambda=+1$, then the semicircle ${\\mathcal C}$ leads us to the sheet on $\\mathcal A_-$ with $\\lambda=-1$, and vice versa. This will be important later, when we continuously connect the solutions on the various parts of the boundary. Then we need to compare the $\\lambda=1$ solution on $\\mathcal A_+$ with the $\\lambda=-1$ solution on $\\mathcal A_-$.\n\nIf we consider the LP on $\\mathcal C$, using the asymptotic behaviour \\eref{eq:condC} of the Ernst potentials, we simply obtain ${\\mathbf Y}_{,z}={\\mathbf 0}$ and ${\\mathbf Y}_{,\\bar z}={\\mathbf 0}$. Therefore, ${\\mathbf Y}$ is constant on ${\\mathcal C}$. 
(More precisely, the Ernst potentials in an asymptotically flat spacetime approach their constant limits at infinity at a rate for which the coefficient matrices on the right-hand side of the LP are of $\mathcal O (R^{-2})$ on a semicircle with coordinate radius $R$, while the length of the semicircle increases only in proportion to $R$ as $R\to\infty$.)\n\nNext, we take a closer look at the various ${\mathbf C}$-matrices that appear in the solution to the LP on the different parts of the boundary. These matrices cannot be chosen independently of each other. Instead, we have to ensure that the solutions ${\mathbf Y}$ and $\tilde{\mathbf Y}$ are continuous at the points $A$, $B$, $C$, $D$, cf.\ Fig.~\ref{fig:IntegrationPath}, and that the solutions on $\mathcal A_+$ and $\mathcal A_-$ are correctly connected via ${\mathcal C}$ as discussed above.\n\nWe start by considering continuity of ${\mathbf Y}$ at point $A$. For $\lambda=1$, using \eref{eq:ApY} and \eref{eq:H1Y}, we obtain the condition\n\begin{equation}\label{eq:Acont}\n {\mathbf E}{\mathbf C}_+={\mathbf E}{\mathbf C}_1\quad\textrm{at}\quad \rho=0,\ \zeta=K_A.\n\end{equation}\nThe same condition also ensures continuity of ${\mathbf Y}$ in the sheet $\lambda=-1$. Note that the nine components of the matrix condition \eref{eq:Acont} are not independent. Instead, the second row is the negative of the first row.
Hence we will only use the conditions from the second and third rows.\n\nSimilarly, considering continuity of $\tilde{\mathbf Y}$ in the frame with $\Omega=\Omega_1$, we obtain the condition (for $\lambda=1$)\n\begin{eqnarray}\n \fl\n \left[{\mathbf E}+2\mathrm{i}\Omega_{1}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_+\n \nonumber\\\n \fl\n \quad=\left[\left(\begin{array}{ccc}\n 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\n \end{array}\right)\n{\mathbf E}+2\mathrm{i}\Omega_{1}(K-\zeta)\left(\begin{array}{ccc}\n -1 & 0 & 0\\\n 1 & 0 & 0\\\n 0 & 0 & 0\n \end{array}\right)\right]{\mathbf C}_1\n \quad\textrm{at}\quad \rho=0,\ \zeta=K_A.\n\end{eqnarray}\nAgain, the nine components are not independent: the second row is the negative of the first row. Moreover, the third row is identical to the third row of \eref{eq:Acont}. Hence we obtain one new row of conditions.\n\nThe three rows of independent conditions can be combined as follows,\n\begin{equation}\fl\n \left(\begin{array}{ccc}\n \mathcal E_A & -1 & -\Phi_A\\\n \mathcal E_A+2\mathrm{i}\Omega_1(K-K_A) & -1 & -\Phi_A\\\n 2\bar\Phi_A & 0 & 1\n \end{array}\right){\mathbf C}_+\n =\left(\begin{array}{ccc}\n \mathcal E_A & -1 & -\Phi_A\\\n 2\mathrm{i}\Omega_1(K-K_A) & 0 & 0\\\n 2\bar\Phi_A & 0 & 1\n \end{array}\right){\mathbf C}_1,\n\end{equation}\nwhere, here and in the following, subscripts $A$, $B$, $C$, $D$ refer to evaluation of function values at the indicated points, i.e.\ at $\rho=0$ and $\zeta=K_A,\ K_B,\ K_C$, or $K_D$, respectively.
Solving for ${\\mathbf C}_1$, we can also write this condition in the form\n\\begin{equation}\\label{eq:condC1}\n {\\mathbf C}_1=\\left({\\mathbf 1}+\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right){\\mathbf C}_+,\n\\end{equation}\nwhere\n\\begin{equation}\n \\alpha_A:=2\\mathrm{i}\\Omega_1(K-K_A)\n\\end{equation}\nand\n\\begin{equation}\\label{eq:MA}\n {\\mathbf M}_A:=m_A n_A^T,\\quad\n m_A:=\\left(\\begin{array}{c}\n -1\\\\ \\bar\\mathcal E_A\\\\ 2\\bar\\Phi_A\n \\end{array}\\right),\\quad\n n_A:=\\left(\\begin{array}{c}\n -\\mathcal E_A\\\\ 1\\\\ \\Phi_A\n \\end{array}\\right).\n\\end{equation}\nNote that $n_A\\cdot m_A\\equiv n_A^T m_A=0$ [cf.~\\eref{eq:condf}], which implies ${\\mathbf M}_A^2={\\mathbf 0}$. As a consequence, matrices of the form ${\\mathbf 1}+c{\\mathbf M}_A$ can easily be inverted, and we have\n$({\\mathbf 1}+c{\\mathbf M}_A)^{-1}={\\mathbf 1}-c{\\mathbf M}_A$.\n\nRepeating the above calculations at the points $B$, $C$, and $D$, we obtain the additional conditions\n\\begin{eqnarray}\n \\label{eq:condC0}\n {\\mathbf C}_0 &=& \\left({\\mathbf 1}-\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right){\\mathbf C}_1,\n \\quad \\alpha_B:=2\\mathrm{i}\\Omega_1(K-K_B),\\\\\n \\label{eq:condC2}\n {\\mathbf C}_2 &=& \\left({\\mathbf 1}+\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right){\\mathbf C}_0,\n \\quad \\alpha_C:=2\\mathrm{i}\\Omega_2(K-K_C),\\\\\n \\label{eq:condCm}\n {\\mathbf C}_- &=& \\left({\\mathbf 1}-\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right){\\mathbf C}_2,\n \\quad \\alpha_D:=2\\mathrm{i}\\Omega_2(K-K_D),\n\\end{eqnarray}\nwhere the matrices ${\\mathbf M}_B$, ${\\mathbf M}_C$, ${\\mathbf M}_D$ are defined as in \\eref{eq:MA}, but with the Ernst potentials evaluated at the points $B$, $C$, or $D$, respectively.\n\nIf we would know the exact form of the matrix ${\\mathbf C}_+$, then the above conditions would allow us to compute all of the remaining ${\\mathbf C}$-matrices. 
This would indeed be possible if we imposed suitable gauge conditions for the LP, see \\cite{Meinel2012}. Here, however, as mentioned before, we intend to demonstrate that the final results can easily be obtained without any particular gauge.\n\nFinally, we consider the transition from the solution on $\\mathcal A_+$ to $\\mathcal A_-$ via ${\\mathcal C}$. Based on our earlier discussion, we arrive at the condition \n\\begin{equation}\n \\lim_{\\zeta\\to\\infty}{\\mathbf Y}(0,\\zeta;K)|_{\\lambda=1}\n =\\lim_{\\zeta\\to-\\infty}{\\mathbf Y}(0,\\zeta;K)|_{\\lambda=-1}.\n\\end{equation}\nWith the explicit solutions \\eref{eq:ApY} and \\eref{eq:AmY}, together with \\eref{eq:sheets} and the asymptotic values \\eref{eq:condC}, the previous equation becomes\n\\begin{equation}\n \\left(\\begin{array}{ccc}\n 1 & 1 & 0\\\\\n 1 &-1 & 0\\\\\n 0 & 0 & 1\n \\end{array}\\right){\\mathbf C}_+\n ={\\mathbf J}\\left(\\begin{array}{ccc}\n 1 & 1 & 0\\\\\n 1 &-1 & 0\\\\\n 0 & 0 & 1\n \\end{array}\\right){\\mathbf C}_-{\\mathbf B}.\n\\end{equation}\nThis can be rearranged to\n\\begin{equation}\\label{eq:cond2}\n {\\mathbf C}_-{\\mathbf B}=\\P{\\mathbf C}_+,\n\\end{equation}\nwhere $\\P$ is the following permutation matrix,\n\\begin{equation}\n \\P:=\\left(\\begin{array}{ccc}\n 0 & 1 & 0\\\\\n 1 & 0 & 0\\\\\n 0 & 0 & 1\n \\end{array}\\right).\n\\end{equation}\n\n\n\\subsection{Parameter conditions}\n\nThe solutions of the LP at the symmetry axis and horizon discussed in the previous subsection depend on the values of the Ernst potentials at the points $A$, $B$, $C$, $D$, the $\\zeta$-coordinates $K_A$, $K_B$, $K_C$, $K_D$ of these points, and the angular velocities $\\Omega_1$, $\\Omega_2$ of the two horizons. These parameters, however, cannot be chosen independently of each other. Instead, we have to impose a number of parameter conditions. 
In the following, we show how these conditions can be obtained.\n\nCombining \\eref{eq:condC1}, \\eref{eq:condC0}, \\eref{eq:condC2}, \\eref{eq:condCm}, we obtain an equation relating ${\\mathbf C}_-$ and ${\\mathbf C}_+$,\n\\begin{equation}\\fl\n {\\mathbf C}_-=\\left({\\mathbf 1}-\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right){\\mathbf C}_+.\n\\end{equation}\nAnother relation between ${\\mathbf C}_-$ and ${\\mathbf C}_+$ was previously obtained in \\eref{eq:cond2}. Combining those equations, we can eliminate ${\\mathbf C}_-$ and solve for ${\\mathbf B}$,\n\\begin{equation}\\fl\\label{eq:Cpluscond}\n {\\mathbf B}={\\mathbf C}_+^{-1}\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\\P{\\mathbf C}_+.\n\\end{equation}\nPlugging this into the relation ${\\mathbf B}^2={\\mathbf 1}$ [cf.~\\eref{eq:B2}], we obtain \n\\begin{eqnarray}\n \\fl\n \\P\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right)\\nonumber\\\\\n \\fl\\quad\n =\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\\P.\n\\end{eqnarray}\nAs expected, the gauge dependent matrices ${\\mathbf B}$ and ${\\mathbf C}_+$ have cancelled, and hence the physical restrictions are independent of any gauge 
choice.\nFinally, simplifying and multiplying both sides by $\\alpha_A\\alpha_B\\alpha_C\\alpha_D$, we obtain the condition that two matrix polynomials of third degree in $K$ must be the same. Equating the coefficients of $K^0$, $K^1$, $K^2$ and $K^3$ leads to a number of constraints for the parameters.\n\nNote that the corresponding conditions in the case of a single black hole in electrovacuum can easily be solved explicitly \\cite{Meinel2012}. In the present case, unfortunately, they are much more involved, and an explicit solution may be difficult to obtain. Fortunately, however, most of the conditions and their solution are not required in the following.\n\nAs an example, we give only the three simplest conditions here, which are\n\\begin{equation}\\label{eq:parcond}\n \\Omega_1(|\\mathcal E_C|^2-|\\mathcal E_D|^2)+\\Omega_2(|\\mathcal E_A|^2-|\\mathcal E_B|^2)=0,\n\\end{equation}\n\\begin{equation}\n \\Omega_1(\\mathcal E_C+\\bar{\\mathcal E}_C-\\mathcal E_D-\\bar{\\mathcal E}_D)+\\Omega_2(\\mathcal E_A+\\bar{\\mathcal E}_A-\\mathcal E_B-\\bar{\\mathcal E}_B)=0,\n\\end{equation}\n\\begin{equation}\n \\Omega_1[(1-\\bar{\\mathcal E}_C)\\Phi_C-(1-\\bar{\\mathcal E}_D)\\Phi_D]\n +\\Omega_2[(1-\\bar{\\mathcal E}_A)\\Phi_A-(1-\\bar{\\mathcal E}_B)\\Phi_B]=0.\n\\end{equation}\nFor our derivation of the most general form of the axis potentials on $\\mathcal A_+$ in the next subsection, we will explicitly require only Eq.~\\eref{eq:parcond}.\n\n\n\\subsection{Construction of the axis potentials}\n\nWith the preparations from the previous subsection, we are now in a position to obtain the Ernst potentials $\\mathcal E_+=\\mathcal E(0,\\zeta)$ and $\\Phi_+=\\Phi(0,\\zeta)$ on $\\mathcal A_+$. 
The key ingredient for this construction is the above-mentioned property that ${\\mathbf Y}$ can generally take on different values on the two Riemann sheets, but must be unique at the branch points $K_1=\\mathrm{i}\\bar z=\\zeta+\\mathrm{i}\\rho$ and $K_2=-\\mathrm{i} z=\\zeta-\\mathrm{i}\\rho$ where the two sheets are connected. In the limit $\\rho\\to0$, as we approach the $\\zeta$-axis, both branch points converge to $K_1=K_2=\\zeta$, i.e.\\ we have confluent branch points.\n\nWith \\eref{eq:ApY} and \\eref{eq:sheets}, the condition that ${\\mathbf Y}$ for $\\lambda=1$ and ${\\mathbf Y}$ for $\\lambda=-1$ on $\\mathcal A_+$ coincide at $K=\\zeta$ becomes\n\\begin{equation}\n {\\mathbf E}{\\mathbf C}_+={\\mathbf J}{\\mathbf E}{\\mathbf C}_+{\\mathbf B} \\quad\\textrm{at}\\quad K=\\zeta.\n\\end{equation}\nUsing \\eref{eq:B2}, we can rewrite this equation as\n\\begin{equation}\\label{eq:PCBC}\n {\\mathbf C}_+{\\mathbf B}{\\mathbf C}_+^{-1}\\P={\\mathbf E}^{-1}{\\mathbf J}{\\mathbf E}\\P \\quad\\textrm{at}\\quad K=\\zeta.\n\\end{equation}\nNow we define\\footnote{Note that ${\\mathbf N}$ generalises the $2\\times2$ matrix $\\mathcal N$ used in the discussion of two-black-hole configurations in vacuum, see Eq.~(22) in \\cite{NeugebauerHennig2009}.}\n\\begin{equation}\\fl\n {\\mathbf N}(\\zeta):={\\mathbf E}^{-1}{\\mathbf J}{\\mathbf E}\\P\n \\equiv\n \\frac{1}{f_+}\\left(\\begin{array}{ccc}\n 1 & |\\Phi_+|^2-\\mathrm{i} b_+ & \\Phi_+\\\\\n |\\Phi_+|^2+\\mathrm{i} b_+ & |\\mathcal E_+|^2 & -\\Phi_+\\bar{\\mathcal E}_+\\\\\n -2\\bar\\Phi_+ & 2\\bar\\Phi_+\\mathcal E_+ & f_+-2|\\Phi_+|^2\n \\end{array}\\right),\n\\end{equation}\nwhere $b=\\Im(\\mathcal E)$, cf.~\\eref{eq:b}. 
If we reformulate the parameter condition \\eref{eq:Cpluscond} in the form\n\\begin{equation}\\fl\n {\\mathbf C}_+{\\mathbf B}{\\mathbf C}_+^{-1}\\P=\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\n\\end{equation}\nand specialise to $K=\\zeta$, then, using \\eref{eq:PCBC}, we see that the matrix ${\\mathbf N}$, which contains various combinations of the Ernst potentials, can be obtained from\n\\begin{equation}\\fl\\label{eq:N}\n {\\mathbf N}= \\left.\\left({\\mathbf 1}-\\frac{1}{\\alpha_A}{\\mathbf M}_A\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_B}{\\mathbf M}_B\\right)\n \\left({\\mathbf 1}-\\frac{1}{\\alpha_C}{\\mathbf M}_C\\right)\n \\left({\\mathbf 1}+\\frac{1}{\\alpha_D}{\\mathbf M}_D\\right)\\right|_{K=\\zeta}.\n\\end{equation}\nNote that the components of the matrix on the right-hand side simplify to rational functions in $\\zeta$ with polynomials of at most fourth degree. 
\nMoreover, similarly to the parameter conditions, the gauge dependent matrices ${\\mathbf B}$ and ${\\mathbf C}_+$ do not appear in this formula, so the axis potentials are certainly independent of any gauge choice for the LP.\n\nNow we consider the following combinations of components of ${\\mathbf N}$, where we evaluate the right-hand side of \\eref{eq:N} in each case, in order to determine the polynomial structure of the relevant components,\n\\begin{eqnarray}\n \\fl\\label{eq:f+}\n f_+ &=& \\frac{1}{N_{11}} = \\frac{\\pi_4(\\zeta)}{p_4(\\zeta)},\n \\hspace{2cm} \\pi_4:=(\\zeta-K_A)(\\zeta-K_B)(\\zeta-K_C)(\\zeta-K_D)\\\\\n \\fl\n &&\\hspace{4.52cm}p_4:\\ \\mbox{real monic polynomial of 4th degree}\\nonumber\\\\\n \\fl\\label{eq:b+}\n b_+ &=&\\frac{N_{21}-N_{12}}{2N_{11}}=\\frac{p_2(\\zeta)}{p_4(\\zeta)},\n \\hspace{0.895cm} p_2:\\ \\mbox{real polynomial of 2nd degree}\\\\\n \\fl\\label{eq:E+2}\n |\\mathcal E_+|^2 &=& \\frac{N_{22}}{N_{11}}=\\frac{q_4(\\zeta)}{p_4(\\zeta)},\n \\hspace{2.08cm} q_4:\\ \\mbox{real monic polynomial of 4th degree}\\\\\n \\fl\\label{eq:Phi+}\n \\Phi_+ &=& \\frac{N_{13}}{N_{11}}=\\frac{p_3(\\zeta)}{p_4(\\zeta)},\n \\hspace{2.06cm} p_3:\\ \\mbox{complex polynomial of 3rd degree}\\\\\n \\fl\\label{eq:Phi+2}\n |\\Phi_+|^2 &=& \\frac{N_{12}+N_{21}}{2N_{11}}=\\frac{q_2(\\zeta)}{p_4(\\zeta)},\n \\hspace{0.79cm} q_2:\\ \\mbox{real polynomial of 2nd degree}\\\\\n \\fl\\label{eq:Phi+E+}\n \\Phi_+\\bar{\\mathcal E}_+ &=& -\\frac{N_{23}}{N_{11}}=\\frac{q_3(\\zeta)}{p_4(\\zeta)}.\n \\hspace{1.64cm} q_3:\\ \\mbox{complex polynomial of 3rd degree}\n\\end{eqnarray}\nAll polynomials in the above formulae can be given explicitly, but the coefficients are rather lengthy expressions depending on the parameters $\\mathcal E_A,\\dots,\\mathcal E_D$, $\\Phi_A,\\dots,\\Phi_D$, $K_A,\\dots,K_D$, $\\Omega_1,\\ \\Omega_2$. 
Also note that the two polynomials $p_2$ and $q_2$ initially appear to be of third degree, but in both cases the leading coefficients are proportional to the left-hand side of \\eref{eq:parcond} and hence vanish as a consequence of the parameter conditions.\n\nFirstly, we construct $\\mathcal E_+$ using \\eref{eq:f+}, \\eref{eq:b+}, \\eref{eq:Phi+2}, \n\\begin{equation}\\label{eq:E+formula}\n \\mathcal E_+=f_+-|\\Phi_+|^2+\\mathrm{i} b_+=\\frac{\\pi_4-q_2+\\mathrm{i} p_2}{p_4},\n\\end{equation}\nand compare $|\\mathcal E_+|^2$ as obtained from this expression with \\eref{eq:E+2}. This leads to the condition\n\\begin{equation}\n (\\pi_4-q_2+\\mathrm{i} p_2)(\\pi_4-q_2-\\mathrm{i} p_2)=q_4p_4.\n\\end{equation}\nComparing zeros of both sides and using that the two terms on the left-hand side are complex conjugate polynomials, and the factors on the right-hand side are real polynomials, we observe that each bracket on the left-hand side has two zeros of $q_4$ and two zeros of $p_4$. Hence, in the expression \\eref{eq:E+formula} for $\\mathcal E_+$, two linear factors cancel and we actually have\n\\begin{equation}\\label{eq:E+}\n \\mathcal E_+=\\frac{\\pi_2(\\zeta)}{r_2(\\zeta)},\n\\end{equation}\nwhere $\\pi_2$ and $r_2$ are complex monic polynomials of 2nd degree. \n\nSecondly, we use the expression \\eref{eq:Phi+} for $\\Phi_+$ to calculate $|\\Phi_+|^2$ and compare the result with \\eref{eq:Phi+2}. In this way, we obtain the condition\n\\begin{equation}\n p_3\\bar p_3=q_2p_4.\n\\end{equation}\nSimilarly to the above discussion, we conclude that $p_3$ has two zeros of $p_4$ and one of $q_2$. 
Hence \\eref{eq:Phi+} simplifies to\n\\begin{equation}\\label{eq:Phi+1}\n \\Phi_+=\\frac{\\pi_1(\\zeta)}{R_2(\\zeta)},\n\\end{equation}\nwhere $\\pi_1$ and $R_2$ are complex polynomials of first and second degrees, respectively, which we choose such that $R_2$ is a monic polynomial.\n\nFinally, we show that, in fact, $R_2=r_2$, i.e.\\ $\\Phi_+$ has the same denominator as $\\mathcal E_+$. For that purpose, we use \\eref{eq:E+} and \\eref{eq:Phi+1} to construct $\\Phi_+\\bar{\\mathcal E}_+=\\frac{\\pi_1\\bar\\pi_2}{R_2 \\bar r_2}$ and compare with \\eref{eq:Phi+E+}, which shows that\n\\begin{equation}\\label{eq:p4a}\n p_4=R_2\\bar r_2.\n\\end{equation}\n(Since both sides in the previous equation are monic polynomials, we indeed have equality and not just proportionality.)\nWe also use \\eref{eq:Phi+1} to obtain $|\\Phi_+|^2=\\frac{\\pi_1\\bar\\pi_1}{R_2\\bar R_2}$, which, together with \\eref{eq:Phi+2}, implies that \n\\begin{equation}\\label{eq:p4b}\n p_4=R_2\\bar R_2\n\\end{equation}\nis another representation of $p_4$. Combining \\eref{eq:p4a} and \\eref{eq:p4b}, we immediately confirm that $R_2=r_2$ must hold. \n\nHence the previous formulae for the axis potentials finally simplify to\n\\begin{equation}\n \\mathcal E_+=\\frac{\\pi_2(\\zeta)}{r_2(\\zeta)},\\quad\n \\Phi_+=\\frac{\\pi_1(\\zeta)}{r_2(\\zeta)},\n\\end{equation}\ni.e.\\ the axis values are given in terms of complex polynomials $\\pi_2$, $r_2$ and $\\pi_1$ of the indicated degrees, where $\\pi_2$ and $r_2$ are monic polynomials. \n\n\n\\section{Discussion\\label{sec:discussion}}\n\nWe have derived the most general axis data for candidate solutions that could describe axisymmetric and stationary two-black-hole configurations with non-extremal rotating and charged black holes. 
Necessarily, the axis values of the Ernst potentials must be of the form\n\\begin{equation}\n \\mathcal E_+(\\zeta)=\\frac{(\\zeta-c_1)(\\zeta-c_2)}{(\\zeta-d_1)(\\zeta-d_2)},\\quad\n \\Phi_+(\\zeta)=\\frac{e_1\\zeta+e_2}{(\\zeta-d_1)(\\zeta-d_2)}\n\\end{equation}\nwith complex constants $c_i$, $d_i$, $e_i$, $i=1,2$, which corresponds to 12 real degrees of freedom. Note that we can immediately reduce the available degrees of freedom by comparing the asymptotic expansions of these data,\n\\begin{equation}\n \\mathcal E_+(\\zeta)=1-\\frac{c_1+c_2-d_1-d_2}{\\zeta}+\\mathcal O(\\zeta^{-2}),\\quad\n \\Phi_+(\\zeta)= \\frac{e_1}{\\zeta}+\\mathcal O(\\zeta^{-2}),\n\\end{equation}\nwith the general behaviour of the axis potentials in an asymptotically flat spacetime with total (ADM) mass $M$ and charge $Q$, and without NUT parameter and without magnetic charge,\n\\begin{equation} \n \\mathcal E_+(\\zeta)=1-\\frac{2M}{\\zeta}+\\mathcal O(\\zeta^{-2}),\\quad\n \\Phi_+(\\zeta)=\\frac{Q}{\\zeta}+\\mathcal O(\\zeta^{-2}).\n\\end{equation}\nObviously, we require the constraints\n\\begin{equation}\n \\Im(c_1+c_2-d_1-d_2)=0,\\quad \\Im(e_1)=0,\n\\end{equation}\nwhich leaves us with 10 degrees of freedom.\n\nNote that the boundary data \\eref{eq:bdata1} and \\eref{eq:bdata2a}, \\eref{eq:bdata2b} for the existing 8-parametric exact candidate solutions discussed in Sec.~\\ref{sec:intro} are all of the above form. Hence it remains to decide in future investigations whether these solutions do indeed describe physically acceptable equilibrium configurations, i.e.\\ solutions for which the regularity requirements (i)--(v) from Sec.~\\ref{sec:intro} are all satisfied for particular choices of the parameters. Moreover, it should be studied whether slightly larger solution classes (with the above-mentioned 10 degrees of freedom) need to be considered as well.\n\n\n\n\n\n\n\\section*{Acknowledgments}\nI would like to thank Dominic Searles for commenting on the manuscript. 
\n \n\\section*{References}\n\n\\section{Introduction}\n\\label{section_introduction}\n\n\\IEEEPARstart{C}{onvolutional} Neural Networks (CNNs), as the dominant technique of deep learning~\\cite{lecun2015deep}, have shown remarkable superiority in various real-world applications over most machine learning approaches~\\cite{krizhevsky2012imagenet,sainath2013deep,sutskever2014sequence,alphago}. Since 1998, when the first version of the CNN (i.e., LeNet5~\\cite{lecun1998gradient}) was proposed, diverse CNN variants have been developed, such as AlexNet~\\cite{krizhevsky2012imagenet}, VGG~\\cite{simonyan2014very}, GoogleNet~\\cite{szegedy2015going}, ResNet~\\cite{he2016deep} and DenseNet~\\cite{huang2017densely}, to name a few. There is a trend among the CNN variants that their architectures become increasingly deeper. For example, the depth of LeNet5 is six and that of VGG is $16$, while ResNet~\\cite{he2016identity} reaches a depth of $1,202$. The principle behind such designs is that a deeper CNN typically has a more powerful capability to address more complex and large-scale data. \n\nThe state-of-the-art CNNs are typically designed by experts who have rich domain knowledge of both the investigated data and CNNs. Because the performance of a CNN strongly relies on the investigated data, this design manner has a major limitation. For example, researchers who are familiar with the data at hand do not necessarily have experience in designing the architectures of CNNs, and vice versa. To this end, there is a great demand for developing algorithms that allow researchers without any expertise to automatically derive the best-performing CNN for the given data. 
Indeed, multiple algorithms for this purpose have been proposed in recent years.\n\n\nIn general, algorithms for designing the architectures of CNNs can be divided into two different categories according to their base techniques. The first covers methods using evolutionary algorithms~\\cite{back1996evolutionary}, such as the genetic CNN method (Genetic CNN)~\\cite{xie2017genetic}, the large-scale evolution method (Large-scale Evolution)~\\cite{real2017large}, the hierarchical representation method (Hierarchical Evolution)~\\cite{liu2017hierarchical} and the Cartesian genetic programming method (CGP-CNN)~\\cite{suganuma2017genetic}. These algorithms follow the standard flow of an evolutionary algorithm to heuristically discover the optimal solution. The second refers to algorithms based on reinforcement learning~\\cite{sutton1998reinforcement}, such as the neural architecture search method (NAS)~\\cite{zoph2016neural}, the meta-modelling method (MetaQNN)~\\cite{baker2016designing}, the efficient architecture search method (EAS)~\\cite{cai2018efficient} and the block design method (Block-QNN-S)~\\cite{zhong2017practical}. The algorithms in the second category share the heuristic nature of those in the first category, but additionally utilize the reward-penalty principle of reinforcement learning.\n\nExperimental results of these algorithms have demonstrated promising classification accuracy on challenging benchmark datasets, such as CIFAR10 and CIFAR100~\\cite{krizhevsky2009learning}, but limitations also exist. Firstly, the algorithms in the first category do not make full use of the advantages of evolutionary algorithms, which results in extensive consumption of computational resources and less promising classification accuracy. Secondly, the algorithms in the second category require even more computational resources than the algorithms in the first category due to the nature of reinforcement learning. 
Thirdly, manual assistance based on domain expertise is required for most algorithms in both categories. To this end, the development of algorithms that 1) automatically discover the best CNN architectures for given data, 2) rely on limited computational resources and 3) produce CNNs that can be directly used without any manual refinement or re-composition, is still in its infancy. Note that, depending on whether expertise in CNNs is required or not in using these algorithms, they can also be classified into the automatic and the semi-automatic categories. The first includes Large-scale Evolution, CGP-CNN, NAS and MetaQNN, while the second is composed of Genetic CNN, Hierarchical Evolution, EAS and Block-QNN-S.\n\n\nEvolutionary algorithms~\\cite{back1996evolutionary} are a class of population-based meta-heuristic optimization paradigms inspired by biological evolution. Typical evolutionary algorithms include genetic algorithms (GAs)~\\cite{davis1991handbook}, genetic programming~\\cite{banzhaf1998genetic}, evolutionary strategies~\\cite{janis1976evolutionary}, etc., among which GAs are the most popular, mainly because of their theoretical evidence~\\cite{schmitt2001theory} and promising performance in solving different optimization problems~\\cite{deb2002fast,sun2018igd,sun2017reference,sun2018improve,transferjiang2018}. It has also been recognized that GAs are capable of generating high-quality optimal solutions by using bio-inspired operators, i.e., mutation, crossover and selection~\\cite{mitchell1998introduction}. Our goal in this paper is to develop an effective and efficient algorithm by using a GA, termed CNN-GA for short, to automatically discover the best architectures of CNNs for given image classification tasks, so that the discovered CNN can be directly used without any manual refinement or re-composition. 
The contributions of the proposed CNN-GA method are summarized as follows:\n\n\\begin{enumerate}\n\t\\item The depth of CNNs is not limited to a predefined number in the proposed algorithm, but instead the best depth is discovered during the evolutionary process, by finding the CNN with the best classification accuracy for the given data. Although this design could produce the optimal CNN, the crossover operation, which plays the role of exploitation search (i.e., the local search), cannot work in such a situation due to individuals with unequal (variable) lengths. To address this, we also design a crossover operator that adapts to these variable-length individuals during the evolutionary process.\n\t\n\t\\item The skip connection, whose superiority in effectively training deep architectures has been theoretically and experimentally proven, is directly incorporated into the proposed algorithm. In such a way, the evolved CNNs are capable of dealing with complex data by using deep architectures, while avoiding the Gradient Vanishing (GV) problems~\\cite{hochreiter1997long}. Furthermore, this design can also reduce the search space so that the best performance can be achieved within a limited time. In addition, compared to other models with similar performance, the architectures evolved by the proposed algorithm have a much smaller number of parameters.\n\t \n\t\n\n\t\\item The proposed algorithm is completely automated in discovering the best CNN architectures, and does not require any manual intervention during the evolutionary search. When evolution is finished, the obtained CNNs can be directly used to process the data, and do not need further refinement, such as adding more convolutional or pooling layers. 
Furthermore, the proposed algorithm can be directly used by other researchers who do not need to do any preparations such as providing a manually tuned network in advance.\n\t\n\t\n\t\\item An asynchronous computational component is designed to make full use of the given computational resources to accelerate the evaluation of fitness for the individuals in the same generation of the proposed algorithm. In addition, a cache component is also developed which is expected to further reduce the fitness evaluation time for the whole population.\n\n\\end{enumerate}\n\nThe remainder of this paper is organized as follows. Firstly, the background is presented in Section~\\ref{section_literature}. Then, the details of the proposed algorithm are documented in Section~\\ref{section_algorithm}. Next, the experimental designs and experimental results are shown in Sections~\\ref{section_exp_settings} and~\\ref{section_exp_results}, respectively. Finally, conclusions and future works are outlined in Section~\\ref{section_conclusion}.\n\n\n\n\\section{Background}\n\\label{section_literature}\n\nIn this section, CNNs, skip connections and GAs, which are the background of the proposed algorithm, are introduced to help readers better understand the related works and the proposed algorithm.\n\n\\subsection{CNNs}\n\\label{sub_sub_section_cnn} \n\nIn this subsection, we mainly introduce the building blocks of CNNs, i.e., the convolutional and pooling layers, which are the basic objects encoded by GAs to represent CNNs.\n\nSpecifically, the convolutional layer employs filters to perform convolutional operations on the input data. One filter can be viewed as a matrix. In this paper, we focus on 2-dimension convolution operators (with a 2-dimension filter) because the proposed algorithm aims at processing image data. 
During the convolutional operation, the filter horizontally slides (with a given step size), then vertically moves (with another step size) for the next horizontal slide, until the whole image has been scanned. At each position, the filter is applied to the image by multiplying each filter value with the corresponding pixel value, and summing the results to give the output of the filter. The set of filter outputs forms a new matrix called the feature map. The horizontal and vertical step sizes are called the width and height of a stride. In a convolutional layer, multiple filters (typically with the same sizes and using the same stride) are allowed to coexist, producing a set of feature maps. The exact number of feature maps used is a parameter in the architecture of the corresponding CNN. The number of filters is derived from the number of resulting feature maps and the spatial size of the input data. In addition, two convolutional operations are applied: the \\textit{same} convolutional operation, which pads zeros to the input data when there is not enough area for the filter to overlap, and the \\textit{valid} convolutional operation, which does not pad anything. Hence, the parameters of a convolutional layer are the number of feature maps, the filter size, the stride size and the convolutional operation type.\n\nA pooling layer has the same components as a convolutional layer except that 1) the filter is called the kernel, which has no values, 2) the output of a kernel is the maximal or mean value of the area where it stops, and 3) the spatial size of the input data is not changed by a pooling layer. When the maximal value is returned, it is a pooling layer with the type of \\textit{max}, otherwise of \\textit{mean}. Hence, the parameters of a pooling layer are the kernel size, the stride size and the pooling type used.\n\nIn addition, the fully-connected layers are usually incorporated into the tail of a CNN. 
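The sliding-filter and pooling computations just described can be sketched in a few lines of plain Python (a toy illustration with arbitrary example values, not an implementation from this paper; the \textit{same} variant would first pad the image with zeros so that the output keeps the input's spatial size):

```python
def conv2d_valid(image, filt, stride=1):
    """'Valid' 2-D convolution: the filter never leaves the image."""
    fh, fw = len(filt), len(filt[0])
    out = []
    for i in range(0, len(image) - fh + 1, stride):      # vertical moves
        row = []
        for j in range(0, len(image[0]) - fw + 1, stride):  # horizontal slides
            # multiply each filter value with the overlapped pixel and sum
            row.append(sum(filt[a][b] * image[i + a][j + b]
                           for a in range(fh) for b in range(fw)))
        out.append(row)
    return out

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2 (type 'max')."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 3, 0],
         [4, 5, 6, 1],
         [7, 8, 9, 2],
         [3, 1, 0, 4]]
filt = [[1, 0],
        [0, 1]]                      # an arbitrary 2x2 example filter

fmap = conv2d_valid(image, filt)     # 3x3 feature map
pooled = maxpool2x2(image)           # 2x2 pooled output
```

Each filter produces one feature map, so a layer with several filters simply repeats `conv2d_valid` once per filter.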
Because they are very common and are not among the building blocks of CNNs, we will not detail them here. But note that, the number of fully-connected layers and the number of neurons in each fully-connected layer are also parameters of a CNN's architecture, if the fully-connected layers are used in the CNN. In the proposed algorithm, the fully-connected layers are discarded, and the justifications are given in Subsection~\\ref{sub_section_pop_init}.\n\n\n\\subsection{Skip Connections}\n\\label{sub_sub_section_skip_connections}\nCommonly, the connections in CNNs exist between the neurons of two adjacent layers. Analogously, the skip connections refer to those connecting the neurons of layers that are not adjacent. The skip connection was first introduced in~\\cite{hochreiter1997long} as a gate mechanism, effectively training a recurrent neural network with long and short-term memory~\\cite{gers1999learning} and avoiding the GV problems~\\cite{hochreiter1997long}. Specifically, the GV problems refer to the gradient becoming very small or exploding during back-propagation training in a deep neural network. Because gradient-based algorithms are the dominant learning algorithms of neural networks, the GV problems are the main obstacle to effectively training \\textit{deep} neural networks. The skip connections were experimentally proven to be able to train very deep neural networks~\\cite{srivastava2015training}. Indeed, the promising performance of ResNet~\\cite{he2016deep}, which was proposed very recently, also benefits from the skip connections. 
A simple example of using a skip connection is shown in Fig.~\\ref{fig_skip_connection}, where the dashed line denotes the skip connection from the input $X$ to the output of the $N$-th building block, and the symbol ``$\\oplus$'' refers to the element-wise addition.\n\n\\begin{figure}[!htp]\n\t\\centering\n\t\\includegraphics[width=0.3\\columnwidth]{skip_connection}\\\\\n\t\\caption{An example of using the skip connection.}\\label{fig_skip_connection}\n\\end{figure}\n\nOver the past few years, an increasing number of researchers have attempted to theoretically reveal the mechanisms behind the skip connections. For example, the skip connections have also been claimed to be able to eliminate singularities~\\cite{orhan2017skip}. However, a completely satisfactory explanation is still elusive. Among the existing theoretical evidence, the explanation that skip connections defy the GV problems receives the most recognition~\\cite{gers1999learning,hochreiter1997long}. The GV problems frequently occur when the error has been back-propagated over many layers of a given neural network. Because the skip connections reduce the number of layers through which the error is back-propagated, the GV problems should be alleviated. As discussed above, the deeper the CNN is, the more powerful capability it would have to process complex data. Combined with the connections that are not skipped, a CNN with skip connections retains the capability of a deep architecture while still being effectively trainable.\n\nOwing to the promising performance of the skip connections, they are naturally incorporated into the proposed algorithm to discover the architectures of an optimal CNN. In addition, the search space can also be reduced if these skip connections are directly used. 
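In code, the additive pattern of Fig.~\ref{fig_skip_connection} amounts to adding a block's input back to its output element-wise; the following toy sketch (not from this paper; the doubling transform is an arbitrary stand-in for the stacked layers inside the block) illustrates it:

```python
def block_with_skip(x, transform):
    """Apply a building block's transformation and add the input back
    (the element-wise addition denoted by the circled-plus symbol)."""
    y = transform(x)
    return [xi + yi for xi, yi in zip(x, y)]

# stand-in for the stacked (convolutional) layers inside the block
double = lambda v: [2.0 * vi for vi in v]

out = block_with_skip([1.0, -2.0, 3.0], double)   # [3.0, -6.0, 9.0]
```

Because the input reaches the output through the identity path, the gradient also has a short path back to the input, which is the intuition behind the alleviation of the GV problems.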
Consequently, the required computational resources can be minimized, and the promising architectures of CNNs can be discovered within a limited time.\n\n\\begin{figure*}[!htp]\n\t\\centering\n\t\\includegraphics[width=1.8\\columnwidth]{ga_flow_chart}\\\\\n\t\\caption{The flowchart of genetic algorithm.}\\label{fig_ga_flow}\n\\end{figure*}\n\n\\subsection{Genetic Algorithms}\n\\label{sub_sub_section_gas}\nThe flowchart of a GA is shown in Fig.~\\ref{fig_ga_flow}. Specifically, a population of individuals (i.e., CNNs with architecture variants) is randomly initialized first, and then the fitness of each individual is evaluated. The fitness is measured by a deterministic function which is known as the fitness function, based on the context of the problem to be optimized (i.e., performance of CNNs on specific image classification tasks). The input of the function is the decision variables encoded in the individuals. After that, the individuals who have better fitness will be chosen by the selection operation, to hopefully generate offspring with better fitness. Specifically, new offspring are generated by exchanging or varying the encoded information of the selected parent individuals, i.e. by the crossover and mutation operations. These generated offspring are evaluated for the fitness, and then the population surviving into the next generation are selected from the current population, which is composed of the parent population and generated offspring, by the environmental selection. Through repeating a series of these operations, it is expected to find the optimal solution from the population when the GA terminates. 
Note that, the typical criterion to terminate a GA is a predefined maximal generation number, say $20$ in our experiments.\n\n\\section{The Proposed Algorithm}\n\\label{section_algorithm}\n\nIn this section, we firstly present the framework of the proposed algorithm in Subsection~\\ref{sub_section_alg_overview}, and then detail the main steps in Subsections~\\ref{sub_section_pop_init} to \\ref{sub_section_environmental_selection}. To help readers better understand the proposed algorithm, we will not only document the details of each main step, but also provide the justifications for such designs. \n\\subsection{Algorithm Overview}\n\\label{sub_section_alg_overview}\n\\begin{algorithm}\n\t\\label{alg_framework}\n\t\\caption{Framework of The Proposed Algorithm}\n\t\\KwIn{A set of predefined building blocks, the population size, the maximal generation number, the image dataset for classification.}\n\t\\KwOut{The discovered best architecture of CNN.}\n\t$P_0\\leftarrow$ Initialize a population with the given population size\\;\n\t\\label{alg_framework_line1}\n\t$t\\leftarrow 0$\\;\n\t\\label{alg_framework_line2}\n\t\\While{$t<$ the maximal generation number}\n\t{\n\t\tEvaluate the fitness of each individual in $P_t$\\;\n\t\t\\label{alg_framework_evo_line1}\n\t\t$Q_t\\leftarrow$ Generate offspring by genetic operators from the selected parent individuals\\;\n\t\t\\label{alg_framework_evo_line2}\n\t\t$P_{t+1}\\leftarrow$ Environmental selection from $P_t\\cup Q_t$\\;\n\t\t\\label{alg_framework_evo_line3}\n\t\t$t\\leftarrow t+1$\\;\n\t}\n\t\\textbf{Return} the individual which has the best fitness in $P_{t}$.\n\t\n\\end{algorithm}\nAlgorithm~\\ref{alg_framework} shows the framework of the proposed algorithm. 
Specifically, by giving a set of predefined building blocks of CNNs, the population size as well as the maximal generation number for the GA and the image classification dataset, the proposed algorithm begins to work, through a series of evolutionary processes, and finally discovers the best architecture of the CNN to classify the given image dataset. During evolution, a population is randomly initialized with the predefined population size, using the proposed encoding strategy to encode the predefined building blocks (line~\\ref{alg_framework_line1}). Then, a counter for the current generation is initialized to zero (line~\\ref{alg_framework_line2}). During evolution, the fitness of each individual, which encodes a particular architecture of the CNN, is evaluated on the given dataset (line~\\ref{alg_framework_evo_line1}). After that, the parent individuals are selected based on the fitness, and then generate new offspring by the genetic operators (line~\\ref{alg_framework_evo_line2}). Then, a population of individuals surviving into the next generation are selected by the environmental selection from the current population (line~\\ref{alg_framework_evo_line3}). Specifically, the current population is composed of the parent population and the generated offspring population. Finally, the counter is increased by one, and the evolution continues until the counter exceeds the predefined maximal generation. As shown in Fig.~\\ref{fig_ga_flow}, the proposed algorithm follows the standard pipeline of a GA (the phases of selection, crossover and mutation, and the offspring population shown in Fig.~\\ref{fig_ga_flow} are collectively described in line~\\ref{alg_framework_evo_line2} of Algorithm~\\ref{alg_framework}). 
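Schematically, the generational loop of Algorithm~\ref{alg_framework} can be sketched in Python as follows (a hedged sketch only: the encoding, fitness function and genetic operators below are simplified placeholders, not the operators proposed in this paper):

```python
import random

def random_individual():
    # placeholder encoding: a variable-length list of layer widths
    return [random.choice([32, 64, 128]) for _ in range(random.randint(2, 6))]

def fitness(individual):
    # placeholder; in the proposed algorithm this is the validation accuracy
    # of the trained CNN decoded from the individual
    return sum(individual) / (100.0 * len(individual))

def generate_offspring(population):
    # stand-in for selection followed by crossover and mutation
    p1, p2 = random.sample(population, 2)
    cut1 = random.randrange(1, len(p1))
    cut2 = random.randrange(1, len(p2))
    return [p1[:cut1] + p2[cut2:]]          # variable-length crossover

def evolve(pop_size=10, max_generations=20):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(max_generations):
        offspring = generate_offspring(population)
        # environmental selection from parents plus offspring
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)

best = evolve()
```

Note how the crossover concatenates segments of two parents of different lengths, mirroring the variable-depth encoding discussed later.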
Note that, the genetic operator is composed of the crossover and mutation operations.\n\n\\subsection{Population Initialization}\n\\label{sub_section_pop_init}\n\nAs introduced in Section~\\ref{section_literature}, a CNN is composed of the convolutional layers, pooling layers and occasionally fully-connected layers. The performance of a CNN highly relies on its depth, and the skip connections can turn such \\textit{deep} architectures into a reality. In the proposed encoding strategy, we design a new building block by directly using the skip connections, named the skip layer, to replace the convolutional layer when forming a CNN. In addition, the fully-connected layers are discarded in the proposed encoding strategy (the reason will be given later in this subsection). In summary, only the skip layers and the pooling layers are used to construct a CNN in the proposed encoding strategy.\n\n\\begin{figure}[!htp]\n\t\\centering\n\t\\includegraphics[width=0.3\\columnwidth]{skip_layer}\\\\\n\t\\caption{An example of the skip layer in the proposed encoding strategy.}\\label{fig_skip_layer}\n\\end{figure}\n\nSpecifically, a skip layer is composed of two convolutional layers and one skip connection. The skip connection connects the input of the first convolutional layer to the output of the second convolutional layer. An example of the skip layer is shown in Fig.~\\ref{fig_skip_layer}. As introduced above, the parameters of a convolutional layer are the number of feature maps, the filter size, the stride size and the convolutional operation type. In the proposed encoding strategy, we use the same settings for the filter sizes, stride sizes and convolutional operations. Particularly, the filter and stride sizes are set to $3\\times 3 $ and $1\\times 1$, respectively, and only the \\textit{same} convolutional operation type is used. 
Consequently, the parameters encoded for a skip layer are the numbers of the feature maps of its two convolutional layers (denoted as $F1$ and $F2$, respectively). In addition, the pooling layers used in the proposed encoding strategy employ $2\\times 2$ for both the kernel sizes and the stride sizes. Consequently, the only parameter encoded for a pooling layer is the pooling type (denoted as $P1$). Note that the reasons for this design and for the adopted settings will be explained later in this subsection.\n\n\\begin{algorithm}\n\t\\label{alg_pop_init}\n\t\\caption{Population Initialization}\n\t\\KwIn{The population size $T$.}\n\t\\KwOut{The initialized population $P_0$.}\n\t$P_0\\leftarrow \\emptyset$ \\;\n\t\\While{$|P_0|v_{best}$}\n\t\t{\n\t\t\t$v_{best}\\leftarrow v$\\;\n\t\t}\n\t}\n\tSet $v_{best}$ as the fitness of $individual$\\;\\label{alg_ini_eval_train_line3}\n\tPut the identifier of $individual$ and $v_{best}$ into $Cache$\\;\n\t\\label{alg_ini_eval_train_line4}\n\t\\textbf{Return} $individual$.\n\\end{algorithm}\n\nThe details of evaluating the fitness of one individual are shown in Algorithm~\\ref{alg_fitness_individual_eval}. First, a CNN is decoded from $individual$, and a classifier is added to this CNN (line~\\ref{alg_ini_eval_line1}) based on the given image classification dataset. In the proposed algorithm, a softmax classifier~\\cite{nasrabadi2007pattern} is used, and the particular number of classes is determined by the given image dataset. When decoding a CNN, a rectifier activation function~\\cite{glorot2011deep} followed by a batch normalization~\\cite{NIPS2017_6790} operation is added to the output of each convolutional layer, following the conventions of modern CNNs~\\cite{he2016deep,huang2017densely}.
In addition, when the number of feature maps output by the skip layer differs from that of the input data, a convolutional layer with unit filter size, unit stride and the corresponding number of feature maps is applied to the input data~\\cite{he2016deep,huang2017densely}. After that, the CNN is trained by the Stochastic Gradient Descent (SGD) algorithm~\\cite{bottou2012stochastic} on the training data using the given GPU (line~\\ref{alg_ini_eval_train_line1}), and the classification accuracy is calculated on the validation data (line~\\ref{alg_ini_eval_train_line2}). Note that the use of the softmax classifier and of SGD training follows the conventions of the deep learning community. When the training phase is finished, the best classification accuracy on the validation data is set as the fitness of $individual$ (line~\\ref{alg_ini_eval_train_line3}). Finally, the identifier and fitness of $individual$ are associated and put into $Cache$ (line~\\ref{alg_ini_eval_train_line4}).\n\nNext, the reasons for designing the asynchronous and cache components are given. In short, because training CNNs is very time-consuming, ranging from several hours to even several months depending on the particular architecture, both components are designed to speed up the fitness evaluation in the proposed algorithm. Specifically, the asynchronous component is a parallel computation platform based on GPUs. Owing to the parallel nature of gradient computations, deep learning algorithms are typically run on GPUs to speed up the training~\\cite{helfenstein2012parallel}. Indeed, existing deep learning libraries, such as TensorFlow~\\cite{abadi2016tensorflow} and PyTorch~\\cite{paszke2017automatic}, support computation on multiple GPUs. However, their parallel calculations follow the data-parallel and model-parallel pipelines.
In the data-parallel pipeline, the input data is divided into several smaller groups, and each group is placed on one GPU for calculation; the reason is that the limited memory of one GPU cannot effectively handle the whole dataset at the same time. In the model-parallel pipeline, a model is divided into several smaller models, and each GPU carries one of the smaller models; the reason here is that the limited capacity of one GPU cannot accommodate the whole model. The parallel pipeline designed in this work obviously falls into neither category, but operates at a higher level: each GPU trains one candidate CNN independently. Hence, such an asynchronous component is designed to make full use of the GPU computational resources, especially for population-based algorithms. More generally, asynchronous components are widely used for solving a large problem whenever it can be divided into several independent sub-problems; performing these sub-problems in parallel on different computational platforms shortens the total processing time of the whole problem. In the past, evolutionary algorithms were typically used to solve problems whose fitness evaluation is not time-consuming\\footnote{For the class of computationally expensive problems, the fitness is commonly estimated by a surrogate model to bypass the direct fitness evaluation.}, so there was no critical need to develop such asynchronous components; occasionally, the built-in components of the adopted programming languages were used. However, almost all such built-in components are based on CPUs, and they cannot effectively train deep neural networks, mainly because the acceleration platforms for neural networks are based on GPUs. Furthermore, the fitness evaluations of the individuals are mutually independent, which is exactly the scenario this technique requires. Motivated by the reasons described above, such an asynchronous component is designed in the proposed algorithm.
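A schematic sketch of this higher-level parallelism: candidate architectures are dispatched to a pool of workers (one per GPU), and a cache keyed by the architecture identifier (the $Cache$ of the fitness-evaluation algorithm) avoids retraining duplicates. Everything below is illustrative, with a stub in place of actual GPU training.

```python
from concurrent.futures import ThreadPoolExecutor

# Asynchronous fitness evaluation sketch: `train_on_gpu` stands in for
# SGD training + validation accuracy on one GPU; names are illustrative.

N_GPUS = 2
cache = {}
train_calls = []

def train_on_gpu(arch_id):
    train_calls.append(arch_id)                  # record a "training" run
    return sum(map(ord, arch_id)) % 100 / 100    # fake validation accuracy

population = ['arch-a', 'arch-b', 'arch-a', 'arch-c']
unique = list(dict.fromkeys(population))         # evaluate each arch once
with ThreadPoolExecutor(max_workers=N_GPUS) as pool:
    for arch, acc in zip(unique, pool.map(train_on_gpu, unique)):
        cache[arch] = acc
scores = [cache[a] for a in population]          # the duplicate is a cache hit
```

Deduplicating before dispatch means the repeated architecture costs nothing beyond a dictionary lookup, which is the point of the cache component.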
The cache component is also used to speed up the fitness evaluation, based on two considerations: 1) the individuals surviving into the next generation do not need to be evaluated again if their architectures are unchanged, and 2) an architecture that has already been evaluated could be regenerated by the mutation and crossover operations in a later generation.\n\n\\subsection{Offspring Generating}\n\\label{sub_section_offspring_gen}\n\\begin{algorithm}\n\t\\label{alg_offspring_gene}\n\t\\caption{Offspring Generating}\n\t\\KwIn{The population $P_t$ containing individuals with fitness, the probability of the crossover operation $p_c$, the probability of the mutation operation $p_m$, the mutation operation list $l_m$, the probabilities of selecting different mutation operations $p_l$.}\n\t\\KwOut{The offspring population $Q_t$.}\n\t\n\t$Q_t\\leftarrow \\emptyset$\\;\n\t\\label{alg_crossover_begin}\n\t\\While{$|Q_t|<|P_t|$}\n\t{\n\t\t$p_1\\leftarrow$ Randomly select two individuals from $P_t$, and then select the one of the two with the better fitness\\;\n\t\t\\label{alg_off_gene_p_selc}\n\t\t$p_2\\leftarrow$ Repeat Line~\\ref{alg_off_gene_p_selc}\\;\n\t\t\\label{alg_off_gene_pp_selc}\n\t\t\\While{$p_2 == p_1$}\n\t\t{\n\t\t\tRepeat Line~\\ref{alg_off_gene_pp_selc}\\;\n\t\t}\\label{alg_off_gene_p_selc_end}\n\t\t\n\t\t$r\\leftarrow$ Randomly generate a number from $(0,1)$\\;\n\t\t\\label{alg_off_gene_r1}\n\t\t\\eIf{$r2.\n\t\\end{array}\n\t\\right.\n\t\\end{equation*}\nIn \\cite{ACRWJ} it has been shown that for $u$ sufficiently smooth, with probability one\n\\[\n\\frac{1}{\\varepsilon^{2} n^2} J_{n}(\\mathbf{ u})\\rightarrow \\sum_{i=1}^k\\int_\\Omega|\\nabla u_i(x)|^{2} \\rho(x) \\, dx:= J_{\\infty}(\\mathbf{ u}),\n\\]\nwhere $\\rho$ is the density of the probability measure from which the data points are generated.\n\nIn this paper, we propose a novel classification scheme based on the segregation model.
Our motivation for the current work comes from the properties of a class of reaction-diffusion systems with high competition rates, which yield segregation of the different components; that is, at each point of the domain different components cannot coexist. In this model, we solve problem \\eqref{objective} with the additional constraint\n\\begin{equation*}\nu_i(x)\\cdot u_j(x)=0, \\qquad\\text{ for all }x\\in {\\mathcal X}, \\;1\\leq i\\neq j\\leq k.\n\\end{equation*}\n\nThe continuous form corresponding to the segregation model has been studied extensively, see for instance \\cite{CL, F1, Av3}.\nWe state the results on the limiting configuration of the following coupled system as the parameter $\\varepsilon$ tends to zero:\n\n\n\n \\begin{align}\\label{f20}\n\\begin{cases}\n\\Delta u_{i}^{\\varepsilon}= \\frac{ u_{i}^{\\varepsilon}}{\\varepsilon} \\sum\\limits_{j \\neq i} u_{j}^{\\varepsilon} (x)\\qquad\\qquad & \\text{ in } \\Omega,\\\\\nu_{i}^{\\varepsilon} \\ge 0\\; & \\text{ in } \\Omega,\\\\\nu^{\\varepsilon}_{i}(x) =\\phi_{i}(x) & \\text{ on} \\, \\partial \\Omega,\\\\\n \\end{cases}\n\\end{align}\nfor $i=1,\\cdots, m.$ The boundary values satisfy\n\\[\n\\phi_{i}(x) \\cdot \\phi_{j}(x)=0, \\quad i \\neq j \\textrm{ on the boundary}.\n\\]\nFirst, for each fixed $\\varepsilon$ there exists a unique positive solution $(u_{1}^{\\varepsilon},\\cdots ,u_{m}^{\\varepsilon})$. Next, to explain the asymptotic behaviour of \\eqref{f20}, one can construct barrier functions to show that the normal derivative of $u_{i}^{\\varepsilon}$ is bounded independently of $\\varepsilon$; this consequently proves that the $H^1$-norm of $u_{i}^{\\varepsilon}$\nis bounded.
Next, integrating the equation in \\eqref{f20} over $\\Omega$ shows that\n\\[\n (u_{i}^{\\varepsilon} \\sum\\limits_{j \\neq i} u_{j}^{\\varepsilon} (x)) \\rightarrow 0 \\,\\, \\text{as $\\varepsilon$ tends to zero}.\n\\]\nLet $ (u_1, \\cdots ,u_m)$ be the limiting configuration; then the results in \\cite{CL} show that the $u_i$ are pairwise segregated, i.e., $u_{i}(x)\\cdot u_{j}(x)=0,$ harmonic in their supports, and satisfy the following differential inequalities:\n\n \\begin{itemize}\n\\item $ -\\Delta u_{i} \\ge 0$,\\\\\n\\item $ -\\Delta (u_{i}- \\sum\\limits_{j\\neq i}u_j )\\le 0$,\n \\item if $x$ belongs to the interface, then\n \\[\n \\lim_{ y\\rightarrow x} \\nabla u_{i}(y)=-\n\\lim_{ y\\rightarrow x} \\nabla u_{j}(y)\\qquad \\text{(free boundary condition)}.\n \\]\n \\end{itemize}\n\n\n\n In \\cite{Conti-Terracini}, it has been shown that the limiting solution of \\eqref{f20} minimizes the following functional:\n\n\\begin{equation}\\label{continuous-model}\n\\left\\{\n\\begin{split}\n&\\min J(\\mathbf{ u}):=\\frac{1}{2}\\int_\\Omega\\sum\\limits_{i=1}^k|\\nabla u_i|^2\\, dx\\\\\n&\\text{subject to } u_i=\\phi_i,\\quad \\text{ on } \\partial\\Omega, \\\\[10pt]\n&\\qquad u_i\\geq0, \\quad\\text{ and }\\quad u_i\\cdot u_j=0\\quad\\text{ in }\\Omega.\n\\end{split}\\right.\n\\end{equation}\n\n In \\cite{F1, BA, Av1, Av2, Av3}, the authors have proposed and analyzed the following numerical scheme for solving the limiting configuration of system \\eqref{f20} and problem \\eqref{continuous-model}:\n \\begin{equation*}\nu_{i}^{t+1} (x) =\\max\\Big(\\overline{u}^{t}_{i}(x)-\\sum\\limits_{j\\neq i}\\overline{u}^{t}_{j}(x),\\, 0\\Big)\n\\end{equation*}\nwhere $\\bar v(x)$ denotes the average of the values of $v$ over the neighbors of the mesh point $x$, and $t$ is the iteration index.\n\n\n\n\n\n\n\n\\section{Calculus on graphs and setting the problem }\nThis section is devoted to reviewing some facts about calculus on graphs and to setting up our
problem.\nLet ${\\mathcal X}=\\{x_1,\\cdots,x_n\\}$ denote the vertices of a graph with symmetric edge weights $w_{xy}$ between $x,y\\in {\\mathcal X}$.\nThe degree of a vertex $x$ is given by $d(x)=\\sum_{y\\in {\\mathcal X}} w_{xy}$. Let $\\ell^{2}({\\mathcal X})$ denote the set of functions $u: {\\mathcal X} \\rightarrow \\mathbb{R}$ equipped with the inner product\n\\[\n(u,v)=\\sum\\limits_{x\\in {\\mathcal X}} u(x)v(x),\n\\]\nfor functions $ u,v : {\\mathcal X}\\rightarrow \\mathbb{R}$.\nWe also define a vector field on the graph to be an antisymmetric function $V:{\\mathcal X}\\times {\\mathcal X}\\rightarrow \\mathbb{R}$, i.e. $V(x,y)=-V(y,x)$, and denote the space of all vector fields by $\\ell^2({\\mathcal X}^2)$.\nThe gradient of a function $ u\\in \\ell^{2}({\\mathcal X}) $ is the vector field\n\\[\n\\nabla u(x,y)=u(y)-u(x).\n\\]\nFor two vector fields $V_1, V_2$ the inner product is\n\\[\n(V_1, V_2)_{\\ell^{2}({\\mathcal X}^2)}= \\frac{1}{2} \\sum\\limits_{x,y\\in {\\mathcal X}}w_{xy} V_{1} (x, y) V_{2} (x, y),\n\\]\nso the norm of a vector field $V$ is $\\|V\\|_{\\ell^{2}({\\mathcal X}^2)}= \\sqrt{(V, V)_{\\ell^{2}({\\mathcal X}^2)}}$.\nThe graph divergence of a vector field $V$ is defined by\n\\[\n{\\rm div\\,} V(x)=\\sum_{y\\in {\\mathcal X}}w_{xy}V(x,y),\n\\]\nwhich satisfies the divergence formula\n\\[\n(\\nabla u, V)_{\\ell^{2}({\\mathcal X}^2)}=-(u,{\\rm div\\,} V).\n\\]\nThe unnormalized graph Laplacian ${\\mathcal L} $ of a function $u\\in \\ell^2({\\mathcal X})$ is defined as\n\\[\n{\\mathcal L} u(x):=-{\\rm div\\,}(\\nabla u)(x)=\\sum_{y\\in {\\mathcal X}}w_{xy}(u(x)-u(y)).\n\\]\nThe operator ${\\mathcal L}$ satisfies\n\\begin{equation}\\label{green-formula}\n({\\mathcal L} u,v)=(\\nabla u,\\nabla v)_{\\ell^2({\\mathcal X}^2)}.\n\\end{equation}\nIn the Appendix, we revisit some important tools for PDEs on graphs, such as the Poincar\\'e inequality and the maximum principle.\n\n\nWe consider a subset of the nodes $ \\Gamma \\subset {\\mathcal X} $ as the boundary of
the graph and define the admissible set\n\\[\n{\\mathcal K}:=\\left\\{\\mathbf{ u}=(u_1,\\cdots, u_k)\\in \\left(\\ell^2({\\mathcal X})\\right)^k: u_i=\\phi_i\\text{ on } \\Gamma\\text{ for }i=1,\\dots,k\\right\\},\n\\]\nwhere the boundary data $\\phi_i$ are known and satisfy the following assumption\n\\begin{equation}\\label{phi-assumption}\n \\phi_i(x)\\in \\{ 0,1\\}\\quad\\text{ for all }x\\in\\Gamma.\n\\end{equation}\nWe are going to solve the optimization problem\n\\begin{align}\\label{L2-Problem}\n&\\min_{\\mathbf{ u}\\in{\\mathcal K}}J(\\mathbf{ u}):=\\|\\nabla\\mathbf{ u}\\|^2=\\sum_{i=1}^k\\|\\nabla u_i\\|_{\\ell^2({\\mathcal X}^2)}^2\\notag\\\\\n & \\textrm{subject to: }\\\\\n & u_i(x)\\geq0, \\ \\forall x\\in {\\mathcal X}, \\\\\n&u_i(x)\\cdot u_j(x)=0\\ \\forall x\\in {\\mathcal X}\\text{ and }i\\neq j.\\notag\n\\end{align}\n\n\nThe following theorem states the existence of a solution to problem \\eqref{L2-Problem} and describes some of its properties.\n\n\\begin{theorem} \\label{property-minimizers}\nProblem \\eqref{L2-Problem} has a solution. Moreover, any solution satisfies\n\\begin{enumerate}[(i)]\n\\item\n${\\mathcal L} u_i(x)\\leq 0, \\text{ if } u_i(x)=0,\\text { and }x\\in {\\mathcal X}.$\n\n\\item\n${\\mathcal L} u_i(x)=0, \\text{ if } u_i(x)>0,\\text { and }x\\in {\\mathcal X}\\setminus \\Gamma.$\n\n\\item\nFor every $x\\in {\\mathcal X}\\setminus \\Gamma$, there is one component $u_i$ such that $u_i(x)>0$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nConsider a minimizing sequence $\\mathbf{ u}^n\\in {\\mathcal K}$ for problem \\eqref{L2-Problem}. By the Poincar\\'e inequality, Proposition \\ref{prop:Poincare inequaity}, we obtain that\n\\[\n\\|u_i^n\\|\\leq \\frac1{\\lambda_1}\\|\\nabla(u_i^n-\\phi_i)\\|+\\|\\phi_i\\|\\leq \\frac1{\\lambda_1}(\\|\\nabla u_i^n\\|+\\|\\nabla \\phi_i\\|)+\\|\\phi_i\\|.\n\\]\nThus for every $i=1,\\cdots, k$ the sequence $\\{u^n_i\\}$ is bounded.
Hence, there exists a subsequence such that for every component $i$\n\\[\nu^{n_j}_i\\rightarrow u_i.\n\\]\nIt is obvious that $\\mathbf{ u}=(u_1,\\cdots, u_k)$ satisfies the constraints in \\eqref{L2-Problem} and is a minimizer.\n\n$(i)$ To prove this part of the theorem, notice that if $u_i(x)=0$ for some $x\\in {\\mathcal X}$, then\n$${\\mathcal L} u_i(x)=\\sum_{y\\in {\\mathcal X}}w_{xy}(u_i(x)-u_i(y))=-\\sum_{y\\in {\\mathcal X}}w_{xy}u_i(y)\\leq 0.$$\n\n$(ii)$ Now consider the case $u_i(x)>0$ for some fixed node $x\\in {\\mathcal X}\\setminus\\Gamma$. Let us define\n$$ v_i=u_i+t\\delta_x, \\quad v_j=u_j \\text{ when }j\\neq i,$$\nwhere $\\delta_x$ is the delta function, i.e., $\\delta_x(y)=0$ for every $y\\neq x$ and $\\delta_x(x)=1$. We consider values of $t$ (possibly negative) such that $v_i(x)\\geq0$.\nObviously, $\\mathbf{ v}\\in{\\mathcal K}$ and satisfies the constraints in \\eqref{L2-Problem}.\nTherefore,\n\\begin{align*}\n0\\leq &\\|\\nabla\\mathbf{ v}\\|^2-\\|\\nabla\\mathbf{ u}\\|^2=\\|\\nabla v_i\\|^2-\\|\\nabla u_i\\|^2\\\\\n=&\\frac12\\sum_{y,z\\in {\\mathcal X}}w_{yz}\\left((v_i(z)-v_i(y))^2-(u_i(z)-u_i(y))^2\\right)\\\\\n=&\\sum_{y\\in{\\mathcal X}}w_{xy}\\left(t^2+2t(u_i(x)-u_i(y))\\right)\\\\\n=&t^2d(x)+2t{\\mathcal L} u_i(x).\n\\end{align*}\nSince $u_i(x)>0$, the parameter $t$ can take both small negative and small positive values, and letting $t\\rightarrow 0^\\pm$ we conclude that ${\\mathcal L} u_i(x)=0$.\n\n$(iii)$ Let $A:=\\{x\\in {\\mathcal X}\\setminus \\Gamma: u_1(x)=\\cdots=u_k(x)=0\\}$.
We claim that $A=\\emptyset$.\nOtherwise, since the graph is connected, there are some $x\\in A$ and a node $y$ with $w_{xy}\\neq0$ and $u_i(y)>0$ for some $i\\in\\{1,\\cdots,k\\}$.\nThus\n\\[\n{\\mathcal L} u_i(x)= - \\sum_{z\\in{\\mathcal X}}w_{xz} u_i(z) \\leq -w_{xy}u_i(y)<0.\n\\]\nNow choose the competitor $\\mathbf{ v}$ with\n\\[\n v_i=u_i+t\\delta_x, \\quad v_j=u_j \\text{ when }j\\neq i,\n\\]\nfor some $t\\geq0$ and repeat the calculation in the previous part to get\n\\[\n0\\leq \\|\\nabla\\mathbf{ v}\\|^2-\\|\\nabla\\mathbf{ u}\\|^2 = t^2d(x)+2t{\\mathcal L} u_i(x).\n\\]\nSince ${\\mathcal L} u_i(x)<0$, we can choose a small value of $t>0$ to arrive at a contradiction.\n\\end{proof}\n\n\n\\begin{remark}\nProblem \\eqref{L2-Problem} does not necessarily have a unique solution.\nFor example, in a symmetric configuration there are different choices for the classification.\nAs a toy example, consider a graph with four vertices $A$, $B$, $C$ and $D$.\nLet $w_{AB}=w_{BC}=w_{CD}=w_{AD}=1$ and $w_{AC}=w_{BD}=0$.\nAlso, $A$ and $C$ are boundary points with boundary data $u_1(A)=u_2(C)=1$ and $u_1(C)=u_2(A)=0$.\nThis problem has four solutions:\n\n\\begin{enumerate}[(i)]\n\\item\n$u_1(B)=u_1(D)=\\frac12, \\quad u_2(B)=u_2(D)=0,$\n\\item\n$u_2(B)=u_2(D)=\\frac12, \\quad u_1(B)=u_1(D)=0,$\n\\item\n$u_1(B)=u_2(D)=\\frac12, \\quad u_2(B)=u_1(D)=0,$\n\\item\n$u_1(D)=u_2(B)=\\frac12, \\quad u_1(B)=u_2(D)=0.$\n\\end{enumerate}\n\\end{remark}\n\n\n\n\\section{Gradient projection method}\\label{section:Gradient projection method}\nGradient projection is one of the methods we use to solve problem \\eqref{L2-Problem}.\nIn the sequel, we use the following notation for the averaging of a function\n\\[\n{\\mathcal A} u(x):= \\frac 1{d(x)}\\sum_{y\\in {\\mathcal X}} w_{xy} u(y),\n\\]\nwhere \n\\[\nd(x) = \\sum_{y\\in {\\mathcal X}} w_{xy},\n\\]\nand the admissible set\n\\[\n{\\mathcal S}:=\\left\\{\\mathbf{ u}=(u_1,\\cdots, u_k)\\in \\left(\\ell^2({\\mathcal X})\\right)^k: u_i=\\phi_i\\text{ on } \\Gamma, u_i\\geq0, u_i\\cdot u_j=0 \\text{ for
}i\\neq j\\right\\}.\n\\]\nWe also use the projection ${\\mathcal P}:\\left(\\ell^2({\\mathcal X})\\right)^k\\rightarrow {\\mathcal S}$ which is defined as follows.\nFor every $\\mathbf{ v}\\in \\left(\\ell^2({\\mathcal X})\\right)^k$ and every $x\\in {\\mathcal X}\\setminus \\Gamma$, first we find\n\\[\ni_x:=\\arg\\max_{1\\leq j\\leq k} (v_j(x))^+\n\\]\nand if it has more than one solution we choose the smallest index. ($v^+:=\\max(v,0)$.)\nThen we define $\\mathbf{ u}:={\\mathcal P}\\mathbf{ v}$ with $u_{i_x}(x)=(v_{i_x}(x))^+$ and $u_j(x)=0$ for $j\\neq i_x$.\nFor any $x\\in \\Gamma$, we obviously define $u_i(x)=\\phi_i(x)$.\n\n\nThe following lemma shows why ${\\mathcal P}$ is a projection.\n\n\\begin{lemma}\\label{lem:projection}\nConsider $\\mathbf{ v}\\in \\left(\\ell^2({\\mathcal X})\\right)^k$, then\n\\[\n\\|\\mathbf{ v}-{\\mathcal P}\\mathbf{ v}\\| \\leq \\|\\mathbf{ v}-\\mathbf{ w}\\|,\\quad \\text{ for all }\\mathbf{ w}\\in{\\mathcal S}.\n\\]\n\\end{lemma}\n\\begin{proof}\nConsider $\\mathbf{ w}\\in{\\mathcal S}$ and define the index function $\\sigma:{\\mathcal X}\\rightarrow \\{1,\\cdots, k\\}$ such that $w_j(x)=0$ for $j\\ne \\sigma(x)$.\nSo,\n\\[\\begin{split}\n\\|\\mathbf{ v}-\\mathbf{ w}\\|^2=&\\sum_{x\\in{\\mathcal X}}\\left((v_{\\sigma(x)}(x)-w_{\\sigma(x)}(x))^2+\\sum_{i\\ne \\sigma(x)}(v_i(x))^2\\right)\\\\\n=&\\sum_{x\\in{\\mathcal X}}\\sum_{i=1}^k (v_i(x))^2 + \\sum_{x\\in{\\mathcal X}}\\left((v_{\\sigma(x)}(x)-w_{\\sigma(x)}(x))^2 - (v_{\\sigma(x)}(x))^2\\right)\n\\end{split}\\]\nSimilarly we have,\n\\[\n\\|\\mathbf{ v}-{\\mathcal P}\\mathbf{ v}\\|^2=\\sum_{x\\in{\\mathcal X}}\\sum_{i=1}^k (v_i(x))^2 + \\sum_{x\\in{\\mathcal X}}\\left((v_{i_x}(x)-(v_{i_x}(x))^+)^2 - (v_{i_x}(x))^2\\right).\n\\]\nIt is enough to show\n\\begin{equation}\\label{grad-proj:eq1}\n(v_{i_x}(x)-(v_{i_x}(x))^+)^2 - (v_{i_x}(x))^2 \\le (v_{\\sigma(x)}(x)-w_{\\sigma(x)}(x))^2 - (v_{\\sigma(x)}(x))^2,\n\\end{equation}\nfor every $x\\in {\\mathcal X}$.\nIf 
$v_{i_x}(x)\\le 0$, then $v_{\\sigma(x)}(x)\\le 0$ by the definition of $i_x$.\nThus the left hand side of \\eqref{grad-proj:eq1} is zero, while the right hand side is nonnegative (recall that $w_{\\sigma(x)}(x)\\ge0$).\nIf $v_{i_x}(x)\\ge 0\\ge v_{\\sigma(x)}(x)$, the left hand side of \\eqref{grad-proj:eq1} is nonpositive while the right hand side is nonnegative.\nIf $v_{i_x}(x)\\ge v_{\\sigma(x)}(x) \\ge 0$, then \\eqref{grad-proj:eq1} holds since the left hand side equals $-(v_{i_x}(x))^2$ and the right hand side is at least $-(v_{\\sigma(x)}(x))^2\\ge -(v_{i_x}(x))^2$.\n\\end{proof}\n\n\nOur algorithm according to the gradient projection method is as follows:\n\\begin{enumerate}[(1)]\n\\item\nChoose an initial guess in ${\\mathcal S}$. It might be the extension of the boundary data given by $u_{i,0}=\\phi_i$ on $\\Gamma$ and $u_{i,0}=0$ in ${\\mathcal X}\\setminus\\Gamma$.\n\n\\item\nFor $t\\geq 0$, calculate the gradient of the cost function $J$ at $\\mathbf{ u}^t=(u_{1,t},\\cdots,u_{k,t})$, which equals\n\\[\n\\delta J(\\mathbf{ u}^t):=({\\mathcal L} u_{1,t},\\cdots,{\\mathcal L} u_{k,t}).\n\\]\n\n\\item\nUpdate each component by\n\\[\n\\mathbf{ u}^{t+1}:={\\mathcal P}(\\mathbf{ u}^{t}-\\frac 1d{\\mathcal L}\\mathbf{ u}^t)={\\mathcal P}({\\mathcal A}\\mathbf{ u}^t).\n\\]\n\n\\item\nIf $\\| \\mathbf{ u}^{t+1} - \\mathbf{ u}^t\\|$ is small, then stop the algorithm; otherwise set $t=t+1$ and iterate the previous steps.\n\\end{enumerate}\n\nThe following proposition explains why the algorithm works.\n\n\\begin{proposition}\nAssume $\\mathbf{ u}$ is a solution of problem \\eqref{L2-Problem}.\nConsider an arbitrary point $x\\in {\\mathcal X}\\setminus\\Gamma$ such that $u_i(x)>0$, then\n\\[u_i(x)={\\mathcal A} u_i(x) \\geq {\\mathcal A} u_j(x),\\qquad\\text{ for all }j\\neq i.\\]\n\\end{proposition}\n\\begin{proof}\nFor a fixed index $j\\neq i$, define a competitor $\\mathbf{ v}\\in{\\mathcal S}$\n\\[\nv_i:=u_i- u_i(x)\\delta_x,\\quad v_j:=u_j+ u_i(x)\\delta_x, \\quad v_\\ell:=u_\\ell,\\text{ for }\\ell\\neq i,j.\n\\]\nTherefore,\n\\[\\begin{split}\n0\\leq & \\|\\nabla\\mathbf{
v}\\|^2-\\|\\nabla\\mathbf{ u}\\|^2 = \\|\\nabla v_i\\|^2+\\|\\nabla v_j\\|^2-\\|\\nabla u_i\\|^2-\\|\\nabla u_j\\|^2 \\\\\n= &\\sum_{y}w_{xy}\\left((v_i(x)-v_i(y))^2+(v_j(x)-v_j(y))^2-(u_i(x)-u_i(y))^2-(u_j(x)-u_j(y))^2\\right)\\\\\n=&\\sum_{y\\neq x}w_{xy}\\left(u_i(y)^2+(u_i(x)-u_j(y))^2-(u_i(x)-u_i(y))^2-u_j(y)^2\\right)\\\\\n=&\\sum_{y\\neq x}2w_{xy}u_i(x)\\left(u_i(y)-u_j(y)\\right).\n\\end{split}\\]\nSince $u_i(x)>0$, we get\n\\[\n\\frac 1{d(x)}\\sum_{y}w_{xy}u_i(y)\\geq \\frac 1{d(x)}\\sum_{y}w_{xy}u_j(y)={\\mathcal A} u_j(x).\n\\]\nNow applying result $(ii)$ of Theorem \\ref{property-minimizers}, we obtain that $u_i(x)=\\frac 1{d(x)}\\sum\\limits_{y}w_{xy}u_i(y)$.\n\\end{proof}\n\n\nDefine the map ${\\mathcal G}:(\\ell_+^2({\\mathcal X}))^k\\longrightarrow {\\mathcal S}$ by ${\\mathcal G}\\mathbf{ u}=\\mathbf{ v}$, where\n\\[\nv_i =\\max \\left( u_i-\\sum_{j\\ne i} u_j , 0\\right),\n\\]\nand $\\ell_+^2({\\mathcal X})$ is the set of nonnegative functions.\nIf we use this map in place of the projection ${\\mathcal P}$ in the gradient projection algorithm, we obtain the segregation method, which we study in Section \\ref{cluste_seg}.\n\n\n\\begin{proposition}\nSuppose ${\\mathcal P}$ is the projection on ${\\mathcal S}$ defined in Section \\ref{section:Gradient projection method}.\nThen there is a positive constant $C_0$ such that\n\\[\n\\norm{{\\mathcal P} \\mathbf{ u} -\\mathbf{ u}} \\le \\norm{{\\mathcal G} \\mathbf{ u}-\\mathbf{ u}}\\le C_0\\norm{{\\mathcal P} \\mathbf{ u} -\\mathbf{ u}},\n\\]\nfor every $\\mathbf{ u}\\in (\\ell_+^2({\\mathcal X}))^k$.\n\\end{proposition}\n\\begin{proof}\nThe left inequality holds by Lemma \\ref{lem:projection}.\nFor a fixed node $x\\in{\\mathcal X}$, we need to show\n\\begin{equation}\\label{prop:segr-proj-eq1}\n\\sum_{j=1}^k|v_j(x)-u_j(x)|^2 \\le C_0 \\sum_{j=1}^k |w_j(x)-u_j(x)|^2,\n\\end{equation}\nwhere ${\\mathcal G}\\mathbf{ u}=\\mathbf{ v}$ and ${\\mathcal P}\\mathbf{
u}=\\mathbf{ w}$.\nLet $i:=\\arg\\max_{1\\leq j\\leq k} u_j(x)$.\nIf more than one index attains the maximum, then $\\mathbf{ v}(x)={\\mathcal G}(\\mathbf{ u})(x)=0$.\nThus \\eqref{prop:segr-proj-eq1} holds for $C_0\\ge 2$.\n\nNow assume that $i$ is the unique solution of $i:=\\arg\\max_{1\\leq j\\leq k} u_j(x)$. Hence,\n\\[\nw_i(x)=u_i(x),\\text{ and } w_j(x)=v_j(x)=0\\text{ for }j\\ne i.\n\\]\nTherefore, using the definition of $v_i$ and applying the Cauchy-Schwarz inequality we obtain\n\\[\\begin{split}\n\\sum_{j=1}^k|v_j(x)-u_j(x)|^2 &= \\left(\\sum_{j\\ne i} u_j(x)\\right)^2 + \\sum_{j\\ne i}|u_j(x)|^2 \\\\\n&\\le ((k-1)+1)\\sum_{j\\ne i}|u_j(x)|^2 =k\\sum_{j=1}^k|w_j(x)-u_j(x)|^2,\n\\end{split}\\]\nwhich implies that \\eqref{prop:segr-proj-eq1} holds for $C_0\\ge k$.\n\\end{proof}\n\n\n\n\\section{Penalization method}\n\nIn this section, we apply the penalization method to solve problem \\eqref{L2-Problem}.\nSince finding the solution directly is not efficient (the optimization problem \\eqref{L2-Problem} is a problem with $(n-m)^k$ parameters), we would prefer to solve a PDE instead.\nIn that case, however, a PDE can only be written at the points where $u_i>0$, and this subdomain is not known in advance.\nIn fact, we have a free boundary problem: if we knew the domain $\\{u_i>0\\}$, we would be able to find the solution.\nIn order to overcome this difficulty, we relax the constraint with a penalty term and approximate the solution of the original problem \\eqref{L2-Problem}.\n\n\nIndeed, we consider the following problem\n\\begin{equation}\\label{L2-Problem*}\n\\min_{\\mathbf{ u}\\in{\\mathcal K}}J_\\varepsilon(\\mathbf{ u}):=\\sum_{i=1}^k\\|\\nabla u_i\\|_{\\ell^2({\\mathcal X}^2)}^2+\\frac1\\varepsilon\\sum_{i\\neq j}(u_i^2,u_j^2).\n\\end{equation}\nIt is straightforward to see that this problem has a solution, and any solution satisfies\n\\begin{equation}\\label{PDE-penalized}\n{\\mathcal L} u_i+\\frac{u_i}\\varepsilon\\sum_{j\\neq i}u_j^2=0,\\quad\\text{ in 
}{\\mathcal X}\\setminus\\Gamma.\n\\end{equation}\nFurthermore, we know that the solution is nonnegative due to the maximum principle, Proposition \\ref{Maximum principle}.\n\n\\begin{theorem}\nLet $\\mathbf{ u}^\\varepsilon$ be a solution of \\eqref{L2-Problem*} for every $\\varepsilon>0$.\nFor any sequence $\\varepsilon_n\\rightarrow0$, there is a subsequence of $\\mathbf{ u}^{\\varepsilon_n}$ which converges to a minimizer of \\eqref{L2-Problem}.\n\\end{theorem}\n\\begin{proof}\nLet $\\mathbf{ v}$ be an arbitrary minimizer of \\eqref{L2-Problem}; then $J_\\varepsilon(\\mathbf{ v})=J(\\mathbf{ v})$ thanks to the constraint in \\eqref{L2-Problem}. So,\n$$J_\\varepsilon(\\mathbf{ u}^\\varepsilon)\\leq J_\\varepsilon(\\mathbf{ v})=J(\\mathbf{ v})=:\\Lambda.$$\nTherefore $\\|\\nabla \\mathbf{ u}^\\varepsilon\\|\\leq \\sqrt\\Lambda$ is uniformly bounded, and then by the Poincar\\'e inequality $\\|\\mathbf{ u}^\\varepsilon\\|$ is bounded as well, since\n$$\\lambda_1\\|u_i^\\varepsilon-\\phi_i\\|\\leq \\|\\nabla(u_i^\\varepsilon-\\phi_i)\\|\\leq\\sqrt\\Lambda+\\|\\nabla\\phi_i\\|.$$\nHence, along a subsequence we can assume that $\\mathbf{ u}^{\\varepsilon_n}\\rightarrow\\mathbf{ u}\\in{\\mathcal K}$.\nWe need to show that $\\mathbf{ u}$ satisfies the constraints of \\eqref{L2-Problem} and is a minimizer.\nFirst, we have\n$$\n\\frac1{\\varepsilon_n}\\sum_{i\\neq j}\\left((u_i^{\\varepsilon_n})^2,(u_j^{\\varepsilon_n})^2\\right)\\leq J_{\\varepsilon_n}(\\mathbf{ u}^{\\varepsilon_n})\\leq \\Lambda,\n$$\nso,\n$$\n\\sum_{i\\neq j}\\left((u_i^{\\varepsilon_n})^2,(u_j^{\\varepsilon_n})^2\\right)\\longrightarrow0.\n$$\nThus $(u_i^2,u_j^2)=0$, and taking into account that each $u^\\varepsilon_i$ is nonnegative, we obtain that the constraints in \\eqref{L2-Problem} hold for $\\mathbf{ u}$.\nTo close the argument, note that\n$$\\|\\nabla\\mathbf{ u}\\|^2=\\lim_{\\varepsilon_n\\rightarrow0}\\|\\nabla\\mathbf{ u}^{\\varepsilon_n}\\|^2\\leq \\liminf_{\\varepsilon_n\\rightarrow0}J_{\\varepsilon_n}(\\mathbf{ u}^{\\varepsilon_n})\\leq 
\\Lambda.$$\nSo, $J(\\mathbf{ u})=\\Lambda$ and $\\mathbf{ u}$ is a minimizer.\n\\end{proof}\n\nIn the sequel we introduce an algorithm to solve problem \\eqref{L2-Problem*}, or its equivalent version \\eqref{PDE-penalized}.\nThe latter is a nonlinear system of PDEs and is not easy to solve directly.\nTo describe our algorithm, we define the following sequence, which converges to the solution of \\eqref{PDE-penalized}.\nFirst, consider the harmonic extension $u_{i,0}$ of the boundary data, given by\n$$\n\\left\\{\\begin{array}{ll}\n{\\mathcal L} u_{i,0}=0, & \\text{ in }{\\mathcal X}\\setminus\\Gamma,\\\\[8pt]\nu_{i,0}=\\phi_i& \\text{ on }\\Gamma,\n\\end{array}\\right.\n$$\nwhich is a nonnegative function according to the maximum principle.\nNext, given nonnegative functions $\\mathbf{ u}_m:=(u_{1,m},\\cdots,u_{k,m})$, let $u_{i,m+1}$ be the solution of the following system\n$$\n\\left\\{\\begin{array}{ll}\n{\\mathcal L} u_{i,m+1}+\\frac{u_{i,m+1}}{\\varepsilon}\\sum\\limits_{j\\neq i}u_{j,m}^2=0, & \\text{ in }{\\mathcal X}\\setminus\\Gamma,\\\\[10pt]\nu_{i,m+1}=\\phi_i,& \\text{ on }\\Gamma.\n\\end{array}\\right.\n$$\nThe following theorem shows why our algorithm works for solving problem \\eqref{PDE-penalized}.\n\n\\begin{theorem}\\label{thm-order-seq}\nSuppose that the boundary data $\\phi_i$ satisfy \\eqref{phi-assumption}.\nThen the sequence $\\mathbf{ u}_m$ satisfies the following ordering\n\\begin{equation}\\label{order-sequence-u}\n1\\geq u_{i,0}\\geq u_{i,2}\\geq\\cdots\\geq u_{i,2m}\\geq\\cdots\\geq u_{i,2m+1}\\geq\\cdots\\geq u_{i,3}\\geq u_{i,1}\\geq0.\n\\end{equation}\nMoreover, the limit of this sequence is the solution of \\eqref{PDE-penalized}.\n\\end{theorem}\n\n\n\\begin{proof}\n{\\bf Step 1: } We show that $u_{i,m}$ is nonnegative. \\\\\nThis follows from the maximum principle, Proposition \\ref{Maximum principle}.\nFor $m=0$, it is immediate since $u_{i,0}$ is a harmonic function.
In fact, we consider $p(x)\\equiv0$ in Proposition \\ref{Maximum principle}.\nTo show $u_{i,m+1}\\geq0$, apply Proposition \\ref{Maximum principle} again with the nonnegative function $p(x)=\\frac1{\\varepsilon}\\sum_{j\\neq i}u_{j,m}^2$.\n\n\\medskip\n{\\bf Step 2: } $u_{i,0}\\leq 1$. \\\\\nApply the maximum principle to the harmonic function $1 - u_{i,0}$ and recall assumption \\eqref{phi-assumption}.\n\n\n\\medskip\n{\\bf Step 3: } In this step we show that $u_{i,m}\\leq u_{i,0}$.\\\\\nWe just need to note that ${\\mathcal L} u_{i,m+1}\\leq 0={\\mathcal L} u_{i,0}$. Then the maximum principle yields that $u_{i,m+1}\\leq u_{i,0}$.\n\n\\medskip\n{\\bf Step 4:} Here, we claim that $u_{i,2}\\geq u_{i,1}$.\\\\\nBy the result of Step 3, we can write\n\\begin{align*}\n0&={\\mathcal L} u_{i,2}+\\frac{u_{i,2}}{\\varepsilon}\\sum_{j\\neq i}u_{j,1}^2\\\\\n&\\leq {\\mathcal L} u_{i,2}+\\frac{u_{i,2}}{\\varepsilon}\\sum_{j\\neq i}u_{j,0}^2.\n\\end{align*}\nComparing this with the equation for $u_{i,1}$, the maximum principle yields that $u_{i,2}\\geq u_{i,1}$.\n\n\n\\medskip\n{\\bf Step 5:} Now we close the argument by induction.
Assume that\n\\begin{equation}\\label{order-u-induction}\nu_{i,0}\\geq u_{i,2}\\geq\\cdots\\geq u_{i,2m}\\geq u_{i,2m-1}\\geq\\cdots\\geq u_{i,1}\\geq0\n\\end{equation}\nholds for some $m\\geq1$; we extend the chain to $m+1$.\nBy the inequality\n$$\n0={\\mathcal L} u_{i,2m+1}+\\frac{u_{i,2m+1}}{\\varepsilon}\\sum_{j\\neq i}u_{j,2m}^2\\geq {\\mathcal L} u_{i,2m+1}+\\frac{u_{i,2m+1}}{\\varepsilon}\\sum_{j\\neq i}u_{j,2m-1}^2\n$$\nwe can apply Proposition \\ref{Maximum principle} to the function $(u_{i,2m}-u_{i,2m+1})$ with $p=\\frac{1}{\\varepsilon}\\sum_{j\\neq i}u_{j,2m-1}^2$\nto deduce that $u_{i,2m}\\geq u_{i,2m+1}$.\nSimilarly, we have\n$$\n0={\\mathcal L} u_{i,2m+1}+\\frac{u_{i,2m+1}}{\\varepsilon}\\sum_{j \\neq i}u_{j,2m}^2\\leq {\\mathcal L} u_{i,2m+1}+\\frac{u_{i,2m+1}}{\\varepsilon}\\sum_{j \\neq i}u_{j,2m-2}^2\n$$\naccording to the induction assumption $u_{j,2m-2}\\geq u_{j,2m}$.\nComparing with the equation for $u_{i,2m-1}$ we obtain $u_{i,2m+1}\\geq u_{i, 2m-1}$.\nNow repeat this argument to show that $u_{i,2m}\\geq u_{i,2m+2}\\geq u_{i,2m+1}$.\n\n\n\\medskip\n{\\bf Step 6:}\nFrom \\eqref{order-sequence-u}, we know that there are limits $\\overline u_i$ and $\\underline u_i$ with\n\\begin{align*}\nu_{i,2m}\\longrightarrow \\overline u_i,\\\\\nu_{i,2m+1}\\longrightarrow \\underline u_i.\n\\end{align*}\nThese limits satisfy\n\\begin{align}\\label{limit-equation}\n\\left\\{\\begin{array}{ll}\n{\\mathcal L}\\overline u_i+\\frac{\\overline u_i}{\\varepsilon}\\sum_{j\\neq i}\\underline u_j^2=0, \\quad\\text{ in }{\\mathcal X}\\setminus\\Gamma\\\\[10pt]\n{\\mathcal L}\\underline u_i+\\frac{\\underline u_i}{\\varepsilon}\\sum_{j\\neq i}\\overline u_j^2=0, \\quad\\text{ in }{\\mathcal X}\\setminus\\Gamma\\\\[10pt]\n\\overline u_i=\\underline u_i=\\phi_i\\quad\\text{ on }\\Gamma.\n\\end{array}\\right.\n\\end{align}\nMultiply both equations, in the $\\ell^2({\\mathcal X})$ inner product, by $\\overline u_i - \\underline u_i$ and subtract to
get\n\\[\n\\varepsilon\\|\\nabla (\\overline u_i - \\underline u_i)\\|_{\\ell^2({\\mathcal X}^2)}^2 = \\Big( \\underline u_i (\\overline u_i - \\underline u_i), \\sum_{j\\neq i}\\overline u_j^2\\Big)-\\Big( \\overline u_i (\\overline u_i - \\underline u_i), \\sum_{j\\neq i}\\underline u_j^2\\Big).\n\\]\nIt is worth noticing that although the equations hold only in ${\\mathcal X}\\setminus\\Gamma$, since $\\overline u_i - \\underline u_i=0$ on $\\Gamma$ we are still able to utilize the relation \\eqref{green-formula}.\nSumming over $i$, we obtain\n\\begin{align*}\n\\varepsilon\\|\\nabla (\\overline \\mathbf{ u} - \\underline \\mathbf{ u})\\|_{\\ell^2({\\mathcal X}^2)}^2\n= & \\sum_i \\Big( \\underline u_i \\overline u_i , \\sum_{j\\neq i}(\\overline u_j^2+\\underline u_j^2)\\Big)\n-\\sum_i \\Big( \\underline u_i^2, \\sum_{j\\neq i}\\overline u_j^2 \\Big)\n-\\sum_i \\Big( \\overline u_i^2, \\sum_{j\\neq i}\\underline u_j^2 \\Big) \\\\\n= & \\sum_{i,j} \\big( \\underline u_i \\overline u_i , \\overline u_j^2+\\underline u_j^2\\big) - \\sum_i \\big( \\underline u_i \\overline u_i , \\overline u_i^2+\\underline u_i^2\\big) \\\\\n& - 2 \\sum_{i,j} \\big( \\underline u_i^2, \\overline u_j^2 \\big) + 2 \\sum_i \\big( \\underline u_i^2, \\overline u_i^2 \\big)\\\\\n= & \\sum_{x\\in {\\mathcal X}} \\sum_{i,j} \\underline u_i(x) \\overline u_i(x) (\\overline u_j^2(x)+\\underline u_j^2(x)) \\\\\n& - \\sum_{x\\in{\\mathcal X}} \\sum_i \\underline u_i(x) \\overline u_i(x) (\\overline u_i(x)-\\underline u_i(x))^2 \\\\\n& -2 \\sum_{x\\in {\\mathcal X}} \\Big(\\sum_i (\\underline u_i(x))^2 \\Big)\\Big( \\sum_i (\\overline u_i(x))^2 \\Big) \\\\\n\\leq & 2 \\sum_{x\\in {\\mathcal X}} \\left(\\sum_{i} \\underline u_i(x) \\overline u_i(x) - \\Big(\\sum_i (\\underline u_i(x))^2 \\Big)\\Big( \\sum_i (\\overline u_i(x))^2 \\Big) \\right) \\leq 0,\n\\end{align*}\nwhere we have used the relation $0\\leq \\underline u_i \\leq \\overline u_i \\leq 1$ and in the last line the Cauchy-Schwarz
inequality has been applied.\n\nThen $\\|\\nabla (\\overline \\mathbf{ u} - \\underline \\mathbf{ u})\\|_{\\ell^2({\\mathcal X}^2)}^2\\leq 0$ and so $\\overline \\mathbf{ u} - \\underline \\mathbf{ u}$ is constant in ${\\mathcal X}$. Combining this with the boundary condition implies that $\\overline \\mathbf{ u} = \\underline \\mathbf{ u}$ in ${\\mathcal X}$.\nRecalling \\eqref{limit-equation}, we conclude that $\\underline u_i =\\overline u_i$ is a solution of \\eqref{PDE-penalized}.\n\\end{proof}\n\n\n\n\n\n\n\n\n \\section{The main algorithm for clustering}\\label{cluste_seg}\n In the previous sections of the paper we considered the minimization problem \\eqref{L2-Problem}, which unfortunately has no unique solution over connected graphs. In the current section, in order to overcome the lack of uniqueness, we consider a different functional and prove the existence and uniqueness of its minimizer. The definition of the new functional is inspired by the numerical results on the spatial segregation of reaction-diffusion systems (see \\cite{Av1}).\n\n\\subsection{Existence and uniqueness of a minimizer}\n\nWe introduce the discrete counterpart of the spatial segregation problem defined on connected graphs.\nIn the rest of the paper the notation\n\\[\n\\hat{z}_q=z_q-\\sum_{j\\neq q}z_j,\n\\]\nfor elements $(z_1,z_2,\\dots,z_k)$, will play a crucial role.\nLet $\\overline{u}_{i}(x_{l} )$ for $ i=1,2,\\dots,k$ denote the weighted average value of $u_{i}$ over all neighbors of $x_{l}:$\n\\[\n\\overline{u}_{i}(x_{l})= \\frac{1}{\\deg(x_{l})} \\sum_{p \\sim l} w_{lp} u_{i} (x_p),\n\\]\nwhere\n\\[\n\\deg(x_{l})= \\sum_{(x_l,y)\\in E} w_{x_ly},\n\\]\nand $V$ and $E$ stand for the sets of vertices and edges, respectively.
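For readers who wish to experiment, the degree, the weighted neighbor average $\overline{u}_{i}(x_{l})$ and the quantity $\hat{u}_i$ can all be computed in vectorized form. The following Python sketch is purely illustrative: the weight matrix `W` and the value matrix `U` are assumptions of the example, not data from the paper.

```python
import numpy as np

# Symmetric weighted adjacency matrix of a small connected graph on 3 vertices
# (hypothetical example data; W[l, p] = w_{lp}, zero on the diagonal).
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])

# U[i, l] = u_i(x_l) for k = 2 functions on the 3 vertices.
U = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

deg = W.sum(axis=1)        # deg(x_l) = sum of incident edge weights
U_bar = (U @ W.T) / deg    # weighted neighbor averages \bar u_i(x_l)

# \hat u_i = u_i - sum_{j != i} u_j, computed for every vertex at once.
U_hat = 2.0 * U - U.sum(axis=0)

print(U_bar)
print(U_hat)
```

Note that the identity $\hat u_i = 2u_i - \sum_j u_j$ is what makes the one-line vectorized computation possible.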
We will regard the graph ${\\mathcal X}$ as a tuple $(V, E)$ in the rest of this section.\n\n\n\nWhen ${\\mathcal X}$ is a connected graph consisting of a finite number of points, it turns out that we have to consider a slightly different functional (see \\cite[Section $2$]{Av1}). Since $\\mathcal L$ is a self-adjoint operator, we set\n \\begin{equation}\n\\label{sun2}\nJ(u_1,\\dots,u_k)=\\frac{1}{2} \\sum_{i=1}^{k} \\|\\nabla u_i\\|^2_{\\ell^{2}({\\mathcal X}^2)}- \\sum_{i\\neq j} \\left(\\nabla u_i, \\nabla u_j\\right)_{\\ell^{2}({\\mathcal X}^2)}, \\end{equation}\nover the set\n\\begin{equation}\\label{disc_min_set}\n\t\\mathcal K=\\left\\{\\mathbf{ u}=(u_1,\\cdots, u_k)\\in \\left(\\ell^2({\\mathcal X})\\right)^k: u_i=\\phi_i\\text{ on } \\Gamma, u_i\\geq0, u_i\\cdot u_j=0 \\text{ for }i\\neq j\\right\\}.\n\\end{equation}\n\n\n\n\n\n\n\n\\begin{theorem}\n\tThe following minimization problem\n\t\\begin{equation}\\label{disc_min_problem}\n\t\\inf_{\\mathcal K}J(u_1,u_2,\\dots,u_k)\n\t\\end{equation}\n\thas a solution.\n\\end{theorem}\n\n\\begin{proof}\nThe proof follows the same lines as that of Theorem $2$ in \\cite{Av1}. In \\cite{Av1} the functional is defined for a standard finite difference scheme, but the argument carries over to connected graphs as well.\n\\end{proof}\n\nNow, by following the proofs of Proposition $1$ and Lemma $2$ for $F_l(x,s)=0$ in \\cite{Av1}, we can observe that similar results hold for connected graphs in place of the finite difference discretization.
Indeed, it is worth noticing that the standard finite difference grid is itself a particular case of a connected graph.\n\nThus, we conclude the following result:\n\n\\begin{theorem}\\label{limit_pde_discrete}\n\tFor every minimizer $(u_1,\\dots,u_k)\\in \\mathcal K,$ the following properties hold:\n\\begin{itemize}\n\t\\item $\\mathcal{L}\\hat{u}_i(x) =0 \\;\\; \\text{whenever} \\;\\; u_i(x)>0.$\n\t\\item $\\mathcal{L}\\hat{u}_i(x)\\geq 0 \\;\\; \\text{whenever} \\;\\; x\\in {\\mathcal X} \\setminus\\Gamma.$\n\\end{itemize}\n\\end{theorem}\n\n\n\nTo prove the uniqueness of the minimizer $(u_1,\\dots,u_k)\\in \\mathcal K$ one needs some technical lemmas.\n\\begin{lemma}\\label{lemma1}\n\tLet ${\\mathcal X}=(V,E)$ be a connected graph. If two vectors $(u_{1},u_{2},\\dots,u_{k})$ and $(v_{1},v_{2},\\dots,v_{k})$ are minimizers of \\eqref{disc_min_problem}, then the following equality holds:\n\t\\[\n\t\\max_{x\\in {\\mathcal X}}\\left(\\hat{u}_l(x)-\\hat{v}_l(x)\\right)=\\max_{\\{ x\\in {\\mathcal X}\\;:\\; u_l(x)\\leq v_l(x)\\}}\\left(\\hat{u}_l(x)-\\hat{v}_l(x)\\right),\n\t\\]\n\tfor all $l=1,2,\\dots,k$.\n\\end{lemma}\n\n\\begin{proof}\n\tWe argue by contradiction.
Suppose for some $l_0$ we have\n\t\\begin{equation}\\label{init_assmp}\n\t\t\\begin{multlined}\n\t\t\t\\hat{u}_{l_0}(x_0)-\\hat{v}_{l_0}(x_0)=\n\t\t\t\\max_{x\\in {\\mathcal X}}(\\hat{u}_{l_0}(x)-\\hat{v}_{l_0}(x))=\\\\=\n\t\t\t\\max_{\\{ x\\in {\\mathcal X}\\;:\\; u_{l_0}(x)> v_{l_0}(x)\\}}(\\hat{u}_{l_0}(x)-\\hat{v}_{l_0}(x))>\n\t\t\t\\max_{\\{ x\\in {\\mathcal X}\\;:\\; u_{l_0}(x)\\leq v_{l_0}(x)\\}}(\\hat{u}_{l_0}(x)-\\hat{v}_{l_0}(x)).\n\t\t\\end{multlined}\n\t\\end{equation}\t\nIt is easy to observe that the following simple chain of inclusions holds:\n\t\\begin{equation}\\label{incl_chain}\n\t\t\\{u_l(x)> v_l(x)\\}\\subset\\{\\hat{u}_l(x)> \\hat{v}_l(x)\\}\\subset\\{u_l(x)\\geq v_l(x)\\}.\n\t\\end{equation}\n\tWe see that $ u_{l_0}(x_0)> v_{l_0}(x_0)\\geq 0 $ implies\n\t$\\hat{u}_{l_0}(x_0)>\\hat{v}_{l_0}(x_0)$. On the other hand, Theorem \\ref{limit_pde_discrete} gives us\n\t$$\n \\mathcal{L}\\hat{u}_{l_0}(x_0)= 0,\n\t$$\n\tand\t\n\t$$\n \\mathcal{L}\\hat{v}_{l_0}(x_0)\\geq 0.\t \t\n\t$$\n\tTherefore \t\n\t\\begin{equation*}\n\t\\mathcal{L}(\\hat{u}_{l_0}-\\hat{v}_{l_0})(x_0)\\leq 0.\n\t\\end{equation*}\n\tThus,\n\t\n\t\\begin{align}\n\t\t0 < \\left(\\hat{u}_{l_0}(x_0)- \\hat{v}_{l_0}(x_0)\\right)&\\leq\n\t \\frac{1}{\\deg(x_0)}\\sum_{y\\in {\\mathcal X}}w_{x_{0}y}\\left(\\hat{u}_{l_0}(y)-\\hat{v}_{l_0}(y)\\right)\\nonumber,\n\t\\end{align}\n\twhich implies that $\\hat{u}_{l_0}(x_0)-\\hat{v}_{l_0}(x_0)=\\hat{u}_{l_0}(y)-\\hat{v}_{l_0}(y)>0,$ whenever $w_{x_{0}y}\\ne0$. Due to the chain \\eqref{incl_chain}, we have ${u}_{l_0}(y)\\geq {v}_{l_0}(y)$. According to our assumption \\eqref{init_assmp}, the only possibility is ${u}_{l_0}(y)>{v}_{l_0}(y)$ for every such $y$.\n\tNow we can repeat the previous steps for all $y\\in V$ such that $(x_0,y)\\in E,$ and then for each one we will get corresponding neighbours with the same strict inequality, and so on.
Since the graph ${\\mathcal X}$ is connected, one can always find a shortest path from a given vertex $y$ to a vertex belonging to $\\Gamma.$ Continuing the above procedure along this path we finally reach a vertex on $\\Gamma,$ where, as we know, ${u}_{l_0}(x)={v}_{l_0}(x)={\\phi}_{l_0}(x)$ for all $x\\in\\Gamma$. Hence, the strict inequality fails, which implies that our initial assumption \\eqref{init_assmp} is false. Observe that the same arguments can be applied if we interchange the roles of ${u}_{l}(x)$ and ${v}_{l}(x)$. Thus, we also have\n\t\\[\n\t\\max_{x\\in V}\\left(\\hat{v}_l(x)-\\hat{u}_l(x)\\right)=\\max_{\\{ v_l(x)\\leq u_l(x)\\}}\\left(\\hat{v}_l(x)-\\hat{u}_l(x)\\right),\n\t\\]\n\tfor every $l=1,2,\\dots,k$.\n\t\n\t\n\n\tIn particular, for every fixed $l=1,2,\\dots,k$ and $x\\in V$ we have\t\\begin{multline}\\label{double_ineq}\n\t\t-\\max_{\\{v_l(x)\\leq u_l(x)\\}}(\\hat{v}_l(x)-\\hat{u}_l(x))=\t-\\max\\limits_{x\\in V}(\\hat{v}_l(x)-\\hat{u}_l(x))\\leq\\\\\\leq \\hat{u}_l(x)-\\hat{v}_l(x) \\leq \t\\max\\limits_{x\\in V}(\\hat{u}_l(x)-\\hat{v}_l(x))=\\max_{\\{u_l(x)\\leq v_l(x)\\}}(\\hat{u}_l(x)-\\hat{v}_l(x)).\n\t\\end{multline}\n\\end{proof}\n\nThanks to Lemma \\ref{lemma1}, in the sequel we will use the following notations:\n$$\nA:=\\max_l\\;\\left(\\max\\limits_{x\\in V}(\\hat{u}_l(x)-\\hat{v}_l(x))\\right)=\\max_l\\;\\left(\\max\\limits_{\\{u_l(x)\\leq v_l(x)\\}}(\\hat{u}_l(x)-\\hat{v}_l(x))\\right),\n$$\nand\n$$\nB:=\\max_l\\;\\left(\\max\\limits_{x\\in V}(\\hat{v}_l(x)-\\hat{u}_l(x))\\right)=\\max_l\\;\\left(\\max\\limits_{\\{v_l(x)\\leq u_l(x)\\}}(\\hat{v}_l(x)-\\hat{u}_l(x))\\right).\n$$\nThe next lemma we state without proof; it can be easily adapted from \\cite[Lemma $4$]{Av1}.\n\\begin{lemma}\\label{lemma2}\n\tLet ${\\mathcal X}=(V,E)$ be a connected graph. Assume that two vectors $(u_{1},u_{2},\\dots,u_{k})$ and $(v_{1},v_{2},\\dots,v_{k})$ are minimizers of \\eqref{disc_min_problem}.
For these minimizers let $A$ and $B$ be as defined above. If $A>0$ and it is attained for some $l_0$, then $A=B>0$ and there exist some $t_0\\neq l_0$ and $y_0\\in V$ such that\n\t$$\n\t0<\\hat{v}_{t_0}(y_0)-\\hat{u}_{t_0}(y_0)=B=A.\n\t$$\n\\end{lemma}\n\nWe are now ready to prove the uniqueness of the minimizer.\n\\begin{theorem}\n\tThe minimization problem \\eqref{disc_min_problem} has a unique solution.\n\\end{theorem}\n\\begin{proof}\n\tLet $(u_{1},u_{2},\\dots,u_{k})$ and $(v_{1},v_{2},\\dots,v_{k})$ be two minimizers of \\eqref{disc_min_problem}, and define $A$ and $B$ as above; it suffices to rule out the case $A>0$. If we assume that $A\\leq 0,$ then according to Lemma \\ref{lemma2}, we get $B\\leq 0$. But if $A$ and $B$ are non-positive, then the uniqueness follows. Indeed, due to \\eqref{double_ineq} we have the following obvious inequalities\n\t$$\n\t0\\leq -B\\leq \\hat{u}_l(x)-\\hat{v}_l(x)\\leq A\\leq 0.\n\t$$\n\tThis provides that for every $l=\\overline{1,k}$ and $x\\in V$ we have $\\hat{u}_l(x)=\\hat{v}_l(x),$ which in turn implies $$\n\t{u}_l(x)={v}_l(x).\n\t$$\n\tNow suppose $A>0$. Our aim is to prove that this case leads to a contradiction. Let the value $A$ be attained for some $l_0\\in\\overline{1,k}$; then\n\tdue to Lemma \\ref{lemma2} there exist $y_0\\in V$ and $t_0\\neq l_0$ such that\n\t\\begin{align*}\n\t\t0<\\hat{v}_{t_0}(y_0)-\\hat{u}_{t_0}(y_0)=B=A.\n\t\\end{align*}\n\tSince $\\hat{v}_{t_0}(y_0)>\\hat{u}_{t_0}(y_0)$ implies ${v}_{t_0}(y_0)\\geq {u}_{t_0}(y_0),$ we can repeat the same steps as in the proof of Lemma \\ref{lemma1} to obtain\n\t$$\n\t\\left(\\hat{v}_{t_0}(y_0)- \\hat{u}_{t_0}(y_0)\\right)\\leq \\frac{1}{\\deg(y_0)}\\sum_{(y_0,z)\\in E}w_{y_{0}z}\\left(\\hat{v}_{t_0}(z)-\\hat{u}_{t_0}(z)\\right).\n\t$$\n\tThis implies $A=\\hat{v}_{t_0}(y_0)-\\hat{u}_{t_0}(y_0)=\\hat{v}_{t_0}(z)-\\hat{u}_{t_0}(z)>0$ for all $(y_0,z)\\in E$. The chain \\eqref{incl_chain} provides that for all $(y_0,z)\\in E,$ we have ${v}_{t_0}(z)\\geq{u}_{t_0}(z)$. Since the graph ${\\mathcal X}$ is connected, one can always find a shortest path from $y_0$ to some vertex $w\\in\\Gamma.$ Assume the vertices along this path are $y_0;y_1;\\dots;y_{q-1};y_q=w.$ Hence, for every $0 \\leq j\\leq q-1,$ we have $(y_j,y_{j+1})\\in E,$ i.e.
every vertex $y_{j+1}$ is a neighbor of both $y_{j}$ and $y_{j+2}.$\n\t\n\t\n\t\n\t\n\tAccording to the above arguments, for the neighbor vertex $y_1\\in V$ we proceed as follows: If ${v}_{t_0}(y_1)>{u}_{t_0}(y_1),$ then obviously\n\t$$\n\t\\left(\\hat{v}_{t_0}(y_1)- \\hat{u}_{t_0}(y_1)\\right)\\leq \\frac{1}{\\deg(y_1)}\\sum_{(y_1,z)\\in E}w_{y_{1}z}\\left(\\hat{v}_{t_0}(z)-\\hat{u}_{t_0}(z)\\right).\n\t$$\n\tThis, as we saw a few lines above, leads to $A=\\hat{v}_{t_0}(y_1)-\\hat{u}_{t_0}(y_1)=\\hat{v}_{t_0}(z)-\\hat{u}_{t_0}(z)>0$ for all $(y_1,z)\\in E$. In particular,\n\t$A = \\hat{v}_{t_0}(y_2)-\\hat{u}_{t_0}(y_2)> 0.$\n\t\n\tIf ${v}_{t_0}(y_1)={u}_{t_0}(y_1),$ then due to $\\hat{v}_{t_0}(y_1)-\\hat{u}_{t_0}(y_1)=A=B>0,$ there exists some $\\lambda_0\\neq t_0$ such that\n\t$$\n\t0\\leq{v}_{\\lambda_0}(y_1)<{u}_{\\lambda_0}(y_1).\n\t$$\n\tNow ${u}_{\\lambda_0}(y_1)>0$ implies ${u}_{l}(y_1)=0$ for all $l\\neq \\lambda_0,$ and particularly ${v}_{t_0}(y_1)={u}_{t_0}(y_1)=0.$\n\tFollowing the definition of $A,$ we get\n\t$$\n\tA={u}_{\\lambda_0}(y_1)-\\sum_{l\\neq t_0}{v}_{l}(y_1)\\geq \\hat{u}_{\\lambda_0}(y_1)-\\hat{v}_{\\lambda_0}(y_1),\n\t$$\n\twhich in turn gives $2\\sum\\limits_{l\\neq \\lambda_0}{v}_{l}(y_1)\\leq 0,$ and therefore\n\t${v}_{l}(y_1)=0$ for all $l\\neq \\lambda_0$. Hence\n\t$$\n\tA={u}_{\\lambda_0}(y_1)-\\sum_{l\\neq t_0}{v}_{l}(y_1)= \\hat{u}_{\\lambda_0}(y_1)-\\hat{v}_{\\lambda_0}(y_1).\n\t$$\n\tThis suggests applying the same approach as above to arrive at\n\t$$\n\t\\left(\\hat{u}_{\\lambda_0}(y_1)- \\hat{v}_{\\lambda_0}(y_1)\\right)\\leq \\frac{1}{\\deg(y_1)}\\sum_{(y_1,z)\\in E}w_{y_{1}z}\\left(\\hat{u}_{\\lambda_0}(z)-\\hat{v}_{\\lambda_0}(z)\\right),\n\t$$\n\twhich leads to\n\t$A=\\hat{u}_{\\lambda_0}(y_1)-\\hat{v}_{\\lambda_0}(y_1)=\\hat{u}_{\\lambda_0}(z)-\\hat{v}_{\\lambda_0}(z)>0,$ for all $(y_1,z)\\in E$.
In particular,\n\t$A = \\hat{u}_{\\lambda_0}(y_2)-\\hat{v}_{\\lambda_0}(y_2)> 0.$\n\tThus, combining the two cases we observe that for $y_2\\in V$ there exists an index $1\\leq l_{y_2}\\leq k$ (in our case $l_{y_2}=t_0$ or $l_{y_2}=\\lambda_0$) such that\n\t\\begin{equation}\\label{contradiction}\n\t\t\\mbox{either}\\;\\;\t\\hat{u}_{l_{y_2}}(y_2)-\\hat{v}_{l_{y_2}}(y_2)=A, \\;\\;\\mbox{or}\\;\\;\n\t\t\\hat{u}_{l_{y_2}}(y_2)-\\hat{v}_{l_{y_2}}(y_2)=-A.\n\t\\end{equation}\n\tIt is not hard to see that the same procedure can be repeated for the vertex $y_2$ instead of $y_1$, leading to the same conclusion \\eqref{contradiction} for $y_3\\in V$ and some index $l_{y_3}$, and so on. This allows us to claim that for every $y_j\\in V$ along the path $(y_0,\\dots,y_q)$ there exists some $l_{y_j}$ such that\n\t\\[\n\t|\\hat{u}_{l_{y_j}}(y_j)-\\hat{v}_{l_{y_j}}(y_j)|=A>0.\n\t\\]\n\tBut this means that the above equality also holds for $y_q=w\\in \\Gamma,$ which leads to a contradiction, because for every $z\\in\\Gamma$ and $l=1,\\cdots,k$ one has $\\hat{u}_{l}(z)-\\hat{v}_{l}(z)=0$.
This completes the proof of uniqueness.\n\\end{proof}\n\n\n\n\n\n\\subsection{Semi-supervised learning algorithm}\n\n\nUsing the definition of the graph Laplacian in $${\\mathcal L} (u_{i}- \\sum\\limits_{j\\neq i}u_j )\\ge 0,$$ we obtain\n\\begin{equation}\\label{17}\n\t{\\mathcal L} (u_{i} - \\sum\\limits_{j\\neq i}u_{j})(x_l) =\n\t\\sum\\limits_{s=1}^{n} w_{ls}\\, \\left( u_{i}(x_{l})- u_{i}(x_s) - \\sum\\limits_{j\\neq i}(u_{j}(x_l) - u_{j}(x_s)) \\right).\n\\end{equation}\nTo obtain $u_{i} (x_{l})$ from (\\ref{17}) we impose the conditions\n\\[\nu_{i} (x_{l}) \\cdot u_{j} (x_{l}) =0 \\text{ and } u_{i}(x_l)\\geq 0.\n\\]\nFrom these we get\n\\[\n\\deg(x_{l}) u_{i} (x_{l})-\\sum\\limits_{s=1}^{n} w_{ls}\\,u_{i} (x_{s})+ \\sum\\limits_{s=1}^{n}\\sum\\limits_{j\\neq i}w_{ls} \\,u_{j} (x_{s})=0.\n\\]\nThen\n\\[\nu_{i} (x_{l})=\\overline{u}_{i}(x_{l})-\\sum\\limits_{j\\neq i}\\overline{u}_{j}(x_{l}).\n\\]\n\n\nAccording to the above ideas and following Theorem \\ref{limit_pde_discrete}, we can easily check that if $(u_1,u_2,\\dots,u_k)\\in \\mathcal K$ is the unique minimizer of \\eqref{disc_min_problem}, then it satisfies the following system of equations:\n\\begin{equation}\\label{scheme_sys}\n\t\\begin{cases}\n\t\tu_{1}(x) =\\max \\left(\\overline{u}_1(x) - \\sum\\limits_{p \\neq 1} \\overline{u}_p(x), \\, 0\\right),\\;\\;x\\in {\\mathcal X}\\setminus\\Gamma,\\\\\n\t\tu_{2}(x) =\\max \\left(\\overline{u}_2(x) - \\sum\\limits_{p \\neq 2} \\overline{u}_p(x), \\, 0\\right),\\;\\;x\\in {\\mathcal X}\\setminus\\Gamma,\\\\\n\t\t\\dots\\dots\\dots\\dots\\\\\n\t\tu_{k}(x) =\\max \\left(\\overline{u}_k(x) - \\sum\\limits_{p \\neq k} \\overline{u}_p(x), \\, 0\\right),\\;\\;x\\in {\\mathcal X}\\setminus\\Gamma,\\\\\n\t\tu_{i}(x) =\\phi_{i}(x),\\;\\;x\\in \\Gamma,\\; \\mbox{for all}\\; i=1,2,\\dots,k.\n\t\\end{cases}\n\\end{equation}\n\\begin{remark}\\label{remark_1}\n\tWe remark that the system \\eqref{scheme_sys} itself implies the disjointness property, i.e.
it is easy to see that if a vector $(u_1,u_2,\\dots,u_k)$ satisfies the system \\eqref{scheme_sys}, then $u_i(x)\\cdot u_j(x)=0,$ for every $x\\in V$ and $i\\neq j.$\n\\end{remark}\n\nIn order to approximate the solution of system \\eqref{scheme_sys} we propose the following easy-to-implement iterative scheme:\nfor $i=1,\\dots,k$ and $x_l\\in{\\mathcal X}\\setminus\\Gamma$ we set\n\\begin{equation*}\nu_{i}^{(t+1)} (x_{l}) =\\max\\left(\\overline{u}^{(t)}_{i}(x_{l})-\\sum\\limits_{j\\neq i}\\overline{u}^{(t)}_{j}(x_{l}),\\, 0\\right).\n\\end{equation*}\n\nIn the light of Remark \\ref{remark_1} it can be seen that the disjointness property is preserved at every iteration. In other words, the following lemma is true.\n\\begin{lemma}\\label{lemma}\n\t\tLet ${\\mathcal X}=(V,E)$ be a connected graph. The above iterative method satisfies\n\t\\[u_i^{(t)}(x)\\cdot u_j^{(t)}(x) =0,\\; \\forall x\\in V,\\; i\\neq j.\\]\n\\end{lemma}\n\n\nThe label decision for a vertex $x_l$ is determined by the strictly positive component: we find an index $i_0$ such that $u_{i_0}(x_l)>0$, and the label assigned to the vertex $x_l$ is then $i_0.$\n\n\n\\\n\n\n\n\\section{Experimental results}\n\nIn this section we test two well-known semi-supervised learning algorithms and compare them with the one we have developed based on segregation theory. The dataset used for the visual comparisons is randomly generated half-moons, and for the statistical analysis we use the well-known MNIST dataset. Thus, we depict the predictions of the Laplace learning, Poisson learning and our learning (we call it Segregation learning) algorithms.\n\nWe run the learning algorithms for different numbers of initial labels per class and for different numbers of classes (namely, for $3, 4$ and $5$ classes). In each implementation all the classes have the same number of nodes, i.e., all classes have either $200$ or $300$ nodes.
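To make the iterative scheme of the previous subsection concrete, here is a minimal Python sketch of the Segregation learning iteration on a toy path graph; the graph, its weights, the labelled vertices and the iteration count are illustrative assumptions, not the experimental setup used below.

```python
import numpy as np

# Toy path graph 0 -- 1 -- 2 -- 3 with unit edge weights.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
deg = W.sum(axis=1)

k = 2                                   # number of classes
labelled = {0: 0, 3: 1}                 # vertex -> known class (the set Gamma)

# u[i, l] = u_i(x_l); the boundary data phi_i are class indicators on Gamma.
u = np.zeros((k, W.shape[0]))
for vertex, cls in labelled.items():
    u[cls, vertex] = 1.0

for _ in range(200):
    u_bar = (u @ W.T) / deg             # neighbor averages \bar u_i^{(t)}
    # u_i^{(t+1)} = max(\bar u_i - sum_{j != i} \bar u_j, 0)
    u = np.maximum(2.0 * u_bar - u_bar.sum(axis=0), 0.0)
    for vertex, cls in labelled.items():    # keep the boundary values fixed
        u[:, vertex] = 0.0
        u[cls, vertex] = 1.0

labels = u.argmax(axis=0)               # label decision: the positive component
print(labels)
```

On this toy graph the iteration converges geometrically, the two components stay disjoint at every step, and the two unlabelled vertices are assigned to the class of their nearer labelled endpoint.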
The reader can also observe the red nodes in every figure; they correspond to the randomly chosen initially known labels.\nIn Figures \\ref{fig1}--\\ref{fig15} below one can observe that when the initial number of labels per class is small, i.e. $2$, $3$ or $5$ labels, the Laplace learning algorithm performs poorly, whilst both the Poisson and our Segregation learning algorithms perform much better and have more or less the same accuracy.\n\nWhen the initial number of labels per class is $10$ or $20$, the performance of Laplace learning becomes more accurate and gets close to the results depicted for the Poisson and Segregation learning algorithms.\n\n\n\nTables \\ref{table_1} and \\ref{table_2} show the average accuracy over all $100$ trials for various low and high label rates. The implementations have been done on the MNIST dataset for $3$ classes only. We see that for low label rates Laplace learning performs poorly, as noted in the depicted figures. On the other hand, Poisson and Segregation learning perform better and predict with more or less the same accuracy.
For high label rates Laplace learning performs much better and gets close to Poisson and Segregation learning results.\n\\begin{figure}[!htb]\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_3_classes_and_2_labels}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_3_classes_and_2_labels}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_3_classes_and_2_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 3 classes and initial 2 labels per class.}\t\n\t\\label{fig1}\n\n\n\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_3_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_3_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_3_classes_and_3_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 3 classes and initial 3 labels per class.}\n\t\\label{fig2}\t\n\n\n\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig3}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_3_classes_and_5_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_3_classes_and_5_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_3_classes_and_5_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 3 classes and initial 5 labels per class.}\t\n\n\n\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig4}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_3_classes_and_10_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_3_classes_and_10_labels} 
\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_3_classes_and_10_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 3 classes and initial 10 labels per class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig5}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_3_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_3_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_3_classes_and_20_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 3 classes and initial 20 labels per class.}\t\t\t\n\\end{figure}\n\n\\begin{figure}[!htb]\\label{fig6}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_4_classes_and_2_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_4_classes_and_2_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_4_classes_and_2_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 4 classes and initial 2 labels per class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig7}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_4_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_4_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_4_classes_and_3_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 4 classes and initial 3 labels per class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig8}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_4_classes_and_5_labels} 
\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_4_classes_and_5_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_4_classes_and_5_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 4 classes and initial 5 labels per class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig9}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_4_classes_and_10_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_4_classes_and_10_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_4_classes_and_10_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 4 classes and initial 10 labels per class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig10}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_4_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_4_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_4_classes_and_20_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 4 classes and initial 20 labels per class.}\t\t\t\n\\end{figure}\n\n\\begin{figure}[!htb]\\label{fig11}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_5_classes_and_2_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_5_classes_and_2_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_5_classes_and_2_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 5 classes and initial 2 labels per 
class.}\t\t\t\n\\end{figure}\n\\begin{figure}[!htb]\\label{fig12}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_5_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_5_classes_and_3_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_5_classes_and_3_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 5 classes and initial 3 labels per class.}\t\t\t\n\\end{figure}\n\n\\begin{figure}[!htb]\\label{fig13}\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_5_classes_and_5_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_5_classes_and_5_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_5_classes_and_5_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 5 classes and initial 5 labels per class.}\t\t\t\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_5_classes_and_10_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_5_classes_and_10_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_5_classes_and_10_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation learning algorithms for 5 classes and initial 10 labels per class.}\t\n\t\\label{fig14}\t\t\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Seg_learning_with_5_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Poisson_learning_with_5_classes_and_20_labels} \\includegraphics[width=.32\\linewidth,valign=m]{paper_figures\/Laplace_learning_with_5_classes_and_20_labels}\n\t\\caption{Comparison of Laplace, Poisson and Segregation 
learning algorithms for 5 classes and initial 20 labels per class.}\t\n\t\\label{fig15}\t\t\n\\end{figure}\n\n\\begin{table}[h]\n\t\\caption{Average accuracy scores over 100 trials for 3 classes on the MNIST dataset at low label rates.}\n\t\\centering\n\t\\begin{tabular}{cllllll}\n\t\t\\hline\\hline\n\t\tLabels per class &\\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5}& \\textbf{10}& \\\\ [0.5ex]\n\t\t\\hline\n\t\tLaplace learning &31.3 & 45.4 & 58.2 & 67.7 & 83.4\\\\\n\t\tPoisson learning & 93.6 & 94.5 & 94.9 & 95.3 & 96.7&\\\\\n\t Segregation learning & \\textbf{90.3} & \\textbf{92.1} & \\textbf{93.5} & \\textbf{95.4} & \\textbf{96.2} &\\\\[1ex]\n\t\t\\hline\\hline\n\t\\end{tabular}\n\t\\label{table_1}\n\\end{table}\n\n\\begin{table}[h]\n\t\\caption{Average accuracy scores over 100 trials for 3 classes on the MNIST dataset at high label rates.}\n\t\\centering\n\t\\begin{tabular}{c llllll}\n\t\t\\hline\\hline\n\t\tLabels per class &\\textbf{20} & \\textbf{40} & \\textbf{80} & \\textbf{100}& \\textbf{120}&\\\\ [0.5ex]\n\t\t\\hline\n\tLaplace learning &88.6 & 91.3 & 93.7 & 95.2 & 97.6\\\\\n\tPoisson learning & 95.6 & 97.2 & 97.9 & 98.3 & 99.4&\\\\\n\tSegregation learning & \\textbf{94.3} & \\textbf{96.7} & \\textbf{98.2} & \\textbf{98.8} & \\textbf{99.2} &\\\\[1ex]\n\t\t\\hline\\hline\n\t\\end{tabular}\n\t \\label{table_2}\n\\end{table}\n\n\\section{Conclusion}\nIn this work we developed several semi-supervised learning algorithms on graphs. Our main algorithm is a new approach for graph-based semi-supervised learning based on spatial segregation theory.
The method is efficient and simple to implement.\n We presented numerical results showing that Segregation learning performs comparably to the Poisson learning algorithm not only at high label rates, but also at low label rates on the MNIST dataset.\n\n\n\n\n\n\n\\renewcommand{\\refname}{REFERENCES }\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfitd b/data_all_eng_slimpj/shuffled/split2/finalzzfitd new file mode 100644 index 0000000000000000000000000000000000000000..a85142c06298eb41aa77573247a119d3ee8b24ef --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfitd @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nAn essential task in experimental quantum information\nprocessing is the characterization of quantum states and\ntheir dynamics, which is typically achieved via quantum\nstate tomography (QST)~\\cite{li-pra-2017} and quantum\nprocess tomography (QPT)\\cite{chuang-jmo-09}. The \nexperimental resources\nrequired to implement QST and QPT \ngrow exponentially with the size of the\nsystem, which makes these methods infeasible beyond a few\nqubits\\cite{mohseni-pra-2008}. Hence developing techniques\nto reduce the experimental resources required for quantum\nprocess tomography is of paramount importance in scaling up\nquantum technologies. Several strategies have been designed\nto address these issues, such as methods based on the\nleast-squares (LS) linear inversion\ntechnique\\cite{miranowicz-pra-2014}, linear regression\nestimation\\cite{qi-quantum-inf-2017}, maximum likelihood\nestimation (MLE)\\cite{james-pra-2001}, self-guided\ntomography\\cite{rambach-prl-2021} and numerical\nstrategies\\cite{kaznady-pra-2009}.
Several QST protocols\nhave been extended to perform QPT, which include MLE-based\nQPT\\cite{obrien-prl-04}, LS-based\nQPT\\cite{trystan-arxiv-2021}, simplified\nQPT\\cite{kosut-njp-2009}, convex optimization-based\nQPT\\cite{jin-2019}, selective and efficient QPT\n\\cite{perito-pra-2018}, adaptive\nQPT\\cite{pogorelov-pra-2017}, and ancilla-assisted\nQPT\\cite{altepeter-prl-2003}. These protocols have been\nsuccessfully demonstrated on various physical systems such\nas\nNMR~\\cite{gaikwad-pra-2018,xin-npj-2019,xin-phys-app-2020,Gaikwad-qip-2021,zhao-pra-2021},\nNV-centers\\cite{zhang-prl-2014}, linear\noptics\\cite{paz-prl-2010}, superconducting\nqubits\\cite{neeley-nature-2008,\nchow-prl-2009,gaikwad-ijqi-2020} and ion trap-based quantum\nprocessors\\cite{riebe-prl-2006}.\n\n\nMethods such as Monte-Carlo process\ncertification\\cite{silva-prl-2011} and randomized\nbenchmarking\\cite{knill-pra-2008} have been developed to\naddress scalability issues in standard QST and QPT methods.\nHowever, they are limited in scope, as they do not provide\nthe full process matrix and hence cannot be used to identify\ngate errors or improve gate fidelity. Other methods such as\nancilla-assisted QPT are able to significantly reduce the\nexperimental complexity; however, the issue of scalability\nremains. The compressed sensing (CS) algorithm borrows ideas\nfrom classical signal processing, where it is known that even\nheavily undersampled sparse signals can be efficiently\nreconstructed. The CS algorithm relies on reformulating the\nQST and QPT tasks as a constrained convex optimization problem.\nIt is able to perform a complete and true characterization of\na given quantum process from a heavily reduced data set,\nwithout performing actual projective measurements, and it\ndoes not need any extra resources such as ancilla qubits.
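To convey the flavor of the sparse-recovery idea behind CS (and only that; this is not the CS-QPT algorithm used later in the paper), the following Python sketch recovers a $1$-sparse signal from just two linear measurements by a single matching-pursuit step. The measurement matrix and signal are assumptions of the example.

```python
import numpy as np

# Toy sparse recovery: a 1-sparse signal in R^4 observed through only
# 2 linear measurements (hypothetical example data).
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])     # columns are pairwise independent
x_true = np.array([0.0, 0.0, 3.0, 0.0])  # sparse signal: one non-zero entry
y = A @ x_true                           # observed data, y = (3, 6)

# Single matching-pursuit step: pick the column that best explains y.
best_j, best_res, best_c = None, np.inf, 0.0
for j in range(A.shape[1]):
    col = A[:, j]
    c = col @ y / (col @ col)            # least-squares coefficient for column j
    res = np.linalg.norm(y - c * col)    # residual if only column j is used
    if res < best_res:
        best_j, best_res, best_c = j, res, c

x_hat = np.zeros(4)
x_hat[best_j] = best_c
print(x_hat)                             # recovers the sparse signal exactly
```

Although the system $y = Ax$ is heavily underdetermined (2 equations, 4 unknowns), the sparsity prior singles out the correct solution; CS-QPT exploits the same principle for sparse process matrices.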
CS-QST\nand CS-QPT have been successfully used to reconstruct\nunknown quantum states from NMR data~\\cite{yang-pra-2017},\nto characterize quantum gates based on superconducting Xmon\nand phase qubits~\\cite{rodionov-prb-2014}, and to perform\nefficient estimation of process matrices of photonic\ntwo-qubit gates~\\cite{shabani-prl-2011}.\n\n\n\nIn this work we utilize the CS algorithm to perform QPT of\nvarious two- and three-qubit quantum gates on an NMR quantum\nprocessor. We also demonstrate the efficacy of the CS-QPT\nprotocol in characterizing two-qubit dynamics in a\nthree-qubit system. We experimentally estimate the full\nprocess matrix corresponding to a given quantum process with\na high fidelity, from a drastically reduced set of initial\nstates and outcomes. The CS-QPT algorithm is able to\nefficiently characterize a given quantum process provided\nthe corresponding process matrix is sufficiently sparse (\\textit{i.e.~}\nmost of its matrix elements are zero). We use two different\noperator basis sets to estimate the process matrix using the\nCS algorithm, namely, the standard Pauli basis and the\nPauli-error basis (where the process matrix is maximally\nsparse~\\cite{rodionov-prb-2014}, \\textit{i.e.~} it contains only one\nnon-zero element). We also compare the performance of the\nCS-QPT and the LS-QPT methods using significantly reduced\ndata sets in both the standard Pauli basis and the\nPauli-error basis. We obtained experimental fidelities of\ngreater than 0.9 from a reduced data set of a size\napproximately 5 to 6 times smaller than the size of a full\ndata set, and our results indicate that the CS-QPT method is\nsignificantly more efficient than standard QPT methods.\n\n\nThis paper is organized as follows: In Section~\\ref{qpt} we\ndetail the implementation of the CS algorithm in the context\nof QPT. 
The standard QPT protocol is briefly described\nin Section~\\ref{sec2.1}, while the CS-QPT method is\ngiven in Section~\\ref{sec2.2}.\nSection~\\ref{nmr-expt} describes the experimental\nimplementation of the CS-QPT methods using two and three\nNMR qubits.\nIn Sections~\\ref{csqpt2q} and \\ref{csqpt3q},\nwe present the quantum\ncircuit and the corresponding NMR implementation of the\nCS-QPT method for two- and three-qubit quantum gates, respectively. \nSection~\\ref{sec3.3} contains a description of the CS-QPT\nimplementation to capture\ntwo-qubit quantum dynamics embedded in a three-qubit system.\nSection~\\ref{sec3.4} contains a comparison of the\nCS-QPT and LS-QPT protocols.\nSection~\\ref{concl} contains a few concluding remarks. \n\\section{QPT for a reduced data set} \n\\label{qpt}\n\\subsection{Standard QPT and $\\chi$ matrix representation}\n\\label{sec2.1}\nIn a fixed basis set $\\lbrace E_i \\rbrace$, a quantum map\n(a completely positive map)\n$\\Lambda$ can be written as~\\cite{kraus-book-1983}:\n\\begin{equation}\n\\Lambda(\\rho) = \\sum_{m,n} \\chi_{mn} E_m \\rho E_n^{\\dagger}\n\\label{eq2}\n\\end{equation}\nwhere the Kraus operators are expanded as $A_i =\n\\sum_{k}a_{ik}E_k$ and the quantities\n$\\chi_{mn}=\\sum_{i}a_{im}a_{in}^*$ are the elements of the\nprocess matrix $\\chi$ characterizing the quantum map\n$\\Lambda$. In a $d$-dimensional Hilbert space, $\\chi$ is a\n$d^2 \\times d^2$ dimensional positive semi-definite matrix\nand $d^4$ real independent parameters are required to\nuniquely represent it. The number of required parameters\nreduces from $d^4$ to ($d^4-d^2$) for trace preserving\nprocesses~\\cite{obrien-prl-04}. \n\n\nThe standard QPT protocol estimates the complete $\\chi$\nmatrix by preparing the system in different quantum states,\nletting it evolve under the given quantum process, and then\nmeasuring a set of observables~\\cite{childs-pra-2001}. 
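The map representation above can be checked numerically. The following is a minimal NumPy sketch (illustrative only, not part of the experimental analysis): for a unitary map the single Kraus operator expands as $U=\sum_k a_k E_k$ with $a_k=\mathrm{Tr}(E_k^{\dagger}U)/d$, so $\chi=\vec{a}\,\vec{a}^{\dagger}$; for the NOT gate in the single-qubit Pauli basis, $\chi$ has a single non-zero entry.

```python
import numpy as np

# Single-qubit Pauli basis {I, X, Y, Z}; Hilbert-space dimension d = 2
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
E = [I2, X, Y, Z]
d = 2

def chi_of_unitary(U, basis):
    # For a unitary map the Kraus operator is U = sum_k a_k E_k,
    # with a_k = Tr(E_k^dag U)/d, so chi_mn = a_m a_n^*
    a = np.array([np.trace(Ek.conj().T @ U) / d for Ek in basis])
    return np.outer(a, a.conj())

def apply_process(chi, rho, basis):
    # Lambda(rho) = sum_{m,n} chi_mn E_m rho E_n^dag
    out = np.zeros_like(rho)
    for m, Em in enumerate(basis):
        for n, En in enumerate(basis):
            out = out + chi[m, n] * (Em @ rho @ En.conj().T)
    return out

chi_not = chi_of_unitary(X, E)                     # NOT gate: only chi_11 = 1
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho_out = apply_process(chi_not, rho0, E)          # X |0><0| X = |1><1|
```

Applying the resulting $\chi$ to $\vert 0\rangle\langle 0\vert$ reproduces $X\rho X^{\dagger}$, as expected.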
The\nfull data set for QPT can be acquired using tomographically\ncomplete sets of input states $\\lbrace \\rho_1,\n\\rho_2,....,\\rho_k \\rbrace $, letting them\nundergo the desired quantum process $\\chi$, and \nmeasuring an observable $M$ from the set of measurement operators\n$\\lbrace M_1, M_2,..., M_l \\rbrace$, leading to:\n\\begin{equation}\nB^i_j=\\text{Tr}(M_j\\Lambda(\\rho_i)) = \n\\sum_{m,n} \\chi_{mn} \\text{Tr}(M_j E_m \\rho_i E_n^{\\dagger})\n\\label{eq3}\n\\end{equation}\nFor all input states $\\lbrace \\rho_i \\rbrace$ \nand measurement operators $\\lbrace M_j \\rbrace$ \nin Eq.~(\\ref{eq3}), the relationship between\nthe vector of outcomes and the true process matrix\n can be rewritten in \na compact form~\\cite{childs-pra-2001}:\n\\begin{equation}\n\\overrightarrow{B}(\\chi) = \\Phi \\overrightarrow{\\chi}\n\\label{eq4}\n\\end{equation} \nwhere $\\overrightarrow{B}(\\chi)$ and $\\overrightarrow{\\chi}$\nare vectorized forms of $B^i_j$ and $\\chi_{mn}$\nrespectively, and $\\Phi$ is the coefficient matrix with the\nentries $\\Phi_{ji,mn} = \\text{Tr}(M_j E_m \\rho_i\nE_n^{\\dagger})$. \n\nWe note here that using the standard QPT method may not\nalways lead to a positive semi-definite experimentally\nconstructed $\\chi$ matrix, due to experimental\nuncertainties. This problem can be resolved by reformulating\nthe linear inversion problem as a constrained convex\noptimization problem as follows\\cite{Gaikwad-qip-2021}:\n \n\\begin{subequations} \n\\begin{alignat}{2}\n\\min_{\\chi}\\quad & \\Vert\n\\overrightarrow{B}^{exp}-\\overrightarrow{B}(\\chi)\\Vert_{l_2}\\label{eq5}\\\\\n\\text{subject to}\\quad & \\chi \\geq 0,\\label{eq5:constraint1}\\\\\n &\n\\sum_{m,n}\\chi_{mn}E_m^{\\dagger}E_n =\nI_d.\\label{eq5:constraint2} \\end{alignat} \n\\end{subequations}\nwhere the vector $\\overrightarrow{B}^{exp}$ is constructed\nusing experimental measurement outcomes. This method is\nreferred to as the least square (LS) optimization method. 
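The assembly of the coefficient matrix can be sketched for a single qubit (a smaller example than the two- and three-qubit cases treated later; the input states, observables and the Hadamard test process are illustrative choices, not the configurations used in the experiment):

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
E = [I2, X, Y, Z]
d = 2

# Tomographically complete single-qubit input states and observables
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
rhos = [np.outer(k, k.conj()) for k in kets]
Ms = [X, Y, Z]

# Coefficient matrix Phi_{ji,mn} = Tr(M_j E_m rho_i E_n^dag),
# one row per (input state, observable) configuration
Phi = np.array([[np.trace(M @ Em @ rho @ En.conj().T)
                 for Em, En in product(E, E)]
                for rho, M in product(rhos, Ms)])

# Process under test: a Hadamard gate; chi built from a_k = Tr(E_k^dag U)/d
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
a = np.array([np.trace(Ek.conj().T @ U) / d for Ek in E])
chi = np.outer(a, a.conj())

B = Phi @ chi.reshape(-1)                       # prediction via B = Phi vec(chi)
B_direct = np.array([np.trace(M @ U @ rho @ U.conj().T)
                     for rho, M in product(rhos, Ms)])  # direct simulation
```

The vector predicted through $\Phi\overrightarrow{\chi}$ agrees with direct simulation of $\mathrm{Tr}(M_j U\rho_i U^{\dagger})$ for every configuration.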
In\nthis work, we study the performance of the LS-QPT method for\na reduced data set.\n\\subsection{Compressed sensing QPT} \n\\label{sec2.2}\nCompressed sensing methods work well if the process matrix\nis sparse in some known basis and\nrely on compressing the information \ncontained in a process of large size into \none of much smaller size and use \nefficient convex optimization algorithms to \n``unpack'' this compressed information.\nThe\nCS-QPT method hence provides a way to reconstruct the complete and\ntrue $\\chi$ matrix of a given quantum process from a drastically\nreduced data set, provided that the $\\chi$ matrix is\nsufficiently sparse in some known basis \\textit{i.e.~}, the number of\nnon-zero entries in the $\\chi$ matrix is small.\nIt is to be noted that the sparsity is a property of the\nmap representation and not the map itself.\nSpecifically, for quantum gates which are trace-preserving\nunitary quantum processes, one can always find the proper\nbasis in which the corresponding $\\chi$ matrix is maximally\nsparse~\\cite{korotkov-arxiv-2013,kosut-arxiv-2008}.\n\n\nEstimating a sparse process matrix with an unknown\nsparsity pattern from an underdetermined set of\nlinear equations can be done using numerical optimization\ntechniques.\nFor trace-preserving maps, the complete\nconvex optimization problem for CS-QPT is formulated as\nfollows: \n\\begin{subequations} \n\\begin{alignat}{2}\n\\!\\min_{\\chi} \\quad&\n\\Vert{\\overrightarrow{\\chi}}\\Vert_{l_1}\\label{eq6}\\\\\n\\text{subject to}\\quad & \\Vert\n\\overrightarrow{B}^{exp}-\\Phi \\overrightarrow{\\chi}\n\\Vert_{l_2} \\leq \\epsilon,\\label{eq6:constraint1}\\\\ &\n\\chi \\geq 0,\\label{eq6:constraint2}\\\\ &\n \\sum_{m,n}\\chi_{mn}E_m^{\\dagger}E_n =\nI_d.\\label{eq6:constraint3} \n\\end{alignat} \n\\end{subequations}\nwhere Eq.~(\\ref{eq6}) is the main objective function which is\nto be minimized and Eq.~(\\ref{eq6:constraint1}) is the\nstandard constraint involved in the CS 
algorithm;\nEq.~(\\ref{eq6:constraint2}) and Eq.~(\\ref{eq6:constraint3}) denote\nthe positivity and trace preserving constraints of the\nprocess matrix, respectively. The parameter $\\epsilon$\nquantifies the level of uncertainty in the measurement, \\textit{i.e.~}\nthe quantity $\\overrightarrow{B}^{{\\rm exp}}=\\Phi\n\\overrightarrow{\\chi}_0+\\overrightarrow{z}$ is observed,\nwith $\\Vert \\overrightarrow{z} \\Vert_{l_2} \\leq \\epsilon$,\nwhere $\\overrightarrow{\\chi}_0$ is the vectorized form of\nthe true process matrix and $\\overrightarrow{z}$ is an unknown\nnoise vector. The general $l_p$- norm of\na given vector $ {\\overrightarrow{x}}$ is defined as:\n$\\|x\\|_{p}=\\left(\\sum_{i}\\left|x_{i}\\right|^{p}\\right)^{1 \/\np}$. If the process matrix is sufficiently sparse and the\ncoefficient matrix $\\Phi$ satisfies the restricted isometry\nproperty (RIP) condition, then by solving the optimization\nproblem delineated in Eq.~(\\ref{eq6}), one can accurately\nestimate the process matrix~\\cite{shabani-prl-2011}.\nThe RIP condition is satisfied if the coefficient\nmatrix $\\Phi$ satisfies the following\nconditions~\\cite{shabani-prl-2011,rodionov-prb-2014}:\n\\begin{itemize} \\item[(i)] \\begin{equation} 1-\\delta_s \\leq\n\\frac{\\Vert \\Phi \\overrightarrow{\\chi}_1 - \\Phi\n\\overrightarrow{\\chi}_2 \\Vert_{l_2}^2}\n{\\Vert\\overrightarrow{\\chi}_1 - \\overrightarrow{\\chi}_2\n\\Vert_{l_2}^2}\\leq 1+\\delta_s \\label{rip1} \\end{equation}\nfor all $s$-sparse vectors $\\overrightarrow{\\chi}_1$ and\n$\\overrightarrow{\\chi}_2$. \nAn $N \\times 1$ dimensional\nvector $\\overrightarrow{x}$ is $s$-sparse, if only $s < N$\nelements are non-zero. \n\\item [(ii)] The value of the\nisometry constant $\\delta_s < \\sqrt{2}-1$.\nThe restricted isometry constant (RIC) of a\nmatrix $A$ measures how close to an isometry is the action of\n$A$ on vectors with a few nonzero entries, measured in the\n$l_2$-norm~\\cite{emmanuel-crm-2008}. 
\nSpecifically, the upper and lower RIC of a\nmatrix $A$ of size $n \\times N$ is the maximum and the minimum\ndeviation from unity (one) of the largest and smallest,\nrespectively, square of singular values of $\\text { all\n}\\left(\\begin{array}{c} N \\\\ k \\end{array}\\right) \\text {\nmatrices }$ formed by taking $k$ columns from $A$. \n\\item[(iii)] The size of the data set is sufficiently large\n\\textit{i.e.~} $m_{\\text{conf}}\\geq C_0 s \\text{log}(d^4\/s)$ where\n$C_0$ is a constant, \n$m_{\\text{conf}}$ is the size of the data set, $s$ is \nthe sparsity of\nthe process matrix and $d$ is the dimension of \nthe Hilbert space. \n\\end{itemize} \n\nOnce the basis operators $\\lbrace E_{\\alpha}\n\\rbrace$ and the configuration space $\\lbrace \\rho_i,\nM_j\\rbrace $ are chosen, the coefficient matrix\n$\\Phi_{\\text{full}}$ corresponding to the entire data set is\nfully defined and does not depend on the measurement\noutcomes. It has been shown that if $\\Phi_{m}$ is built by\nrandomly selecting $m$ rows (\\textit{i.e.~} $m$ number of random\nconfigurations) from $\\Phi_{\\text{full}}$ then it is most\nlikely to satisfy the RIP\nconditions~\\cite{rodionov-prb-2014}. \nHence the sub-matrix\n$\\Phi_m \\in \\Phi_{\\text{full}} $ together with the\ncorresponding observation vector $\\overrightarrow{B}^{exp}_m\n\\in \\overrightarrow{B}^{exp}_\\text{full}$ can be used to\nestimate the process matrix by solving the optimization\nproblem (Eq.~(\\ref{eq6})). \n \nIn this study, we use two different operator basis sets,\nnamely the standard Pauli basis (PB) and the Pauli-error\nbasis (PEB). For both bases, the orthogonality condition is\ngiven by $ \\langle E_{\\alpha} \\vert E_{\\beta} \\rangle = d\n\\delta_{\\alpha \\beta} $. 
\nFor\nan $n$-qubit system, \nthe basis operators $P_i$ in the PB set are \n$P_i = \\lbrace I, \\sigma_x, \\sigma_y, \\sigma_z \\rbrace\n^{\\otimes n}$, while the basis operators $E_i$ in \nthe PEB set are:\n$E_i = UP_i$, where $U$ is the desired unitary matrix for\nwhich the process matrix needs to be estimated. Furthermore,\nthe process matrix in PEB corresponding to the desired $U$,\nis always maximally sparse, \\textit{i.e.~} it contains only one\nnon-zero element~\\cite{rodionov-prb-2014}. The convex\noptimization problems involved in LS-QPT and CS-QPT\n(Eq.~(\\ref{eq5}) and Eq.~(\\ref{eq6}), respectively) can be solved\nefficiently using the YALMIP\\cite{lofberg-2004} MATLAB\npackage, which employs SeDuMi\\cite{sturm-oms-1999} as a\nsolver.\n\\begin{figure}\n\\includegraphics[angle=0,scale=1]{fig1-csqpt.pdf} \n\\caption{(Color online) (a) Quantum circuit\nto implement CS-QPT of a CNOT gate. Single-qubit unitary\noperations $R_{\\phi}^\\theta$ are achieved via rotations by\nan angle $\\theta$ and phase $\\phi$. The first \nblock represents the preparation of the desired input state, while\nthe second and third blocks represent the quantum\nprocess corresponding to the CNOT gate and the measurement,\nrespectively. (b) NMR implementation of the quantum circuit\ngiven in panel (a). The rectangles filled with red, orange,\nyellow and green color denote $\\frac{\\pi}{2}$, $\\pi$,\n$\\frac{\\pi}{4}$ and $\\frac{\\pi}{3}$ pulses, respectively, \nwith the rf phase\nwritten above each pulse. \nThe unfilled rectangles with\nthe phases $\\phi_1$, $\\phi_2$, $\\phi_3$, and $\\phi_4$\nrepresent pulses with flip angles $\\theta_1$,\n$\\theta_2$, $\\theta_3$, and $\\theta_4$, respectively.\nThe gradient line denotes \n$z$-gradient pulses. The evolution time\nperiod $\\tau=\\frac{1}{2J_{CH}}$ where $J_{CH}$ is the scalar\ncoupling constant. (c) ${}^{13}$C-labeled chloroform molecule\nwith ${}^{1}$H and ${}^{13}$C labeling the\nfirst and second qubits, respectively. 
(d) and (e) depict\nthe\nNMR spectra of ${}^{13}$C and ${}^{1}$H, respectively,\ncorresponding to the configuration $\\lbrace \\vert ++ \\rangle\n\\langle ++ \\vert, IX \\rbrace$. } \n\\label{ckt1}\n\\end{figure}\n\\section{Experimental implementation of CS-QPT}\n\\label{nmr-expt} \n\\subsection{CS-QPT of two-qubit gates}\n\\label{csqpt2q} \nWe implemented the CS-QPT protocol for \ntwo, two-qubit nonlocal quantum gates,\nnamely, the CNOT gate and the\ncontrolled-rotation gate. The controlled-rotation gate is a\nnonlocal gate which rotates the state of the second qubit\nvia $R_x(\\theta)$, if the first qubit is in the state $\\vert\n1 \\rangle $.\n\nFor two qubits the tomographically complete set of input\nstates is given by: $\\lbrace \\vert 0 \\rangle, \\vert 1\n\\rangle, \\vert + \\rangle, \\vert - \\rangle \\rbrace ^{\\otimes\n2}$ where $\\vert + \\rangle = (\\vert 0 \\rangle + \\vert 1\n\\rangle)\/\\sqrt{2} $ and $\\vert - \\rangle = (\\vert 0 \\rangle\n+ i\\vert 1 \\rangle)\/\\sqrt{2} $. \nIn NMR, tomographic measurements are carried out by applying\na set of unitary rotations followed by signal\nacquisition\\cite{long-job-2001}. The time-domain NMR signal\nis recorded as a free induction decay and then Fourier\ntransformed to obtain the frequency spectrum, which\neffectively measures the net magnetization in the transverse\n($x-y$) plane. For two NMR qubits, the tomographically\ncomplete set of unitary rotations is given\nby~\\cite{li-pra-2017}: $\\lbrace II, IX, IY, XX\\rbrace$ where\n$II$ denotes the no operation on both the qubits, $IX$\ndenotes no operation on the first qubit and\na $90^{\\circ}$ $x$-rotation on the second qubit, $IY$ denotes\nno operation on the first qubit and a $90^{\\circ}$\n$y$-rotation on the second qubit and $XX$ denotes a\n$90^{\\circ}$ $x$-rotation on both qubits. \n\n\nAs an illustration, the quantum circuit and corresponding\nNMR implementation of the CS-QPT protocol for a two-qubit\nCNOT gate is given in Fig.~\\ref{ckt1}. 
Fig.\\ref{ckt1}(a)\ndepicts the general quantum circuit to acquire data for\nCS-QPT and contains all possible settings corresponding to a\ntomographically complete set of input quantum states and\nmeasurements. The first block in Fig.\\ref{ckt1}(a) prepares\nthe desired initial input state from $\\vert 00 \\rangle$. In\nthe second block the quantum process (CNOT gate in this\ncase) which is to be tomographed, is applied to the system\nqubits and in the third block, a set of tomographic\noperations are applied, followed by measurements on each\nqubit. To implement CS-QPT for any other two-qubit quantum\ngate, the CNOT gate should be replaced with the desired\ngate, while the remaining circuit remains unaltered. The\nfirst block in the Fig.\\ref{ckt1}(b) represents the NMR\npulse sequence which prepares the spin ensemble in the\npseudo pure state (PPS) $\\vert 00 \\rangle$ and then\ngenerates the desired input state from the $\\vert 00 \\rangle\n$ state. The pulse sequence corresponding to the CNOT gate\n(the quantum process which is to be tomographed) is given in\nthe second block and finally, in the last block, the desired\nset of tomographic pulses are applied and the NMR signal is\nacquired.\n\n\nWe used $^{13}C$-enriched chloroform molecule\n(Fig.\\ref{ckt1}(c)) dissolved in acetone-D6 to\nphysically realize a two-qubit system, with the\n${}^{1}$H and ${}^{13}$C spins denoting the \nfirst and second qubits, respectively. \nThe NMR\nHamiltonian in the rotating\nframe is given by: \n\\begin{equation}\n\\mathcal{H} = - \\sum_{i=1}^2 \\nu_i I_{iz} + J_{\\rm CH}\nI_{1z} I_{2z} \\label{eq7} \n\\end{equation}\nwhere $\\nu_1$, $\\nu_2$ are the chemical shifts, $I_{1z}$,\n$I_{2z}$ are the $z$-components of the spin angular momentum\noperators of the ${}^{1}$H and ${}^{13}$C spins\nrespectively, and $J_{{\\rm CH}}$ is the scalar coupling\nconstant. 
We used the spatial averaging technique to\ninitialize the system in the PPS corresponding to $\\vert 00\n\\rangle$, with the density matrix $\\rho_{00}$ given\nby~\\cite{oliveira-book-07}: \\begin{equation}\n\\rho_{00}=\\frac{1}{4}(1-\\eta)I_4+\\eta \\vert 00\\rangle\n\\langle 00 \\vert \\label{eq8} \\end{equation} where $\\eta$\ncorresponds to the net spin magnetization at thermal\nequilibrium, and $I_4$ is a $4 \\times 4$ identity operator.\nFigs.~\\ref{ckt1}(d) and \\ref{ckt1}(e) depict the NMR spectra\ncorresponding to carbon and hydrogen respectively, obtained\nfor the configuration $\\lbrace \\vert ++ \\rangle, IX \\rbrace\n$, where $\\lbrace \\vert ++ \\rangle$ refers to the initial\nstate and $IX$ denotes the tomographic pulse set used. The\nsystem is prepared in the initial input state $\\vert ++\n\\rangle$, a CNOT gate is applied, and finally the\ntomographic pulse $IX$ is applied to obtain the NMR\nspectrum. For the first qubit, the area under the spectrum\nis related to the density matrix elements $\\rho_{24}$ and\n$\\rho_{13}$, while for the second qubit, the area under the\nspectrum is related to the density matrix elements\n$\\rho_{34}$ and $\\rho_{12}$. In general, the four readout\nelements of the density matrix are complex numbers; in NMR\nthe imaginary part of the density matrix can be calculated\nby applying a $90^{\\circ}$ phase shift to the spectrum\n(post-processing) and then measuring the\narea~\\cite{long-job-2001}. Hence a given configuration\ncomprises four data points (two for each qubit). Since the\nsize of the full configuration space is 64 (16 states\n$\\times$ 4 tomographic rotations), the size of the full data\nset for two qubits is $64 \\times 4 = 256$. The vector\n$\\overrightarrow{B}^{exp}_\\text{full}$ (256$\\times$1\ndimensional) can be experimentally constructed by computing the area under\nthe spectrum for the full configuration space. 
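The bookkeeping behind these counts can be summarized in a few lines (illustrative Python, not experiment code; the state and rotation labels are just strings):

```python
from itertools import product

# Tomographically complete single-qubit input labels {|0>, |1>, |+>, |->}
states_1q = ["0", "1", "+", "-"]
input_states_2q = ["".join(p) for p in product(states_1q, repeat=2)]

# Tomographic rotation settings for two NMR qubits
tomo_2q = ["II", "IX", "IY", "XX"]

# A configuration pairs an input state with a rotation setting
configs_2q = list(product(input_states_2q, tomo_2q))

# Each configuration yields 4 data points (2 per qubit: real and
# imaginary readouts), giving the size of the full data set
full_data_size = 4 * len(configs_2q)
```

This reproduces the 16 input states, 64 configurations and the 256-dimensional $\overrightarrow{B}^{exp}_\text{full}$ quoted above.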
One can hence\nconstruct $\\overrightarrow{B}^{exp}_m $ and the\ncorresponding sub-matrix $\\Phi_{m}$ by randomly selecting\n$m$ rows from $\\overrightarrow{B}^{exp}_\\text{full}$ and\n$\\Phi_\\text{full}$ respectively, solving the optimization\nproblem (Eq.~(\\ref{eq6})) for a reduced data set of size $m$,\nand estimating the process matrix; $m$ here denotes the number\nof rows (data points) randomly chosen from the full set of\n256.\n\n\\subsection{CS-QPT of three-qubit gates}\n\\label{csqpt3q}\n\\begin{figure}\n\\includegraphics[angle=0,scale=0.95]{fig2-csqpt.pdf}\n\\caption{(Color online) (a) Quantum circuit to implement CS-QPT\nof a controlled-NOT-NOT ($U_{\\rm {CNN}}$) gate. The first\nblock prepares the desired input state while the second and\nthird blocks represent the quantum process corresponding to\nthe $U_{\\rm {CNN}}$ gate and the measurement,\nrespectively. (b) NMR implementation of the quantum circuit\ngiven in panel (a). Solid black rectangles are refocusing\npulses with flip angle $180^{\\circ}$, while the gray\nrectangles represent pulses with flip angle $45^{\\circ}$;\nthe corresponding rf phases are written below each pulse.\nThe value of $\\beta$ is set to $60^{\\circ}$. The black rectangles\nrepresent pulses with flip angle $90^{\\circ}$. The unfilled\nrectangles in the last block, with rf phases $\\phi_1$, $\\phi_2$\nand $\\phi_3$, correspond to flip angles $\\theta_1$,\n$\\theta_2$ and $\\theta_3$, respectively, and implement the\ntomographic operations followed by measurement on each\nqubit; $\\tau_{ij} = \\frac{ 1}{2J_{ij}}$. (c)\n$^{13}$C-labeled diethyl fluoromalonate with ${}^1$H,\n${}^{19}$F and ${}^{13}$C nuclei labeled as the first,\nsecond and third qubits, respectively.
NMR spectra depicted\nin (d), (e) and (f) correspond to ${}^{1}$H, ${}^{19}$F and\n${}^{13}$C nuclei respectively, for the configuration\n$\\lbrace \\vert 11+ \\rangle \\langle 11+ \\vert, XYX \\rbrace$. \n}\n\\label{ckt3}\n\\end{figure}\nWe have implemented the CS-QPT protocol to characterize the\nthree-qubit controlled-NOT-NOT ($U_{{\\rm CNN}}$) gate with\nmultiple targets, with the first qubit being denoted the control\nqubit,\nwhile the other two qubits are the target qubits. The \ncontrolled-NOT-NOT gate\ncan be decomposed using two CNOT gates as:~$U_{{\\rm CNN}} \\equiv$\nCNOT$_{13}.$CNOT$_{12}$, and is widely used in encoding\ninitial input states in error correction codes, fault\ntolerant operations~\\cite{egan-arxiv-2021,shor-pra-1995} and\nin the preparation of three-qubit maximally entangled\nstates~\\cite{mooney-arxiv-2021,singh-pra-2018,dogra-pra-2015}. \n\nThe NMR Hamiltonian for three qubits\nin the rotating frame is given by:\n\\begin{equation}\n\\mathcal{H} = - \\sum_{i=1}^3 \\nu_i I_{iz} + \\sum_{i,j=1\n(i \\ne j)}^3 J_{ij} I_{iz} I_{jz} \\label{ham3q} \n\\end{equation}\nwhere the indices $i,j$ label the qubit and\n$\\nu_i$ denotes the respective chemical shift. The \nquantity $J_{ij}$ denotes the scalar coupling strengths\nbetween the $i$th and $j$th qubits, while $I_{iz}$ represents\nthe $z$-component of the spin angular momentum of the $i$th qubit. We have\nused ${}^{13}$C-labeled diethyl fluoromalonate\n(Fig.\\ref{ckt3}(c)) dissolved in acetone-D6 to physically\nrealize a three-qubit system, with the\n${}^{1}$H, ${}^{19}$F and ${}^{13}$C nuclei being labeled as the first,\nsecond and third qubits, respectively. 
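The decomposition $U_{{\rm CNN}} \equiv$ CNOT$_{13}\cdot$CNOT$_{12}$ can be verified directly. A minimal NumPy sketch (the qubit ordering and projector notation are our own):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.diag([1, 0])   # projector |0><0|
P1 = np.diag([0, 1])   # projector |1><1|

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# CNOT_1k: qubit 1 is the control, qubit k the target
CNOT12 = kron3(P0, I2, I2) + kron3(P1, X, I2)
CNOT13 = kron3(P0, I2, I2) + kron3(P1, I2, X)

# Controlled-NOT-NOT: flip both targets when the control is |1>
U_CNN = kron3(P0, I2, I2) + kron3(P1, X, X)
```

The product of the two CNOTs reproduces $U_{\rm CNN}$, and $U_{\rm CNN}\vert 100\rangle = \vert 111\rangle$ as expected.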
State initialization is\nperformed by preparing the system in the PPS\n$\\vert 000 \\rangle$ via the spatial averaging\ntechnique with the corresponding density matrix being given\nby: \n\\begin{equation} \n\\rho_{000} = (\\frac{1-\\epsilon}{8})I_8\n+ \\epsilon \\vert 000 \\rangle \\langle 000 \\vert \\label{pps3}\n\\end{equation} \nwhere $\\epsilon \\approx 10^{-5}$ represents\nthe net thermal magnetization and $I_8$ is the 8$\\times$8 identity\noperator.\n\n \nFor a three-qubit system, the tomographically complete set\nof input states is given by: $\\lbrace \\vert 0 \\rangle, \\vert\n1 \\rangle, \\vert + \\rangle, \\vert - \\rangle \\rbrace\n^{\\otimes 3}$ where $\\vert + \\rangle = (\\vert 0 \\rangle +\n\\vert 1 \\rangle)\/\\sqrt{2} $ and $\\vert - \\rangle = (\\vert 0\n\\rangle + i\\vert 1 \\rangle)\/\\sqrt{2} $ and the tomographically\ncomplete set of unitary rotations is given\nby:~$\\lbrace III, IIY, IYY, YII, XYX, XXY,\nXXX \\rbrace$~\\cite{li-pra-2017}. \nThe quantum circuit and the corresponding NMR\npulse sequence to perform CS-QPT for the three-qubit gate $U_{\\rm\nCNN}$ is given in Fig.~\\ref{ckt3}. The first block in\nFig.\\ref{ckt3}(a) represents the input state\npreparation while the second block represents the\napplication of quantum gate $U_{\\rm{CNN}}$ (i.e. quantum\nprocess which is to be tomographed),\nand tomographic unitary rotations are\napplied in the last block, followed by measurement on each qubit. \nFig.\\ref{ckt3}(b) represents the corresponding NMR\nimplementation of quantum circuit given in the\nFig.\\ref{ckt3}(a). The spatial averaging\ntechniques are used in the first block~\\cite{singh-pra-2019} \nto initialize system in the desired PPS,\nfollowed by the application of spin-selective rf\npulses to prepare the desired input state. 
\nIn the second\nblock the pulse sequence corresponding to $U_{\\rm{CNN}}$ is\napplied on the input state and in the last block after\napplication of tomographic pulses, the signal \nof the desired nucleus is recorded.\nThe NMR spectra corresponding to ${}^1$H, ${}^{19}$F and\n${}^{13}$C are given in\nFigs.~\\ref{ckt3}(d), (e) and (f), respectively,\nfor the configuration $\\lbrace \\vert\n11+ \\rangle \\langle 11+ \\vert , XYX \\rbrace $, \\textit{i.e.~} the\ninput state $ \\vert 11+ \\rangle \\langle 11+ \\vert $ is\nprepared, evolved under the quantum process corresponding\nto $U_{\\rm{CNN}}$, the tomographic set of pulses $XYX$ is\napplied, and finally the NMR signal is recorded. \nFor the first qubit (${}^{1}$H)\nthe area under the four spectral lines \ncorrespond to the\ndensity matrix elements\n$\\rho_{48}$, $\\rho_{26}$, $\\rho_{37}$ and $\\rho_{15}$,\nfor the second qubit (${}^{19}$F)\nthe area under the four spectral lines \ncorrespond to the\ndensity matrix elements\n$\\rho_{57}$, $\\rho_{13}$, $\\rho_{68}$ and $\\rho_{24}$, while\nfor the third qubit (${}^{13}$C),\nthe area under the four spectral lines \ncorrespond to the\ndensity matrix elements\n$\\rho_{56}$, $\\rho_{12}$, $\\rho_{78}$ and \n$\\rho_{34}$, respectively. 
\nFor a three-qubit\nsystem there are 12 experimental data points (4 per\nqubit) for a given configuration and the total number\nconfigurations are 448 (64 input states $\\times$ 7\ntomographic unitary operations) which yields the\n$\\overrightarrow{B}^{exp}_\\text{full}$ of size = 5376 (448\nconfigurations $\\times $ 12 data points per configuration).\nOne can construct $\\overrightarrow{B}^{exp}_\\text{m}$ by\nrandomly selecting $m$ number of rows from\n$\\overrightarrow{B}^{exp}_\\text{full}$, and using the\ncorresponding coefficient matrix $\\Phi_{m}$ one can solve\nthe optimization problem (Eq.~(\\ref{eq6})), and construct \nthe process matrix for a reduced data\nset of size $m$.\n\n\\subsection{CS-QPT of two-qubit processes in \na three-qubit system}\n\\label{sec3.3}\nIn order to experimentally implement a two-qubit CNOT gate\nin a multi-qubit system, one needs to allow the two system\nqubits to interact with each other \\textit{i.e.~}, let them evolve under\nthe internal coupling Hamiltonian for a finite time. In\nreality, this is non-trivial to achieve experimentally, as\nduring the evolution time the other qubits are also\ncontinuously interacting with system qubits, and one has to\n``decouple'' the system qubits from the other qubits. In the\nlanguage of NMR, this is referred to as refocusing of the\nscalar $J$-coupling. \n\nTo implement a two-qubit CNOT gate we need four single-qubit\nrotation gates and one free evolution under the internal\ncoupling Hamiltonian (Fig.~\\ref{ckt1}). The single-qubit\nrotation gates are achieved by applying very short duration\nrf pulses of length $\\approx 10^{-6}$ s, while the time\nrequired for free evolution under the coupling Hamiltonian\nis $\\approx 10^{-3}$ s. 
The quality of the experimentally\nimplemented quantum gate depends on the time required for\ngate implementation, which for the two-qubit CNOT gate, is\nprimarily determined by the free evolution under the\ncoupling Hamiltonian.\nWe use the CS-QPT protocol to\nefficiently characterize three coupling evolutions\ncorresponding to $U^J_{ij}$ of the form: \n\\begin{equation}\nU^J_{ij}(t) = e^{-i 2 \\pi J_{ij} I_{iz} I_{jz} t} \\label{hf}\n\\end{equation} \nwhere the indices $i$ and $j$ label the qubits\nand $J_{ij}$ is the \nstrength of the scalar\ncoupling between the $i$th and the $j$th qubit; \nfor the CNOT gate,\n$t = \\vert \\frac{1}{2 J_{ij}} \\vert $. \nA three-qubit system is continuously evolving\nunder all the three $J_{ij}$ couplings, so in order to let a\nsubsystem of two qubits effectively evolve under one \nof these\ncouplings, we have to refocus all the other $J$-couplings.\nFor example, consider the two-qubit subsystem of the $i$th and $j$th\nqubit with the effective evolution $U_{ij}^{J}(t)$ given by:\n\\begin{equation}\nU_{ij}^{J}(t) = U_{\\rm{int}}(\\frac{t}{2}) R_{x}^k(\\pi)\nU_{\\rm{int}}(\\frac{t}{2}) R_{x}^k(-\\pi)\n\\end{equation}\nwhere $R_{x}^k(\\pm \\pi)$ is an $x$-rotation on the $k$th\nqubit by an angle $\\pm \\pi$ and $U_{{\\rm int}}(\\frac{t}{2})$\nis the unitary operator corresponding to free evolution for\na duration $\\frac{t}{2}$ under the internal Hamiltonian\n$\\mathcal{H}_{{\\rm int}} = \\sum_{i,j=1, i>j}^3 J_{ij} I_{iz}\nI_{jz}$. The procedure for tomographic reconstruction of\nthe reduced two-qubit density matrix from the full\nthree-qubit density matrix is given in Table.~\\ref{table1}.\nWe were able to successfully characterize all three\n$U_{ij}^{J}(t)$ via the CS-QPT method and constructed the\ncorresponding process matrices, using a heavily reduced data\nset of size $\\approx 20$, with experimental fidelities $>\n0.94$. 
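The refocusing identity can be checked numerically. In the NumPy sketch below the $J$ values are placeholders (not the measured couplings of the molecule), chemical-shift terms are dropped (each spin viewed in its own rotating frame), and angular-frequency units are assumed, so $U^{J}_{ij}(t)=e^{-i2\pi J_{ij}I_{iz}I_{jz}t}$; a $\pi$-pulse sandwich on qubit $k=3$ then leaves a pure $J_{12}$ evolution.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Iz = np.diag([0.5, -0.5]).astype(complex)

def op(single, k, n=3):
    # Embed a single-qubit operator on qubit k (0-indexed) of n qubits
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, single if q == k else I2)
    return out

def U_of(H, t):
    # exp(-i H t) for Hermitian H via eigendecomposition (NumPy only)
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Placeholder scalar couplings in Hz (illustrative values only)
J = {(0, 1): 47.6, (0, 2): 161.6, (1, 2): -191.5}
H_int = sum(2 * np.pi * Jij * (op(Iz, i) @ op(Iz, j))
            for (i, j), Jij in J.items())

def Rx(theta, k):
    # x-rotation on qubit k: exp(-i theta X_k / 2)
    return np.cos(theta / 2) * np.eye(8) - 1j * np.sin(theta / 2) * op(X, k)

# U = Uint(t/2) Rx^k(pi) Uint(t/2) Rx^k(-pi), with k = 3rd qubit (index 2):
# the pi pulses flip I_3z, so J_13 and J_23 refocus and only J_12 survives
t = 1.0 / (2 * abs(J[(0, 1)]))
U_seq = U_of(H_int, t / 2) @ Rx(np.pi, 2) @ U_of(H_int, t / 2) @ Rx(-np.pi, 2)
U_tgt = U_of(2 * np.pi * J[(0, 1)] * (op(Iz, 0) @ op(Iz, 1)), t)
```

The sandwiched sequence coincides with the pure two-qubit evolution $U^{J}_{12}(t)$, which is the decoupling mechanism used here.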
Using the information given in Table~\\ref{table1},\none can efficiently characterize a general quantum state as\nwell as the dynamics of a two-qubit subsystem in a\nthree-qubit system, wherein the experimental data is\nacquired by measuring only the two qubits under\nconsideration; hence the complete set of input states and\ntomographic rotations required are the same as for the\ntwo-qubit protocol described in Section~\\ref{csqpt2q}. \n\n\n\\begin{table}[h!] \\caption{\\label{table1} Relation\nbetween the readout positions\nof the reduced density matrix $\\rho_{ij}^{\\prime}$ of the two-qubit subsystem\nand the readout positions $\\rho_{mn}$ of the full three-qubit density matrix.}\n\\begin{ruledtabular}\n\\begin{tabular}{c | c c c c}\n~Subsystem ~& \\multicolumn{4}{c}{Readout positions of the reduced density\nmatrix}~ \\\\\n~~ & ~$\\rho_{24}'$ & ~$\\rho_{13}'$ & ~$\\rho_{34}'$ &\n~$\\rho_{12}'$ ~~~~\\\\ \\colrule\n~${}^{1}$H+${}^{19}$F~ & ~$\\rho_{48}+\\rho_{37}$ &\n~$\\rho_{26}+\\rho_{15}$ & ~$\\rho_{57}+\\rho_{68}$ &~\n$\\rho_{13}+\\rho_{24}$ ~~~ \\\\\n~${}^{1}$H+${}^{13}$C~ & ~$\\rho_{48}+\\rho_{26}$ &\n~$\\rho_{37}+\\rho_{15}$ & ~$\\rho_{56}+\\rho_{78}$ &~\n$\\rho_{12}+\\rho_{34}$ ~~~ \\\\\n~${}^{19}$F+${}^{13}$C~ & ~$\\rho_{68}+\\rho_{24}$ &~\n$\\rho_{57}+\\rho_{13}$ & ~$\\rho_{78}+\\rho_{34}$ &~\n$\\rho_{56}+\\rho_{12}$ ~~\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\begin{figure*}[t]\n\\includegraphics[angle=0,scale=1]{fig3-csqpt.pdf} \n\\caption{(Color online)\nThe top panel represents the average gate fidelity\n$\\overline{\\mathcal{F}}$ corresponding to (a) a three-qubit\n$U_{\\rm {CNN}}$ gate, (b) a CNOT gate and (c) $U^J_{23}$\nagainst the number of data points $m_{\\text{data}}$ on the\n$x$-axis. The bottom panel represents the standard\ndeviation in average fidelity $\\sigma$, corresponding to (d)\na three-qubit $U_{\\rm {CNN}}$ gate, (e) a CNOT gate, and (f)\n$U^J_{23}$, plotted against the number of data points\n$m_{\\text{data}}$ on the $x$-axis.
The data points in red,\nblue, green and pink correspond to CS-PEB, CS-PB, LS-PEB and\nLS-PB methods, respectively. The CS-PEB method shows the\nbest performance for all three quantum gates.\n}\n\\label{plots}\n\\end{figure*}\n\n\\begin{table*}[t]\n\\caption{\\label{table2}\nThe minimum value of $m_{\\rm{data}}$ at which the experimental average gate\nfidelity $\\overline{\\mathcal{F}}$ exceeds 0.9, computed (along with\nthe standard deviation $\\sigma$) for different quantum processes via the\nCS-PEB, CS-PB, LS-PEB and LS-PB methods.\n}\n\\begin{ruledtabular}\n\\begin{tabular}{r|c c c|c c c|c c c|c c c}\n & \\multicolumn{3}{c|}{CS-PEB} & \\multicolumn{3}{c|}{CS-PB}\n& \\multicolumn{3}{c|}{LS-PEB} & \\multicolumn{3}{c}{LS-PB} \\\\\nGate & $m_{\\rm {data}}$ &$\\overline{\\mathcal{F}}$ & $\\sigma$\n& $m_{\\rm {data}}$ &$\\overline{\\mathcal{F}}$ & $\\sigma$&\n$m_{\\rm {data}}$ &$\\overline{\\mathcal{F}}$ & $\\sigma$&\n$m_{\\rm {data}}$ &$\\overline{\\mathcal{F}}$ & $\\sigma$ ~~~\\\\\n\\colrule\n$U_{\\rm{CNN}}$ & 30 & 0.9920 & ~0.0081~ & - & - & - & 320 &\n0.9109 & ~0.0123~ & 290 & 0.9006 & 0.0147 ~~~\\\\\nCNOT & 44 & 0.9798 & 0.0701 & 62 & 0.9203 & ~0.0905~ &\n52 & 0.9514 & 0.0263 & 48 & 0.9308 &\n0.0475~~~\\\\\nC-$R_x^{\\pi}$ & 48 & 0.9728 & 0.0797 & 58 & 0.9068\n& 0.0746 & 48 & 0.9332 & 0.0805 & 52 & 0.9503 &\n0.0504 ~~~\\\\\n$U^J_{12}$ & 14 & 0.9549 & 0.0963 & 24 & 0.9464\n& 0.0468 & 32 & 0.9075 & 0.0459 & 34 & 0.9071 &\n0.0561 ~~~\\\\\n$U^J_{23}$ & 14 & 0.9641 & 0.0734 & 28 & 0.9145 &\n0.0710 & 66 & 0.9019 & 0.0217 & 68 & 0.9048 &\n0.0198~~~\\\\\n$U^J_{13}$ & 18 & 0.9417 & 0.0980 & 38 & 0.9067 & 0.0695\n & - & - & - & - & - & -\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\\subsection{Comparison of CS-QPT and LS-QPT protocols}\n\\label{sec3.4}\n\\begin{table}[h!] 
\n\\centering\n\\caption{\\label{fid} Experimental quantum process\nfidelities obtained via the CS and LS methods using \nthe full data set $m^{\\rm{full}}_{\\rm{data}}$.}\n\\begin{tabular}{r c c c c}\n\\hline\nGate & ~~CS-PEB~~& ~~CS-PB~~ & ~~LS-PEB~~& ~~LS-PB\\\\\n\\hline $U_{\\rm{CNN}}$ & 0.9980 & 0.8877 & 0.9542 &~ 0.9542\\\\ \nCNOT & 0.9984 & 0.9843 & 0.9817 &~ 0.9817\\\\ \nC-$R^{\\pi}_x$ & 0.9980 & 0.9744 & 0.9831 &~ 0.9831\\\\\n$U^J_{12}$ & 0.9967 & 0.9894 & 0.9819 &~ 0.9819 \\\\\n$U^J_{23}$ & 0.9976 & 0.9793 & 0.9273 &~ 0.9273 \\\\\n$U^J_{13}$ & 0.9895 & 0.9710 & 0.8942 &~ 0.8942 \\\\\n\\hline \n\\end{tabular}\n\\end{table}\nThe fidelity of the experimentally estimated $\\chi_{{\\rm exp}}$\nis computed using the\nmeasure~\\cite{zhang-prl-2014}:\n\\begin{equation}\n{\\mathcal F}(\\chi^{}_{\\rm exp},\\chi^{}_{\\rm ideal})=\n\\frac{|{\\rm Tr}[\\chi^{}_{\\rm exp}\\chi_{\\rm ideal}^\\dagger]|}\n{\\sqrt{{\\rm Tr}[\\chi_{\\rm exp}^\\dagger\\chi^{}_{\\rm exp}]\n{\\rm Tr}[\\chi_{\\rm ideal}^\\dagger\\chi^{}_{\\rm ideal}]}},\n\\label{eq9}\n\\end{equation} \nwhere\n$\\chi_{\\text{ideal}}$ is the theoretically constructed process\nmatrix,\nand as $\\chi_{\\rm{exp}} \\rightarrow\n\\chi_{\\rm{ideal}}$, ${\\mathcal F}(\\chi^{}_{\\rm\nexp},\\chi^{}_{\\rm ideal}) \\rightarrow 1$. \n\n\nWe performed QPT of several two- and three-qubit quantum gates using both\nthe CS-QPT and LS-QPT protocols on a reduced data set. The CS-QPT method was\nimplemented for the PEB and PB basis sets. For a two-qubit system,\n$m_{\\text{data}}^{\\rm full} = 256$, while for a three-qubit system,\n$m_{\\text{data}}^{\\rm full} = 5376$, where $m_{\\text{data}}^{\\rm full}$\ndenotes the size of the full data set obtained using the complete set of input\nstates and tomographic rotation operators for two and three qubits as given in\nSections~\\ref{csqpt2q} and \\ref{csqpt3q}, respectively. 
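The fidelity measure of Eq.~(\\ref{eq9}) above can be evaluated directly in a few lines. The sketch below is plain Python with process matrices represented as nested lists of complex numbers; the matrices used in any test are illustrative placeholders, not experimental $\\chi$ matrices.

```python
import math

def tr_a_bdag(a, b):
    """Tr[a b^dagger] = sum_{i,j} a[i][j] * conj(b[i][j])."""
    n = len(a)
    return sum(a[i][j] * b[i][j].conjugate()
               for i in range(n) for j in range(n))

def process_fidelity(chi_exp, chi_ideal):
    """Fidelity of Eq. (9): |Tr[chi_exp chi_ideal^dag]| normalized by
    sqrt(Tr[chi_exp^dag chi_exp] * Tr[chi_ideal^dag chi_ideal])."""
    num = abs(tr_a_bdag(chi_exp, chi_ideal))
    den = math.sqrt((tr_a_bdag(chi_exp, chi_exp)
                     * tr_a_bdag(chi_ideal, chi_ideal)).real)
    return num / den
```

By construction the measure is insensitive to an overall scale of $\\chi_{\\rm exp}$ and approaches 1 as $\\chi_{\\rm exp} \\rightarrow \\chi_{\\rm ideal}$.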
In the PEB basis,\n$\\chi_{\\text{ideal}}$ is maximally sparse for all unitary quantum gates, while\nin the PB basis, the process matrices $\\chi_{\\text{ideal}}$ corresponding to the two-qubit CNOT,\ncontrolled-$R_{x}^{\\pi}$ and $U^J_{ij}$ gates have 16, 16 and 4 non-zero\nelements, respectively (out of a total of 256 elements). For the three-qubit\ngate $U_{\\rm CNN}$, $\\chi_{\\text{ideal}}$ has 16 non-zero elements (out of a\ntotal of 4096 elements).\n \n\nThe performance of the CS-QPT method was compared with the\nLS-QPT method for six different quantum processes corresponding to: (i) a\nthree-qubit $U_{\\rm CNN}$ gate, (ii) a two-qubit CNOT gate, (iii) a\ncontrolled-$R_x^{\\pi}$ rotation, (iv) $U^J_{23}$, (v) $U^J_{13}$ and (vi)\n$U^J_{12}$, of which the results for (a) a\nthree-qubit $U_{\\rm CNN}$ gate, (b) a two-qubit CNOT gate and (c) $U^J_{23}$\nare displayed in Fig.~\\ref{plots}. The top panel in Fig.~\\ref{plots} represents\nthe average gate fidelity $\\overline{\\mathcal{F}}$ \nplotted against $m_{\\text{data}}$, while the\nbottom panel represents the standard deviation $\\sigma$ in average gate fidelity\nplotted against $m_{\\text{data}}$. The average gate fidelity is\nobtained using the average process matrix \nestimated via the LS and CS algorithms in the PEB\nand PB bases. \nThe plots in red and blue represent the results of \nthe CS-QPT method implemented\nin the PEB and PB bases, respectively, while the \nplots in green and pink represent\nthe results of the LS-QPT \nmethod implemented in the PEB and PB bases, respectively. 
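The average quantities plotted in Fig.~\\ref{plots} come from repeated random subsampling of the full data set. A minimal sketch of that loop is given below; `estimate_fidelity` is a placeholder for the full CS or LS reconstruction pipeline, which is not reproduced here.

```python
import random

def subsample_trials(full_data, m_data, estimate_fidelity,
                     n_trials=50, seed=1):
    """Draw m_data points at random from the full data set n_trials
    times, run the (placeholder) estimator on each draw, and return
    the mean fidelity together with the individual trial values."""
    rng = random.Random(seed)
    fids = [estimate_fidelity(rng.sample(full_data, m_data))
            for _ in range(n_trials)]
    return sum(fids) / n_trials, fids
```

The default of 50 trials matches the averaging used for the figures.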
The average fidelity\nand the value of $\\sigma$ are computed by implementing the CS-QPT and LS-QPT\nprotocols 50 times for $m_{\\text{data}}$ randomly selected data\npoints, with $\\sigma$ calculated from: \n\\begin{equation}\n\\sigma = \\sqrt{\\frac{\\sum_{i=1}^{N}\n(\\mathcal{F}_i-\\overline{\\mathcal{F}})^2}{N-1}} \\label{eq10}\n\\end{equation} \nwhere $N=50$ and $\\overline{\\mathcal{F}}$ is\nthe average fidelity.\n\n\n\nThe plots in the first column of Fig.~\\ref{plots} correspond to the\nthree-qubit gate $U_{\\rm{CNN}}$, where Fig.~\\ref{plots}(a) depicts the\naccuracy, while Fig.~\\ref{plots}(d) gives the precision in characterizing\n$U_{\\rm{CNN}}$, for a given value of $m_{\\text{data}}$. Similarly, the second\nand third columns in Fig.~\\ref{plots} represent the experimental results\ncorresponding to the CNOT gate and the $U^J_{23}$ quantum \nprocess, respectively. The plots\ncorresponding to the two-qubit controlled-rotation gate (C-R$_{x}^{\\pi}$) are\nsimilar to those for the CNOT gate, while the plots corresponding to $U^J_{13}$ and\n$U^J_{12}$ are similar to those for $U^J_{23}$ (plots not shown). As seen from\nFig.~\\ref{plots}, the CS-QPT method implemented in the PEB basis performs\nsignificantly better than the LS-QPT method and the CS-QPT method implemented in the\nPB basis, for all the quantum processes considered. The performance of the\nLS-QPT method is independent of the choice of basis operators. On the other\nhand, the CS-QPT method may yield a lower fidelity as compared to the LS-QPT\nmethod, if the basis operators are not properly chosen. Using a reduced data\nset, the overall performance for the three-qubit gate $U_{\\rm{CNN}}$ is CS-PEB\n$>$ CS-PB $>$ LS-PEB $\\approx$ LS-PB, while for the two-qubit CNOT and\nC-R$_x^{\\pi}$ gates, CS-PEB $>$ LS-PEB $\\approx$ LS-PB $>$ CS-PB. 
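The standard deviation of Eq.~(\\ref{eq10}) uses the sample ($N-1$) normalization; a minimal sketch:

```python
import math

def sample_std(fidelities):
    """Eq. (10): sqrt( sum_i (F_i - Fbar)^2 / (N - 1) )."""
    n = len(fidelities)
    mean = sum(fidelities) / n
    return math.sqrt(sum((f - mean) ** 2 for f in fidelities) / (n - 1))
```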
For the\ntwo-qubit $U_{ij}^J$ processes, CS-PEB $>$ CS-PB $>$ LS-PEB $\\approx$ LS-PB.\n\nFor the two-qubit CNOT and C-R$_x^{\\pi}$ gates, the LS algorithm performs\nbetter than the CS algorithm in the PB basis for all values of $m_{\\rm{data}}$,\nwhile for the three-qubit $U_{\\rm{CNN}}$ gate, the LS algorithm performs\nbetter than the CS algorithm in the PB basis for $m_{\\rm{data}} \\geq 160$,\nwhich clearly shows the importance of selecting an appropriate operator basis\nset while implementing the CS algorithm. \nThe plots given in Fig.~\\ref{plots} provide information about\nthe experimental complexity of the CS and LS algorithms, \\textit{i.e.}, the\nnumber of experiments required in each case to characterize a\ngiven quantum process.\nWe note here in passing that the\nstandard deviation in average fidelity ($\\sigma$) is not monotonic. For small\nvalues of $m_{\\rm {data}}$, the process of randomly selecting $m_{\\rm {data}}$\ndata points to estimate the process matrix is more likely to lead to a lower\nfidelity and higher values of the standard deviation $\\sigma$, and hence lower\nprecision. For the two-qubit CNOT and controlled-R$_x^{\\pi}$ gates, $\\sigma$\nhas a maximum around $m_{\\rm {data}}\\approx 20 $, while for the $U_{\\rm{CNN}}$,\n$U^J_{23}$, $U^J_{13}$ and $U^J_{12}$ quantum processes, $\\sigma$ has a maximum\naround $m_{\\rm {data}}\\approx 10 $. In all cases, the CS-PEB method yields\nbetter precision as compared to the CS-PB, LS-PEB and LS-PB methods. 
For the two-qubit CNOT and\ncontrolled-$R_x^{\\pi}$ gates, $\\overline{\\mathcal{F}}_{\\rm{CS-PEB}} \\geq 0.9798\n\\pm 0.0701 $ and $\\overline{\\mathcal{F}}_{\\rm{CS-PEB}} \\geq 0.9728 \\pm 0.0797 $\nfor $m_{\\rm{data}} \\geq 44$ and $m_{\\rm{data}} \\geq 48$, respectively. The\nreduced data set is $\\approx 5$ times smaller than the full data set, which\nimplies that the experimental complexity is reduced by $\\approx 80 \\%$ as\ncompared to the standard QPT method. Furthermore, for all the two-qubit quantum\nprocesses corresponding to $U_{ij}^J$, $\\overline{\\mathcal{F}}_{\\rm{CS-PEB}}\n\\geq 0.9417 \\pm 0.0980 $ for $m_{\\rm{data}} \\geq 18$. This reduced data set is\n$\\approx 12$ times smaller than the full data set, which implies that the\nexperimental complexity in these cases is reduced by $\\approx 92 \\%$ as\ncompared to the standard QPT method. \n\\section{Concluding Remarks}\n\\label{concl} \nWe designed a general quantum circuit to acquire experimental data compatible\nwith the CS-QPT algorithm. The proposed quantum circuit can also be used on\nother experimental platforms and can be extended to higher-dimensional systems.\nWe successfully demonstrated the efficacy of the CS-QPT protocol for various\nquantum processes corresponding to the three-qubit $U_{\\rm{CNN}}$ gate,\ntwo-qubit CNOT and controlled-rotation gates, and several two-qubit $U_{ij}^J$\nunitary operations. Our experimental comparison of the CS-QPT and LS-QPT\nschemes demonstrates that the CS-QPT protocol is far more efficient, provided\nthat the process matrix is maximally sparse and that an appropriate operator\nbasis is chosen. \n\nStandard QPT protocols do not have access to prior information about the\nintended target unitary and hence require a large number of parameters to\ncompletely characterize the unknown quantum process. 
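The reductions in experimental complexity quoted above follow from the ratio of the reduced to the full data set; a minimal check, using $m_{\\rm data}^{\\rm full}=256$ for two qubits:

```python
def reduction_percent(m_data, m_full):
    """Percentage reduction in the number of experiments relative to
    the full data set of standard QPT."""
    return 100.0 * (1.0 - m_data / m_full)
```

For the CNOT gate, `reduction_percent(44, 256)` gives about 83%, consistent with the quoted reduction of roughly 80%.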
CS methods can be used to\ndramatically reduce the resources required to reliably estimate the full\nquantum process, in cases where there is substantial prior information\navailable about the quantum process to be characterized. Since the CS-QPT\nmethod uses fewer resources and is experimentally viable, it can be used to\ncharacterize higher-dimensional quantum gates and to validate the performance\nof large-scale quantum devices. \n\n\n\\begin{acknowledgments}\nAll the experiments were performed on a Bruker Avance-III\n600 MHz FT-NMR spectrometer at the NMR Research Facility of\nIISER Mohali. Arvind acknowledges financial support from\nDST\/ICPS\/QuST\/Theme-1\/2019\/Q-68.\nK.~D. acknowledges financial support from\nDST\/ICPS\/QuST\/Theme-2\/2019\/Q-74.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nObservation of neutron star properties such as \nmass, size and temperature\nprovides us with important clues to the understanding of the state of matter\nat extremely high densities. \nIn the 1970's, the maximum mass of the neutron star\nwas calculated with the $NN$ potentials available at that time \n\\cite{pand-npa71,bj-npa74,ps-npa75,fp-npa81}\nand mean field models \\cite{wal-ap74,ps-plb75}. \nMost of the calculations done in the 1970's resulted in \nstiff equations of state, \nand thus the maximum mass of \na neutron star was predicted to be larger than $2 M_\\odot$,\nwhere $M_\\odot$ is the solar mass.\nOnly the Reid soft core potential yielded a soft equation of \nstate and consequently a small maximum mass of a neutron star,\n$1.6 M_\\odot$ \\cite{pand-npa71}.\nRecent observations of the masses of binary pulsars \\cite{tc-apj99},\nwhich are neutron star candidates, indicate that the\nmaximum mass of neutron stars is roughly $1.5 M_\\odot$,\nsubstantially smaller than most of the predicted values in the 1970's. 
\n(However, very recent observations seem to suggest the possible\nexistence of more massive pulsars in the range $(1.8 - 2.0) M_\\odot$\n\\cite{apj2004,apj2005}, though further confirmation is needed.)\nOn the other hand, exotic forms of matter, i.e., matter consisting\nof degrees of freedom other than the nucleons, were proposed \nmany years ago.\nSome of the proposed exotic states of matter include those with\ncreation of hyperons \\cite{hyperon}, \nBose-Einstein condensation (pions \\cite{pion} or kaons \\cite{kaon}), \nstrange matter \\cite{strange}, and \nquark deconfinement \\cite{itoh70,bc-plb76,kk-plb76}.\nThese exotic states seem to reduce the maximum mass of a neutron star \nclose to the observations \n\\cite{kpe-prc95, s-pal, nkg-prd92}, \nimplying that exotic degrees of freedom may be needed\nto reproduce the observed masses of neutron stars.\n\nIn this work, we consider the strangeness degrees of freedom by including\nboth hyperon creation and kaon condensation in neutron star matter.\n(It is the anti-kaon that matters here, but we simply refer to\nboth kaons and anti-kaons as kaons for brevity.)\nThe masses and energies of the hyperons and kaons in medium are sensitive to\ntheir interactions with the surrounding matter. \nIn the meson-exchange picture,\nmeson-hyperon and meson-kaon coupling constants can fix the\nstrength of these interactions. 
\nThe meson-hyperon coupling constants may be\ndetermined from the binding energies of hyperons in hypernuclei.\nThe meson-kaon coupling constants have been studied by using\nkaon-nucleon scattering \\cite{Cieply,Kaiser} \nand kaonic atom data \\cite{Cieply}.\nRecently, the magnitudes of the kaon-nucleus potential \nin matter have attracted much attention.\nSome calculations \\cite{Schaffner,oset20,Cieply}\nshow that the real part of the $K^-$-nucleus optical potential $U_{K^-}$ \nis shallow ($U_{K^-} \\approx$ $-$50 MeV), but some other calculations\nsuggest that $U_{K^-}$ can be as large as about $-120$ MeV \n\\cite{gal94,Kaiser} or even close to $-200$ MeV \\cite{Batty}.\n\nAkaishi and Yamazaki predicted the possible existence of\ndeeply bound kaonic nuclei \\cite{akaishi},\nin which $U_{K^-}$ at normal density $\\rho_0$ was\nestimated to be about $-120$ MeV.\nThen, experiments at KEK claimed the observation of \ntribaryon kaonic nuclei, \nS$^0$ \\cite{s0} and S$^+$ \\cite{suzuki1}, which seem to suggest that \n$K^-$ may be even more deeply bound than \nthe theoretical prediction \\cite{akaishi}.\n(The former claim \\cite{s0}, however, was withdrawn by the experimental group\n\\cite{iwasaki}.) \nThe FINUDA collaboration at DA$\\Phi$NE \\cite{FINUDA} \nand a BNL experiment with the $^{16}{\\rm O}(K^-, n)$ reaction \\cite{kishi-npa05} \nalso reported distinct peaks. \nMore recently, a theoretical work considered\nlarge kaonic binding energies and calculated the widths of kaonic nuclear\nbound states \\cite{Mares}.\nThe identities of these experimental peaks need to be\nstudied further experimentally and theoretically. 
However, in this work, \nwe consider the possibility of a \ndeep optical potential of kaons in nuclei and \nexplore the consequences for the \ncomposition profile of neutron star matter.\n\nFor the description of dense matter\nwe employ the modified quark-meson coupling (MQMC) model \n\\cite{mqmc}.\nNucleons and hyperons in the baryon octet are treated as MIT bags.\nThe bag constant $B_B$ and phenomenological constant $Z_B$ for a baryon \n$B$ are fixed to reproduce the free mass of each baryon $B$.\nCoupling constants between ($u$, $d$)-quarks in the bags and ($\\sigma$, \n$\\omega$, $\\rho$)-mesons are adjusted to give us the \nbinding energy per nucleon $E_b\/A = 16$ MeV and symmetry energy \n$a_{\\rm sym} = 32.5$ MeV at the\nsaturation density $\\rho_0 = 0.17\\, {\\rm fm}^{-3}$. \nSince the interaction between the $s$-quark and the mesons is not well known,\nwe adopt the standard quark counting rule and assume the $s$-quark is\ndecoupled from the ($\\sigma$, $\\omega$, $\\rho$)-mesons.\nTo take into account the interactions between $s$-quarks,\nwe introduce $\\sigma^* (980)$ and $\\phi(1020)$ mesons\nfollowing Ref.~\\cite{s-pal} for the baryon and Ref.~\\cite{bb-prc01} \nfor the kaon.\nWe also treat the kaon as a point particle.\nThis treatment allows us to use $U_{K^-}$\nas an input to fix the coupling constant between the $\\sigma$-meson \nand the kaon, $g_{\\sigma K}$. \nIn our model the real part of the kaon optical potential at $\\rho = \\rho_0$\ncan be written as\n$U_{K^-} = - (g_{\\sigma K} \\sigma(\\rho_0)\n+ g_{\\omega K} \\omega(\\rho_0)) $,\nwhere $\\sigma(\\rho_0)$ and $\\omega(\\rho_0)$ are the values of\nthe meson fields at $\\rho_0$. Using the value of $g_{\\omega K}$ \ngiven by the quark counting rule,\nwe can fix $g_{\\sigma K}$ for each given value of $U_{K^-}$. 
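Fixing $g_{\\sigma K}$ from a chosen $U_{K^-}$ amounts to inverting the linear relation above. In the sketch below, the values used for $\\sigma(\\rho_0)$ and $\\omega(\\rho_0)$ in any example are illustrative placeholders, not the self-consistent fields obtained in this work.

```python
def g_sigma_K(U_Kminus, g_omega_K, sigma_0, omega_0):
    """Invert U_K- = -(g_sigmaK * sigma(rho0) + g_omegaK * omega(rho0))
    for g_sigmaK; all quantities must be in consistent units (e.g. MeV)."""
    return (-U_Kminus - g_omega_K * omega_0) / sigma_0
```

The inversion is trivially self-consistent: substituting the returned coupling back into the relation reproduces the input $U_{K^-}$.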
\nOnce the parameters of the model are fixed,\nthe composition profile of neutron star matter can be obtained from\nthe $\\beta$-equilibrium and charge neutrality.\nWe find that the composition of neutron star matter changes \ndramatically depending on the value of $U_{K^-}$.\n\nTo investigate the model dependence of the results\nwe also employ quantum hadrodynamics (QHD)\nmodel \\cite{sw-qhd} for calculating the composition of matter.\nThe parameters of the QHD model are calibrated to produce exactly \nthe same saturation properties as in the MQMC model.\nOur calculations show that the onset densities of \nthe kaon condensation and the compositions of matter\nat high densities are substantially model dependent.\nIn Sect. II, we introduce model Lagrangians and fix the model parameters.\nThe results are discussed in Sect. III. \nConclusions and discussions follow in Sect. IV.\n\n\\section{Theory}\n\nIn this section we first briefly sketch the MQMC and QHD models\nby presenting the model Lagrangians.\nThe models are calibrated so as to be consistent with each other\nat the saturation density by fixing the \ncoupling constants of both models to produce exactly\nthe same saturation properties; the saturation density,\nthe binding energy, the symmetry energy, the nucleon effective\nmass, and the compression modulus.\nWe then show how the physical quantities that will determine \nthe composition of the neutron star matter can be obtained self-consistently.\n\\subsection{{\\bf Models}}\n\\label{sect:The MQMC model with kaons}\n\nThe model Lagrangian comprises\nthe terms for the octet baryons, exchange mesons, leptons and kaons;\n$\\mathcal L_{tot} = \\mathcal L_B + \\mathcal L_M + \\mathcal L_l + \\mathcal L_K$.\nOctet baryon, exchange meson and lepton terms \nin the mean field approximation can be written as\n\\begin{eqnarray}\n\\mathcal L_B & =& \\sum_B \\bar \\psi_B \\left[i\\gamma \\cdot \\partial - m_B^* \n(\\sigma, \\sigma^*)\n-\\gamma^0 \\left(g_{\\omega 
B}\\omega_0 + g_{\\phi B}\\phi_0 + \\frac 12 g_{\\rho B}\n\\tau_z \\rho_{03} \\right) \\right] \\psi_B \\label{eq:lagb} \\\\ \n\\mathcal L_M &=& -\\frac 12 m_\\sigma^2 \\sigma^2 - \\frac 12 m_{\\sigma^*}^2 {\\sigma^*}^2\n+ \\frac 12 m_\\omega^2 \\omega_0^2 + \\frac 12 m_\\phi^2 \\phi_0^2 + \\frac 12 m_\\rho^2 \\rho_{03}^2, \\label{eq:lagm} \\\\ \n\\mathcal L_l &=& \\sum_l \\bar \\psi_l ( i \\gamma \\cdot \\partial - m_l)\\psi_l ,\n\\label{eq:lagl}\n\\end{eqnarray}\nwhere $B$ denotes the sum over all the octet baryons \n($p, ~n, ~\\Lambda, ~\\Sigma^+, ~\\Sigma^0, ~\\Sigma^-, ~\\Xi^0, ~\\Xi^-$),\nand $l$ stands for the sum over the free electrons and muons ($e^-$, $\\mu^-$).\n$\\sigma$, $\\omega$ and $\\rho$ mesons mediate the interactions\nbetween the non-strange light quarks ($u$ and $d$).\n$\\sigma^*$ and $\\phi$ mesons are introduced to take into\naccount the interactions between $s$ quarks.\n$\\mathcal L_B$ is of the identical form for both \nthe MQMC and QHD models,\nbut differs in the definition of\nthe effective baryon mass $m^*_B$\nas will be shown below. 
\n\n{\\bf MQMC}\n\nIn the MQMC model, a baryon is a composite system with quarks \nin a spherical bag, and its mass is given in terms of bag\nparameters and quark eigenenergy.\nThe effective mass of a baryon in matter $m^{*}_B(\\sigma, \\sigma^*)$\ncan be written as \\cite{s-pal,mqmc,fleck,st,stt}\n\\begin{eqnarray}\nm^*_B = \\sqrt{E^2_B - \\sum_q \\left(\\frac{x_q}{R} \\right)^2}.\n\\label{eq:efmass}\n\\end{eqnarray}\nThe bag energy of a baryon is given by \n\\begin{eqnarray}\nE_B &=& \\sum_q \\frac{\\Omega_q}{R} - \\frac{Z_B}{R}\n+ \\frac{4}{3} \\pi\\, R^3\\, B_B,\n\\label{eq:bagery}\n\\end{eqnarray}\nwhere $B_B$ and $Z_B$ are the bag constant and \na phenomenological constant for the zero-point motion of a baryon $B$,\nrespectively.\n$\\Omega_q = \\sqrt{x^2_q + (R m^*_q)^2}$, \nwhere $m^*_q (= m_q - g^q_\\sigma \\sigma - g^q_{\\sigma^*} \\sigma^*)$\nis the effective mass of a quark whose free mass is $m_q$.\nWe take $m_q =0$ for $q=u,d$ and $m_q = 150$ MeV for $q=s$.\n(Other choices of $m_{q=s}$ values do not make differences in the \nresults~\\cite{Theta+}.)\n$x_q$ is determined from the boundary condition on the bag surface $r = R$,\n\\begin{equation}\nj_0(x_q) = \\beta_q j_1(x_q),\n\\label{eq:bessel}\n\\end{equation}\nwhere\n$\n\\beta_q = \\sqrt{\\frac{\\Omega_q - R m^*_q}{\\Omega_q + R m^*_q}}.\n$\nIn the MQMC model, the bag constant $B_B$ is assumed to depend on density\n\\cite{mqmc,stt}.\nIn this work, we use the extended form in Ref.~\\cite{s-pal} to include the \ncontribution from $\\sigma^*$ as\n\\begin{eqnarray}\nB_B (\\sigma,\\, \\sigma^*) = \nB_{B0} \\exp \\left\\{ -\\frac{4}{m_B} \\left({g'}_\\sigma^B \\sum_{q=u,d} n_q \\sigma\n+ {g'}_{\\sigma^*}^B (3-\\sum_{q=u,d}n_q) \\sigma^*\\right) \\right\\},\n\\label{eq:bag}\n\\end{eqnarray}\nwhere $m_B$ is the free mass of the baryon $B$,\nassuming that the $\\sigma$ meson couples to $u$ and $d$ quarks only\nand that the $\\sigma^*$ meson couples to the $s$ quark only.\n\n{\\bf QHD}\n\nIn the QHD model, a 
baryon is treated as a point particle,\nand thus its effective mass is simply written as \n\\begin{eqnarray}\nm^*_B = m_B - g_{\\sigma B} \\sigma - g_{\\sigma^* B} \\sigma^*.\n\\end{eqnarray}\nIn order to reproduce the same saturation properties \nas obtained in the MQMC model, \nself-interactions of the $\\sigma$-field \\cite{bb-npa77}\n\\begin{eqnarray}\nU^{\\rm QHD}_{\\sigma} = \\frac{1}{3} g_2\\, \\sigma^3 + \n\\frac{1}{4} g_3\\, \\sigma^4 \n\\label{eq:U_QHD}\n\\end{eqnarray}\nare added to Eq.~(\\ref{eq:lagm}) so that\n\\begin{eqnarray}\n\\mathcal L_M^{\\rm QHD} = \\mathcal L_M - U^{\\rm QHD}_{\\sigma}.\n\\end{eqnarray}\nAs mentioned above, the baryon and the lepton Lagrangians for the QHD model \ntake the form given by \nEqs.~(\\ref{eq:lagb}) and (\\ref{eq:lagl}).\n\n{\\bf Kaon}\n\nThe effective Lagrangian for the kaon may be expressed as \\cite{glend}\n\\begin{eqnarray}\n\\mathcal L_K = D_\\mu^* K^* D^\\mu K - {m_K^*}^2 K^* K,\n\\label{eq:lagk}\n\\end{eqnarray}\nwhere \n$\nD_\\mu = \\partial_\\mu + i g_{\\omega K}\\omega_\\mu \n-i g_{\\phi K} \\phi_\\mu + i \\frac 12 g_{\\rho K} \\vec \\tau \\cdot \\vec \\rho_\\mu.\n$\nIn this work we treat the kaon\nas a point particle in both MQMC and QHD models,\nand its effective mass is given by\n\\begin{equation}\nm_K^* = m_K - g_{\\sigma K} \\sigma - g_{\\sigma^* K} \\sigma^*.\n\\label{eq:keffmass}\n\\end{equation}\nThe equation of motion for a kaon is given by \n\\begin{eqnarray}\n[D_\\mu D^\\mu + {m_K^*}^2] K(x) = 0.\n\\end{eqnarray}\nIn uniform infinite matter, the kaon field $K(x)$ \ncan be written as a plane wave.\nSubstituting the plane wave solution into the equation \nof motion, we obtain the dispersion relation for the anti-kaon\n\\begin{eqnarray}\n\\omega_K = m_K^* - g_{\\omega K} \\omega_0 \n+ g_{\\phi K} \\phi_0 - g_{\\rho K} \\frac 12 \\rho_{03}.\n\\label{eq:kaon-dispersion}\n\\end{eqnarray}\n\n\\subsection{Model parameters}\n\n{\\bf MQMC}\n\nIn the MQMC model, MIT bag parameters $B_{B0}$ and $Z_B$ are 
determined\nto reproduce the free mass of a baryon $B$,\n$m^*_B \\left|_{\\rho = 0} = m_B \\right. $ \nwith the minimization condition \n$\\left. \\frac{\\partial m_B}{\\partial R} \\right|_{R = R_0} =0$\nat a free bag radius $R_0$, which we choose as $R_0 = 0.6$ fm.\nThe bag parameters $B_{B0}$ and $Z_B$ for the octet baryons\nare listed in Table~\\ref{tab:b-z}.\n\\begin{table}[tbp]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|} \\hline\n~~ $B$ ~~ & $m_B$ (MeV) & $B^{1\/4}_{B0}$ (MeV) & $Z_B$ \\\\ \\hline\n $N$ & 939.0 & ~~188.1 ~~& ~~2.030 ~~\\\\ \\hline\n$\\Lambda$ & 1115.6 & 197.6 & 1.926 \\\\ \\hline\n$\\Sigma^+$ & 1189.4 & 202.7 & 1.829 \\\\ \\hline\n$\\Sigma^0$ & 1192.0 & 202.9 & 1.826 \\\\ \\hline\n$\\Sigma^-$ & 1197.3 & 203.3 & 1.819 \\\\ \\hline\n$\\Xi^0 $ & 1314.7 & 207.6 & 1.775 \\\\ \\hline\n$\\Xi^- $ & 1321.3 & 208.0 & 1.765 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Bag constants $B_{B0}$ and phenomenological constants $Z_B$ for\noctet baryons to reproduce the free mass of each baryon. \nThe bag radius is chosen as $R_0=0.6$ fm for all octet baryons, \nand the bare masses of quarks are fixed as\n$m_{u(d)}=0$ MeV and $m_s = 150$ MeV.}\n\\label{tab:b-z}\n\\end{table}\n\nThree saturation\nconditions $\\rho_0$, $E_b\/A$, and $a_{\\rm sym}$ could determine\nthree quark-meson coupling constants $g^{u,d}_\\sigma$,\n$g^{u,d}_\\omega$ and $g^{u,d}_\\rho$, assuming $u$ and $d$ quarks\nto be identical in the isodoublet.\nThe MQMC model, however, introduces an additional constant\n$g'^B_\\sigma$ in Eq.~(\\ref{eq:bag}). 
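As a numerical aside, the quark eigenvalues $x_q$ of the bag boundary condition $j_0(x_q) = \\beta_q j_1(x_q)$ (Eq.~(\\ref{eq:bessel})) are easily found by bisection. The sketch below treats a massless quark, for which $\\beta_q = 1$, and recovers the familiar lowest mode $x \\approx 2.04$; it is an illustration, not the fitting code used in this work.

```python
import math

def j0(x):
    """Spherical Bessel function j0(x) = sin(x)/x."""
    return math.sin(x) / x

def j1(x):
    """Spherical Bessel function j1(x) = sin(x)/x^2 - cos(x)/x."""
    return math.sin(x) / x ** 2 - math.cos(x) / x

def bag_root(beta, lo=0.1, hi=3.0, tol=1e-12):
    """Lowest root of j0(x) - beta*j1(x) = 0 in [lo, hi] by bisection."""
    f = lambda x: j0(x) - beta * j1(x)
    assert f(lo) * f(hi) < 0, "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a massive quark, $\\beta_q < 1$ and the same bracket can be reused after checking the sign change.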
\nThus we fix $g^{u,d}_\\sigma = 1$, and adjust the remaining\nthree constants to meet the three conditions.\nThe resulting coupling constants are given in Table~\\ref{tab:coupling}\ntogether with the ratio of the effective mass of the nucleon $m^*_N\/m_N$\nand the compression modulus $K$.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c||c|c|} \\hline\n~~$g_\\sigma^q$~~ & ~~$g_\\omega^q$~~ & ~~${g'}_\\sigma^B$~~ & \n~~$g_\\rho^q$~~ & $m_N^* \/ m_N$ & $K$ (MeV) \\\\ \\hline\n 1.0 & 2.71 & 2.27 &\n 7.88 & 0.78 & 285.5 \\\\ \\hline\n\\end{tabular}\n\\end{center} \n\\caption{The coupling constants between $(u,\\, d)$-quarks and \n$(\\sigma,\\, \\omega,\\, \\rho)$-mesons in the MQMC \nmodel to reproduce the binding energy $E_b\/A=16$ MeV \nand symmetry energy $a_{\\rm sym}=32.5$ MeV\nat the saturation density 0.17 ${\\rm fm}^{-3}$. \n$m^*_N\/m_N$ and $K$ are the ratio of the \neffective mass to the free mass of \nthe nucleon and the compression modulus at the saturation density,\nrespectively.}\n\\label{tab:coupling}\n\\end{table}\n$m^*_N$ and $K$ are within reasonable ranges: \n$m^*_N = (0.7 \\sim 0.8) m_N$ and $ K = (200 \\sim 300)$ MeV.\n\nThe coupling constants between $s$-quarks and mesons\ncannot be determined from the\nsaturation properties.\nIn principle, experimental data from hypernuclei \nand kaon-nucleus scattering could be used to determine \nthe coupling constants between $s$-quarks and mesons (for example,\nsee Ref.~\\cite{zakout05}).\nHowever, these coupling constants are not well known yet, \nand for simplicity\nwe assume that the quark counting rule holds and that\nthe $s$ quark does not interact with $u$ and $d$ quarks.\nThen we have\n\\begin{equation}\ng^s_{\\sigma} = g^s_{\\omega} = g^s_{\\rho} = \ng^{u,d}_{\\sigma*} = g^{u,d}_\\phi = 0.\n\\label{eq:smcoupling}\n\\end{equation}\nTo fix the meson-baryon coupling constants in the model Lagrangian,\nwe also use the quark counting rule \n\\begin{eqnarray}\n\\frac 13 g_{\\omega N} & = & 
\\frac 12 g_{\\omega \\Lambda}\n= \\frac 12 g_{\\omega \\Sigma} = g_{\\omega \\Xi} = g_\\omega ^q , \\nonumber \\\\\ng_{\\rho N} & = & g_{\\rho \\Sigma} = g_{\\rho \\Xi} = g_\\rho^q , \n~~g_{\\rho \\Lambda}=0, \\nonumber \\\\\ng_{\\phi \\Lambda} & = & g_{\\phi \\Sigma} = \\frac 12 g_{\\phi \\Xi} = g_\\phi^s,\n\\label{eq:211}\n\\end{eqnarray}\nand the SU(6) symmetry\n\\begin{eqnarray}\ng^s_{\\sigma*} &=& \\sqrt{2} g^{u,d}_\\sigma = \\sqrt{2}, \\nonumber \\\\\ng_\\phi^s &=& \\sqrt 2 g_\\omega^{u,d} = 3.83, \\nonumber \\\\\n{g'}^B_{\\sigma^*} &=& \\sqrt{2}\\, {g'}^B_\\sigma.\n\\label{eq:su6}\n\\end{eqnarray}\nThe quark-meson coupling constants $g^q _{\\omega}$ and $g^q _{\\rho}$\ngiven in Table~\\ref{tab:coupling} and\nthe relations in Eqs.~(\\ref{eq:smcoupling})--(\\ref{eq:su6})\ndetermine all the meson-baryon couplings of the MQMC model.\n\n{\\bf QHD}\n\nIn the QHD model, $g_{\\sigma N}$ and $g_{\\omega N}$ are adjusted\nto yield $\\rho_0$ and $E_b$, and $g_{\\rho N}$ \nis fitted to produce $a_{\\rm sym}$.\n$g_2$ and $g_3$ in $U^{\\rm QHD}_{\\sigma}$ of Eq.~(\\ref{eq:U_QHD})\nare fixed to \nreproduce the same $m^*_N$ and $K$ values as listed in \nTable~\\ref{tab:coupling} for the MQMC model.\nThe coupling constants determined in this way\nare given in Table~\\ref{tab:coupling-qhd}.\n\\begin{table}[tbp]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\\hline\n~~$g_{\\sigma N}$~~ & ~~$g_{\\omega N}$~~ & ~~$g_{\\rho N}$~~ &\n$g_2$ (fm$^{-1}$) & $g_3$ \\\\ \\hline\n8.06 & 8.19 & 7.88 & 12.139 & 48.414 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\caption{The meson-nucleon coupling constants and the coefficients\nof the $\\sigma$-meson self-interaction terms used in the QHD model.\nThey reproduce the same saturation properties as in the MQMC model; \n$\\rho_0 = 0.17$ fm$^{-3}$, $E_b\/A = 16$ MeV, $a_{\\rm sym} = 32.5$ MeV, \n$m^*_N = 0.78 m_N$ and $K = 285.5$ MeV.} \n\\label{tab:coupling-qhd}\n\\end{table}\nIn the MQMC model, meson-baryon coupling constants are\nobtained from the quark-meson coupling 
constants.\nOn the other hand, in QHD\nmeson-nucleon coupling constants provide the starting point\nfor the determination of the remaining meson-baryon\ncoupling constants.\nOnce the meson-nucleon coupling constants are fixed from the saturation\nproperties, meson-hyperon coupling constants can be obtained \nby the quark counting rule (as in Eq.~(\\ref{eq:211}))\nand the SU(6) symmetry (as in Eq.~(\\ref{eq:su6})).\nThe coupling constants between strange mesons and hyperons can be obtained\nby combining the quark counting rule and the SU(6) symmetry,\n{\\it e.g.}, $g_{\\phi \\Lambda} = \\sqrt{2} g_{\\omega N} \/3$ and\n$g_{\\sigma^* \\Lambda} = \\sqrt{2}g_{\\sigma N}\/3$.\n\n{\\bf Kaon}\n\nThere are 5 kaon-meson coupling constants in our models: $g_{\\sigma K}$,\n$g_{\\omega K}$, $g_{\\rho K}$, $g_{\\sigma^* K}$ and $g_{\\phi K}$.\n$g_{\\omega K}$ and $g_{\\rho K}$ can be fixed\nfrom the quark counting rule: $g_{\\omega K} = g_\\omega^q$ and \n$g_{\\rho K}=g_\\rho^q$\nfor the MQMC model, and $g_{\\omega K} = g_{\\omega N}\/3$ and\n$g_{\\rho K}=g_{\\rho N}$ for QHD. \n(Obviously $g_{\\rho K}$ from the MQMC model is the same\nas that from QHD. $g_{\\omega K} (= 2.71)$ from the MQMC model is\nessentially the same as $g_{\\omega K} (=2.73)$ from QHD.)\n$g_{\\sigma^* K}$ may be fixed from $f_0(980)$ decay \\cite{wa76-91}, \nand $g_{\\phi K}$ from the SU(6) relation \n$\\sqrt{2} g_{\\phi K} = g_{\\pi \\pi \\rho} = 6.04$ \\cite{j-schaf}. 
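The numerical values implied by these relations are a one-line check:

```python
import math

# SU(6) relation quoted above: sqrt(2) * g_phiK = g_pipirho = 6.04
g_phi_K = 6.04 / math.sqrt(2)   # about 4.27

# Quark counting in QHD: g_omegaK = g_omegaN / 3, with g_omegaN = 8.19
g_omega_K_qhd = 8.19 / 3.0      # about 2.73
```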
\n$g_{\\sigma^* K}$ and $g_{\\phi K}$ thus fixed are 2.65 and 4.27,\nrespectively.\nThe remaining coupling constant, $g_{\\sigma K}$, \ncan be related to the real part of the optical potential \nof a kaon at the saturation density through\n$U_{K^-} = -(g_{\\sigma K}\\sigma + g_{\\omega K}\\omega_0)$.\n$g_{\\sigma K}$ values corresponding to\nseveral values of $U_{K^-}$ are listed in Table~\\ref{tab:gsigmaK}\nfor both the MQMC and QHD models.\n\nThus, out of the 5 kaon-meson coupling constants,\n$g_{\\rho K}$, $g_{\\sigma^* K}$, and $g_{\\phi K}$ are the same for \nboth models. $g_{\\omega K}$'s are essentially the same for both models.\n$g_{\\sigma K}$ values are also very similar \nin both models for all $U_{K^-}$ values, as seen in Table~\\ref{tab:gsigmaK}. \nTherefore, all 5 kaon-meson coupling constants are practically\nthe same for both the MQMC and QHD models.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|} \\hline\n$ U_{K^-} $ (MeV) & $-80$ & $-100$ & $-120$ & $-140$ & $-160$ \\\\ \\hline\n$g_{\\sigma K}$ (MQMC) & 1.25 & 2.01 & 2.75 & 3.50 & 4.25 \\\\ \\hline\n$g_{\\sigma K}$ (QHD) & 1.26 & 2.04 & 2.82 & 3.61 & 4.39 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{$g_{\\sigma K}$ determined for several $U_{K^-}$ values\nin the MQMC and QHD models.}\n\\label{tab:gsigmaK}\n\\end{table}\n\n\\subsection{Other quantities relevant to neutron star matter} \n\nTo obtain the composition of neutron star matter, \nwe need to determine 16 unknown variables at \neach matter density, which include 5 meson fields \n($\\sigma , \\omega , \\rho , \\sigma^* , \\phi$ ), \n8 octet baryon densities, 2 lepton densities \nand the kaon density $\\rho_K$.\nThe five meson fields can be determined from their equations of motion:\n\\begin{eqnarray}\nm_\\sigma^2 \\sigma + \\frac{\\partial}{\\partial \\sigma} U^{\\rm QHD}_{\\sigma}\n= \\sum_B g_{\\sigma B} C_B(\\sigma)\n\\frac{2J_B+1}{2\\pi^2} \\int_0^{k_B} \\frac{m_B^*}{[k^2+{m_B^*}^2]^{1\/2}}k^2 dk\n+ g_{\\sigma K} 
\\rho_K,\n\\label{sigma}\n\\end{eqnarray}\n\\begin{eqnarray}\nm_{\\sigma^*}^2 \\sigma^* = \\sum_B g_{\\sigma^* B} C_B(\\sigma^*)\n\\frac{2J_B+1}{2\\pi^2} \\int_0^{k_B} \\frac{m_B^*}{[k^2+{m_B^*}^2]^{1\/2}}k^2 dk\n+ g_{\\sigma^* K}\\rho_K,\n\\label{sigmastar}\n\\end{eqnarray}\n\\begin{eqnarray}\nm_\\omega^2 \\omega_0 = \\sum_B g_{\\omega B} (2J_B + 1) k_B^3 \/ (6\\pi^2)\n- g_{\\omega K}\\rho_K,\n\\label{eq:omega}\n\\end{eqnarray}\n\\begin{eqnarray}\nm_\\phi^2 \\phi_0 = \\sum_B g_{\\phi B} (2J_B + 1) k_B^3 \/ (6\\pi^2)\n+ g_{\\phi K}\\rho_K,\n\\label{eq:phi}\n\\end{eqnarray}\n\\begin{eqnarray}\nm_\\rho^2 \\rho_{03} = \\sum_B g_{\\rho B} I_{3B} (2J_B + 1) k_B^3 \/ (6\\pi^2)\n-g_{\\rho K} \\frac 12 \\rho_K ,\n\\label{eq:rho}\n\\end{eqnarray}\nwhere $J_B$ and $I_{3B}$ are the spin and the isospin projection, respectively, \nand $k_B$ is the Fermi momentum of the baryon $B$. \nIn Eq.~(\\ref{sigma}) $\\frac{\\partial} {\\partial \\sigma} U^{\\rm QHD}_{\\sigma}$ \nterm needs to be there only for QHD \nand is not to be included in the MQMC model.\n$C_B(\\sigma)$ and $C_B(\\sigma^*)$ are determined from the relations\n$g_{\\sigma B} C_B(\\sigma) = - \\frac{\\partial m_B^*}{\\partial \\sigma}$\nand\n$g_{\\sigma^* B} C_B(\\sigma^*) = - \\frac{\\partial m_B^*}{\\partial \\sigma^*}$ .\nFor QHD, $C_B(\\sigma) = C_B(\\sigma^*) = 1$. \nFor MQMC, the explicit forms of $C_B(\\sigma)$ and $C_B(\\sigma^*)$ \nare given in Ref. 
\\cite{s-pal}.\n\nCharge neutrality condition of neutron star matter is expressed as \n\\begin{eqnarray}\n\\sum_B q_B \\rho_B - \\rho_K - \\rho_e - \\rho_\\mu = 0,\n\\label{eq:baryonconserv}\n\\end{eqnarray}\nwhere $q_B$ is the charge of baryon $B$ and $\\rho_B$ is the number\ndensity of $B$.\nUsing the charge neutrality and the baryon number conservation conditions,\none can fix two quantities, {\\it e.g.,} the density\nof the neutron and the electron.\nWith these two variables fixed,\n$\\beta$-equilibrium conditions of the baryons give us the following 7 relations\nfor the chemical potentials of \n$p,\\, \\Lambda,\\, \\Sigma^+,\\, \\Sigma^-,\\, \\Sigma^0,\\, \\Xi^-$\nand $\\Xi^0$ as\n\\begin{eqnarray}\n\\mu_n = \\mu_\\Lambda &=& \\mu_{\\Sigma^0} = \\mu_{\\Xi^0} , \\nonumber \\\\\n\\mu_n + \\mu_e &=& \\mu_{\\Sigma^-} = \\mu_{\\Xi^-} , \\nonumber \\\\\n\\mu_n - \\mu_e &=& \\mu_p = \\mu_{\\Sigma^+},\n\\label{eq:chemi-eq}\n\\end{eqnarray}\nwhere the chemical potential of baryon $B$ is given by\n$\n\\mu_B = \\sqrt{k_B^2 + {m_B^*}^2(\\sigma,\\sigma^*)} + g_{\\omega B}\\omega_0\n+ g_{\\phi B} \\phi_0 + g_{\\rho B} I_{3B}\\rho_{03}.\n$\nThe chemical potential of a non-interacting lepton $l$ is simply written as\n$\n\\mu_l = \\sqrt{k^2_l + m^2_l}.\n$\nThe $\\beta$-equilibrium condition for leptons\n\\begin{equation}\n\\mu_e = \\mu_\\mu \n\\end{equation}\ndetermines the density of muons.\nAt a density where the condition\n\\begin{equation}\n\\omega_K = \\mu_n - \\mu_p\n\\label{eq:kaonequil}\n\\end{equation}\nis satisfied, kaons are condensed and\nthe kaon density $\\rho_K$ becomes non-zero.\nSolving the Eqs.~(\\ref{sigma}--\\ref{eq:kaonequil})\nself-consistently and simultaneously,\none can determine the 16 variables uniquely.\n\n\\section{Results}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\epsfig{file=ppl_mqmc120_new.eps, width=6.5cm}\n\\epsfig{file=ppl_qhd120_new.eps, width=6.5cm} \\\\\n\\epsfig{file=ppl_mqmc140_new.eps, width=6.5cm}\n\\epsfig{file=ppl_qhd140_new.eps, 
width=6.5cm}\\\\\n\\epsfig{file=ppl_mqmc160_new.eps, width=6.5cm}\n\\epsfig{file=ppl_qhd160_new.eps, width=6.5cm}\n\\end{center}\n\\caption{Compositions of neutron star matter calculated from the\nMQMC (left panels) and the QHD (right panels) models.}\n\\label{fig:population}\n\\end{figure}\nFig.~\\ref{fig:population} shows the relative populations,\nthe ratios of the densities of octet baryons, leptons and $K^-$ to the total\nbaryon density, in neutron star matter as functions of\n$\\rho \/ \\rho_0$ up to $\\rho=10 \\rho_0$.\nThe left panels show the results from the MQMC model and the right \nfrom the QHD model for $U_{K^-} = -120, -140,$ and $-160$ MeV.\n(Figures for $U_{K^-} = -80$ and $-100$ MeV are not shown here since\nthey are not very different from those for $U_{K^-} = -120$ MeV,\nparticularly for QHD.)\nFigures from both models show that the onset density of the\nkaon condensation, $\\rho_{\\rm crit}$, becomes lower as\n$|U_{K^-}|$ increases.\n\nTo see how $\\rho_{\\rm crit}$ changes depending on $U_{K^-}$,\nlet us consider Eq.~(\\ref{eq:kaonequil}), which determines $\\rho_{\\rm crit}$.\nFig.~\\ref{fig:kaonenergy} displays $\\omega_K$ and \n$\\mu_n - \\mu_p$, which are, respectively,\nthe left- and right-hand sides of Eq.~(\\ref{eq:kaonequil}) \n(computed without including kaon condensation, just for producing this figure).\nThe left panel is from the MQMC model, and the right panel from QHD.\nAt a density where the curve of $\\omega_K$ intersects\nthat of $\\mu_n - \\mu_p$, kaon condensation sets in.\nAmong the coupling constants and meson fields that determine \n$\\omega_K$, only $g_{\\sigma K}$ depends on $U_{K^-}$. 
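Numerically, the onset density defined by Eq.~(\ref{eq:kaonequil}) is just the first crossing of the two curves displayed in Fig.~\ref{fig:kaonenergy}. The sketch below locates such a crossing on a density grid; the two curves here are purely illustrative (hypothetical) linear parametrizations, not the MQMC or QHD model results.

```python
import numpy as np

def onset_density(rho, omega_K, mu_np):
    """Return the lowest density where omega_K <= mu_n - mu_p,
    refined by linear interpolation between grid points, or None
    if the two curves never cross on the grid."""
    diff = omega_K - mu_np
    idx = int(np.argmax(diff <= 0.0))   # first grid point past the crossing
    if diff[idx] > 0.0:
        return None                     # no intersection on this grid
    if idx == 0:
        return rho[0]
    # linear interpolation of the sign change between idx-1 and idx
    t = diff[idx - 1] / (diff[idx - 1] - diff[idx])
    return rho[idx - 1] + t * (rho[idx] - rho[idx - 1])

# Purely illustrative (hypothetical) curves, in MeV, with the density
# in units of rho_0 -- NOT the MQMC or QHD results of this work.
rho = np.linspace(1.0, 10.0, 901)
omega_K = 495.0 - 40.0 * rho
mu_np = 50.0 + 25.0 * rho
rho_crit = onset_density(rho, omega_K, mu_np)
```

For tabulated model curves, the same interpolation would give $\rho_{\rm crit}$ directly from the grid data.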
\nThe $\\sigma$-meson contributes to $\\omega_K$ \nattractively, as can be seen in Eq.~(\\ref{eq:kaon-dispersion}).\nThus $\\omega_K$ becomes smaller \nfor a larger $|U_{K^-}|$, as shown in Fig.~\\ref{fig:kaonenergy}.\n\nFigure~\\ref{fig:population} also shows \nthat as $|U_{K^-}|$ increases from $120$ MeV to $140$ MeV\n$\\rho_{\\rm crit}$ changes drastically in both MQMC and QHD models, \nbut as $|U_{K^-}|$ increases further above $140$ MeV,\n$\\rho_{\\rm crit}$ changes only moderately.\nThis can also be seen from Fig.~\\ref{fig:kaonenergy}.\nAs $|U_{K^-}|$ increases from $120$ MeV to $140$ MeV\nthe intersection between the curves for $\\mu_n - \\mu_p$ \nand $\\omega_K$ moves rapidly (particularly for QHD), \nwhereas when $|U_{K^-}|$ increases further above $140$ MeV\nthe intersection \nshifts only a little to lower densities.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=energy_kvsnp_mqmc.eps, width=7cm}\n\\epsfig{file=energy_kvsnp_qhd.eps, width=7cm}\n\\end{center}\n\\caption{Kaon energies $\\omega_K$ for $U_{K^-}$ = $-120$ MeV (dashed),\n$-140$ MeV (dotted) and $-160$ MeV (dot-dashed) \nare compared with $\\mu_n - \\mu_p$ (solid).\nAt the densities where $\\omega_K$ and $\\mu_n - \\mu_p$ intersect, \nkaons start to condense.\nThe left panel is for the MQMC model, and the right for QHD.} \n\\label{fig:kaonenergy}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=meson_mqmc140.eps, width=7.2cm}\n\\epsfig{file=meson_qhd140.eps, width=7.2cm}\n\\end{center}\n\\caption{Meson fields calculated from the MQMC (left) and the QHD (right)\nmodels as functions of the matter density for $U_{K^-} = -140$ MeV.}\n\\label{fig:meson}\n\\end{figure}\n\nAnother common feature of the two models is that\nregardless of $\\rho_{\\rm crit}$, once the kaon is created,\nthe density of $K^-$ piles up very quickly and overwhelms\nthe population of the hyperons and even the nucleons.\nThis behavior was also obtained by other 
authors\n\\cite{glend,kpe-prc95,bb-prc01,mpp-prc05}.\nThe reason can be partly attributed to the $\\omega$-meson. \nThe $\\omega$-meson term in the energy of $K^-$ (in Eq.~(14)) \nhas a negative sign and is thus attractive,\nbut it is repulsive for octet baryons.\nFig.~\\ref{fig:meson} shows that the $\\omega$-meson\nis the dominant meson at higher densities\nin both MQMC and QHD models. \nThus the $\\omega$-meson enhances the population of $K^-$ \nbut suppresses baryons, and the kaon density increases rapidly.\nIn addition, due to the competition between\nthe negatively charged hyperons and $K^-$\nin the charge neutrality condition, the negative hyperons are highly \nsuppressed and in some cases not even created at all \nas soon as the kaon condensation sets in.\nPositively charged hyperons, on the other hand, receive the \nopposite effects from the kaon condensation, and \n$\\Sigma^+$ is created at lower densities as $|U_{K^-}|$ increases.\nThe proton density is also enhanced by the large abundance of $K^-$, \nwhich in turn facilitates the enhancement of the $\\Sigma^+$ population\nthrough the chemical equilibrium condition\nof the positively charged hyperons in Eq.~(\\ref{eq:chemi-eq}).\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=mNKstar140.eps, width=7.2cm}\n\\epsfig{file=mYstar140.eps, width=7.2cm}\n\\end{center}\n\\caption{Effective masses of the nucleons, kaon, \nand hyperons are plotted for $U_{K^-} = - 140$ MeV.}\n\\label{fig:mNstar}\n\\end{figure}\n \nLet us now discuss different aspects of the two model calculations.\nFirst, Fig.~\\ref{fig:population} shows that $\\rho_{\\rm crit}$ from \nthe MQMC model is lower than that from QHD.\nFor $U_{K^-} = -120, -140$, and $-160$ MeV, $\\rho_{\\rm crit}$\nvalues are $5.9 \\rho_0$, $3.8 \\rho_0$ and $3.0 \\rho_0$ \nin the MQMC model, respectively, while they are\n$9.8 \\rho_0$, $4.3 \\rho_0$ and $3.3 \\rho_0$ in QHD.\nSecondly, the MQMC model predicts a larger population of the kaon\nthan QHD for a given $U_{K^-}$ 
value.\nFigure~\\ref{fig:kaonenergy} shows that\n$\\omega_K$ calculated from the MQMC model decreases more\nrapidly with density than $\\omega_K$ from QHD \nfor each $U_{K^-}$ value.\nThe curves for $\\mu_n - \\mu_p$ are more or less the same for both models\nat $\\rho \\lesssim 4\\rho_0$, but at $\\rho > 4\\rho_0$ \n$\\mu_n - \\mu_p$ decreases faster in QHD.\nThus the intersection and kaon condensation occur at lower densities\nin the MQMC model. \nThis behaviour of the intersection in Fig.~\\ref{fig:kaonenergy} \nis well reflected in the kaon condensation onset density \n$\\rho_{\\rm crit}$ in Fig.~\\ref{fig:population}.\nFig.~\\ref{fig:meson} shows that the \n$\\sigma$-meson field calculated by the MQMC model is larger than that\ncalculated by QHD.\nA larger $\\sigma$-field in the MQMC model makes $m^*_K$ and consequently \n$\\omega_K$ smaller.\nOn the other hand, as seen in Fig.~\\ref{fig:kaonenergy},\n$\\mu_n - \\mu_p$ from QHD decreases faster with density\nat higher densities than that from MQMC.\nThus the intersection of the $\\omega_K$ curve with the curve for $\\mu_n - \\mu_p$\noccurs at lower densities with the MQMC model.\nTherefore, $\\rho_{\\rm crit}$ is smaller in the MQMC model.\n\nAnother model dependency of the results can be seen from \nthe population of kaons, which is larger in the MQMC model.\nThe effective mass of a kaon as a point particle is \ndetermined by $\\sigma$ and $\\sigma^*$ mesons through the relation \n$m_K^* = m_K - g_{\\sigma K}\\sigma -g_{\\sigma^* K} \\sigma^*$\nand is plotted in Fig.~\\ref{fig:mNstar}. 
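As a side check of the linear relation $m_K^* = m_K - g_{\sigma K}\sigma - g_{\sigma^* K}\sigma^*$ quoted above, the small sketch below compares it (with the $\sigma^*$ contribution dropped for simplicity) against the alternative non-linear form $m^{*2}_K = m^2_K - g_{\sigma K} m_K \sigma$ employed by some authors and discussed at the end of the paper. The coupling value is taken from the $U_{K^-} = -140$ MeV MQMC fit for illustration; the comparison confirms that the non-linear form reduces $m^*_K$ at half the linear rate near $\sigma = 0$.

```python
import math

m_K = 493.7        # kaon mass (MeV)
g_sigma_K = 3.50   # g_{sigma K} of the U_{K^-} = -140 MeV MQMC fit

def m_star_linear(sigma):
    """Linear ansatz of this work (sigma* contribution dropped)."""
    return m_K - g_sigma_K * sigma

def m_star_nonlinear(sigma):
    """Alternative form m*_K^2 = m_K^2 - g_{sigma K} m_K sigma."""
    return math.sqrt(m_K**2 - g_sigma_K * m_K * sigma)

# Finite-difference slopes near sigma = 0: the non-linear form falls
# at about half the rate of the linear one (factor-of-2 ratio).
h = 1e-4
slope_lin = (m_star_linear(h) - m_star_linear(0.0)) / h
slope_nl = (m_star_nonlinear(h) - m_star_nonlinear(0.0)) / h
```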
\nSince the $\\sigma$ fields are larger in the MQMC model \n(as shown in Fig.~\\ref{fig:meson}), the effective mass and \nthe energy of a kaon are smaller in the MQMC model than in QHD.\nThus, kaon condensation takes place more readily in the MQMC model.\n\nFigures~\\ref{fig:meson} and \\ref{fig:mNstar} also show that\neven though the $\\sigma$-meson field from the MQMC model is larger \nthan that from QHD as the densities increase, \nthe reduction of the effective mass of baryons \nis smaller (or similar) in the MQMC model.\nIf one could parametrize the effective mass of baryons from \nthe MQMC model in the form of\n$m_B^* = m_B - g_{\\sigma B}(\\sigma) \\sigma \n-g_{\\sigma^* B}(\\sigma^*) \\sigma^*$\nwhere $g_{\\sigma B}(\\sigma)$ and $g_{\\sigma^* B}(\\sigma^*)$ are \nfunctions of $\\sigma$ and $\\sigma^*$, respectively,\nthe results in Fig.~\\ref{fig:mNstar}\nmight imply that $g_{\\sigma B}(\\sigma)$ and \n$g_{\\sigma^* B}(\\sigma^*)$ are decreasing functions \nwith respect to the density.\nThe rate of decrease is rather high since the\nproduct $(g_{\\sigma B}(\\sigma)\\, \\sigma)_{\\rm MQMC}$\nis smaller than (or similar to) $(g_{\\sigma B}\\, \\sigma)_{\\rm QHD}$ \nwhile the $\\sigma$-field value from MQMC is much greater than \nthat from QHD.\nSuch a decrease of $g_{\\sigma B}(\\sigma)$ in the MQMC model\nmay be regarded as partial restoration of the chiral symmetry\nat high densities.\n\nWe have calibrated both the MQMC and QHD model parameters to the same\nsaturation properties. However, we find that the neutron star matter composition\nprofiles from the two models are quite different and\nthat they show significant model dependence.\nQHD treats the baryons as point particles, \nwhereas the MQMC model treats them as MIT bags. \nThus, the major difference between the two models is in the \ndefinition of the \neffective mass of baryons, $m^*_B$. 
\nThe equation of motion for the $\\sigma$-meson field\nis also different accordingly.\n$m^*_B$ in QHD is a simple linear function of \nthe $\\sigma$-field, and the factor $C_B(\\sigma)$ \nin Eq~(\\ref{sigma}), is a constant.\nIn the MQMC model, $m^*_B$ is a\nnon-linear function of $\\sigma$-field, and thus \n$C_B(\\sigma)$ is highly non-linear.\nWhen these non-linear $m^*_B$ and $C_B(\\sigma)$ are expanded\nin powers of the $\\sigma$-field, an infinite number of $\\sigma$-field\nterms would appear. \n(Cubic and quartic terms are explicitly taken into account\nin the QHD model as in Eq.~(\\ref{eq:U_QHD}).)\nHigher order terms can be interpreted as higher order contributions\nsuch as self interactions of meson fields,\nwhich are believed to be more important at high densities.\nBut at high densities it can be questioned \nwhether the non-linear terms of the\n$\\sigma$-meson in the MQMC model \naccount for the higher order effects properly and consistently.\nFor instance, it is generally known that as the baryons come closer\nto each other the interplay of heavy mesons becomes more important.\nAt high enough densities, their self interaction contributions may need\nto be included on the same ground as for the $\\sigma$-meson,\nbut the present MQMC model truncates the heavy\nmeson terms at the leading order. \n\nIt seems worthwhile to discuss at this point two more aspects \nof our results. The first\none is the EoS and the resulting mass-radius relation of the\nneutron star. The second point is\nthe dependence of our results on the $\\Sigma$ hyperon interaction\nin matter, which is not well known yet. 
\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\epsfig{file=eos_comp.eps, width=7.5cm}\n\\end{center}\n\\caption{Comparison of the EoS with and without kaons in the QHD model.\nThe Gibbs condition is used to treat the mixed phase.}\n\\label{fig:eos_comp}\n\\end{figure}\n\nLet us first consider the EoS and the maximum mass of the neutron star.\nAs the kaon ($K^-$) appears and condenses, \nthe number of negatively-charged hyperons decreases to satisfy the\ncharge neutrality.\nThe decrease in the number of baryons will result in the reduction\nof the pressure and lead to a softening of the EoS. \nFig.~\\ref{fig:eos_comp} shows such a softening of the EoS due to the \nkaon condensation. \nIn calculating the EoS of a system consisting of multicomponent \nsubstances, as in the case with the kaon condensation, \nthe Gibbs condition has to be employed for the proper description of the \nmixed phase.\nThe curves in Fig.~\\ref{fig:eos_comp}\nare the results obtained with QHD and the Gibbs condition. \nAs kaons appear, the EoS becomes considerably softer and the effect\nbecomes more pronounced with a stronger attraction, \ni.e., for a larger $|U_{K^-}|$ value.\nFor the MQMC model, however, applying the Gibbs condition does not\ngive us a converging solution. Solving the 16 highly nonlinear \nequations together with the Gibbs conditions doubles the number of equations\nto be solved, and convergence could not be reached.\nIt is not clear to us whether the convergence problem is due to\nnumerical problems or due to non-linearity, which can cause \nbifurcation or chaos. \nTherefore, we have used the Maxwell construction for the MQMC model.\n(Some studies \\cite{Maxwell} show that the Maxwell construction\nis a good approximation to the Gibbs condition, but in other \nliterature \\cite{glend} it was emphasized that the Gibbs condition \nproduces significantly different results from those of the Maxwell \nconstruction. 
Below we show that in our case the neutron star mass\nitself does not change much whether we use the Maxwell or Gibbs conditions\nfor QHD. Thus our use of the Maxwell construction for the MQMC model\nmay be considered an acceptable approximation.)\nWe solve the Tolman-Oppenheimer-Volkoff (TOV) equation \nto calculate the maximum mass of a neutron star. \nThe results are shown in Table~\\ref{tab:mass-radius},\nwhere the central density, the maximum mass, and the corresponding radius\nare listed for both MQMC and QHD models.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc|ccc} \\hline\n & \\multicolumn{3}{c|}{MQMC (Mx)} & \\multicolumn{3}{c|}{QHD (Mx)} &\n\\multicolumn{3}{c}{QHD (Gb)} \\\\\n\\hline\n$U_{K^-}$ &\n$\\rho_c\/\\rho_0$ & $M\/M_\\odot$ & $R$ &\n$\\rho_c\/\\rho_0$ & $M\/M_\\odot$ & $R$ &\n$\\rho_c\/\\rho_0$ & $M\/M_\\odot$ & $R$ \\\\\n\\hline\n$-120$ & 6.2 & 1.61 & 11.8 & 6.1 & 1.50 & 11.4\n& 6.1 & 1.50 & 11.4 \\\\ \\hline\n$-140$ & 4.6 & 1.53 & 12.8 & 5.0 & 1.46 & 12.1\n& 5.0 & 1.45 & 12.1 \\\\ \\hline\n$-160$ & 4.6 & 1.45 & 13.1 & 4.0 & 1.32 & 12.7\n& 4.3 & 1.19 & 12.3 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The maximum mass of a neutron star $M$, the corresponding radius $R$\nand the density at the center of the star $\\rho_c$ are listed for\n$U_{K^-} = -120, -140,$ and $-160$ MeV. $U_{K^-}$ is in units of MeV,\nand $R$ in km. ``Mx\" and ``Gb\" refer to the Maxwell and Gibbs\nconditions, respectively.}\n\\label{tab:mass-radius}\n\\end{table}\nThe maximum mass calculated with the QHD model is roughly 10 \\% smaller than\nthat with the MQMC model. For both models the maximum mass becomes smaller\nfor a larger $|U_{K^-}|$. 
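A TOV integration of the kind used for Table~\ref{tab:mass-radius} can be sketched as follows. Since the tabulated MQMC/QHD equations of state are not reproduced here, the sketch substitutes a simple polytrope in geometrized units ($G = c = 1$); the constants $K$, $\gamma$, and the central pressure are illustrative assumptions, not values from this work.

```python
import math

def tov_mass_radius(p_c, K=100.0, gamma=2.0, dr=1e-3):
    """Integrate the TOV equations outward from the center,
        dm/dr = 4 pi r^2 rho,
        dp/dr = -(rho + p)(m + 4 pi r^3 p) / (r (r - 2 m)),
    for a polytropic EoS p = K rho^gamma in geometrized units (G = c = 1).
    Returns (radius, gravitational mass) of the star."""
    def rho_of(p):
        return (max(p, 0.0) / K) ** (1.0 / gamma)

    r, m, p = dr, 0.0, p_c
    while p > 1e-10:                     # crude surface criterion
        rho = rho_of(p)
        m += 4.0 * math.pi * r**2 * rho * dr
        p += -(rho + p) * (m + 4.0 * math.pi * r**3 * p) \
             / (r * (r - 2.0 * m)) * dr
        r += dr
    return r, m

# Hypothetical central pressure; K = 100, gamma = 2 gives a standard
# toy neutron-star polytrope, not the MQMC/QHD EoS of this paper.
R, M = tov_mass_radius(p_c=1.0e-4)
```

With a tabulated EoS, `rho_of` would be replaced by interpolation in the pressure-density table, and the maximum mass found by scanning over central pressures.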
\nThe maximum mass calculated with MQMC and $|U_{K^-}| = $ 160 MeV\nis compatible with observation, while\nthe maximum mass calculated with QHD and $|U_{K^-}| = $ 160 MeV \nbecomes too small to be compatible with the observed values.\nHowever, this fact may not necessarily rule out the possibility of\n$|U_{K^-}| $ becoming as large as 160 MeV \nbecause there are other possible mechanisms\nwhich are not included.\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\epsfig{file=ppl_repul_Sigma_uk140_mqmc.eps, width=6.5cm}\n\\epsfig{file=ppl_repul_Sigma_uk140_qhd.eps, width=6.5cm}\n\\end{center}\n\\caption{Comparison of the population from the MQMC (left) with\nthat of the QHD (right). \nThe calculations are done with $U_\\Sigma = +30$ MeV and $U_{K^-} = -140$ MeV.}\n\\label{fig:ppl_repul_Sigma}\n\\end{figure}\n\nNow let us consider the second aspect mentioned above.\nIn the calculations made so far, we have assumed quark-counting rule \nin determining the hyperon-meson coupling constants. \nExperiments on $\\Lambda$-hypernuclei indicate that quark counting is a good\napproximation of the realistic interaction of $\\Lambda$ hyperons in nuclei,\nwhich gives the value of $\\Lambda$ optical potential at saturation \ndensity in the range $-40 \\sim -30$ MeV.\nOn the other hand, there is large ambiguity in the $\\Sigma$ hyperon \ninteraction strength. Ref.~\\cite{npa95} shows that $\\Sigma$ hyperon \nfeels repulsion rather than attraction in nuclear medium. 
\nThere are also experimental indications that the $\\Sigma$ hyperon\ninteraction is repulsive \\cite{saha}.\nIf the $\\Sigma$ interaction is indeed repulsive, the population profile\nof neutron star matter can change\nsignificantly from what is shown earlier in this work, since\nwe have used the quark counting rule.\nThe number of $\\Sigma^-$ is closely related to the onset of\n$K^-$ condensation since they compete with each other in\nthe charge neutrality condition.\nTo see the effect of the possibly repulsive nature of the $\\Sigma$ interaction\non the kaon condensation, we have repeated the calculations\nwith a repulsive $\\Sigma$ interaction. \nWe first fit the coupling constants $g'^{\\Sigma}_\\sigma$ in MQMC and \n$g_{\\sigma \\Sigma}$ in QHD so that the $\\Sigma$ optical potential value\nat the saturation density is equal to $+30$ MeV, and fix the remaining \nmeson-$\\Sigma$ coupling constants with the quark counting rule.\nThe resulting population profiles with the kaon optical potential \n$U_{K^-} = -140$ MeV are shown in Fig.~\\ref{fig:ppl_repul_Sigma}.\nCompared to the quark-counting results in Fig.~\\ref{fig:population},\nthe onset of kaon condensation occurs at slightly lower densities. \nThis minor change happens regardless of the $U_{K^-}$ value. \nHowever, the main features of\nkaon condensation, i.e., its onset density, the fast increase of its population,\nand its dominance at high densities, are not much affected by the\nchange of the $\\Sigma$ interaction in the nuclear medium. 
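At the saturation point, the refit of the $\Sigma$ scalar coupling described above amounts to inverting a linear relation of the schematic form $U_\Sigma = -g_{\sigma\Sigma}\,\sigma + g_{\omega\Sigma}\,\omega_0$ (scalar attraction, vector repulsion), with $g_{\omega\Sigma} = \frac{2}{3} g_{\omega N}$ from quark counting. A minimal sketch; the saturation-point field combinations below are hypothetical placeholders, not the fitted MQMC/QHD values, and the full fit in the text is done within each model rather than by this one-line inversion.

```python
# Hypothetical saturation-point combinations (illustrative only; the
# fitted MQMC/QHD field values are not quoted in this section):
g_sigma_N_sigma = 300.0   # g_{sigma N} * sigma at rho_0 (MeV), assumed
g_omega_N_omega = 220.0   # g_{omega N} * omega_0 at rho_0 (MeV), assumed

def sigma_coupling_ratio(U_Sigma):
    """Ratio x = g_{sigma Sigma} / g_{sigma N} reproducing a target
    Sigma optical potential
        U_Sigma = -x (g_{sigma N} sigma) + (2/3)(g_{omega N} omega_0),
    with the factor 2/3 taken from quark counting."""
    return ((2.0 / 3.0) * g_omega_N_omega - U_Sigma) / g_sigma_N_sigma

x_repulsive = sigma_coupling_ratio(+30.0)   # repulsive case of the text
x_quark = 2.0 / 3.0                         # plain quark-counting ratio
```

A repulsive $U_\Sigma$ is obtained by weakening the scalar coupling relative to quark counting, which is why $\Sigma^-$ becomes harder to populate.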
\n\n\\section{Conclusions and discussions}\n\nUsing the modified quark-meson coupling model,\nwe have obtained the composition profiles of neutron star matter, \nfocusing on the effects of the strange particles of hyperons and kaons.\nMotivated by recent theoretical predictions of deeply bound kaonic states\n\\cite{akaishi} and the subsequent claims of the observations of \ninteresting peaks found \nin KEK\\cite{s0, suzuki1}, FINUDA\\cite{FINUDA}, and BNL\\cite{kishi-npa05} \nexperiments, large kaon optical potential $U_{K^-}$ was considered.\nBy varying the value of $U_{K^-}$, we have investigated how the \nonset density of the kaon condensation and\nthe composition of the stellar matter change.\nEmploying the QHD model parameters which satisfy exactly \nthe same saturation conditions\nas the MQMC model, we have investigated \nthe model dependence of the results.\n\nWe observed two common features from the two model calculations.\nFirst, a larger $|U_{K^-}|$ produces a smaller onset density\nof the kaon condensation.\nThis behavior is easily understood from the relation between\n$U_{K^-}$ and $g_{\\sigma K}$ together with the role of $g_{\\sigma K}$\nto the energy of the kaon $\\omega_K$.\nSecondly, the number of kaons rapidly increases, \nand the number of negatively charged hyperons is strongly suppressed.\nThis is due to the fact that the $\\omega$-meson \ngives rise to attraction to $K^-$\nwhereas it couples to baryons repulsively.\n\nModel dependence was also observed.\nThe kaon condensation takes place at lower densities\nin the MQMC model. 
\nThe number of kaons is always larger with the MQMC model \nfor given $U_{K^-}$ values.\nLarger $\\sigma$-meson fields in the MQMC model can explain \nthese behaviors.\nThe differences in the results from the two models become more prominent\nat larger densities.\nThe growing discrepancies at higher densities have their\norigin partly in the effective mass of baryons in\neach model, which greatly affects the self-consistency\ncondition of the $\\sigma$-meson.\nThe factor $C_B(\\sigma)$ in the self-consistency condition\nof the $\\sigma$-meson is highly non-linear in the MQMC model,\nwhich can be interpreted as \nan infinite number of $\\sigma$-meson self-interaction terms. \nThese higher order terms may require a more proper and consistent treatment \nat high densities.\n\nAn important issue in dense matter physics is the restoration \nof the chiral symmetry.\nAccording to Ref.~\\cite{br-prl91}, not only the\nmass but also the pion decay constant and meson-nucleon coupling\nconstants decrease at a similar ratio at around the\nnuclear saturation density.\nIn Ref.~\\cite{rhhj-epja05} \nthe idea of scaling behavior is applied to neutron star matter\nusing the MQMC and QHD models\nwith only nucleon degrees of freedom, and it is shown that \nthe equation of state becomes stiffer when \nscaling effects are considered.\nThis implies that if we included a scaling behavior in our present models,\nit might ignite the onset of exotic states earlier than in the present results,\nwhich do not include scaling.\n\nIn the kaon sector, \nthe coupling constants of a kaon and the exchange mesons\nare currently an important issue.\nWe took various values of the optical potential of $K^-$ as an input\nto fix $g_{\\sigma K}$.\nThe other kaon-meson coupling constants are fixed from naive quark counting.\nIt is known, however, that the $K^+$ potential is repulsive, \nwith a magnitude $U_{K^+} \\sim 10$ MeV \nat the saturation density \\cite{llb-prl97}.\nIf $U_{K^+}$ as well as $U_{K^-}$ is used as an 
input, $g_{\\sigma K}$ and\n$g_{\\omega K}$ can be determined uniquely.\nFor instance, if we take $U_{K^-} = -120$ MeV and $U_{K^+} = 20$ MeV,\nthen we get $g_{\\sigma K} = 2.041$ and $g_{\\omega K} = 4.187$.\n$g_{\\sigma K}$ becomes smaller than the value listed in Table~\\ref{tab:gsigmaK}, \nwhile $g_{\\omega K}$ becomes nearly twice the value of $g_{\\omega K}$\nfixed from the quark counting.\nBoth $\\sigma$ and $\\omega$ mesons contribute to the $K^-$ energy \nattractively, but since the $\\omega$ meson becomes a dominant \ncomponent at higher densities, taking $U_{K^+}$ into account can produce\nappreciably different results.\nIt may be interesting to see the effects of $U_{K^+}$\non the kaon condensation.\n\nIn our calculations, we have treated the kaon as a point particle in both \nthe quark and hadron models. \nComparison of the two models, however, shows that whether we treat \na hadron as a bag (MQMC) or a point particle (QHD) can produce a significant \ndifference.\nTherefore, it is worthwhile to treat the kaon as a bag and \ncompare the corresponding result with that of a point-like kaon.\nIn Ref.~\\cite{mpp-prc05} a kaon is treated as a bag in the framework\nof the QMC model, but no work has been done yet with the MQMC model.\n\nWe assume $m^*_K$ to be a linear function of the \n$\\sigma$-field, but \nsome authors employ a non-linear form \\cite{kpe-prc95, j-schaf}:\n\\begin{equation}\nm^{*2}_K = m^2_K - g_{\\sigma K} m_K \\sigma.\n\\end{equation}\nIf we expand this expression in powers of $\\sigma\/m_K$, we obtain\n\\[\nm^*_K \\simeq m_K \\left[\n1 - \\frac{1}{2} g_{\\sigma K} \\frac{\\sigma}{m_K} + O(\\sigma^2\/m^2_K)\n\\right].\n\\]\nThe leading order term in the $\\sigma$-field has a factor of 1\/2,\nwhich is not present in Eq.~(12).\nDue to this factor of 1\/2, \nthe rate of decrease in $m^*_K$ with density \nwould be reduced by a factor of 2,\nand this would shift the kaon condensation onset density to higher densities.\nThis dependence on the kaon Lagrangian may be worthwhile to be 
studied.\n\n\\section*{Acknowledgments}\nSWH thanks B. K. Jennings and TRIUMF for hospitality during his sabbatical\nleave.\nThis work was supported by Korea Research Foundation Grant\nfunded by Korea Government (MOEHRD, Basic Research Promotion Fund)\n(KRF-2005-206-C00007).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecently, Chubanov \\cite{Chubanov2012, Chubanov2015} proposed a new polynomial-time algorithm for solving the problem (${\\rm P}(A)$), \n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}(A)& \\mbox{find} & x \\in \\mathbb{R}^n \\\\\n\t&\\mbox{s.t.} & Ax = \\bm{0}, \\\\\n\t& & x > \\bm{0}, \n\\end{array}\n\\notag\n\\end{equation}\nwhere $A$ is a given integer (or rational) matrix and $\\mbox{rank} (A) = m$ and $\\bm{0}$ is an $n$-dimensional vector of $0$s. The method explores the feasibility of the following problem ${\\rm P}_{S_1}(A)$, which is equivalent to ${\\rm P}(A)$ and given by\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_1}(A)& \\mbox{find} & x \\in \\mathbb{R}^n \\\\\n\t&\\mbox{s.t.} & Ax = \\bm{0}, \\\\\n\t& & \\bm{0} < x \\leq \\bm{1},\n\\end{array}\n\\notag\n\\end{equation}\nwhere $\\bm{1}$ is an $n$-dimensional vector of $1$s. Chubanov's method consists of two ingredients, the ``main algorithm'' and the ``basic procedure.'' The structure of the method is as follows: In the outer iteration, the main algorithm calls the basic procedure, which generates a sequence in $\\mathbb{R}^n$ using projection to the set $\\mbox{Ker}A$. 
The basic procedure terminates in a finite number of iterations returning one of the following:\n\\begin{enumerate}\n\\item a solution of problem ${\\rm P}(A)$, \n\\item a solution of the alternative problem of problem ${\\rm P}(A)$, or\n\\item a cut of ${\\rm P}(A)$, i.e., an index $j \\in \\{ 1,2,\\dots,n \\}$ for which $0 < x_j \\leq \\frac{1}{2} \\notag $ holds for any feasible solution of problem ${\\rm P}_{S_1}(A)$.\n\\end{enumerate}\nIf result 1 or 2 is returned by the basic procedure, then the feasibility of problem ${\\rm P}(A)$ can be determined and the main procedure stops. If result 3 is returned, then the main procedure generates a diagonal matrix $D \\in \\mathbb {R}^{n \\times n} $ with a $ (j, j) $ element of $2$ and all other diagonal elements of $1$ and rescales the matrix as $AD ^ {-1} $. Then, it calls the basic procedure with the rescaled matrix. Chubanov's method checks the feasibility of ${\\rm P}(A) $ by repeating the above procedures.\n\nFor problem ${\\rm P}(A)$, several variations of Chubanov's method have been proposed and computational experiments have been conducted \\cite{Li2015,Roos2018,Wei2019}. Among these, \\cite{Roos2018} proposed a tighter cut criterion of the basic procedure than the one used in \\cite{Chubanov2015}. 
\\cite{Chubanov2015} used the fact that \n\n\\begin{equation}\nx_j \\leq \\frac{ \\sqrt{n} \\|z\\|_2}{y_j} \\notag\n\\end{equation}\nholds for any $y \\in \\mathbb{R}^n$ satisfying $\\sum_{i=1}^n y_i = 1 , y \\geq 0$ and $y \\notin \\mbox{range} A^T $, $z \\in \\mathbb{R}^n $ obtained by projecting this $y$ onto $\\mbox{Ker} A$, and any feasible solution $x \\in \\mathbb{R}^n$ of ${\\rm P}_{S_1}(A)$, and the basic procedure is terminated if a $y$ is found for which $\\frac{ \\sqrt{n} \\|z\\|_2}{y_j} \\leq \\frac{1}{2}$ holds for some index $j$.\nOn the other hand, \\cite{Roos2018} showed that for $v = y - z$, \n\\begin{equation}\nx_j \\leq \\mbox{\\rm min} \\left( 1 , \\bm{1}^T \\left[ \\frac{-v}{v_j} \\right]^+\\right) \\leq \\frac{ \\sqrt{n} \\|z\\|_2}{y_j} \\notag\n\\end{equation}\nholds if $v_j \\neq 0$, where $\\left[ \\frac{-v}{v_j} \\right]^+$ is the projection of $\\frac{-v}{v_j} \\in \\mathbb{R}^n$ onto the nonnegative orthant and $\\bm{1}$ is the vector of ones, and the basic procedure is terminated if a $y$ is found for which $\\bm{1}^T \\left[ \\frac{-v}{v_j} \\right]^+ \\leq \\frac{1}{2}$ holds.\n\nChubanov's method has also been extended to include the feasibility problem over the second-order cone \\cite{Kitahara2018} and the symmetric cone \\cite{Pena2017,Bruno2019}. 
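The two cut criteria discussed above can be compared directly on a small instance. The sketch below (with a hypothetical $1 \times 2$ matrix $A$) projects $y$ onto $\mbox{Ker}A$, forms $v = y - z$, and evaluates both the Chubanov bound $\sqrt{n}\|z\|_2 / y_j$ and the Roos bound $\min(1, \bm{1}^T[-v/v_j]^+)$; it is illustrative only, not the papers' full basic procedures.

```python
import numpy as np

def cut_bounds(A, y, j):
    """Chubanov's bound sqrt(n)||z|| / y_j and Roos's bound
    min(1, 1^T [-v / v_j]^+) on x_j for feasible x of P_{S_1}(A)."""
    n = A.shape[1]
    # z: orthogonal projection of y onto Ker A;  v = y - z in range A^T.
    P = np.eye(n) - A.T @ np.linalg.pinv(A @ A.T) @ A
    z = P @ y
    v = y - z
    chubanov = np.sqrt(n) * np.linalg.norm(z) / y[j]
    roos = min(1.0, float(np.sum(np.maximum(-v / v[j], 0.0))))
    return chubanov, roos

A = np.array([[1.0, -1.0]])     # hypothetical instance; Ker A = span{(1,1)}
y = np.array([0.7, 0.3])        # a point on the unit simplex
cb, rb = cut_bounds(A, y, 0)
```

On this instance the Roos bound equals $1.0$ while the Chubanov bound is $1/0.7 \approx 1.43$, illustrating the inequality $\min(1, \bm{1}^T[-v/v_j]^+) \leq \sqrt{n}\|z\|_2 / y_j$.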
The feasibility problem over the symmetric cone is of the form, \n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) = \\bm{0}, \\\\\n\t& & x \\in \\mbox{int} \\mathcal{K}, \\notag\n\\end{array}\n\\end{equation}\nwhere $\\mathcal{A}$ is a linear operator, $\\mathcal{K}$ is a symmetric cone, and $\\mbox{int} \\mathcal{K}$ is the interior of the set $\\mathcal{K}$.\n\nAs proposed in \\cite{Pena2017,Bruno2019}, for problem ${\\rm P}(\\mathcal{A})$, the structure of Chubanov's method remains the same; i.e., the main algorithm calls the basic procedure, and the basic procedure returns one of the following in a finite number of iterations: \n\\begin{enumerate}\n\\item a solution of problem ${\\rm P}(\\mathcal{A})$, or\n\\item a solution of the alternative problem of problem ${\\rm P}(\\mathcal{A})$, or\n\\item a recommendation of scaling problem ${\\rm P}(\\mathcal{A})$.\n\\end{enumerate}\nIf result 1 or 2 is returned by the basic procedure, then the feasibility of the problem ${\\rm P}(\\mathcal{A})$ can be determined and the main procedure stops. If result (3) is returned, the problem is scaled appropriately and the basic procedure is called again. It should be noted that the purpose of rescaling differs between \\cite{Bruno2019} and \\cite{Pena2017}.\n\nIn \\cite{Pena2017}, the authors devised a rescaling method so that the following value becomes larger:\n\\begin{equation}\n\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mathcal{K}) := \\max_x \\left \\{ {\\rm det}(x) \\mid x \\in \\mbox{Ker} \\mathcal{A} \\cap \\mathcal{K} , \\|x\\|_J^2 = r \\right\\}, \\notag\n\\end{equation}\nwhere $\\mbox{Ker} \\mathcal{A} :=\\{ x \\mid \\mathcal{A}(x) = \\bm{0} \\}$ and $\\|x\\|_J$ is the norm induced by the inner product $\\langle x, y \\rangle = {\\rm trace}(x \\circ y)$ defined in section \\ref{sec: notation}. 
After showing that their algorithm terminates in $\\log_{1.5} 1 \/ \\max \\left( \\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mathcal{K}) , \\delta (\\mbox{range} \\mathcal{A}^* \\cap \\mathcal{K}) \\right)$ iterations, they proposed four updating schemes to be employed in the basic procedure (the perceptron scheme, von Neumann scheme, smooth perceptron scheme, and von Neumann with the away-step scheme) and conducted numerical experiments to compare the effect of these schemes when the symmetric cone is the nonnegative orthant \\cite{Pena2019}.\n\nIn \\cite{Bruno2019}, the authors assumed that the symmetric cone $\\mathcal{K}$ is given by the Cartesian product of $p$ simple symmetric cones $\\mathcal{K}_1, \\mathcal{K}_2, \\ldots, \\mathcal{K}_p$, and they investigated the feasibility of the problem (${\\rm P}_{S_{1,\\infty}}(\\mathcal{A})$),\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_{1,\\infty}}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) =\\bm{0}, \\\\\n\t& & \\|x\\|_{1 , \\infty} \\leq 1, \\\\\n\t& & x \\in \\mbox{int} \\mathcal{K}, \\notag\n\\end{array}\n\\end{equation}\nwhere for each $x = (x_1, x_2, \\ldots, x_p) \\in \\mathcal{K} =\\mathcal{K}_1 \\times \\mathcal{K}_2 \\times \\cdots \\mathcal{K}_p$, $\\|x\\|_{1,\\infty}$ is defined by \n\\begin{equation}\n\\|x\\|_{1,\\infty} := \\max \\{ \\|x_1\\|_1 , \\dots ,\\|x_p\\|_1 \\}, \\notag\n\\end{equation}\nand $\\|x\\|_1$ is the sum of the absolute values of all eigenvalues of $x$. Note that if $p=1$, then problem ${\\rm P}_{S_{1,\\infty}}(\\mathcal{A})$ turns out to be ${\\rm P}_{S_1}(\\mathcal{A})$, which is equivalent to ${\\rm P}(\\mathcal{A})$:\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_1}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) = \\bm{0}, \\\\\n\t& & \\|x\\|_1 \\leq 1, \\\\\n\t& & x \\in \\mbox{int} \\mathcal{K}. 
\\notag\n\\end{array}\n\\end{equation}\n\nThe authors focused on the volume of the feasible region of ${\\rm P}_{S_{1,\\infty}}(\\mathcal{A})$ and devised a rescaling method so that this volume becomes smaller.\nTheir method stops when either the feasibility of problem ${\\rm P}_{S_{1,\\infty}}(\\mathcal{A})$ is determined or it is shown that the minimum eigenvalue of every feasible solution of problem ${\\rm P}_{S_{1,\\infty}}(\\mathcal{A})$ is less than $\\varepsilon$.\nIt stops in $\\frac{r}{\\varphi(2)} \\log \\left( \\frac{1}{\\varepsilon} \\right) - \\sum_{\\ell=1}^p \\frac{r_\\ell \\log(r_\\ell)}{\\varphi(2)}$ iterations, where $r$ is the rank of $\\mathcal{K}$, $r_\\ell$ is the rank of $\\mathcal{K}_{\\ell} (\\ell=1,2,\\ldots,p)$, $\\varphi(\\rho) = 2 - \\frac{1}{\\rho} - \\sqrt{ 3 - \\frac{2}{\\rho}}$, and $\\varepsilon$ is a sufficiently small positive value.\n\nThe aim of this paper is to devise a new variant of Chubanov's method for solving ${\\rm P}(\\mathcal{A})$ by extending Roos's method \\cite{Roos2018} to the following feasibility problem (${\\rm P}_{S_{\\infty}}(\\mathcal{A})$) over the symmetric cone $\\mathcal{K}$:\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_{\\infty}}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) = \\bm{0}, \\\\\n\t& & \\|x\\|_\\infty \\leq 1, \\\\\n\t& & x \\in \\mbox{int} \\mathcal{K}, \\notag\n\\end{array}\n\\end{equation}\nwhere $\\|x\\|_\\infty$ is the maximum absolute eigenvalue of $x$.\nThroughout this paper, we will assume that $\\mathcal{K}$ is the Cartesian product of $p$ simple symmetric cones $\\mathcal{K}_1, \\dots, \\mathcal{K}_p$, i.e., $\\mathcal{K} = \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p$.\n\n\\modifySecond{Here, we should mention an important issue about Lemma 4.2 in \\cite{Roos2018}, which is one of the main results of that paper.\nThe proof of Lemma 4.2 given in \\cite{Roos2018} is incorrect and a correct proof is provided in \\cite{Wei2019},
while this study derives theoretical results without referring to the lemma.\n}\n\nA feature of our method is that the main algorithm works while maintaining information about the minimum eigenvalue of any feasible solution of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$; in this sense, it is closely related to Louren\\c{c}o et al.'s method \\cite{Bruno2019}. Using the norm $\\| \\cdot \\|_\\infty$ in problem ${\\rm P}_{S_\\infty} (\\mathcal{A})$ makes it possible to \n\\begin{itemize}\n\\item \ncalculate an upper bound for the minimum eigenvalue of any feasible solution of ${\\rm P}_{S_\\infty} (\\mathcal{A})$, \n\\item\nquantify the feasible region of ${\\rm P} (\\mathcal{A})$, and hence, \n\\item\ndetermine whether there exists a feasible solution of ${\\rm P} (\\mathcal{A})$ whose minimum eigenvalue is greater than $\\varepsilon$, as in \\cite{Bruno2019}.\n\\end{itemize}\n\nNote that symmetric cone optimization includes several types of problems (linear, second-order cone, and semidefinite optimization problems) with various settings, and the computational bound of an algorithm depends on these settings. 
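For the positive semidefinite cone, all of the quantities used above (the minimum eigenvalue, $\|x\|_1$, and $\|x\|_\infty$) reduce to ordinary eigenvalue computations on a symmetric matrix. A minimal numpy illustration (ours, added for concreteness; not the authors' code):

```python
import numpy as np

def jordan_quantities(X):
    """Eigenvalue-based quantities for a symmetric matrix X in S^n:
    ||X||_1  = sum of absolute eigenvalues,
    ||X||_inf = maximum absolute eigenvalue,
    lambda_min = smallest eigenvalue (X lies in int S^n_+ iff it is > 0).
    """
    lam = np.linalg.eigvalsh(X)   # eigenvalues of a symmetric matrix
    return np.abs(lam).sum(), np.abs(lam).max(), lam.min()

X = np.diag([2.0, 0.25, -0.5])
norm1, norm_inf, lam_min = jordan_quantities(X)
# norm1 = 2.75, norm_inf = 2.0, lam_min = -0.5 (so X is not in the cone)
```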
As we will describe in section \\ref{sec: compare}, the theoretical computational bound of our method is \n\\begin{itemize}\n\\item equivalent to that of Roos's original method \\cite{Roos2018} and superior to that of Louren\\c{c}o et al.'s method \\cite{Bruno2019} when the symmetric cone is the nonnegative orthant,\n\\item superior to that of Louren\\c{c}o et al.'s method when the symmetric cone is a Cartesian product of second-order cones,\n\\item equivalent to that of Louren\\c{c}o et al.'s method when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix, and\n\\item superior to that of Pena and Soheili's method~\\cite{Pena2017} for any simple symmetric cone under the feasibility assumption of the problem imposed in~\\cite{Pena2017}.\n\\end{itemize}\n}\n\nAnother aim of this paper is to give comprehensive numerical comparisons of the existing algorithms and our method. \nAs described in section \\ref{sec: numerical experiments}, we generate the following three types of instances:\n\\begin{itemize}\n\\item \\modifyFirst{strongly feasible ill-conditioned instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\neq \\emptyset$ and any $x \\in \\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}$ has positive but small eigenvalues,}\n\\item weakly feasible instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} = \\emptyset$, but $\\mbox{Ker} \\mathcal{A} \\cap \\mathcal{K} \\setminus \\{ \\bm{0} \\} \\neq \\emptyset$, and\n\\item infeasible instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mathcal{K} = \\{ \\bm{0} \\} $\n\\end{itemize}\nfor the simple positive semidefinite cone $\\mathcal{K}$, and conduct numerical experiments. 
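For $\mathcal{K} = \mathbb{S}^n_+$, a strongly feasible instance can be produced by fixing an interior point $X_0 \succ O$ and making every row of $\mathcal{A}$ trace-orthogonal to it, so that $\mathcal{A}(X_0) = \bm{0}$. The sketch below is an assumed construction given for illustration (not necessarily the generator used in our experiments); replacing $X_0 = I$ by a matrix with tiny positive eigenvalues would yield the ill-conditioned variant:

```python
import numpy as np

def strongly_feasible_instance(n, m, seed=0):
    """Build a linear map A: S^n -> R^m with Ker A meeting int S^n_+.

    X0 = I is an interior point of S^n_+; each row A_i is a random
    symmetric matrix projected to be trace-orthogonal to X0, hence
    <A_i, X0> = 0 for all i, i.e. A(X0) = 0 with lambda_min(X0) > 0.
    """
    rng = np.random.default_rng(seed)
    X0 = np.eye(n)
    rows = []
    for _ in range(m):
        B = rng.standard_normal((n, n))
        B = (B + B.T) / 2                                 # symmetrize
        B -= (np.trace(B @ X0) / np.trace(X0 @ X0)) * X0  # force <B, X0> = 0
        rows.append(B)
    return rows, X0

def apply_A(rows, X):
    """Evaluate A(X) = (<A_1, X>, ..., <A_m, X>) in the trace inner product."""
    return np.array([np.trace(R @ X) for R in rows])
```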
\nThe results show that our method is reliable and considerably faster than the existing algorithms.\nWe focus on comparing our method with Louren\\c{c}o et al.'s in section \\ref{sec: comparisons} and show that our method can reduce the search region more efficiently than theirs.\n\nThe paper is organized as follows:\nSection \\ref{sec:EJA} contains a brief description of Euclidean Jordan algebras and their basic properties.\nSection \\ref{sec: extension} gives a collection of propositions that are necessary to extend Roos's method to problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ over the symmetric cone.\nIn sections \\ref{sec: basic procedure} and \\ref{sec: main algorithm}, we explain the basic procedure and the main algorithm of our variant of Chubanov's method.\n\\modifySecond{Section \\ref{sec: compare} compares the theoretical computational bounds of Louren\\c{c}o et al.'s method~\\cite{Bruno2019}, Pena and Soheili's method~\\cite{Pena2017}, and our method.}\nIn section \\ref{sec: numerical experiments}, we conduct numerical experiments comparing our variant with the existing methods.\nThen, in section \\ref{sec: comparisons}, we make more detailed comparisons of Louren\\c{c}o et al.'s method and our method in terms of the performance of the cut obtained from the basic procedure and the detection performance of an $\\varepsilon$-feasible solution. \nThe conclusions are summarized in section \\ref{sec: concluding remarks}.\n\n\n\\section{Euclidean Jordan algebras and their basic properties}\n\\label{sec:EJA}\n\\modifyThird{\nIn this section, we briefly introduce Euclidean Jordan algebras and symmetric cones. For more details, see~\\cite{Faraut1994}. 
In particular, the relation between symmetric cones and Euclidean Jordan algebras is given in Chapter III (the Koecher and Vinberg theorem) of~\\cite{Faraut1994}.\n}\n\\subsection{Euclidean Jordan algebras}\n\\modifyThird{Let $\\mathbb{E}$ be a finite-dimensional real vector space equipped with an inner product $\\langle \\cdot, \\cdot \\rangle$ and a bilinear operation $\\circ$ : $\\mathbb{E} \\times \\mathbb{E} \\rightarrow \\mathbb{E}$,\nand let $e$ be the identity element, i.e., $x \\circ e = e \\circ x = x$ holds for any $x \\in \\mathbb{E}$.\n}\n\n\\modifyThird{\n$(\\mathbb{E}, \\circ)$ is called a Euclidean Jordan algebra if it satisfies\n\\begin{align*}\nx \\circ y = y \\circ x, \\ \\\nx \\circ (x^2 \\circ y) = x^2 \\circ (x \\circ y), \\ \\\n\\langle x \\circ y , z \\rangle = \\langle y , x \\circ z \\rangle\n\\end{align*}\nfor all $x,y,z \\in \\mathbb{E}$, where $x^2 := x \\circ x$.\n}\n\nWe write $y = x^{-1}$ if $y \\in \\mathbb{E}$ satisfies $x \\circ y = e$.\n$c \\in \\mathbb{E}$ is called an {\\em idempotent} if it satisfies $c \\circ c = c$, and an idempotent $c$ is called {\\em primitive} if it cannot be written as a sum of two or more nonzero idempotents. \nA set of primitive idempotents $c_1, c_2, \\ldots, c_k$ is called a {\\em Jordan frame} if $c_1, \\ldots, c_k$ satisfy\n\\begin{equation}\n\\notag\nc_i \\circ c_j = 0 \\ (i \\neq j), \\ \\\nc_i \\circ c_i = c_i \\ (i= 1 , \\dots , k), \\ \\\n\\sum_{i=1}^k c_i = e.\n\\end{equation}\nFor $x \\in \\mathbb{E}$, the {\\em degree} of $x$ is the smallest integer $d$ such that the set $\\{e,x,x^2,\\ldots,x^d\\}$ is linearly dependent.\nThe {\\em rank} of $\\mathbb{E}$ is the maximum of the degree of $x$ over all $x \\in \\mathbb{E}$.\nThe following properties are known.\n\n\\begin{proposition}[Spectral theorem (cf. Theorem III.1.2 of \\cite{Faraut1994})]\n\\label{prop:spectral}\nLet $(\\mathbb{E}, \\circ)$ be a Euclidean Jordan algebra having rank $r$. 
\nFor any $x \\in \\mathbb{E} $, there exist real numbers $\\lambda_1 , \\dots , \\lambda_r$ and a Jordan frame $c_1 , \\dots , c_r $ for which the following holds:\n\\begin{equation}\nx = \\sum_{i=1}^r \\lambda_i c_i. \\notag\n\\end{equation}\nThe numbers $\\lambda_1 , \\dots , \\lambda_r$ are the uniquely determined {\\em eigenvalues} of $x$ (with their multiplicities).\nFurthermore,\n\\begin{equation}\n{\\rm trace}(x) := \\sum_{i=1}^r \\lambda_i, \\ \\ {\\rm det}(x) := \\prod_{i=1}^r \\lambda_i. \\notag\n\\end{equation}\n\\end{proposition}\n\n\n\n\\subsection{Symmetric cone}\n\\label{subsec:symmetric cone}\n\nA proper cone is called {\\em symmetric} if it is self-dual and homogeneous. \nIt is known that the set of squares\n\\begin{equation}\n\\mathcal{K} = \\{ x^2 : x \\in \\mathbb{E} \\} \\notag\n\\end{equation}\nis the symmetric cone of $\\mathbb{E}$ (cf. Theorems III.2.1 and III.3.1 of \\cite{Faraut1994}).\n\nThe following properties can be derived from the results in \\cite{Faraut1994}, as in Corollary 2.3 of \\cite{Yoshise2007}:\n\\begin{proposition}\n\\label{prop:lambda-Jordan}\nLet $x \\in \\mathbb{E}$ and let $\\sum_{j=1}^r \\lambda_j c_j$ be a decomposition of $x$ given by Proposition \\ref{prop:spectral}. Then\n\\begin{description}\n\\item[(i)]\n$x \\in \\mathcal{K}$ if and only if $\\lambda_j \\geq 0 \\ (j=1,2,\\ldots,r)$, \n\\item[(ii)]\n$x \\in {\\rm int} \\mathcal{K}$ if and only if $\\lambda_j > 0 \\ (j=1,2,\\ldots,r)$.\n\\end{description}\n\\end{proposition}\n\n\\modifyThird{\nFrom Proposition \\ref{prop:lambda-Jordan} and Proposition \\ref{prop:spectral}, for any $x \\in \\mathbb{E}$, its projection $P_\\mathcal{K} (x)$ onto the symmetric cone $\\mathcal{K}$ is obtained by rounding all negative eigenvalues of $x$ to $0$, i.e., \n\\begin{equation}\nP_\\mathcal{K} (x) = \\sum_{i=1}^r [\\lambda_i]^+ c_i \\notag\n\\end{equation}\nwhere $[\\cdot]^+$ denotes the projection onto the nonnegative orthant. 
}\n\\modifyThird{\nUsing $P_\\mathcal{K}$, we can decompose any $x \\in \\mathbb{E}$ as follows.\n\\begin{lemma}\n\\label{lemma:-1}\nLet $x \\in \\mathbb{E}$, and let $\\mathcal{K}$ be the symmetric cone corresponding to $\\mathbb{E}$.\nThen, $x$ can be decomposed as follows:\n\\begin{equation}\nx = P_\\mathcal{K} (x) - P_\\mathcal{K} (-x). \\notag \n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFrom Proposition \\ref{prop:spectral}, let $x$ be given as $x = \\sum_{i=1}^r \\lambda_i c_i$. \nLet $I_1$ be the set of indices such that $\\lambda_i \\geq 0$ and $I_2$ be the set of indices such that $\\lambda_i < 0$. Then, we have\n\\begin{equation}\n\\begin{array}{ll}\nP_\\mathcal{K} (x) = \\sum_{i \\in I_1} \\lambda_i c_i,\t&P_\\mathcal{K} (-x) = \\sum_{i \\in I_2} - \\lambda_i c_i,\n\\end{array}\n\\notag\n\\end{equation}\nwhich implies that $x = \\sum_{i=1}^r \\lambda_i c_i = \\sum_{i \\in I_1} \\lambda_i c_i + \\sum_{i \\in I_2} \\lambda_i c_i = P_\\mathcal{K} (x) - P_\\mathcal{K} (-x)$.\n\\end{proof}\n}\n\nA Euclidean Jordan algebra $(\\mathbb{E} , \\circ)$ is called {\\em simple} if it cannot be written as any Cartesian product of non-zero Euclidean Jordan algebras.\nIf the Euclidean Jordan algebra $(\\mathbb{E} , \\circ)$ associated with a symmetric cone $\\mathcal{K}$ is simple, then we say that $\\mathcal{K}$ is {\\em simple}.\nIn this paper, we will consider that $\\mathcal{K}$ is given by a Cartesian product of $p$ simple symmetric cones $\\mathcal{K}_\\ell$,\n\\begin{equation}\n\\mathcal{K} := \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p, \\notag\n\\end{equation}\nwhose rank and identity element are $r_\\ell$ and $e_\\ell$ $(\\ell=1, 2, \\ldots, p)$.\nThe rank $r$ and the identity element of $\\mathcal{K}$ are given by\n\\begin{equation}\n\\label{eq:r and e}\nr = \\sum_{\\ell=1}^p r_\\ell, \\ \\ e = (e_1 , \\dots , e_p).\n\\end{equation}\nIn what follows, $x_\\ell$ stands for the $\\ell$-th block element of $x \\in \\mathcal{K}$, i.e., $x = (x_1 , \\dots 
, x_p) \\in \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p$.\nFor each $\\ell=1, 2, \\ldots, p$, we define \n\\begin{equation}\n\\notag\n\\lambda_{\\min}(x_\\ell) := \\min\\{ \\lambda_1, \\ \\lambda_2, \\ldots, \\lambda_{r_\\ell} \\},\n\\end{equation}\nwhere $\\lambda_1, \\lambda_2, \\ldots, \\lambda_{r_\\ell}$ are the eigenvalues of $x_\\ell$.\nThe minimum eigenvalue $\\lambda_{\\min}(x)$ of $x \\in \\mathcal{K}$ is given by \n\\begin{equation}\n\\lambda_{\\min}(x) = \\mbox{min} \\{ \\lambda_{\\min}(x_1) , \\lambda_{\\min}(x_2) , \\dots , \\lambda_{\\min}(x_p) \\} . \\notag\n\\end{equation}\n\n\nNext, we consider the {\\em quadratic representation} $Q_v(x)$ defined by\n\\begin{equation}\nQ_v(x) := 2 v \\circ ( v \\circ x ) - v^2 \\circ x. \\notag\n\\end{equation}\nFor the cone $\\mathcal{K} = \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p$, the quadratic representation $Q_v(x)$ of $x \\in \\mathcal{K}$ is given by $Q_v(x) = \\left(Q_{v_1} (x_1) , \\dots , Q_{v_p}(x_p) \\right)$.\nLetting $I_\\ell$ be the identity operator of the Euclidean Jordan algebra $(\\mathbb{E}_\\ell, \\circ_\\ell)$ associated with the cone $\\mathcal{K}_\\ell$, we have $Q_{e_\\ell} = I_\\ell$ for $\\ell=1, 2, \\ldots, p$.\n\n\nThe following properties can also be retrieved from the results in \\cite{Faraut1994}, as in Proposition 3 of \\cite{Bruno2019}:\n\n\\begin{proposition}\n\\label{ptop:quadratic}\nFor any $v \\in {\\rm int}\\mathcal{K}$, $ Q_v (\\mathcal{K}) = \\mathcal{K}$.\n\\end{proposition}\n\n\\modifyThird{\nIt is also known that the following relations hold for the quadratic representation $Q_v$ and $\\det (\\cdot)$ \\cite{Faraut1994}.\n\\begin{proposition}[cf. 
Propositions II.3.3 and III.4.2 (i) of \\cite{Faraut1994}] \n\\label{q-det-relation}\nFor any $v,x \\in \\mathbb{E}$,\n\\begin{enumerate}\n\\item $\\det Q_v(x) = \\det (v)^2 \\det (x)$,\n\\item $Q_{Q_v(x)} = Q_v Q_x Q_v$ (in particular, taking $x = e$ gives $Q_{v^2} = Q_v Q_v$).\n\\end{enumerate}\n\\end{proposition}\n}\n\nMore detailed descriptions, including concrete examples of symmetric cone optimization, can be found in, e.g., \\cite{Faraut1994, Faybusovich1997, Schmieta2003, Alizadeh2012}. \n\n\\modifyThird{\nHere, we will use concrete examples of symmetric cones to explain the bilinear operation $\\circ$, the identity element $e$, the inner product $\\langle \\cdot , \\cdot \\rangle$, the eigenvalues $\\lambda_i$, the primitive idempotents $c_i$, the projection $P_\\mathcal{K} (\\cdot)$ onto the symmetric cone, and the quadratic representation $Q_{v}(\\cdot)$ on the cone.\n}\n\n\\modifyThird{\n\\begin{example}[$\\mathcal{K}$ is the semidefinite cone $\\mathbb{S}^n_+$]\n{\\rm \nLet $\\mathbb{S}^n$ be the set of $n \\times n$ symmetric matrices. The semidefinite cone $\\mathbb{S}^n_+$ is given by \n\\begin{equation}\n\\mathbb{S}^n_+ = \\{ X \\in \\mathbb{S}^n : X \\succeq O \\} . \\notag\n\\end{equation}\nFor any symmetric matrices $X , Y \\in \\mathbb{S}^n$, define the bilinear operation $\\circ$ and inner product as follows: \n\\begin{equation}\nX \\circ Y = \\frac{ XY + YX } {2}, \\ \\\n\\ \\\n\\langle X , Y \\rangle = \\mbox{tr}(XY) = \\sum_{i=1}^n \\sum_{j=1}^n X_{ij} Y_{ij} . 
\\notag\n\\end{equation}\nFor any $X \\in \\mathbb{S}^n$, perform the eigenvalue decomposition and let $u_1 , \\dots , u_n$ be the normalized eigenvectors corresponding to the eigenvalues $\\lambda_1 , \\dots , \\lambda_n$:\n\\begin{equation}\nX = \\sum_{i=1}^n \\lambda_i u_i u_i^T . \\notag\n\\end{equation}\nThe eigenvalues of $X$ in the Jordan algebra are $\\lambda_1 , \\dots , \\lambda_n$ and the primitive idempotents are $c_1 = u_1 u_1^T , \\dots , c_n = u_n u_n^T$, which implies that the rank of the semidefinite cone $\\mathbb{S}^n_+$ is $r=n$.\nThe identity element is the identity matrix $I$, and the projection $P_{\\mathbb{S}^n_+}(X)$ onto $\\mathbb{S}^n_+$ is given by $P_{\\mathbb{S}^n_+}(X) = \\sum_{i=1}^n [\\lambda_i]^+ u_i u_i^T$.\nThe quadratic representation of $V \\in \\mathbb{S}^n$ is given by $Q_V(X) = V X V$.\n}\n\\end{example}\n}\n\n\\modifyThird{\n\\begin{example}[$\\mathcal{K}$ is the second-order cone $\\mathbb{L}_n$]\n{\\rm \nThe second-order cone is given by\n\\begin{equation}\n\\mathbb{L}_n = \\left \\{ \\begin{pmatrix} x_1 \\\\ \\bm{\\bar{x}} \\end{pmatrix} \\in \\mathbb{R}^n : x_1 \\geq \\| \\bm{\\bar{x}} \\|_2 \\right \\} . \\notag\n\\end{equation}\nFor any $x , y \\in \\mathbb{R}^n$, define the bilinear operation $\\circ$ and the inner product as follows: \n\\begin{equation}\nx \\circ y = \\begin{pmatrix} x_1 \\\\ \\bm{\\bar{x}} \\end{pmatrix} \\circ \\begin{pmatrix} y_1 \\\\ \\bm{\\bar{y}} \\end{pmatrix} = \\begin{pmatrix} x^Ty \\\\ x_1\\bm{\\bar{y}} + y_1\\bm{\\bar{x}} \\end{pmatrix}, \n\\\\ \\ \\\n\\langle x , y \\rangle = 2 \\sum_{i=1}^n x_i y_i . 
\\notag\n\\end{equation}\nFor any $x \\in \\mathbb{R}^n$, by the decomposition \n\\begin{equation}\nx = \\left( x_1 + \\| \\bm{\\bar{x}} \\|_2 \\right) \\begin{pmatrix} 1\/2 \\\\ \\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} + \\left( x_1 - \\| \\bm{\\bar{x}} \\|_2 \\right) \\begin{pmatrix} 1\/2 \\\\ -\\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} , \\notag\n\\end{equation}\nwe obtain the eigenvalues and the primitive idempotents as follows:\\begin{equation}\n\\lambda_1 = x_1 + \\| \\bm{\\bar{x}} \\|_2 \\ \\ , \\ \\ \\lambda_2 =x_1 - \\| \\bm{\\bar{x}} \\|_2 , \\notag\n\\end{equation}\n\\begin{equation}\nc_1 =\n\\begin{cases}\n\\begin{pmatrix} 1\/2 \\\\ \\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} & \\|\\bar{x}\\|_2 \\neq 0 \\\\\n\\\\\n\\begin{pmatrix} 1\/2 \\\\ \\frac{1}{2}z \\end{pmatrix} & \\|\\bar{x}\\|_2 = 0 \n\\end{cases}\n \\ \\ ,\\ \\ \nc_2 = \n\\begin{cases}\n\\begin{pmatrix} 1\/2 \\\\ -\\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} & \\|\\bar{x}\\|_2 \\neq 0 \\\\\n\\\\\n\\begin{pmatrix} 1\/2 \\\\ -\\frac{1}{2}z \\end{pmatrix} & \\|\\bar{x}\\|_2 = 0 \n\\end{cases}\n \\ \\ .\\ \\ \n\\notag\n\\end{equation}\nwhere $z \\in \\mathbb{R}^{n-1}$ is an arbitrary vector satisfying $\\|z\\|_2=1$.\nThe above implies that the rank of the second-order cone $\\mathbb{L}_n$ is $r=2$.\nThe identity element is given by \n$e = \n\\begin{pmatrix}\n1 \\\\\n\\bm{0}\n\\end{pmatrix}\n\\in \\mathbb{R}^n\n$.\nThe projection $P_{\\mathbb{L}_n}(x)$ onto $\\mathbb{L}_n$ is given by\n\\begin{equation}\nP_{\\mathbb{L}_n}(x) = \\left[ x_1 + \\| \\bm{\\bar{x}} \\|_2 \\right]^+ \\begin{pmatrix} 1\/2 \\\\ \\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} + \\left[ x_1 - \\| \\bm{\\bar{x}} \\|_2 \\right]^+ \\begin{pmatrix} 1\/2 \\\\ -\\frac{\\bm{\\bar{x}}}{2 \\| \\bm{\\bar{x}} \\|_2} \\end{pmatrix} .\n\\notag\n\\end{equation}\nLetting $I_{n-1}$ be the unit matrix of order $n-1$, the quadratic 
representation $Q_v(\\cdot)$ of $v \\in \\mathbb{R}^n$ is as follows:\n\\begin{equation}\nQ_v(x) = \n\\begin{pmatrix}\n\\| v \\|_2^2 & 2 v_1 \\bm{\\bar{v}}^T \\\\\n2 v_1 \\bm{\\bar{v}} & \\mbox{det}v I_{n-1} + 2 \\bm{\\bar{v}} \\bm{\\bar{v}}^T\n\\end{pmatrix} x.\n\\notag\n\\end{equation}\n}\n\\end{example}\n}\n\n\\subsection{Notation}\n\\label{sec: notation}\n\nThis subsection summarizes the notation used in this paper.\nFor any $x,y \\in \\mathbb{E}$, we define the inner product $\\langle \\cdot,\\cdot \\rangle$ and the norm $\\|\\cdot\\|_{J}$ as follows:\n\\begin{equation}\n\\langle x , y \\rangle := {\\rm trace}(x \\circ y) \\notag, \\ \\ \\| x \\|_J := \\sqrt{ \\langle x , x \\rangle }. \\notag\n\\end{equation}\nFor any $x \\in \\mathbb{E}$ having decomposition $x = \\sum_{i=1}^r \\lambda_i c_i$ as in Proposition \\ref{prop:spectral}, we also define \n\\begin{equation}\n\\notag\n\\| x \\|_1 := |\\lambda_1| + \\dots + |\\lambda_r|, \\ \\ \n\\|x\\|_\\infty := \\max \\{|\\lambda_1| , \\dots , |\\lambda_r| \\}.\n\\end{equation}\nFor $x \\in \\mathcal{K}$, we obtain the following equivalent representations:\n\\begin{equation}\n\\notag \n\\|x\\|_1 = \\langle e , x \\rangle, \\ \\ \n\\|x\\|_\\infty = \\lambda_{\\max} (x).\n\\end{equation}\n\nThe following is a list of other definitions and frequently used symbols in the paper.\n\\begin{itemize}\n\\item $d$: the dimension of the Euclidean space $\\mathbb{E}$ corresponding to $\\mathcal{K} = \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p$,\n\\item $F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$: the feasible region of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$,\n\\item $P_{\\mathcal{A}} (\\cdot) $: the projection map onto $\\mbox{Ker} \\mathcal{A}$,\n\\item $\\mathcal{P}_\\mathcal{K} (\\cdot) $: the projection map onto $\\mathcal{K}$,\n\\item $\\lambda(x) \\in \\mathbb{R}^r$: an $r$-dimensional vector composed of the eigenvalues of $ x \\in \\mathcal{K}$,\n\\item $\\lambda(x_\\ell) \\in \\mathbb{R}^{r_\\ell}$: an 
$r_\\ell$-dimensional vector composed of the eigenvalues of $x_\\ell \\in \\mathcal{K}_\\ell$ ($\\ell =1,2,\\ldots,p$),\n\\item \\modifySecond{$c(x_\\ell)_i \\in \\mathcal{K}_\\ell$: the $i$-th primitive idempotent of $x_\\ell \\in \\mathbb{E}_\\ell$. When $\\mathcal{K}$ is simple, it is abbreviated as $c_i$.}\n\\item $\\left[ \\cdot \\right]^+$: the projection map onto the nonnegative orthant, and\n\\item $\\mathcal{A}^*(\\cdot)$: the adjoint operator of the linear operator $\\mathcal{A}(\\cdot)$, i.e., $\\langle \\mathcal{A}(x) , y \\rangle = \\left \\langle x , \\mathcal{A}^* (y) \\right\\rangle$ for all $x \\in \\mathcal{K}$ and $y \\in \\mathbb{R}^m$.\n\\end{itemize}\n\n\n\\section{Extension of Roos's method to the symmetric cone problem}\n\\label{sec: extension}\n\\subsection{Outline of the extended method}\nWe focus on the feasibility of the following problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, which is equivalent to ${\\rm P}(\\mathcal{A})$: \n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_{\\infty}}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) = \\bm{0}, \\\\\n\t& & \\|x\\|_\\infty \\leq 1, \\\\\n\t& & x \\in \\mbox{int} \\mathcal{K}. 
\\notag\n\\end{array}\n\\end{equation}\n\nThe alternative problem ${\\rm D}(\\mathcal{A})$ of ${\\rm P}(\\mathcal{A})$ is \n\\begin{equation}\n\\notag\n\\begin{array}{lll}\n{\\rm D}(\\mathcal{A})& \\mbox{find} & y \\\\\n\t\t\t&\\mbox{s.t.} & y \\in \\mbox{range} \\mathcal{A}^*, \\\\\n\t\t\t&\t& y \\in \\mathcal{K} , y \\neq \\bm{0},\n\\end{array}\n\\end{equation}\nwhere $\\mbox{range} \\mathcal{A}^*$ is the orthogonal complement of $\\mbox{Ker} \\mathcal{A}$.\nAs we mentioned in section \\ref{subsec:symmetric cone}, \nwe assume that $\\mathcal{K}$ is given by a Cartesian product of $p$ simple symmetric cones $\\mathcal{K}_\\ell (\\ell=1,2,\\ldots,p)$, i.e., $\\mathcal{K} = \\mathcal{K}_1 \\times \\mathcal{K}_2 \\times \\dots \\times \\mathcal{K}_p $.\n\nIn our method, an upper bound on the sum of eigenvalues of a feasible solution of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ plays a key role, whereas the existing works focus on the volume of the feasible region \\cite{Bruno2019} or the condition number of a feasible solution~\\cite{Pena2017}.\n\n\\modifySecond{\nBefore describing the theoretical results, let us outline the proposed algorithm when $\\mathcal{K}$ is simple. 
The algorithm repeats two steps: \n\\begin{description}\n\t\\item Step 1: find a cut for ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, \n\t\\item Step 2: scale the problem to an isomorphic problem equivalent to ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ such that the region narrowed by the cut is efficiently explored.\n\\end{description}\nGiven a feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ and a constant $0<\\xi<1$, the proposed method first searches for a Jordan frame $\\{ c_1 , \\dots, c_r \\}$ such that the following is satisfied:\n\\begin{equation}\n\\notag\n\\langle c_i , x \\rangle \\leq \\xi \\ ( i \\in H), \\ \\ \\langle c_i , x \\rangle \\leq 1 \\ ( i \\notin H),\n\\end{equation}\nwhere $H \\subseteq \\{1, \\dots, r\\}$ and $|H|>0$.\nIn this case, instead of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, we may consider ${\\rm P}^{\\rm Cut}_{S_{\\infty}}(\\mathcal{A})$ as follows:\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}^{\\rm Cut}_{S_{\\infty}}(\\mathcal{A})& \\mbox{find} & x \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (x) = \\bm{0}, \\\\\n\t&\t&\\langle c_i, x \\rangle \\leq \\xi, \\ \\ i \\in H, \\\\\n\t&\t&\\langle c_i, x \\rangle \\leq 1, \\ \\ i \\notin H, \\\\\n\t& &\\|x\\|_\\infty \\leq 1, \\\\\n\t& &x \\in \\mbox{int} \\mathcal{K}. \\notag\n\\end{array}\n\\end{equation}\nHere, we define the set $SR^{\\rm Cut} = \\{x \\in \\mathbb{E} : x \\in \\mbox{int} \\mathcal{K}, \\ \\|x\\|_\\infty \\leq 1, \\ \\langle c_i, x \\rangle \\leq \\xi \\ (i \\in H), \\ \\langle c_i, x \\rangle \\leq 1 \\ (i \\notin H) \\}$ as the search range for the solutions of the problem ${\\rm P}^{\\rm Cut}_{S_{\\infty}}(\\mathcal{A})$. 
}\n\n\\modifySecond{\nThe proposed method then creates a problem equivalent and isomorphic to ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ such that $SR^{\\rm Cut}$, the region narrowed by the cut, can be searched efficiently.\nSuch a problem is obtained as follows:\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_{\\infty}}(\\mathcal{A}Q_g)& \\mbox{find} & \\bar{x} \\\\\n\t&\\mbox{s.t.} & \\mathcal{A}Q_g (\\bar{x}) = \\bm{0}, \\\\\n\t& &\\|\\bar{x}\\|_\\infty \\leq 1, \\\\\n\t& &\\bar{x} \\in \\mbox{int} \\mathcal{K}. \\notag\n\\end{array}\n\\end{equation}\nwhere $g$ is given by $g = \\sqrt{\\xi} \\sum_{i \\in H} c_i + \\sum_{i \\notin H} c_i \\in {\\rm int} \\mathcal{K}$ for which $e = Q_{g^{-1}} (u)$ holds for $u = \\sum_{i \\in H} \\xi c_i + \\sum_{i \\notin H} c_i$.\n}\n\n\\modifySecond{\nIn the succeeding sections, we explain how the cut for ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ is obtained from some $v \\in \\mbox{range} \\mathcal{A}^*$; we also explain the scaling method for the problem in detail.} \\modifyThird{To simplify our discussion, we will assume that $\\mathcal {K}$ is simple, i.e., $p=1$, in section \\ref{subsec:simple}. Then, in section \\ref{subsec:non-simple}, we will generalize our discussion to the case of $p \\geq 2$.\n}\n\n\\subsection{Simple symmetric cone case}\n\\label{subsec:simple}\n\nLet us consider the case where $\\mathcal {K}$ is simple, i.e., $p=1$. It is obvious that, for any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, the constraint $\\|x\\|_\\infty \\leq 1$ implies that the sum of eigenvalues has an upper bound $\\langle e , x\\rangle \\leq r$, since $x \\in \\mathcal{K}$. 
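Spelled out with the spectral decomposition, this bound is immediate: the eigenvalues of a feasible $x$ are nonnegative, and each is at most $\|x\|_\infty$, so

```latex
\langle e , x \rangle \;=\; \sum_{i=1}^r \lambda_i(x)
\;\leq\; r \max_{1 \leq i \leq r} \lambda_i(x)
\;=\; r \, \|x\|_\infty \;\leq\; r .
```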
In Proposition \\ref{prop:upper-bound}, we show that this bound may be improved as $\\langle e , x\\rangle 0$, if there exists an $x$ satisfying \n\\begin{equation}\n\\langle c_i,x \\rangle =\n\\begin{cases}\n0 & i \\in I_1 \\\\\n1 & i \\in I_2\n\\end{cases}, \n\\label{lemma:0-eq2}\n\\end{equation}\nthen such an $x$ is an optimal solution of (\\ref{lemma:0-eq1}). In fact, if we define $x^* = \\sum_{i \\in I_2} c_i$, then by the definition of the Jordan frame, $x^*$ satisfies (\\ref{lemma:0-eq2}) and $\\bm{0} \\leq \\lambda(x^*) \\leq \\bm{1}$, and becomes an optimal solution of (\\ref{lemma:0-eq1}). In this case, the optimal value of (\\ref{lemma:0-eq1}) turns out to be\n\\begin{equation}\n\\max_{\\bm{0} \\leq \\lambda(x) \\leq \\bm{1}} \\sum_{i=1}^r \\lambda_i \\left \\langle c_i , x \\right \\rangle = \\sum_{i=1}^r \\lambda_i \\left \\langle c_i , x^* \\right \\rangle \n= \\sum_{i \\in I_2} \\lambda_i \n= \\sum_{i=1}^r [\\lambda_i]^+\n= \\langle \\mathcal{P}_\\mathcal{K} (y) , e \\rangle. \n\\notag\n\\end{equation}\n\\end{proof}\n}\n\n\\begin{proposition}\n\\label{prop:p-d-relation} \nLet $(\\mathbb{E}, \\circ)$ be a Euclidean Jordan algebra with the corresponding symmetric cone $\\mathcal{K}$.\nFor a given $c \\in \\mathbb{E}$, consider the problem\n\\begin{equation}\n\\notag\n\\begin{array}{lll}\n\\mbox{\\rm max}\t& \\langle c , x \\rangle \\\\\n\\mbox{\\rm s.t.}\t& \\mathcal{A}(x) = \\bm{0}, \\\\\n\t\t\t\t\t& \\bm{0} \\leq \\lambda (x) \\leq \\bm{1} \\notag.\n\\end{array}\n\\end{equation}\nThe dual problem of the above is \n\\begin{equation}\n\\notag\n\\begin{array}{lll}\n\\mbox{\\rm min} & \\left \\langle \\mathcal{P}_\\mathcal{K} \\left( c - u \\right) , e \\right \\rangle \\\\\n \\mbox{\\rm s.t.} & u \\in \\mbox{\\rm range} \\mathcal{A}^*. 
\\notag\n\\end{array}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof} \\mbox{}\\\\\n\nDefine the Lagrangian function $L(x,w)$ as\n\\begin{equation}\nL(x,w) := \\langle c , x \\rangle - w^\\top \\mathcal{A}(x) \\notag\n\\end{equation}\nwhere $w \\in \\mathbb{R}^m$ is the Lagrange multiplier. \nWe have \n\\begin{align*}\n\\max_{\\bm{0} \\leq \\lambda(x) \\leq \\bm{1}} \\min_w L(x,w) &\\leq \\min_w \\max_{\\bm{0} \\leq \\lambda(x) \\leq \\bm{1}} L(x,w) \\\\\n&= \\min_w \\max_{\\bm{0} \\leq \\lambda(x) \\leq \\bm{1}} \\{ \\langle c , x \\rangle - \\langle \\mathcal{A}^*(w) ,x \\rangle \\} \\\\\n&= \\min_w \\max_{\\bm{0} \\leq \\lambda(x) \\leq \\bm{1}} \\{ \\langle c - \\mathcal{A}^*(w) ,x \\rangle \\} \\\\\n&= \\min_w \\left \\langle \\mathcal{P}_\\mathcal{K} \\left( c - \\mathcal{A}^*(w) \\right) ,e \\right \\rangle \\ \\ \\ \\modifyFirst{\\mbox{(by lemma \\ref{lemma:0})}} \\\\\n&= \\min_{u \\in \\mbox{\\footnotesize range} \\mathcal{A}^*} \\langle \\mathcal{P}_\\mathcal{K} \\left( c - u \\right) , e \\rangle,\n\\end{align*}\nand the dual problem is \n\\begin{equation}\n\\notag\n\\begin{array}{lll}\n\\mbox{\\rm min} & \\langle \\mathcal{P}_\\mathcal{K} \\left( c - u \\right) , e \\rangle \\\\\n \\mbox{s.t.} & u \\in \\mbox{range} \\mathcal{A}^*. \\notag\n\\end{array}\n\\end{equation}\n\\end{proof}\n\nThe following is a key proposition that relates to the stopping criteria of our method.\n\n\\begin{proposition}\n\\label{prop:upper-bound}\n\nSuppose that $v \\in \\mbox{\\rm range} \\mathcal{A}^*$ is given by \n\\begin{equation}\nv = \\sum_{i=1}^r \\lambda_i c_i \\notag\n\\end{equation}\nas in Proposition \\ref{prop:spectral}. 
\nFor each $i \\in \\{ 1, \\dots , r \\}$ and $\\alpha \\in \\mathbb{R}$, define \n\\begin{equation}\n\\notag\nq_i(\\alpha) := \\left[ 1-\\alpha \\lambda_i \\right]^+ + \\sum_{j \\neq i }^r \\left[ - \\alpha \\lambda_j \\right]^+.\n\\end{equation}\nThen, \n\\begin{equation}\n\\label{eq:bound_q}\n\\langle c_i , x \\rangle \\leq \n\\min_{\\alpha \\in \\mathbb{R}} q_i(\\alpha) = \n\\begin{cases}\n\\min \\left\\{ 1 , \\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{\\lambda_i} v \\right) \\right \\rangle \\right\\} & \\mbox{if $\\lambda_i \\neq 0$}, \\\\\n1 & \\mbox{if $\\lambda_i = 0$} \n\\end{cases}\n\\end{equation}\nhold for any $x \\in F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$ and $i \\in \\{ 1, \\dots , r \\}$. \n\\end{proposition}\n\n\n\\begin{proof}\n\nFor each $i \\in \\{ 1, 2, \\dots , r \\}$, we have \n\\begin{equation}\n\\mathcal{P}_\\mathcal{K} \\left( c_i - \\alpha v \\right) = \\mathcal{P}_\\mathcal{K} \\left( c_i - \\alpha \\sum_{j=1}^r \\lambda_j c_j \\right) = \\mathcal{P}_\\mathcal{K} \\left( (1-\\alpha \\lambda_i ) c_i - \\sum_{j \\neq i }^r \\alpha \\lambda_j c_j \\right), \\notag\n\\end{equation}\nand hence, \n\\begin{equation}\n\\label{eq:q_i(alpha)}\n\\left \\langle \\mathcal{P}_\\mathcal{K} \\left( c_i - \\alpha v \\right) , e \\right \\rangle\n=\n\\left \\langle \\mathcal{P}_\\mathcal{K} \\left( (1-\\alpha \\lambda_i ) c_i - \\sum_{j \\neq i }^r \\alpha \\lambda_j c_j \\right) , \\sum_{k=1}^r c_k \\right \\rangle \n= \\left[ 1-\\alpha \\lambda_i \\right]^+ + \\sum_{j \\neq i }^r \\left[ - \\alpha \\lambda_j \\right]^+ = q_i(\\alpha) .\n\\end{equation}\n\nNote that, since $q_i(\\alpha)$ is a piecewise linear convex function, if $\\lambda_i = 0$, it attains the minimum at $\\alpha = 0$ with $q_i(0) = 1$, and if $\\lambda_i \\neq 0$, it attains the minimum at $\\alpha = 0$ with $q_i(0) = 1$ or at $\\alpha = \\frac{1}{\\lambda_i}$ with\n\\begin{equation}\n\\notag\nq_i \\left( \\frac{1}{\\lambda_i} \\right) = \\sum_{j \\neq i}^r 
\\left[ - \\frac{\\lambda_j}{\\lambda_i} \\right]^+ \n= \\sum_{j=1}^r \\left[ - \\frac{\\lambda_j}{\\lambda_i} \\right]^+ \n= \\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{\\lambda_i} v \\right) \\right \\rangle.\n\\end{equation}\nThus, we obtain the equality in (\\ref{eq:bound_q}).\n\nSince $\\alpha v \\in \\mbox{range} \\mathcal{A}^*$ for all $\\alpha \\in \\mathbb{R}$, \nfor each $i \\in \\{ 1, \\dots , r \\}$, Proposition \\ref{prop:p-d-relation} and (\\ref{eq:q_i(alpha)}) ensure that \n\\begin{equation}\n\\langle c_i , x \\rangle \\leq \\left \\langle \\mathcal{P}_\\mathcal{K} \\left( c_i - \\alpha v \\right) , e \\right \\rangle = q_i(\\alpha)\n\\notag\n\\end{equation}\nfor all $\\alpha \\in \\mathbb{R}$, which implies the inequality in (\\ref{eq:bound_q}).\n\\end{proof}\n\n\n\\modifyThird{Since $\\sum_{i=1}^r c_i = e$ holds, Proposition \\ref{prop:upper-bound} allows us to compute upper bounds for the sum of eigenvalues of $x$.} The following proposition gives us information about the indices whose upper bound for $\\langle c_i , x \\rangle$ in Proposition \\ref{prop:upper-bound} is less than 1.\n\n\\begin{proposition}\n\\label{prop:cut-prb}\n\nSuppose that $v \\in \\mbox{\\rm range} \\mathcal{A}^*$ is given by \n\\begin{equation}\nv = \\sum_{i=1}^r \\lambda_i c_i \\notag\n\\end{equation}\nas in Proposition \\ref{prop:spectral}.\n\\modifyThird{\nIf $v$ satisfies\n\\begin{equation}\n\\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{\\lambda_i} v \\right) \\right \\rangle = \\xi \\notag\n\\end{equation}\n}\nfor some $\\xi < 1$ and some $i \\in \\{ 1, \\dots , r \\}$ with $\\lambda_i \\neq 0$, then $\\lambda_i$ has the same sign as $\\langle e ,v \\rangle$.\n\\end{proposition}\n\n\\begin{proof}\nFirst, we consider the case where $\\lambda_i > 0$.\nSince the assumption implies that $\\langle e , \\mathcal{P}_{\\mathcal{K}} (-v) \\rangle = \\lambda_i \\xi$,\nwe have \n\\begin{equation}\n\\notag\n\\langle e , v 
\\rangle = \\langle e , \\mathcal{P}_{\\mathcal{K}} (v) \\rangle - \\langle e , \\mathcal{P}_{\\mathcal{K}} (-v) \\rangle \\\\\n= \\langle e , \\mathcal{P}_{\\mathcal{K}} (v) \\rangle - \\lambda_i \\xi \\\\\n\\geq \\lambda_i (1-\\xi) > 0,\n\\end{equation}\n\\modifyThird{where the first equality comes from Lemma \\ref{lemma:-1} and the inequality follows from $\\langle e , \\mathcal{P}_{\\mathcal{K}} (v) \\rangle \\geq [\\lambda_i]^+ = \\lambda_i$.}\n\nFor the case where $\\lambda_i < 0$, \nsince the assumption implies that $\\langle e , \\mathcal{P}_{\\mathcal{K}} (v) \\rangle = - \\lambda_i \\xi$, \nwe have \n\\begin{equation}\n\\langle e , v \\rangle = \\langle e , \\mathcal{P}_{\\mathcal{K}} (v) \\rangle - \\langle e , \\mathcal{P}_{\\mathcal{K}} (-v) \\rangle \\\\\n= - \\lambda_i \\xi - \\langle e , \\mathcal{P}_{\\mathcal{K}} (-v) \\rangle \\\\\n\\leq - \\lambda_i \\xi - ( -\\lambda_i) \\\\\n= (1-\\xi) \\lambda_i < 0,\n\\notag\n\\end{equation}\nwhere the inequality follows from $\\langle e , \\mathcal{P}_{\\mathcal{K}} (-v) \\rangle \\geq [-\\lambda_i]^+ = -\\lambda_i$.\nThis completes the proof.\n\\end{proof}\n\nThe above two propositions imply that,\nfor any $v \\in \\mbox{\\rm range} \\mathcal{A}^*$ with $v = \\sum_{i=1}^r \\lambda_i c_i$,\nif we compute $\\langle c_i, x \\rangle$ according to Proposition \\ref{prop:upper-bound} for the indices $i \\in \\{ 1, \\dots , r \\}$ whose $\\lambda_i$ has the same sign as $\\langle e , v \\rangle$, we obtain an upper bound for the sum of eigenvalues of $x$ over the set $F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$.\n\\modifyThird{The following proposition concerns the scaling method of problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ when we find such a $v \\in \\mbox{\\rm range} \\mathcal{A}^*$.\n }\n\\begin{proposition}\n\\label{prop:scaling}\nSuppose that a nonempty index set $H \\subseteq \\{1 , \\dots , r \\}$, a Jordan frame $c_1 , \\dots , c_r $, and $0 < \\xi < 1$ satisfy \n\\begin{equation}\n\\notag\n\\langle c_i , x \\rangle \\leq \\xi \\ ( i \\in H), \\ \\ \\langle c_i , x \\rangle \\leq 1 \\ ( i \\notin H)\n\\end{equation}\nfor any $x \\in F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$.\nLet us define $g \\in \\mbox{\\rm int} \\mathcal{K}$ 
as\n\\begin{equation}\n\\label{eq:d}\ng := \\sqrt{\\xi} \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h \\ \\ \\ \\ \\mbox{i.e.,} \\ \\ \\ g^{-1} = \\frac{1}{\\sqrt{\\xi}} \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h.\n\\end{equation}\n\n\\modifyThird{\nFor the two sets $SR^{\\rm Cut} = \\{x \\in \\mathbb{E} : x \\in \\mbox{{\\rm int}} \\mathcal{K}, \\ \\|x\\|_\\infty \\leq 1, \\ \\langle c_i, x \\rangle \\leq \\xi \\ (i \\in H), \\ \\langle c_i, x \\rangle \\leq 1 \\ (i \\notin H) \\}$ and \n$SR^{\\rm Scaled} = \\{ \\bar{x} \\in \\mathbb{E} : \\bar{x} \\in \\mbox{{\\rm int}} \\mathcal{K}, \\ \\| \\bar{x} \\|_\\infty \\leq 1 \\}$, the following inclusion relation holds:\n\\begin{equation}\nQ_g( SR^{\\rm Scaled} ) \\subseteq SR^{\\rm Cut}. \\notag\n\\end{equation}\n}\n\\end{proposition}\n\\begin{proof}\n\\modifyThird{\nLet $\\bar{x}$ be an arbitrary point of $SR^{\\rm Scaled} = \\{ \\bar{x} \\in \\mathbb{E} : \\bar{x} \\in \\mbox{int} \\mathcal{K}, \\ \\| \\bar{x} \\|_\\infty \\leq 1 \\}$. It suffices to show that (i) $Q_g(\\bar{x}) \\in \\mbox{{\\rm int}} \\mathcal{K}$, (ii) $\\| Q_g(\\bar{x}) \\|_\\infty \\leq 1$, and (iii) $\\langle c_i, Q_g(\\bar{x}) \\rangle \\leq \\xi \\ (i \\in H)$ and $\\langle c_i, Q_g(\\bar{x}) \\rangle \\leq 1 \\ (i \\notin H)$ hold. \\\\\n\\medskip \\\\\n\\noindent (i):\nLet us show that $Q_g(\\bar{x}) \\in \\mbox{{\\rm int}} \\mathcal{K}$.\nSince $g$ and $\\bar{x}$ lie in the set $\\mbox{{\\rm int}} \\mathcal{K}$, from Propositions \\ref{ptop:quadratic} and \\ref{q-det-relation}, we see that \n\\begin{equation}\nQ_g(\\bar{x}) \\in \\mathcal{K}, \\ \\ \\det Q_g(\\bar{x}) = \\det (g)^2 \\det (\\bar{x}) > 0, \\notag \n\\end{equation}\nwhich implies that $Q_g(\\bar{x}) \\in \\mbox{{\\rm int}} \\mathcal{K}$. 
\\\\\n\\medskip \\\\\n\\noindent (ii):\nNext let us show that $\\| Q_g(\\bar{x}) \\|_\\infty \\leq 1$.\nSince $\\bar{x} \\in SR^{\\rm Scaled}$, we see that $\\bar{x} \\in \\mbox{{\\rm int}} \\mathcal{K}$, $\\| \\bar{x} \\|_\\infty \\leq 1$ and hence $e-\\bar{x} \\in \\mathcal{K}$.\nSince $g \\in \\mbox{{\\rm int}} \\mathcal{K}$, Proposition \\ref{ptop:quadratic} guarantees that \n\\begin{equation}\n\\label{eq:Q-g}\nQ_g(e-\\bar{x}) \\in \\mathcal{K}.\n\\end{equation}\nBy the definition (\\ref{eq:d}) of $g$, the following equations hold for $c_1 , \\dots , c_r$:\n\\begin{align*}\n\\mbox{For any $i \\in H$,} \\ \\ \\ \\\nQ_g (c_i) &= 2 g \\circ ( g \\circ c_i ) - (g \\circ g ) \\circ c_i \\\\\n&= 2 g \\circ \\sqrt{\\xi} c_i - \\left( \\xi \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h \\right) \\circ c_i \\\\\n&= 2 \\xi c_i - \\xi c_i = \\xi c_i.\n\\end{align*}\n\\begin{align*}\n\\mbox{For any $ i \\notin H$,} \\ \\ \\ \\\nQ_g (c_i) &= 2 g \\circ ( g \\circ c_i ) - (g \\circ g ) \\circ c_i \\\\\n&= 2 g \\circ c_i - \\left( \\xi \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h \\right) \\circ c_i \\\\\n&= 2c_i - c_i = c_i.\n\\end{align*}\nThus, we obtain $Q_g(e) = \\xi \\sum_{i \\in H} c_i + \\sum_{i \\notin H} c_i$.\nCombining this with the facts that $c_i \\in \\mathcal{K}$, $(1-\\xi)>0$, and (\\ref{eq:Q-g}), we have\n\\begin{align*}\n\\mathcal{K} \\ni (1-\\xi) \\sum_{i \\in H} c_i + Q_g(e - \\bar{x}) \n&= (1-\\xi) \\sum_{i \\in H} c_i + Q_g(e) - Q_g(\\bar{x}) \\\\\n&= (1-\\xi) \\sum_{i \\in H} c_i + \\left( \\xi \\sum_{i \\in H} c_i + \\sum_{i \\notin H} c_i \\right) - Q_g(\\bar{x}) \\\\\n&= e-Q_g(\\bar{x}).\n\\end{align*}\nSince we have shown that $Q_g(\\bar{x}) \\in \\mbox{{\\rm int}} \\mathcal{K}$, we can conclude that $\\|Q_g(\\bar{x})\\|_\\infty \\leq 1$. \\\\\n\\medskip \\\\\n\\noindent (iii): \nFinally, we compute an upper bound for the value $\\langle Q_g(\\bar{x}) , c_i \\rangle$ over the set $SR^{\\rm Scaled}$. 
It follows from $c_i \\in \\mathcal{K}$ and (\\ref{eq:Q-g}) that $\\langle Q_g(e - \\bar{x}) , c_i \\rangle \\geq 0$, i.e., $\\langle Q_g(e) , c_i \\rangle \\geq \\langle Q_g(\\bar{x}) , c_i \\rangle$ holds. Since we have shown that $Q_g(e) = \\xi \\sum_{i \\in H} c_i + \\sum_{i \\notin H} c_i$, this implies $\\langle Q_g(\\bar{x}) , c_i \\rangle \\leq \\xi$ holds if $i \\in H$ and $\\langle Q_g(\\bar{x}) , c_i \\rangle \\leq 1$ holds if $i \\notin H$.\n}\n\\end{proof}\n\n\\modifyThird{\nProposition \\ref{prop:scaling} was proven by focusing on the point $u$ that gives an upper bound for $\\langle e, x \\rangle$ for any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$. Specifically, before cut generation, since $\\|x\\|_\\infty \\leq 1$ holds, the point giving an upper bound for the sum of eigenvalues is $u^{\\rm before}=e$, and after the cut generation, since $\\sum_{i=1}^r c_i = e$ holds from the definition of a Jordan frame, the point is $u^{\\rm after} = \\sum_{i \\in H} \\xi c_i + \\sum_{i \\notin H} c_i$. $u^{\\rm before}$ and $u^{\\rm after}$ are related by $Q_g(u^{\\rm before}) = u^{\\rm after}$. That is, Proposition \\ref{prop:scaling} implies that if a cut is obtained for ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ based on Proposition \\ref{prop:upper-bound}, we can expect a more efficient search for solutions to problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A}Q_g)$\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P}_{S_{\\infty}}(\\mathcal{A}Q_g)& \\mbox{find} & \\bar{x} \\\\\n\t&\\mbox{s.t.} & \\mathcal{A}Q_g (\\bar{x}) = \\bm{0}, \\\\\n\t& &\\|\\bar{x}\\|_\\infty \\leq 1, \\\\\n\t& &\\bar{x} \\in \\mbox{{\\rm int}} \\mathcal{K}. 
\\notag\n\\end{array}\n\\end{equation}\nwith scaling of $u^{\\rm after}$ to $e$ in the variable space, rather than trying to solve problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$.\n}\n\\subsection{Non-simple symmetric cone case}\n\\label{subsec:non-simple}\n\nIn this section, we consider the case where the symmetric cone is not simple; i.e., it is a Cartesian product of $p$ simple symmetric cones $\\mathcal{K} = \\mathcal{K}_1 \\times \\mathcal{K}_2 \\times \\dots \\times \\mathcal{K}_p$ whose rank is given by (\\ref{eq:r and e}). \nPropositions \\ref{prop:upper-bound-direct} and \\ref{prop:cut-prb-direct} are extensions of Propositions \\ref{prop:upper-bound} and \\ref{prop:cut-prb}, respectively.\n\n\\begin{proposition}\n\\label{prop:upper-bound-direct}\nSuppose that $v \\in \\mbox{\\rm range} \\mathcal{A}^*$ and that each block element $v_\\ell$ of $v \\in \\mathbb{E}$ is decomposed into\n\\begin{equation}\nv_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(v_\\ell)}_i {c(v_\\ell)}_i \\notag\n\\end{equation}\nas in Proposition \\ref{prop:spectral}.\nFor each $\\ell \\in \\{ 1,2,\\ldots,p \\}$ and $i \\in \\{1,2,\\ldots,r_\\ell\\}$, define\n\\begin{equation}\n\\label{eq:q_ell,i(alpha)}\nq_{\\ell,i}(\\alpha) := \\left[ 1 - \\alpha {\\lambda(v_\\ell)}_i \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ -\\alpha {\\lambda(v_\\ell)}_k \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ -\\alpha {\\lambda(v_j)}_k \\right]^+ .\n\\end{equation}\nThen, \n\\begin{equation}\n\\label{eq:upper-bound-direct1}\n\\langle {c(v_\\ell)}_i , x_\\ell \\rangle \\leq \\min_{\\alpha \\in \\mathbb{R} } q_{\\ell,i}(\\alpha) =\n\\begin{cases}\n\\min \\left\\{ 1 , \\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{{\\lambda(v_\\ell)}_i} v \\right) \\right \\rangle \\right\\} & \\mbox{if ${\\lambda(v_\\ell)}_i \\neq 0$}, \\\\\n1 & \\mbox{if ${\\lambda(v_\\ell)}_i = 0$} \n\\end{cases}\n\\end{equation}\nholds for any feasible solution $x$ of ${\\rm 
P}_{S_{\\infty}}(\\mathcal{A})$, $\\ell \\in \\{ 1,2,\\ldots,p \\}$ and $i \\in \\{1,2,\\ldots,r_\\ell\\}$.\n\\end{proposition}\n\n\\begin{proof}\nLet $c \\in \\mathbb{E}$ be the element whose $\\ell$-th block element is $c_\\ell = {c(v_\\ell)}_i$ and whose other block elements are $\\bm{0}$.\nFor any real number $\\alpha \\in \\mathbb{R}$, Proposition \\ref{prop:p-d-relation} ensures that\n\\begin{align}\n\\label{eq:upper-bound-direct2}\n\\langle {c(v_\\ell)}_i , x_\\ell \\rangle = \\langle c,x \\rangle &\\leq \\left \\langle \\mathcal{P}_\\mathcal{K} \\left( c - \\alpha v \\right) , e \\right \\rangle \\notag \\\\\n&= \\left \\langle \\mathcal{P}_{\\mathcal{K}_\\ell} \\left( {c(v_\\ell)}_i - \\alpha v_\\ell \\right) , e_\\ell \\right \\rangle + \\sum_{j \\neq \\ell}^p \\left \\langle \\mathcal{P}_{\\mathcal{K}_j} \\left( - \\alpha v_j \\right) , e_j \\right \\rangle \\notag \\\\\n&= \\left[ 1 - \\alpha {\\lambda(v_\\ell)}_i \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ -\\alpha {\\lambda(v_\\ell)}_k \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ -\\alpha {\\lambda(v_j)}_k \\right]^+ = q_{\\ell,i}(\\alpha). 
\n\\end{align}\nWe obtain (\\ref{eq:upper-bound-direct1}) by following a similar argument to the one used in the proof of Proposition \\ref{prop:upper-bound}.\n\\end{proof}\n\nThe next proposition follows similarly to Proposition \\ref{prop:cut-prb}, by noting that $\\langle e, \\mathcal{P}_\\mathcal{K}(-v) \\rangle = {\\lambda(v_\\ell)}_i \\xi_\\ell$ holds if ${\\lambda(v_\\ell)}_i >0$ and that $\\langle e, \\mathcal{P}_\\mathcal{K}(v) \\rangle = - {\\lambda(v_\\ell)}_i \\xi_\\ell$ if ${\\lambda(v_\\ell)}_i <0$.\n\n\\begin{proposition}\n\\label{prop:cut-prb-direct}\nSuppose that $v \\in \\mbox{\\rm range} \\mathcal{A}^*$ and that each block element $v_\\ell$ of $v$ is decomposed into\n\\begin{equation}\nv_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(v_\\ell)}_i {c(v_\\ell)}_i \\notag\n\\end{equation}\nas in Proposition \\ref{prop:spectral}.\nIf $v$ satisfies\n\\begin{equation}\n\\begin{array}{lll}\n\\label{eq:if}\n{\\lambda(v_\\ell)}_i \\neq 0 &\\mbox{and}& \\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{{\\lambda(v_\\ell)}_i} v \\right) \\right \\rangle = \\xi_\\ell < 1 \n\\end{array}\n\\end{equation}\nfor some $\\xi_\\ell < 1$, $\\ell \\in \\{1 , \\dots, p \\} $ and $ i \\in \\{1 , \\dots , r_\\ell \\} $, \nthen ${\\lambda(v_\\ell)}_i$ has the same sign as $\\langle e , v \\rangle$.\n\\end{proposition}\n\nFrom Proposition \\ref{prop:upper-bound-direct}, if we obtain $v \\in \\mbox{range} \\mathcal{A}^*$ satisfying (\\ref{eq:if}) for a block $\\ell \\in \\{1 , \\dots, p \\} $ with an index $i \\in \\{1 , \\dots , r_\\ell \\} $, then the upper bound for the sum of the eigenvalues of any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ improves to $\\langle e , x \\rangle \\leq r-1 + \\xi_\\ell < r$.\nIn this case, as described below, we can find a scaling such that the sum of eigenvalues of any feasible solution of the scaled problem is again bounded by $r$.\n \nLet $H_\\ell$ be the set of indices $i$ satisfying (\\ref{eq:if}) 
for each block $\\ell$.\nAccording to Proposition \\ref{prop:scaling}, set $g_\\ell = \\sqrt{\\xi_\\ell} \\sum_{h \\in H_\\ell} {c(v_\\ell)}_h + \\sum_{h \\notin H_\\ell} {c(v_\\ell)}_h$ and define the linear operator $Q$ as follows:\n\\begin{equation}\nQ_\\ell := \n\\begin{cases}\nQ_{g_\\ell} & \\mbox{if } | H_\\ell | \\neq 0, \\\\\nI_\\ell & \\mbox{otherwise},\n\\end{cases}\n\\notag\n\\end{equation}\n\\begin{equation}\nQ(\\mathbb{E}_1 , \\dots , \\mathbb{E}_p) := \\left( Q_1(\\mathbb{E}_1) , \\dots , Q_p(\\mathbb{E}_p) \\right), \\notag\n\\end{equation}\nwhere $I_\\ell$ is the identity operator of the Euclidean Jordan algebra $\\mathbb{E}_\\ell$ associated with the symmetric cone $\\mathcal{K}_\\ell$.\nFrom Proposition \\ref{prop:scaling} and its proof, we can easily see that\n\\begin{equation}\n\\label{eq:prop3.4-1}\nQ_{g_\\ell^{-1}}(c_i) = \\frac{1}{\\xi_\\ell}c_i \\ (i\\in H_\\ell), \\ \\ \\ Q_{g_\\ell^{-1}}(c_i) = c_i \\ (i \\not\\in H_\\ell),\n\\end{equation}\nand the sum of eigenvalues of any feasible solution of the scaled problem ${\\rm P}_{S_\\infty}(\\mathcal{A}Q)$ is bounded by $\\langle e,e \\rangle = r = \\sum_{\\ell=1}^{p} r_\\ell$.\n\n\n\\section{Basic procedure of the extended method}\n\\label{sec: basic procedure}\n\\subsection{Outline of the basic procedure}\n\n\nIn this section, we describe the details of our basic procedure. \nFirst, we introduce our stopping criteria and explain how to update $y^k$ when the stopping criteria are not satisfied. Next, we show that the stopping criteria are satisfied within a finite number of iterations, i.e., finite termination of the basic procedure. Our stopping criteria are new and different from the ones used in \\cite{Bruno2019,Pena2017}, while the method of updating $y^k$ is similar to the one used in \\cite{Bruno2019} or in the von Neumann scheme of \\cite{Pena2017}.\nAlgorithm \\ref{basic procedure} is a full description of our basic procedure. 
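For intuition, the loop just outlined can be sketched numerically for the simple case $\mathcal{K} = \mathbb{S}^n_+$ (so $p = 1$ and $\langle x , y \rangle = \mathrm{tr}(xy)$). The sketch below is illustrative only, not the implementation used in our experiments; `P_A` is an assumed callable computing the orthogonal projection $P_\mathcal{A}$ onto $\ker \mathcal{A}$, and the update follows the von Neumann scheme.

```python
import numpy as np

def basic_procedure(P_A, y, xi, max_iter=1000):
    """Illustrative sketch of the basic procedure for K = S^n_+ (p = 1).

    P_A : callable, assumed orthogonal projection onto ker(A)
    y   : initial point with y in int K and <y, e> = tr(y) = 1
    xi  : constant with 0 < xi < 1
    Returns ('P', z), ('D', w), or ('cut', i), mirroring the four cases.
    """
    for _ in range(max_iter):
        z = P_A(y)
        v = y - z
        lam_z, q_z = np.linalg.eigh(z)
        lam_v, _ = np.linalg.eigh(v)
        if lam_z.min() > 0:                      # case 1: z in int K solves P(A)
            return 'P', z
        if np.allclose(z, 0):                    # case 2: y is feasible for D(A)
            return 'D', y
        if lam_v.min() >= 0 and not np.allclose(v, 0):
            return 'D', v                        # case 3: v is feasible for D(A)
        sign = np.sign(np.trace(v))              # case 4: only eigenvalues with
        for i, li in enumerate(lam_v):           # the sign of <e, v> can certify
            if li * sign > 0:
                # <e, P_K(-v / li)> = sum_j [-lam_j / li]^+
                if np.maximum(-lam_v / li, 0.0).sum() <= xi:
                    return 'cut', i
        # otherwise: von Neumann update of y
        c_vec = q_z[:, np.argmin(lam_z)]         # eigenvector of lambda_min(z)
        c = np.outer(c_vec, c_vec)               # rank-one idempotent, <e, c> = 1
        pc = P_A(c)
        alpha = np.vdot(pc, pc - z) / np.vdot(z - pc, z - pc)
        y = alpha * y + (1 - alpha) * c
    raise RuntimeError("iteration limit reached")
```

For instance, with $\mathcal{A}(x) = \langle A_1 , x \rangle$ for an indefinite $A_1$, the sketch returns a positive definite point in $\ker \mathcal{A}$, while $A_1 = I$ (no positive definite solution) makes it report dual feasibility.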
\n\n\\subsection{Termination conditions of the basic procedure}\n\nFor $z^k = P_\\mathcal{A}(y^k)$, $v^k = y^k - z^k$ and a given $\\xi \\in (0,1)$, \nour basic procedure terminates when any of the following four cases occurs:\n\\begin{enumerate}\n\\item $z^k \\in \\mbox{int} \\mathcal{K} $ meaning that $z^k$ is a solution of ${\\rm P}(\\mathcal{A})$,\n\\item $z^k = \\bm{0}$ meaning that $y^k$ is feasible for ${\\rm D}(\\mathcal{A})$,\n\\item $y^k - z^k \\in \\mathcal{K}$ and $y^k - z^k \\neq \\bm{0}$ meaning that $y^k - z^k$ is feasible for ${\\rm D}(\\mathcal{A})$, or\n\\item there exist $\\ell \\in \\{ 1 , \\dots , p\\}$ and $i \\in \\{1 , \\dots , r_\\ell\\}$ for which \n\\begin{equation}\n\\begin{array}{lll}\n{\\lambda(v^k_\\ell)}_i \\neq 0 &\\mbox{and}& \\left \\langle e , \\mathcal{P}_{\\mathcal{K}} \\left( - \\frac{1}{{\\lambda(v^k_\\ell)}_i} v^k \\right) \\right \\rangle = \\xi_\\ell \\leq \\xi < 1, \\label{eq:terminate}\n\\end{array}\n\\end{equation}\nmeaning that $\\langle e,x \\rangle < r$ holds for any feasible solution $x$ of ${\\rm P}_{S_\\infty}(\\mathcal{A})$ (see Proposition \\ref{prop:upper-bound-direct}).\n\\end{enumerate}\n\nCases 1 and 2 are direct extensions of the cases in \\cite{Chubanov2015}, while case 3 was proposed in \\cite{Kitahara2018,Bruno2019}.\nCase 3 helps us to determine the feasibility of ${\\rm P}(\\mathcal{A})$ efficiently, although we have to compute the spectral decomposition of $y^k-z^k$ to check it. \n\nIf the basic procedure ends with case 1, 2, or 3, the feasibility of ${\\rm P}(\\mathcal{A})$ can be determined, and the basic procedure returns a solution of ${\\rm P}(\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ to the main algorithm. 
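Checking the fourth condition reduces to simple arithmetic on the eigenvalues of the blocks of $v^k$: for a candidate eigenvalue ${\lambda(v_\ell)}_i$, the quantity $\langle e , \mathcal{P}_{\mathcal{K}} ( - v^k / {\lambda(v_\ell)}_i ) \rangle$ equals $\sum [\, -\lambda / {\lambda(v_\ell)}_i ]^+$ taken over the eigenvalues $\lambda$ of all blocks. A hedged sketch (the list-of-arrays data layout is our own convention for this illustration, not the paper's):

```python
import numpy as np

def case4_index_sets(lams, xi):
    """Collect the certifying index sets H_1, ..., H_p for a case-4 check.

    lams : list of 1-D arrays, lams[l] holding the eigenvalues of block v_l
           of v = y - z
    xi   : threshold with 0 < xi < 1
    H[l] collects indices i with lambda_i != 0 and
    <e, P_K(-v / lambda_i)> = sum over all eigenvalues of [-lambda / lambda_i]^+
    at most xi.
    """
    all_lam = np.concatenate(lams)
    sign = np.sign(all_lam.sum())      # sign of <e, v>
    H = []
    for lam_l in lams:
        H_l = set()
        for i, li in enumerate(lam_l):
            # only eigenvalues sharing the sign of <e, v> can certify a cut
            if li * sign > 0:
                if np.maximum(-all_lam / li, 0.0).sum() <= xi:
                    H_l.add(i)
        H.append(H_l)
    return H
```

With a single block of eigenvalues $(1, -0.1)$ and $\xi = 0.5$, for example, only the eigenvalue $1$ certifies a cut, since $[0.1]^+ = 0.1 \leq 0.5$.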
\nIf the basic procedure ends with case 4, the basic procedure returns to the main algorithm $p$ index sets $H_1 , \\dots , H_p$ each of which consists of indices $i$ satisfying (\\ref{eq:terminate}) and the set of primitive idempotents $C_\\ell = \\{ {c(v^k_\\ell)}_1, \\dots ,{c(v^k_\\ell)}_{r_\\ell} \\}$ of $v^k_\\ell$ for each $\\ell$.\n\n\\subsection{Update of the basic procedure}\n\n\nThe basic procedure updates $y^k \\in \\mbox{int} \\mathcal{K}$ with $\\langle y^k , e \\rangle = 1$ so as to reduce the value of $\\| z^k \\|_J$. \nThe following proposition is essentially the same as Proposition 13 in \\cite{Bruno2019}, so we will omit its proof.\n\n\\begin{proposition}[cf. Proposition 13, \\cite{Bruno2019}] \n\\label{prop:update}\nFor $y^k \\in \\mbox{\\rm int} \\mathcal{K}$ satisfying $\\langle y^k , e \\rangle = 1$, let $z^k = P_\\mathcal{A}(y^k)$.\nIf $z^k \\notin \\mbox{\\rm int} \\mathcal{K} $ and $z^k \\neq \\bm{0}$, then the following hold.\n\\begin{enumerate}\n\\item \nThere exists $ c \\in \\mathcal{K}$ such that\n\\begin{equation}\n\\langle c , z^k \\rangle = \\lambda_{\\min} (z^k) \\leq 0, \\ \\langle e , c \\rangle \n=1 \\ \\mbox{and} \\ c\\in \\mathcal{K} . \\label{eq:update_bp1}\n\\end{equation}\n\\item \nFor the above $c$, suppose that $P_\\mathcal{A}(c) \\neq \\bm{0}$ and define\n\\begin{equation} \n\\label{eq:bp2}\n\\alpha= \\frac{ \\langle P_\\mathcal{A}(c) , P_\\mathcal{A}(c) - z^k \\rangle }{\\|z^k-P_\\mathcal{A}(c) \\|^2_J}. \n\\end{equation}\nThen, $y^{k+1} := \\alpha y^k + (1-\\alpha) c$ satisfies \n\\begin{enumerate}\n\\item $y^{k+1} \\in \\mbox{\\rm int} \\mathcal{K}$, \n\\item $\\|y^{k+1}\\|_{1,\\infty} \\geq \\frac{1}{p}$, \n\\item $\\langle y^{k+1} , e \\rangle = 1$, and\n\\item $z^{k+1} := P_\\mathcal{A}(y^{k+1})$ satisfies \n\\begin{equation}\n\\frac{1}{\\| z^{k+1} \\|^2_J} \\geq \\frac{1}{\\|z^k\\|^2_J} + 1. 
\\notag\n\\end{equation}\n\\end{enumerate}\n\\end{enumerate}\n\\end{proposition}\n\nA method of accelerating the update of $y^k$ is provided in \\cite{Roos2018}. \nFor $\\ell \\in \\{1,2,\\ldots,p\\}$, let $I_\\ell := \\{ i \\in \\{1,2,\\ldots,r_\\ell\\} \\mid \\lambda_i (z^k_\\ell) \\leq 0\\} $ and set $N = \\sum_{\\ell=1}^p | I_\\ell | $. \nDefine the $\\ell$-th block element of $c \\in \\mathcal{K}$ as\n\\begin{equation}\nc_\\ell = \\cfrac{1}{N} \\sum_{i \\in I_\\ell} {c(z^k_\\ell)}_i . \\notag\n\\end{equation}\nUsing $P_\\mathcal{A} \\left( c \\right) $, the acceleration method computes $\\alpha$ by (\\ref{eq:bp2}) so as to minimize the norm of $z^{k+1}$ and updates $y$ by\n\\begin{equation}\ny^{k+1} = \\alpha y^k + (1-\\alpha) c. \\notag\n\\end{equation}\nWe incorporate this method into the basic procedure in our computational experiments and call it the {\\it modified basic procedure}. \n\n\\modifySecond{\nAs described in~\\cite{Pena2017}, we can also use the smooth perceptron scheme~\\cite{Soheili2012, Soheili2013} to update $y^k$ in the basic procedure.\nAs explained in the next section, using the smooth perceptron scheme significantly reduces the maximum number of iterations of the basic procedure.\n}\n\nA detailed description of our basic procedure (Algorithms \\ref{bp-alg-mvn} and \\ref{bp-alg-sp}) is given in Appendix \\ref{app:modified basic procedure}.\n\n\\subsection{Finite termination of the basic procedure}\n\\label{sec: finite termination of BP}\n\nIn this section, we show that the basic procedure terminates in a finite number of iterations (Proposition \\ref{prop:bp2}).\nTo do so, we need to prove Lemma \\ref{lemma:1} and Proposition \\ref{prop:bp1}.\n\n\\begin{lemma}\n\\label{lemma:1}\nLet $(\\mathbb{E}, \\circ)$ be a Euclidean Jordan algebra with the corresponding symmetric cone $\\mathcal{K}$\ngiven by the Cartesian product of $p$ simple symmetric cones, i.e., $\\mathcal{K} = \\mathcal{K}_1 \\times \\dots \\times \\mathcal{K}_p$.\nFor any $x \\in 
\\mathbb{E}$ and $y \\in \\mathcal{K}$, the following inequality holds:\n\\begin{equation}\n[ \\langle x , y \\rangle ]^+ \\leq \\langle \\mathcal{P}_\\mathcal{K} (x) , y \\rangle. \\notag\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nLet $x \\in \\mathbb{E}$ and suppose that each $\\ell$-th block element $x_\\ell$ of $x$ is given by \n\\begin{equation}\nx_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(x_\\ell)}_i {c(x_\\ell)}_i \\notag\n\\end{equation}\nas in Proposition \\ref{prop:spectral}.\nThen, we can see that \n\\begin{align}\n\\left[ \\langle x , y \\rangle \\right]^+ &= \\left[ \\sum_{\\ell=1}^p \\left \\langle \\sum_{i=1}^{r_\\ell} {\\lambda(x_\\ell)}_i {c(x_\\ell)}_i , y_\\ell \\right \\rangle \\right]^+ \\notag \\\\\n&= \\left[ \\sum_{\\ell=1}^p \\left( \\sum_{i=1}^{r_\\ell} {\\lambda(x_\\ell)}_i \\left \\langle {c(x_\\ell)}_i , y_\\ell \\right \\rangle \\right) \\right]^+ \\notag \\\\\n&\\leq \\sum_{\\ell=1}^p \\sum_{i=1}^{r_\\ell} \\left[ {\\lambda(x_\\ell)}_i \\left \\langle {c(x_\\ell)}_i , y_\\ell \\right \\rangle \\right]^+ \\notag \\\\\n&= \\sum_{\\ell=1}^p \\sum_{i=1}^{r_\\ell} \\left[ { \\lambda(x_\\ell)}_i \\right]^+ \\langle {c(x_\\ell)}_i , y_\\ell \\rangle \\notag \\\\\n&= \\sum_{\\ell=1}^p \\left \\langle \\sum_{i=1}^{r_\\ell} \\left[ { \\lambda(x_\\ell)}_i \\right]^+ {c(x_\\ell)}_i , y_\\ell \\right \\rangle \n= \\left \\langle \\mathcal{P}_\\mathcal{K} (x) , y \\right \\rangle . 
\\notag\n\\end{align}\nwhere the inequality follows from the fact that ${c(x_\\ell)}_1 , \\dots , {c(x_\\ell)}_{r_\\ell}$, and $y_\\ell$ lie in the symmetric cone $\\mathcal{K}_\\ell$.\n\\end{proof}\n\n\\begin{proposition}\n\\label{prop:bp1}\nFor a given $y \\in \\mathcal{K}$, define $z = P_\\mathcal{A} (y)$ and $v = y - z$.\nSuppose that $v \\neq \\bm{0}$ and each block element $v_\\ell$ is given by $v_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(v_\\ell)}_i {c(v_\\ell)}_i$, as in Proposition \\ref{prop:spectral}.\nThen, for any $x \\in F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$, $\\ell \\in \\{ 1, \\dots , p \\}$ and $ i \\in \\{ 1, \\dots , r_\\ell \\}$, \n\\begin{equation}\n\\label{eq:bp1-1}\n\\langle {c(v_\\ell)}_i , x_\\ell \\rangle \\leq \\min_\\alpha q_{\\ell,i}(\\alpha) \\leq \\frac{1}{ \\left \\langle y_\\ell , {c(v_\\ell)}_i \\right \\rangle} \\|z\\|_J \n\\end{equation}\nhold, where $q_{\\ell,i}(\\alpha)$ is defined in (\\ref{eq:q_ell,i(alpha)}).\n\\end{proposition}\n\n\\begin{proof}\nThe first inequality of (\\ref{eq:bp1-1}) follows from (\\ref{eq:upper-bound-direct2}) in the proof of Proposition \\ref{prop:upper-bound-direct}. 
\nThe second inequality is obtained by evaluating $q_{\\ell, i}(\\alpha)$ at $\\alpha = \\cfrac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle}$, as follows:\n\\begin{align*}\nq_{\\ell,i} \\left( \\cfrac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right) \n&= \\left[ 1 - \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} {\\lambda(v_\\ell)}_i \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ - \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} {\\lambda(v_\\ell)}_k \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ -\\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} {\\lambda(v_j)}_k \\right]^+\\\\\n&= \\left[ 1 - \\frac{\\langle y_\\ell -z_\\ell , {c(v_\\ell)}_i \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ - \\frac{\\langle y_\\ell -z_\\ell , c(v_\\ell)_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ -\\frac{\\langle y_j - z_j , {c(v_j)}_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ \\\\\n& \\ \\ \\ \\ \\modifySecond{(\\mbox{since} \\ \\lambda(v_\\ell)_i = \\langle v_\\ell, c(v_\\ell)_i \\rangle \\ \\mbox{and} \\ v_\\ell = y_\\ell -z_\\ell )} \\\\\n&= \\left[ \\frac{\\langle z_\\ell , {c(v_\\ell)}_i \\rangle }{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ \\frac{ \\langle z_\\ell , {c(v_\\ell)}_k \\rangle - \\langle y_\\ell , {c(v_\\ell)}_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ \\frac{\\langle z_j , {c(v_j)}_k \\rangle - \\langle y_j , {c(v_j)}_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ \\\\\n&\\leq \\left[ \\frac{\\langle z_\\ell , {c(v_\\ell)}_i \\rangle }{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ + \\sum_{k \\neq i}^{r_\\ell} \\left[ \\frac{\\langle z_\\ell , {c(v_\\ell)}_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i 
\\rangle} \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ \\frac{\\langle z_j , {c(v_j)}_k \\rangle}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\right]^+ \\\\\n& \\ \\ \\ \\ \\modifySecond{(\\mbox{since} \\ y_\\ell, c(v_\\ell)_i \\in \\mathcal{K}_\\ell \\ \\mbox{and then} \\ \\langle y_\\ell, c(v_\\ell)_i \\rangle \\geq 0)} \\\\\n&= \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\left( \\sum_{k=1}^{r_\\ell} \\left[ \\langle z_\\ell , {c(v_\\ell)}_k \\rangle \\right]^+ + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\left[ \\langle z_j , {c(v_j)}_k \\rangle \\right]^+ \\right) \\\\\n&\\leq \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\left( \\sum_{k=1}^{r_\\ell} \\langle \\mathcal{P}_{{\\mathcal{K}_\\ell}}\\left( z_\\ell \\right), {c(v_\\ell)}_k \\rangle + \\sum_{j \\neq \\ell}^p \\sum_{k=1}^{r_j} \\langle \\mathcal{P}_{{\\mathcal{K}_j}}\\left( z_j \\right), {c(v_j)}_k \\rangle \\right) \\ \\ \\mbox{(by Lemma \\ref{lemma:1})} \\\\\n&= \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\left( \\langle \\mathcal{P}_{\\mathcal{K}_\\ell}\\left( z_\\ell \\right), e_\\ell \\rangle + \\sum_{j \\neq \\ell}^p \\langle \\mathcal{P}_{\\mathcal{K}_j}\\left( z_j \\right), e_j \\rangle \\right) \\\\\n&= \\frac{\\langle \\mathcal{P}_\\mathcal{K}\\left( z \\right), e \\rangle }{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle}\n= \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\| \\mathcal{P}_{\\mathcal{K}}\\left( z \\right) \\|_1 \n\\leq \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\| \\mathcal{P}_{\\mathcal{K}}\\left( z \\right) \\|_J \n\\leq \\frac{1}{ \\langle y_\\ell , {c(v_\\ell)}_i \\rangle} \\| z \\|_J .\n\\end{align*}\n\\end{proof}\n\n\\begin{proposition}\n\\label{prop:bp2}\nLet $r_{\\max} = \\max \\{ r_1 , \\dots , r_p\\}$.\nThe basic procedure (Algorithm \\ref{basic procedure}) terminates in at most $\\frac{p^2r_{\\max}^2}{\\xi^2} $ iterations. 
\n\\end{proposition}\n\\begin{proof}\nSuppose that $y^k$ is obtained at the $k$-th iteration of Algorithm \\ref{basic procedure}. Proposition \\ref{prop:update} implies that $\\|y^k\\|_{1,\\infty} \\geq \\frac{1}{p}$ and an $\\ell$-th block element exists for which $\\langle y^k_\\ell , e_\\ell \\rangle \\geq \\frac{1}{p}$ holds. \nThus, by letting $v^k = y^k - z^k$ and the $\\ell$-th block element $v_\\ell^k$ of $v^k$ be $v_\\ell^k = \\sum_{i=1}^{r_\\ell} {\\lambda(v^k_\\ell)}_i {c(v^k_\\ell)}_i$ as in Proposition \\ref{prop:spectral}, we have\n\\begin{equation}\n\\max_{i=1,\\dots,r_\\ell} \\left \\langle y_\\ell^k , {c(v^k_\\ell)}_i \\right \\rangle \\geq \\frac{1}{pr_\\ell}. \\label{eq:bp2-1}\n\\end{equation}\n\nSince Proposition \\ref{prop:update} ensures that $\\frac{1}{\\|z^k\\|^2_J} \\geq k$ holds at the $k$-th iteration,\nby setting $k=\\frac{p^2r_{\\max}^2}{\\xi^2}$, we see that \n\\begin{equation}\n\\xi \\geq p r_{\\max} \\|z^k\\|_J , \\notag\n\\end{equation}\nand combining this with (\\ref{eq:bp2-1}), we have\n\\begin{equation}\n\\notag\n\\xi \\geq p r_{\\max} \\|z^k\\|_J \\geq p r_\\ell \\|z^k\\|_J \\geq \\frac{1}{\\max_{i=1,\\dots,r_\\ell} \\left \\langle y_\\ell^k , {c(v^k_\\ell)}_i \\right \\rangle} \\| z^k \\|_J . \n\\end{equation}\nThe above inequality and Proposition \\ref{prop:bp1} imply that, for the above block $\\ell$ and for an index $i$ attaining the maximum in (\\ref{eq:bp2-1}),\n\\begin{equation}\n\\notag\n\\langle c(v^k_\\ell)_i , x_\\ell \\rangle \\leq \\min_\\alpha q_{\\ell,i} (\\alpha)\n\\leq \\frac{1}{ \\langle y^k_\\ell , {c(v^k_\\ell)}_i \\rangle} \\|z^k\\|_J \\leq \\xi. 
\n\\end{equation}\nFrom the equality in (\\ref{eq:upper-bound-direct1}) and the setting $\\xi \\in (0,1)$, we conclude that Algorithm \\ref{basic procedure} terminates in at most $\\frac{p^2r_{\\max}^2}{\\xi^2} $ iterations by satisfying (\\ref{eq:terminate}) in the fourth termination condition at some block $\\ell$ and index $i$.\n\\end{proof}\n\n\\modifySecond{\nAn upper bound for the number of iterations of Algorithm \\ref{bp-alg-sp} using the smooth perceptron scheme can be found as follows.\n\\begin{proposition}\nLet $r_{\\max} = \\max \\{ r_1 , \\dots , r_p\\}$.\nThe basic procedure (Algorithm \\ref{bp-alg-sp}) terminates in at most $\\frac{2 \\sqrt{2} p r_{\\max}}{\\xi}$ iterations.\n\\end{proposition}\n\\begin{proof}\nFrom Proposition 6 in~\\cite{Pena2017}, after $k \\geq 1$ iterations, we obtain the inequality $\\|z^k\\|_J^2 \\leq \\frac{8}{(k+1)^2}$.\nAs in the proof of Proposition \\ref{prop:bp2}, if $\\xi \\geq p r_{\\max} \\|z^k\\|_J$ holds, then Algorithm \\ref{bp-alg-sp} terminates. \nThus, $k \\leq \\frac{2 \\sqrt{2} p r_{\\max}}{\\xi}$ holds for any $k$ satisfying\n\\begin{equation}\n\\left( \\frac{\\xi}{p r_{\\max}} \\right)^2 \\leq \\frac{8}{(k+1)^2}.\n\\notag\n\\end{equation}\n\\end{proof}\n}\n\nHere, we discuss the computational cost per iteration of Algorithm \\ref{basic procedure}.\nAt each iteration, the two most expensive operations are computing the spectral decomposition on line 5 and computing $P_{\\mathcal{A}}(\\cdot)$ on lines 24 and 26.\n\nLet $C_{\\ell}^{\\rm sd}$ be the computational cost of the spectral decomposition of an element of $\\mathcal{K}_\\ell$.\nFor example, $C_{\\ell}^{\\rm sd}=\\mathcal{O}(r_\\ell^3)$ if $\\mathcal{K}_\\ell = \\mathbb{S}^{r_\\ell}_+$ and $C_{\\ell}^{\\rm sd}=\\mathcal{O}(r_\\ell)$ if $\\mathcal{K}_\\ell = \\mathbb{L}_{r_\\ell}$, where $\\mathbb{L}_{r_\\ell}$ denotes the $r_\\ell$-dimensional second-order cone. 
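The $\mathcal{O}(r_\ell)$ figure for $\mathbb{L}_{r_\ell}$ comes from the closed-form spectral decomposition of the second-order cone: an element $x = (x_0, \bar{x})$ has the two eigenvalues $x_0 \pm \|\bar{x}\|_2$ with Jordan frame $\frac{1}{2}(1, \pm \bar{x}/\|\bar{x}\|_2)$. A minimal sketch (the first-coordinate-is-$x_0$ vector layout is our own convention for this illustration):

```python
import numpy as np

def soc_spectral(x):
    """Spectral decomposition in the Jordan algebra of the second-order cone.

    For x = (x0, x_bar), the eigenvalues are x0 +/- ||x_bar||_2 and the Jordan
    frame is c_pm = (1/2) * (1, +/- x_bar / ||x_bar||_2); this costs O(dim x),
    in contrast with the O(r^3) eigendecomposition needed for S^r_+.
    """
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    # when x_bar = 0 any unit vector gives a valid frame; pick the first axis
    u = xbar / nrm if nrm > 0 else np.eye(len(xbar))[0]
    lam = np.array([x0 + nrm, x0 - nrm])
    c_plus = 0.5 * np.concatenate(([1.0], u))
    c_minus = 0.5 * np.concatenate(([1.0], -u))
    return lam, (c_plus, c_minus)
```

Note that $c_+ + c_- = (1, \bm{0})$, the identity element, and $\lambda_+ c_+ + \lambda_- c_- = x$, so the rank of this algebra is 2 regardless of the ambient dimension.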
\nThen, the cost $C^{\\rm sd}$ of computing the spectral decomposition of an element of $\\mathcal{K}$ is $C^{\\rm sd} = \\sum_{\\ell=1}^p C_{\\ell}^{\\rm sd}$. Next, let us consider the computational cost of $P_{\\mathcal{A}}(\\cdot)$. \nRecall that $d$ is the dimension of the Euclidean space $\\mathbb{E}$ corresponding to $\\mathcal{K}$. \nAs discussed in \\cite{Bruno2019}, we can compute $P_{\\mathcal{A}} = I - \\mathcal{A}^*(\\mathcal{A}\\mathcal{A}^*)^{-1}\\mathcal{A}$ by using the Cholesky decomposition of $(\\mathcal{A}\\mathcal{A}^*)^{-1}$.\nSuppose that $(\\mathcal{A}\\mathcal{A}^*)^{-1} = LL^*$, where $L$ is an $m \\times m$ matrix and we store $L^*\\mathcal{A}$ in the main algorithm.\nThen, we can compute $P_{\\mathcal{A}}(\\cdot)$ on lines 24 and 26, which costs $\\mathcal{O}(md)$.\n\\modifySecond{\nThe operation $u_\\mu (\\cdot) : \\mathbb{E} \\rightarrow \\{ u \\in \\mathcal{K} \\mid \\langle u,e \\rangle = 1\\}$ in Algorithm \\ref{bp-alg-sp} can be performed within the cost $C^{\\rm sd}$~\\cite{Soheili2013, Pena2017}.\n}\nFrom the above discussion and Proposition \\ref{prop:bp2}, the total costs of Algorithm \\ref{basic procedure} \\modifySecond{and Algorithm \\ref{bp-alg-sp}} are given by \n\\modifySecond{\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{p^2 r_{\\max}^2}{\\xi^2} \\max ( C^{\\rm sd} , md ) \\right), \\label{cost:b-p}\n\\end{equation}\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{p r_{\\max}}{\\xi} \\max ( C^{\\rm sd} , md ) \\right). 
\\label{cost:b-p-smooth}\n\\end{equation}\n}\n\n\\begin{algorithm}[H]\n \\caption{Basic procedure (von Neumann scheme)}\n \\label{basic procedure}\n \\begin{algorithmic}[1]\n \\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n \\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n \\renewcommand{\\stop}{\\textbf{stop }}\n \\renewcommand{\\return}{\\textbf{return }}\n\n \\STATE \\algorithmicrequire $P_\\mathcal{A}$, $y^1 \\in \\mbox{int} \\mathcal{K}$ such that $\\langle y^1 , e \\rangle = 1$ and a constant $\\xi$ such that $0 < \\xi < 1$\n \\STATE \\algorithmicensure \\modifyThird{(i) a solution to ${\\rm P}(\\mathcal{A})$ or (ii) ${\\rm D}(\\mathcal{A})$ or (iii) a certificate that, for any feasible solution $x$ to ${\\rm P}_{S_\\infty}(\\mathcal{A})$, $\\langle e,x \\rangle < r$}\n \\STATE initialization: $k \\leftarrow 1, z^1 \\leftarrow P_\\mathcal{A}(y^1), v^1 \\leftarrow y^1 - z^1, H_1 , \\dots , H_p = \\emptyset$\n \\WHILE{ $k \\leq \\frac{p^2 r_{\\max}^2}{\\xi^2}$}\n \\STATE For every $\\ell \\in \\{1 , \\dots , p\\}$, perform spectral decomposition: $z^k_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(z_\\ell^k)}_i {c(z_\\ell^k)}_i$ and $v^k_\\ell = \\sum_{i=1}^{r_\\ell} {\\lambda(v_\\ell^k)}_i {c(v^k_\\ell)}_i$\n \\IF {$z^k \\in \\mbox{int } \\mathcal{K}$} \n \\STATE \\stop basic procedure and \\return $z^k$ \\modifyThird{(Output (i))}\n \\ELSIF { $z^k = 0$ or $v^k \\in \\mathcal{K} \\setminus \\{0\\} $}\n \\STATE \\stop basic procedure and \\return $y^k$ or $v^k$ \\modifyThird{(Output (ii))}\n \\ENDIF\n \\IF {$\\langle v^k , e \\rangle > 0$}\n \\FOR {$\\ell \\in \\{1 , \\dots , p \\}$}\n \\STATE $I_\\ell \\leftarrow \\left \\{ i \\mid {\\lambda(v_\\ell^k)}_i > 0 \\right \\}$ and then $H_\\ell \\leftarrow \\left \\{ i \\in I_\\ell | \\left \\langle e , \\mathcal{P}_\\mathcal{K} \\left( -\\frac{1}{{\\lambda(v_\\ell^k)}_i} v \\right) \\right \\rangle \\leq \\xi \\right \\}$\n \\ENDFOR\n \\ELSE \n \\FOR {$\\ell \\in \\{1 , \\dots , p \\}$}\n \\STATE 
$I_\\ell \\leftarrow \\left \\{ i \\mid {\\lambda(v_\\ell^k)}_i < 0 \\right\\}$ and then $H_\\ell \\leftarrow \\left \\{ i \\in I_\\ell | \\left \\langle e , \\mathcal{P}_\\mathcal{K} \\left( -\\frac{1}{{\\lambda(v_\\ell^k)}_i} v \\right) \\right \\rangle \\leq \\xi \\right \\}$\n \\ENDFOR \n \\ENDIF\n \\IF { $ |H_1| + \\dots + |H_p| > 0 $}\n \\STATE For every $\\ell \\in \\{1 , \\dots , p\\}$, let $ C_\\ell $ be $\\{ {c(v_\\ell^k)}_1 , \\dots , {c(v_\\ell^k)}_{r_\\ell} \\}$. \n \\STATE \\stop basic procedure and \\return $H_1 , \\dots , H_p$ and $C_1 , \\dots , C_p$ \\modifyThird{(Output (iii))} \n \\ENDIF\n \\STATE Let $u$ be an idempotent such that $\\langle e , u \\rangle = 1$ and $\\langle z^k , u \\rangle = \\lambda_{\\min}(z^k)$\n \\STATE $y^{k+1} \\leftarrow \\alpha y^k + (1-\\alpha) u$, where $\\alpha = \\frac{ \\langle P_\\mathcal{A} (u) , P_\\mathcal{A} (u) - z^k \\rangle }{\\|z^k-P_\\mathcal{A} (u)\\|^2_J} $\n \\STATE $k \\leftarrow k+1$ , $z^k \\leftarrow P_\\mathcal{A}(y^k)$ and $v^k \\leftarrow y^k - z^k$\n \\ENDWHILE\n \\STATE \\return basic procedure error\n \\end{algorithmic} \n \\end{algorithm}\n \n\\section{Main algorithm of the extended method}\n\\label{sec: main algorithm}\n\\subsection{Outline of the main algorithm}\n\\label{sec:outline of main alg}\n\nIn what follows, for a given accuracy $\\varepsilon > 0$, we call a feasible solution of ${{\\rm P}_{S_{\\infty}}}(\\mathcal{A})$ whose minimum eigenvalue is $\\varepsilon$ or more an {\\em $\\varepsilon$-feasible solution of ${{\\rm P}_{S_{\\infty}}}(\\mathcal{A})$}.\n\n\\modifyFirst{\nThis section describes the two main algorithms, Algorithm \\ref{main algorithm} and Algorithm \\ref{main algorithm 2}. The procedures of the algorithms are almost identical, except for one of the termination criteria (the criterion indicating the non-existence of $\\varepsilon$-feasible solutions). 
Specifically, to set the upper bound for the minimum eigenvalue of any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, Algorithm \\ref{main algorithm} focuses on the product $\\det (\\bar{x})$ of the eigenvalues of an arbitrary feasible solution $\\bar{x}$ of the scaled problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A}^k Q^k)$, while Algorithm \\ref{main algorithm 2} focuses on the sum $\\langle \\bar{x} , e \\rangle$ of the eigenvalues. Algorithm \\ref{main algorithm} and Algorithm \\ref{main algorithm 2} work as follows.\n}\n\nFirst, we calculate the corresponding projection $P_{\\mathcal{A}}$ onto $\\mbox{Ker} \\mathcal{A}$ and generate an initial point as input to the basic procedure. \nNext, we call the basic procedure and determine whether to end the algorithm with an $\\varepsilon$-feasible solution or to perform problem scaling according to the returned result, as follows:\n\n\\begin{enumerate}\n\\item \nIf a feasible solution of ${\\rm P}(\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ is returned from the basic procedure, the feasibility of ${\\rm P}(\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ can be determined, and we stop the main algorithm.\n\\item \nIf the basic procedure returns the sets of indices $H_1 , \\dots , H_p$ and the sets of primitive idempotents $C_1 , \\dots , C_p$ that construct the corresponding Jordan frames, then\n\\begin{description}\n\\item in Algorithm \\ref{main algorithm}:\n\\begin{enumerate}\n\\item\nif $\\mbox{num}_\\ell \\geq r_\\ell \\frac{\\log \\varepsilon}{\\log \\xi}$ holds for some $\\ell \\in \\{ 1, \\dots , p \\}$, we determine that ${{\\rm P}_{S_{\\infty}}}(\\mathcal{A})$ has no $\\varepsilon$-feasible solution according to Proposition \\ref{prop:lambda-min-upper} and stop the main algorithm, \n\\item \nif $\\mbox{num}_\\ell < r_\\ell \\frac{\\log \\varepsilon}{\\log \\xi}$ holds for every $\\ell \\in \\{ 1, \\dots , p \\}$, we rescale the problem and call the basic procedure 
again.\n\\end{enumerate}\n\\item \\modifyFirst{in Algorithm \\ref{main algorithm 2}:\n\\begin{enumerate}\n\\item\nif $\\frac{r_\\ell}{\\left( r_\\ell + \\left( \\frac{1}{\\xi} -1 \\right)m_\\ell \\right)} < \\varepsilon$ holds for some $\\ell \\in \\{ 1, \\dots , p \\}$, we determine that ${{\\rm P}_{S_{\\infty}}}(\\mathcal{A})$ has no $\\varepsilon$-feasible solution according to Proposition \\ref{prop:lambda-min-upper-2} and stop the main algorithm, \n\\item \nif $\\frac{r_\\ell}{\\left( r_\\ell + \\left( \\frac{1}{\\xi} -1 \\right)m_\\ell \\right)} \\geq \\varepsilon$ holds for every $\\ell \\in \\{ 1, \\dots , p \\}$, we rescale the problem and call the basic procedure again.\n\\end{enumerate}\n}\n\\end{description}\n\\end{enumerate}\n\nNote that our main algorithm is similar to Louren\\c{c}o et al.'s method in the sense that it keeps information about the possible minimum eigenvalue of any feasible solution of the problem. \nIn contrast, Pena and Soheili's method \\cite{Pena2017} does not keep such information.\n\n\\modifyThird{\nWe should also mention that step 24 in Algorithm \\ref{main algorithm} and Algorithm \\ref{main algorithm 2} is theoretically unreachable; we have added this step to account for the influence of numerical error in practice.\n}\n\n\\modifyFirst{\nTable \\ref{compare_max_itr_MA} lists upper bounds on the numbers of iterations required by Algorithms \\ref{main algorithm} and \\ref{main algorithm 2}; we will give their proofs in section \\ref{sec: finite termination of MA}. As shown in the table, Algorithm \\ref{main algorithm} is a polynomial-time algorithm, whereas Algorithm \\ref{main algorithm 2} is not guaranteed to be one. 
On the other hand, the results of the numerical experiments in section \\ref{sec: numerical results and observations} show that Algorithm \\ref{main algorithm 2} is superior to Algorithm \\ref{main algorithm} at detecting $\\varepsilon$-feasibility for the generated instances.\n}\n\n\\begin{table}[H]\n\\caption{\\modifyFirst{Upper bounds on the number of iterations of the main algorithms (cf. section \\ref{sec: finite termination of MA}) }}\n\\begin{center}\n\\label{compare_max_itr_MA}\n\\modifyFirst{\n\\begin{tabular}{lc} \\toprule\nMain Algorithm\t\t\t\t&Upper bound on \\# of iterations\t\\\\ \\midrule\nAlgorithm \\ref{main algorithm}\t\t&$-\\frac{r}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right) - p + 1$\t\\\\\nAlgorithm \\ref{main algorithm 2}\t&$\\frac{\\xi}{1-\\xi} \\left( \\frac{1}{\\varepsilon} -1 \\right) r - p + 1$\t\\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n \\begin{algorithm}[H]\n \\caption{Main algorithm}\n \\label{main algorithm}\n \\begin{algorithmic}[1]\n \\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n \\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n \\renewcommand{\\stop}{\\textbf{stop }}\n \\renewcommand{\\return}{\\textbf{return }}\n\n \\STATE \\algorithmicrequire $\\mathcal{A}$, $\\mathcal{K}$, $\\varepsilon$ and a constant $\\xi$ such that $0 < \\xi < 1$\n \\STATE \\algorithmicensure a solution to ${\\rm P}(\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ or a certificate that there is no $\\varepsilon$-feasible solution.\n \\STATE $k \\leftarrow 1$ , $\\mathcal{A}^1 \\leftarrow \\mathcal{A}$ , $\\mbox{num}_\\ell \\leftarrow 0 $, $\\bar{Q_\\ell} \\leftarrow I_\\ell $ for all $\\ell \\in \\{1,\\dots,p\\}$\n \\STATE Compute $P_{\\mathcal{A}}$ and call the basic procedure with $P_\\mathcal{A}$, $\\frac{1}{r}e$, $\\xi$\n \\IF {basic procedure returns $z$ }\n \\STATE \\stop main algorithm and \\return $z$ ($z$ is a feasible solution of ${\\rm P}(\\mathcal{A})$)\n \\ELSIF { basic procedure returns $y$ 
or $v$ }\n \\STATE \\stop main algorithm and \\return $y$ or $v$ ($y$ or $v$ is a feasible solution of ${\\rm D}(\\mathcal{A})$)\n \\ELSIF {basic procedure returns $H^k_1 , \\dots , H^k_p$ and $C^k_1 , \\dots , C^k_p$}\n \\FOR {$\\ell \\in \\{1,\\dots,p\\}$}\n \\IF{$|H^k_\\ell| > 0$}\n \\STATE $g_\\ell \\leftarrow \\sqrt{\\xi} \\sum_{h \\in H^k_\\ell} c^k(v_\\ell)_h + \\sum_{h \\notin H_\\ell^k} c^k(v_\\ell)_h$ \n \\STATE $Q_\\ell \\leftarrow Q_{g_\\ell}$\n \\STATE $\\mbox{num}_\\ell \\leftarrow |H^k_\\ell| + \\mbox{num}_\\ell $\n \\IF { $ \\mbox{num}_\\ell \\geq r_\\ell \\frac{\\log \\varepsilon}{\\log \\xi}$ }\n \\STATE { \\stop main algorithm. There is no $\\varepsilon$-feasible solution.}\n \\ENDIF\n \\STATE $\\bar{Q_\\ell} \\leftarrow Q_{g_\\ell^{-1}} \\bar{Q_\\ell} $\n \\ELSE\n \\STATE $ Q_\\ell \\leftarrow I_\\ell$\n \\ENDIF\n \\ENDFOR\n \\ELSE\n \\STATE \\return basic procedure error\n \\ENDIF\n \\STATE Let $Q^k = (Q_1 , \\dots , Q_p)$ \n \\STATE \\modifyThird{$\\mathcal{A}^{k+1} \\leftarrow \\mathcal{A}^k Q^k$ , $k \\leftarrow k+1$. 
Go back to line 4.}\n \\end{algorithmic} \n \\end{algorithm}\n\n\n \\begin{algorithm}[H]\n \\caption{Main algorithm using another criterion for $\\varepsilon$-feasibility}\n \\label{main algorithm 2}\n \\begin{algorithmic}[1]\n \\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n \\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n \\renewcommand{\\stop}{\\textbf{stop }}\n \\renewcommand{\\return}{\\textbf{return }}\n\n \\STATE \\algorithmicrequire $\\mathcal{A}$, $\\mathcal{K}$, $\\varepsilon$ and a constant $\\xi$ such that $0 < \\xi < 1$\n \\STATE \\algorithmicensure a solution to ${\\rm P}(\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ or a certificate that there is no $\\varepsilon$-feasible solution.\n \\STATE $k \\leftarrow 1$ , $\\mathcal{A}^1 \\leftarrow \\mathcal{A}$ , $m_\\ell \\leftarrow 0 $ , $\\bar{Q_\\ell} \\leftarrow I_\\ell $ for all $\\ell \\in \\{1,\\dots,p\\}$\n \\STATE Compute $P_{\\mathcal{A}}$ and call the basic procedure with $P_\\mathcal{A}$, $\\frac{1}{r}e$, $\\xi$\n \\IF { basic procedure returns $z$ }\n \\STATE \\stop main algorithm and \\return $z$ ($z$ is a feasible solution of ${\\rm P}(\\mathcal{A})$)\n \\ELSIF { basic procedure returns $y$ or $v$ }\n \\STATE \\stop main algorithm and \\return $y$ or $v$ ($y$ or $v$ is a feasible solution of ${\\rm D}(\\mathcal{A})$)\n \\ELSIF {basic procedure returns $H^k_1 , \\dots , H^k_p$ and $C^k_1 , \\dots , C^k_p$}\n \\FOR {$\\ell \\in \\{1,\\dots,p\\}$}\n \\IF{$|H^k_\\ell| > 0$}\n \\STATE $g_\\ell \\leftarrow \\sqrt{\\xi} \\sum_{h \\in H^k_\\ell} c^k(v_\\ell)_h + \\sum_{h \\notin H_\\ell^k} c^k(v_\\ell)_h$ \n \\STATE $Q_\\ell \\leftarrow Q_{g_\\ell}$\n \\STATE $m_\\ell \\leftarrow \\left \\langle \\bar{Q_\\ell} \\left( \\sum_{h \\in H^k_\\ell} c^k(v_\\ell)_h \\right) , e_\\ell \\right \\rangle + m_\\ell$\n \\IF {$ \\frac{r_\\ell}{\\left( r_\\ell + \\left( \\frac{1}{\\xi} - 1 \\right) m_\\ell \\right)} \\leq \\varepsilon$}\n \\STATE { \\stop main algorithm. 
There is no $\\varepsilon$-feasible solution.}\n \\ENDIF\n \\STATE $\\bar{Q_\\ell} \\leftarrow \\bar{Q_\\ell} Q_{g_\\ell^{-1}}$\n \\ELSE\n \\STATE $ Q_\\ell \\leftarrow I_\\ell$\n \\ENDIF\n \\ENDFOR\n \\ELSE\n \\STATE \\return basic procedure error\n \\ENDIF\n \\STATE Let $Q^k = (Q_1 , \\dots , Q_p)$\n \\STATE \\modifyThird{$\\mathcal{A}^{k+1} \\leftarrow \\mathcal{A}^k Q^k$ , $k \\leftarrow k+1$. Go back to line 4.}\n \\end{algorithmic} \n \\end{algorithm}\n \n\\subsection{Finite termination of the main algorithm}\n\\label{sec: finite termination of MA}\n\nHere, we discuss how many iterations are required until we can determine that the minimum eigenvalue $\\lambda_{\\min} (x)$ is less than $\\varepsilon$ for any $x \\in F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$.\n\n\\modifyThird{\nBefore going into the proof, we explain the difference between Algorithm \\ref{main algorithm} and Algorithm \\ref{main algorithm 2} in more detail than in section \\ref{sec:outline of main alg}.\nThe difference between the two algorithms is the processing after the basic procedure returns $H_1, \\dots , H_p$ and $C_1, \\dots , C_p$.\n}\n\n\\modifyThird{\nAt each iteration, Algorithm \\ref{main algorithm} accumulates the number of cuts $|H_\\ell^k|$ obtained in the $\\ell$-th block and stores the value in ${\\rm num}_\\ell$. Using ${\\rm num}_\\ell$, we can compute an upper bound for $\\lambda_{\\min} (x)$ (Proposition \\ref{prop:lambda-min-upper}). On line 18, $\\bar{Q}_\\ell$ is updated to $\\bar{Q}_\\ell \\leftarrow Q_{g_\\ell^{-1}} \\bar{Q}_\\ell $, where $\\bar{Q}_\\ell$ plays the role of an operator that gives the relation $\\bar{x}_\\ell = \\bar{Q}_\\ell (x_\\ell)$ for the solution $x$ of the original problem and the solution $\\bar{x}$ of the scaled problem. 
For example, if $|H^1_\\ell| > 0$ for $k=1$ (suppose that the cut was obtained in the $\\ell$-th block), then the proposed method rescales the problem to ${\\rm P}_{S_{\\infty}}(\\mathcal{A}^1 Q^1)$, so that $\\bar{x}_\\ell = Q_{g_\\ell^{-1}}(x_\\ell)$ holds for any feasible solution $x$ of the original problem. If $|H^2_\\ell| > 0$ also holds at $k=2$, then the proposed method scales $\\bar{x}$ again, so that $\\bar{\\bar{x}}_\\ell = Q_{g_\\ell^{-1}}(\\bar{x}_\\ell) = \\bar{Q}_\\ell (x_\\ell)$ holds. Note that $\\bar{Q}_\\ell$ is used only for a concise proof of Proposition \\ref{prop:lambda-min-upper}, so it is not essential.\n}\n\n\\modifyThird{\nIn Algorithm \\ref{main algorithm 2}, when a cut is obtained in the $\\ell$-th block, it computes the value of $\\left \\langle \\bar{Q_\\ell} \\left( \\sum_{h \\in H^k_\\ell} c^k(v_\\ell)_h \\right) , e_\\ell \\right \\rangle$ and stores its cumulative value in $m_\\ell$. In fact, using this $m_\\ell$, we can compute an upper bound for $\\lambda_{\\min} (x)$ (Proposition \\ref{prop:lambda-min-upper-2}). On line 18, $\\bar{Q}_\\ell$ is updated as $\\bar{Q}_\\ell \\leftarrow \\bar{Q}_\\ell Q_{g_\\ell^{-1}}$, and $\\bar{Q}_\\ell$ of Algorithm \\ref{main algorithm 2} plays the role of an operator that gives the relation $\\langle \\bar{x}_\\ell , a_\\ell \\rangle = \\langle x_\\ell , \\bar{Q}_\\ell (a_\\ell) \\rangle$ for the solution $x$ of the original problem, the solution $\\bar{x}$ of the scaled problem, and any $a \\in \\mathbb{E}$. For example, as before, if $|H^1_\\ell| > 0$ for $k=1$, then $\\langle \\bar{x}_\\ell , a_\\ell \\rangle = \\langle Q_{g_\\ell^{-1}}(x_\\ell) , a_\\ell \\rangle = \\langle x_\\ell , Q_{g_\\ell^{-1}}(a_\\ell) \\rangle$ is valid. 
If $|H^2_\\ell| > 0$ also holds at $k=2$, then the proposed method scales $\\bar{x}$ again, so that $\\langle \\bar{\\bar{x}}_\\ell , a_\\ell \\rangle = \\langle \\bar{x}_\\ell , Q_{g_\\ell^{-1}}(a_\\ell) \\rangle = \\langle x_\\ell , \\bar{Q}_\\ell (a_\\ell) \\rangle $ holds.\n}\n\n\nNow, let us derive an upper bound for the minimum eigenvalue $\\lambda_{\\min}(x_\\ell)$ of each $\\ell$-th block of $x$ obtained after the $k$-th iteration of Algorithm \\ref{main algorithm}.\nProposition \\ref{prop:iteration-num-ma} gives an upper bound for the number of iterations of Algorithm \\ref{main algorithm}.\n\n\\begin{proposition}\n\\label{prop:lambda-min-upper}\nAfter $k$ iterations of Algorithm \\ref{main algorithm}, for any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ and $ \\ell \\in \\{1 , \\dots , p\\}$, the $\\ell$-th block element $x_\\ell$ of $x$ satisfies\n\\begin{equation}\nr_\\ell \\log {\\left( \\lambda_{\\min}(x_\\ell) \\right)} \\leq \\mbox{\\rm num}_\\ell \\log {\\xi} . \\label{prop:10-0}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n\\modifyThird{\nAt the end of the $k$-th iteration, any feasible solution $\\bar{x}$ of the scaled problem ${\\rm P}_{S_{\\infty}}(\\mathcal{A}^{k+1}) = {\\rm P}_{S_{\\infty}}(\\mathcal{A}^kQ^k)$ obviously satisfies\n\\begin{equation}\n\\det \\bar{x}_\\ell \\leq \\det e_\\ell \\ \\ (\\ell =1,2,\\ldots,p). \\label{prop:10-1}\n\\end{equation}\nNote that $\\det \\bar{x}_\\ell$ can be expressed in terms of $\\det x_\\ell$. 
For example, if $|H_\\ell^1|>0$ when $k=1$, then using Proposition \\ref{q-det-relation}, for any feasible solution $\\bar{x}$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A}^{2})$, we find that\n\\begin{equation}\n\\det \\bar{x}_\\ell = \\det Q_{g_\\ell^{-1}} (x_\\ell) = \\det(g_\\ell^{-1})^2 \\det x_\\ell = {\\left( \\frac{1}{\\sqrt{\\xi}} \\right)}^{2|H_\\ell^1|} \\det x_\\ell = {\\left( \\frac{1}{\\xi} \\right)}^{|H_\\ell^1|} \\det x_\\ell .\\notag \n\\end{equation}\nThis means that $\\det \\bar{x}_\\ell$ can be determined from $\\det x_\\ell$ and the number of cuts obtained so far in the $\\ell$-th block. In Algorithm \\ref{main algorithm}, the value of $\\mbox{num}_\\ell$ is updated only when $|H_\\ell^k|>0$. Since $\\bar{x}$ satisfies $\\bar{x}_\\ell = \\bar{Q}_\\ell (x_\\ell) \\ (\\ell=1,2,\\ldots,p)$ for each feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$, we can see that \n\\begin{equation}\n\\det \\bar{x}_\\ell = \\det \\bar{Q_\\ell} (x_\\ell) = {\\left( \\frac{1}{\\xi} \\right)}^{|H_\\ell^k|} \\times {\\left( \\frac{1}{\\xi} \\right)}^{|H_\\ell^{k-1}|} \\dots \\times {\\left( \\frac{1}{\\xi} \\right)}^{|H_\\ell^1|} \\times \\det x_\\ell = {\\left( \\frac{1}{\\xi} \\right)}^{\\mbox{num}_\\ell} \\det x_\\ell . \\notag\n\\end{equation}\nTherefore, (\\ref{prop:10-1}) implies \n\\begin{equation}\n\\det x_\\ell \\leq \\xi^{\\mbox{num}_\\ell} \\det e_\\ell = \\xi^{\\mbox{num}_\\ell}\\notag\n\\end{equation}\nand the fact ${\\left( \\lambda_{\\min}(x_\\ell) \\right)}^{r_\\ell} \\leq \\det x_\\ell$ implies ${\\left( \\lambda_{\\min}(x_\\ell) \\right)}^{r_\\ell} \\leq {\\xi}^{\\mbox{num}_\\ell}$.\nBy taking the logarithm of both sides of this inequality, we obtain (\\ref{prop:10-0}). 
\n}\n\\end{proof}\n\n\\begin{proposition}\n\\label{prop:iteration-num-ma}\nAlgorithm \\ref{main algorithm} terminates after no more than\n\\begin{equation}\n-\\frac{r}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right) - p + 1 \\notag\n\\end{equation}\niterations.\n\\end{proposition}\n\n\\begin{proof}\nLet us call iteration $k$ of Algorithm \\ref{main algorithm} {\\em good} if $|H_\\ell^k| > 0$ for some $\\ell \\in \\{1,2, \\dots , p\\}$ at that iteration. Suppose that at least $-\\frac{r_\\ell}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right)$ {\\em good} iterations are observed for a cone $\\mathcal{K}_\\ell$. Then, by substituting $-\\frac{r_\\ell}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right)$ into $\\mbox{num}_\\ell$ of inequality (\\ref{prop:10-0}) in Proposition \\ref{prop:lambda-min-upper}, we have \n\\begin{equation}\n\\log {\\left( \\lambda_{\\min}(x_\\ell) \\right)} \\leq - \\log \\left( \\frac{1}{\\varepsilon}\\right) = \\log \\varepsilon \\notag\n\\end{equation}\nand hence, $\\lambda_{\\min}(x_\\ell) \\leq \\varepsilon$.\nThis implies that Algorithm \\ref{main algorithm} terminates after no more than \n\\begin{equation}\n\\sum_{\\ell=1}^p \\left( -\\frac{r_\\ell}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right) - 1 \\right) + 1 = -\\frac{r}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right) - p + 1 \\notag\n\\end{equation}\niterations.\n\\end{proof}\n\n\\modifyFirst{Next, let us derive an upper bound for the number of iterations of Algorithm \\ref{main algorithm 2}.} \n\nProposition \\ref{prop:lambda-min-upper} guarantees that $\\varepsilon$-feasibility of the problem ${\\rm P}(\\mathcal{A})$ can be detected by computing $\\det(\\bar{x})$ of any feasible solution of $ {\\rm P}_{S_{\\infty}}(\\mathcal{A}^kQ^k)$. 
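As a numerical illustration of this determinant-based detection (a pure-Python sketch with our own naming, not part of the paper's experiments): the stopping test $\mbox{num}_\ell \geq r_\ell \frac{\log \varepsilon}{\log \xi}$ of Algorithm \ref{main algorithm} fires exactly when the eigenvalue bound $\lambda_{\min}(x_\ell) \leq \xi^{\mbox{num}_\ell / r_\ell}$ implied by Proposition \ref{prop:lambda-min-upper} drops to $\varepsilon$:

```python
import math

def det_bound(xi, num, r):
    """Eigenvalue bound implied by r * log(lambda_min) <= num * log(xi):
    lambda_min(x_l) <= xi ** (num / r)."""
    return xi ** (num / r)

def no_eps_feasible(xi, eps, num, r):
    """Stopping test of the det-based criterion: num >= r * log(eps) / log(xi),
    for 0 < xi < 1 and 0 < eps < 1."""
    return num >= r * math.log(eps) / math.log(xi)

xi, eps, r = 0.5, 1e-3, 4
# The two formulations agree for every cut count `num`.
for num in range(60):
    assert no_eps_feasible(xi, eps, num, r) == (det_bound(xi, num, r) <= eps)
```

With $\xi = 0.5$ and $r_\ell = 4$, for instance, the test first fires at $\mbox{num}_\ell = 40$, since $0.5^{40/4} \approx 9.8 \times 10^{-4} \leq 10^{-3}$.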
The following proposition ensures that we may use the value $\\langle \\bar{x} , e \\rangle$ of any feasible solution of $ {\\rm P}_{S_{\\infty}}(\\mathcal{A}^kQ^k)$ to detect the $\\varepsilon$-feasibility of problem ${\\rm P}(\\mathcal{A})$, instead of $\\det(\\bar{x})$. Although the computational complexity analysis in section \\ref{sec: finite termination of MA} does not apply to it, this new criterion is better able to detect $\\varepsilon$-feasibility in the numerical experiments presented in section \\ref{sec: numerical results and observations}.\n\n\\begin{proposition}\n\\label{prop:lambda-min-upper-2}\nAfter $k$ iterations of Algorithm \\ref{main algorithm 2}, for any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ and $ \\ell \\in \\{1 , \\dots , p\\}$, the $\\ell$-th block element $x_\\ell$ of $x$ satisfies\n\\begin{equation}\n\\lambda_{\\min}(x_\\ell) \\leq \\cfrac{r_\\ell}{\\left( r_\\ell + \\left( \\frac{1}{\\xi} - 1 \\right) m_\\ell \\right)} . \\label{prop:11-0}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nIn Algorithm \\ref{main algorithm 2}, $m_\\ell$ is updated only when $|H_\\ell^k|>0$. \nSuppose that, at the end of the $k$-th iteration of Algorithm \\ref{main algorithm 2}, the last update of $m_\\ell$ had been at the $k'(\\leq k)$-th iteration.\nThen, the stopping criterion of the basic procedure guarantees that at the beginning of the $k'$-th iteration, $\\bar{Q_\\ell}$\nsatisfies \n\\modifyFirst{\n\\begin{equation}\n\\langle x_\\ell , \\bar{Q_\\ell}(c^{k'}(v_\\ell)_i) \\rangle \\leq \n\\begin{cases}\n\\xi \t&i \\in H_\\ell^{k'} \\\\\n1\t&i \\notin H_\\ell^{k'}\n\\end{cases}\n.\n\\label{prop:11-0.5}\n\\end{equation}\n}\n\nThis gives a lower bound for $|H_\\ell^{k'}|$:\n\\begin{equation}\n\\frac{1}{\\xi} \\left \\langle x_\\ell , \\bar{Q_\\ell} \\left( \\sum_{i \\in H_\\ell^{k'} }c^{k'}(v_\\ell)_i \\right) \\right \\rangle \\leq |H_\\ell^{k'}|. 
\\label{prop:11-0.75}\n\\end{equation}\n\nUsing the fact that $x_\\ell - \\lambda_{\\min} (x_\\ell) e_\\ell \\in \\mathcal{K}_\\ell$, we obtain\n\\begin{align*}\n\\lambda_{\\min} (x_\\ell) \\langle e_\\ell , \\bar{Q_\\ell} (e_\\ell) \\rangle &\\leq \\langle x_\\ell , \\bar{Q_\\ell} (e_\\ell) \\rangle \\\\\n&\\modifyFirst{= \\left \\langle x_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\notin H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle + \\left \\langle x_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle} \\\\\n&\\leq r_\\ell - |H_\\ell^{k'}| + \\left \\langle x_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle \\ \\ \\ \\modifyFirst{\\mbox{(by (\\ref{prop:11-0.5}))}} \\\\\n&\\leq r_\\ell - \\left( \\cfrac{1}{\\xi} - 1 \\right) \\left \\langle x_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle \\ \\ \\ \\mbox{(by (\\ref{prop:11-0.75}))} \\\\\n&\\leq r_\\ell - \\left( \\cfrac{1}{\\xi} - 1 \\right) \\lambda_{\\min} (x_\\ell) \\left \\langle e_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle,\n\\end{align*}\nand hence, \n\\begin{equation}\n\\lambda_{\\min} (x_\\ell) \\left( \\left \\langle e_\\ell , \\bar{Q_\\ell} (e_\\ell) \\right \\rangle + \\left( \\cfrac{1}{\\xi} - 1 \\right) \\left \\langle e_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle \\right) \\leq r_\\ell. \\label{prop:11-1}\n\\end{equation}\n\nNext, suppose that, at the beginning of the $k'$-th iteration of Algorithm \\ref{main algorithm 2}, the last update of $m_\\ell$ had been performed at the $i(<k')$-th iteration. Applying the above argument to each of the preceding updates and expanding $\\bar{Q_\\ell}(e_\\ell)$ recursively, we see that\n\\begin{equation}\n\\left \\langle e_\\ell , \\bar{Q_\\ell} (e_\\ell) \\right \\rangle = r_\\ell + \\left( \\frac{1}{\\xi} - 1 \\right) m'_\\ell , \\notag\n\\end{equation}\nwhere $m'_\\ell$ denotes the value of $m_\\ell$ just before the update at the $k'$-th iteration. Since line 14 of Algorithm \\ref{main algorithm 2} updates $m_\\ell$ as\n\\begin{equation}\nm_\\ell = m'_\\ell + \\left \\langle e_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle , \\label{update-m}\n\\end{equation}\ncombining these equalities with (\\ref{prop:11-1}) yields\n\\begin{equation}\n\\lambda_{\\min} (x_\\ell) \\left( r_\\ell + \\left( \\frac{1}{\\xi} - 1 \\right) m_\\ell \\right) \\leq r_\\ell , \\notag\n\\end{equation}\nwhich is exactly (\\ref{prop:11-0}).\n\\end{proof}\n\n\\modifyFirst{\n\\begin{proposition}\n\\label{prop:iteration-num-ma-2}\nAlgorithm \\ref{main algorithm 2} terminates after no more than\n\\begin{equation}\n\\frac{\\xi}{1-\\xi} \\left( \\frac{1}{\\varepsilon} - 1 \\right) r - p + 1 \\notag\n\\end{equation}\niterations.\n\\end{proposition}\n\\begin{proof}\nWhen $|H_\\ell^k| > 0$ for $ \\ell \\in \\{1 , \\dots , p\\}$ at the $k$-th iteration of Algorithm \\ref{main algorithm 2}, we say that the iteration is ``good'' for the $\\ell$-th block. 
From Proposition \\ref{prop:lambda-min-upper-2}, since the (meaningful) upper bound of the minimum eigenvalue $\\lambda_{\\min} (x_\\ell)$ of $x_\\ell$ of the $\\ell$-th block of any feasible solution $x$ of ${\\rm P}_{S_{\\infty}}(\\mathcal{A})$ depends on the value of $m_\\ell$, we first calculate a lower bound for the increment of $m_\\ell$ per good iteration in the $\\ell$-th block. \\\\\nSimilar to the proof of Proposition \\ref{prop:lambda-min-upper-2}, suppose that the $k'$-th iteration is a good iteration for the $\\ell$-th block. As shown in equation (\\ref{update-m}), the value of $m_\\ell$ is increased at this time by $\\left \\langle e_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle$ using $\\bar{Q}_\\ell$ at the beginning of the $k'$-th iteration. Let us express $Q_{g_\\ell^{-1}}$ using $g_\\ell$ obtained at the $k$-th iteration as $Q^k_{g_\\ell^{-1}}$, i.e., $\\bar{Q_\\ell} = Q^1_{g_\\ell^{-1}} Q^2_{g_\\ell^{-1}} \\dots Q^{k'-1}_{g_\\ell^{-1}}$. Then, the increment of $m_\\ell$ at the $k'$-th iteration is as follows:\n\\begin{equation}\n\\label{eq-m-value}\n\\left \\langle e_\\ell , \\bar{Q_\\ell} \\left( \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right) \\right \\rangle = \\left \\langle Q^{k'-1}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) , \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right \\rangle.\n\\end{equation}\nNote that $Q^{k'-1}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\in \\mathcal{K}_\\ell$ holds, as we will prove below using induction. 
\\\\\nFirst, if the first iteration is a good one for the $\\ell$-th block, then $Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) = \\frac{1}{\\xi} \\sum_{i \\in H_\\ell^1} c^1(v_\\ell)_i + \\sum_{j \\notin H_\\ell^1} c^1(v_\\ell)_j = e_\\ell + \\left( \\frac{1}{\\xi} -1 \\right)\\sum_{i \\in H_\\ell^1} c^1(v_\\ell)_i$, and if it is not a good iteration, then $Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) = e_\\ell$.\nThus, $Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\in \\mathcal{K}_\\ell$ holds.\nNext, when $Q^{i}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\in \\mathcal{K}_\\ell$ holds, by Proposition \\ref{ptop:quadratic}, $Q^{i+1}_{g_\\ell^{-1}} \\left( Q^{i}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\right) \\in \\mathcal{K}_\\ell$ holds.\nFurthermore, the same calculation as in the first iteration yields $Q^{i+1}_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\in \\mathcal{K}_\\ell$, and we see that \n\\begin{align*}\nQ^{i+1}_{g_\\ell^{-1}} \\left( Q^{i}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\right) \\in \\mathcal{K}_\\ell\n&\\Leftrightarrow Q^{i+1}_{g_\\ell^{-1}} Q^{i}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - Q^{i+1}_{g_\\ell^{-1}} \\left( e_\\ell \\right) \\in \\mathcal{K}_\\ell \\\\\n&\\Rightarrow Q^{i+1}_{g_\\ell^{-1}} Q^{i}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) - e_\\ell \\in \\mathcal{K}_\\ell.\n\\end{align*}\nThus, from (\\ref{eq-m-value}), we obtain a lower bound for the increment of $m_\\ell$ as\n\\begin{align*}\n\\left \\langle Q^{k'-1}_{g_\\ell^{-1}} \\dots Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) , \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right \\rangle\n&\\geq \\left \\langle e_\\ell, \\sum_{j \\in H_\\ell^{k'}} {c^{k'}(v_\\ell)}_j \\right \\rangle \\\\\n&= |H^{k'}_\\ell| \\geq 1,\n\\end{align*}\nwhich means that the value of $m_\\ell$ increases by at least $1$ per good 
iteration.\nTherefore, if the number of good iterations for the $\\ell$-th block is $\\left( \\frac{r_\\ell}{\\varepsilon} - r_\\ell \\right) \\left( \\frac{\\xi}{1-\\xi} \\right)$ or more, then from Proposition \\ref{prop:lambda-min-upper-2}, we can conclude that $\\lambda_{\\min}(x_\\ell) \\leq \\varepsilon$ holds; i.e., we obtain an upper bound for the number of iterations of Algorithm \\ref{main algorithm 2} as follows:\n\\begin{equation}\n\\sum_{\\ell=1}^p \\left( \\left( \\frac{r_\\ell}{\\varepsilon} - r_\\ell \\right) \\left( \\frac{\\xi}{1-\\xi} \\right) -1 \\right) + 1 = \\frac{\\xi}{1-\\xi} \\left( \\frac{1}{\\varepsilon} - 1\\right) r - p + 1. \\notag\n\\end{equation}\n\\end{proof}\n}\n\\modifyThird{\nIt should be noted that the number of iterations required by Algorithm \\ref{main algorithm 2} to detect the non-existence of $\\varepsilon$-feasible solutions is actually likely to be much smaller than the value given in Proposition \\ref{prop:iteration-num-ma-2}.\nThis is because Proposition \\ref{prop:iteration-num-ma-2} uses the conservative lower bound of $1$ for the increment of $m_\\ell$ per good iteration.\nThe increment of $m_\\ell$ can be calculated using $\\bar{Q}_\\ell$, but it is difficult to calculate the exact increment of $m_\\ell$ because $\\bar{Q}_\\ell$ depends on the results returned by the preceding calls of the basic procedure. \\\\\nSuppose that both the first and second iterations are good for the $\\ell$-th block. 
Then, the increment of $m_\\ell$ at the second iteration is \n\\begin{equation}\n\\notag\n\\left \\langle Q^1_{g_\\ell^{-1}} \\left( e_\\ell \\right) , \\sum_{j \\in H_\\ell^{2}} {c^{2}(v_\\ell)}_j \\right \\rangle = \\left \\langle e_\\ell + \\left( \\frac{1}{\\xi} -1 \\right)\\sum_{i \\in H_\\ell^1} c^1(v_\\ell)_i , \\sum_{j \\in H_\\ell^{2}} {c^{2}(v_\\ell)}_j \\right \\rangle,\n\\end{equation}\nbut it is difficult to find a lower bound greater than 0 for $\\left \\langle \\sum_{i \\in H_\\ell^1} c^1(v_\\ell)_i , \\sum_{j \\in H_\\ell^{2}} {c^{2}(v_\\ell)}_j \\right \\rangle$.\n}\n\n\n\\section{Computational costs of the algorithms}\n\\label{sec: compare}\n\nThis section compares the computational costs of Algorithm \\ref{main algorithm}, Louren\\c{c}o et al.'s method~\\cite{Bruno2019} and Pena and Soheili's method~\\cite{Pena2017}. Algorithm \\ref{main algorithm 2} is omitted from the comparison because it is not guaranteed to be a polynomial-time algorithm.\n\nSection \\ref{sec: complexity of MA vs Lourenco} compares the computational costs of Algorithm \\ref{main algorithm} and Louren\\c{c}o et al.'s method, and\nSection \\ref{sec: complexity of MA vs Pena} compares those of Algorithm \\ref{main algorithm} and Pena and Soheili's method under the assumption that $\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\neq \\emptyset$.\n\nBoth the proposed method and the method of Louren\\c{c}o et al. guarantee finite termination of the main algorithm by termination criteria indicating the nonexistence of an $\\varepsilon$-feasible solution, so that it is possible to compare the computational costs of the methods without making any special assumptions. This is because both methods proceed by making cuts to the feasible region using the results obtained from the basic procedure. 
On the other hand, Pena and Soheili's method cannot be simply compared because the upper bound of the number of iterations of their main algorithm includes an unknown value of $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) := \\max_x \\left \\{ {\\rm det}(x) \\mid x \\in \\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} , \\|x\\|_J^2 = r \\right\\}$.\n\nHowever, by making the assumption $\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\neq \\emptyset$ and deriving a lower bound for $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K})$, we make it possible to compare Algorithm \\ref{main algorithm} (and Algorithm \\ref{main algorithm 2}) with Pena and Soheili's method without knowing the specific value of $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K})$.\n\n\\subsection{Comparison of Algorithm \\ref{main algorithm} and Louren\\c{c}o et al.'s method}\n\\label{sec: complexity of MA vs Lourenco}\n\nFirst, let us consider the computational cost of Algorithm \\ref{main algorithm}.\nAt each iteration, the most expensive operation is computing $P_\\mathcal{A}$ on line 4.\nRecall that $d$ is the dimension of the Euclidean space $\\mathbb{E}$ corresponding to $\\mathcal{K}$.\nAs discussed in \\cite{Bruno2019}, by considering $P_\\mathcal{A}$ to be an $m \\times d$ matrix, \nwe find that the computational cost of $P_{\\mathcal{A}}$ is $\\mathcal{O}(m^3+m^2d)$.\nTherefore, by taking the computational cost (\\ref{cost:b-p}) of the basic procedure and Proposition \\ref{prop:iteration-num-ma} into consideration, the cost of Algorithm \\ref{main algorithm} turns out to be\n\\begin{equation}\n\\label{eq: computational cost 1}\n\\mathcal{O} \\left( -\\frac{r}{\\log \\xi} \\log \\left( \\frac{1}{\\varepsilon}\\right) \\left( m^3+m^2d + \\frac{1}{\\xi^2} p^2 r_{\\max}^2 \\left( \\max \\left( C^{\\rm sd} , md \\right) \\right) \\right) \\right)\n\\end{equation}\nwhere $C^{\\rm sd}$ is the computational cost of the spectral 
decomposition of $x \\in \\mathbb{E}$.\n\nNote that, in \\cite{Bruno2019}, the authors showed that the cost of their algorithm is\n\\begin{equation}\n\\label{eq: computational cost 2}\n\\mathcal{O} \\left( \\left( \\frac{r}{\\varphi(\\rho)} \\log \\left( \\frac{1}{\\varepsilon} \\right) - \\sum_{i=1}^p \\frac{r_i \\log(r_i)}{\\varphi(\\rho)}\\right) \\left( m^3 + m^2d + \\rho^2 p^3 r_{\\max}^2 \\left(\\max \\left(C^{\\rm min} , md \\right) \\right) \\right) \\right) \n\\end{equation}\nwhere $C^{\\rm min}$ is the cost of computing the minimum eigenvalue of $x \\in \\mathbb{E}$ with the corresponding idempotent.\n\nWhen the symmetric cone is simple, by setting $\\xi = \\frac{1}{2}$ and $\\rho = 2$, the maximum number of iterations of the basic procedure is bounded by the same value in both algorithms. Accordingly, we will compare the two computational costs (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}) by supposing $\\xi = \\frac{1}{2}$ and $\\rho = 2$ (hence, $- \\log \\xi \\simeq 0.69$ and $\\varphi (\\rho) \\simeq 0.09$). 
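As a rough guide to these constants, note that, ignoring the negative term $- \\sum_{i=1}^p \\frac{r_i \\log(r_i)}{\\varphi(\\rho)}$ in (\\ref{eq: computational cost 2}), the leading iteration factors of the two bounds are related by\n\\begin{equation}\n\\notag\n\\frac{r}{0.69} \\log \\left( \\frac{1}{\\varepsilon} \\right) = \\frac{0.09}{0.69} \\cdot \\frac{r}{0.09} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\simeq 0.13 \\cdot \\frac{r}{0.09} \\log \\left( \\frac{1}{\\varepsilon} \\right);\n\\end{equation}\nthat is, before the per-iteration costs are taken into account, the iteration factor of (\\ref{eq: computational cost 1}) is smaller by a factor of roughly $7.7$. 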
As we can see below, the cost (\\ref{eq: computational cost 1}) of our method is smaller than (\\ref{eq: computational cost 2}) in the cases of linear programming and second-order cone problems and is equivalent to (\\ref{eq: computational cost 2}) in the case of semidefinite problems.\nFirst, let us consider the case where $\\mathcal{K}$ is the $n$-dimensional nonnegative orthant $\\mathbb{R}^n_+$.\nHere, we see that $r=p=d=n$, $r_1 = \\cdots = r_p = r_{\\max}=1$, and $\\max \\left( C^{\\rm sd} , md \\right) = \\max \\left( C^{\\rm min} , md \\right) = md$ hold.\nBy substituting these values, the bounds (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}) turn out to be\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{n}{0.69} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\left( m^3 + m^2n + 4 mn^3 \\right) \\right) \\notag\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{n}{0.09} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\left( m^3 + m^2n + 4 mn^4 \\right) \\right). 
\\notag\n\\end{equation}\nThis implies that for the linear programming case, our method (which is equivalent to Roos's original method \\cite{Roos2018}) is superior to Louren\\c{c}o et al.'s method \\cite{Bruno2019} in terms of bounds (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}).\n\nNext, let us consider the case where $\\mathcal{K}$ is composed of $p$ simple second-order cones $\\mathbb{L}^{n_i} \\ (i= 1, \\ldots, p)$, i.e., $\\mathcal{K} = \\mathbb{L}^{n_1} \\times \\mathbb{L}^{n_2} \\times \\cdots \\times \\mathbb{L}^{n_p}$.\nIn this case, we see that $d = \\sum_{i=1}^p n_i$, $r_1 = \\cdots = r_p = r_{\\max}=2$, and $\\max \\left( C^{\\rm sd} , md \\right) = \\max \\left( C^{\\rm min} , md \\right) = md$ hold.\nBy substituting these values, the bounds (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}) turn out to be\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{2p}{0.69} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\left( m^3 + m^2d + 16p^2md \\right) \\right) \\notag\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{2p}{0.09} \\left( \\log \\left( \\frac{1}{\\varepsilon} \\right) - \\log 2 \\right) \\left( m^3 + m^2d + 16 p^3 md \\right) \\right). \\notag\n\\end{equation}\nNote that $\\varepsilon$ is expected to be very small ($10^{-6}$ or even $10^{-12}$ in practice) and $\\frac{1}{0.69} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\leq \\frac{1}{0.09} \\left( \\log \\left( \\frac{1}{\\varepsilon} \\right) - \\log 2 \\right)$ if $\\varepsilon \\leq 0.451$.\nThus, even in this case, we may conclude that our method is superior to Louren\\c{c}o et al.'s method in terms of the bounds (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}).\n\nFinally, let us consider the case where $\\mathcal{K}$ is a simple $n \\times n$ positive semidefinite cone. 
We see that $p=1$, $r = n$, and $d = \\frac{n(n+1)}{2}$ hold, and upon substituting these values, the bounds (\\ref{eq: computational cost 1}) and (\\ref{eq: computational cost 2}) turn out to be\n\\begin{equation}\n\\mathcal{O} \\left( \\frac{n}{0.69} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\left(m^3 + m^2n^2 + 4n^2 \\max \\left( C^{\\rm sd} , mn^2 \\right) \\right) \\right) \\notag\n\\end{equation}\nand \n\\begin{equation}\n\\mathcal{O} \\left( \\frac{n}{0.09} \\log \\left( \\frac{1}{\\varepsilon} \\right) \\left(m^3 + m^2n^2 + 4n^2 \\max ( C^{\\rm min} , mn^2) \\right) \\right). \\notag\n\\end{equation}\nFrom the discussion in Section \\ref{sec: CSD and Cmin}, we can assume $\\mathcal{O}\\left( C^{\\rm sd} \\right) = \\mathcal{O}\\left( C^{\\rm min} \\right)$, and the computational cost bounds of the two methods are equivalent.\n\n\\subsection{Comparison of Algorithm \\ref{main algorithm} and Pena and Soheili's method}\n\\label{sec: complexity of MA vs Pena}\n\n\\modifySecond{\nIn this section, we assume that $\\mathcal{K}$ is simple since Pena and Soheili's method does not support the direct product form. We also assume that $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$, because Pena and Soheili's method does not terminate if $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} = \\emptyset$ and $\\mbox{{\\rm range}} \\mathcal{A}^* \\cap \\mbox{{\\rm int}} \\mathcal{K} = \\emptyset$. Furthermore, for the sake of simplicity, we assume that the main algorithm of Pena and Soheili's method applies only to $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K}$. 
(Their original method applies the main algorithm to $\\mbox{{\\rm range}} \\mathcal{A}^* \\cap \\mbox{{\\rm int}} \\mathcal{K}$ as well.)\n}\n\n\\modifySecond{\nFirst, we will briefly explain the idea of deriving an upper bound for the number of iterations required to find $x \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K}$ in Pena and Soheili's method. Pena and Soheili derive it by focusing on the indicator $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) := \\max_x \\left \\{ {\\rm det}(x) \\mid x \\in \\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} , \\|x\\|_J^2 = r \\right\\}$. If $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$, then $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) \\in (0,1]$ holds, and if $e \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K}$, then $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) =1$ holds. If $e \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K}$, then the basic procedure terminates immediately and returns $\\frac{1}{r}e$ as a feasible solution. 
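Note that the upper bound $\\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) \\leq 1$ follows from the arithmetic-geometric mean inequality applied to the eigenvalues $\\lambda_1, \\dots, \\lambda_r$ of any $x \\in \\mbox{int} \\mathcal{K}$ with $\\|x\\|_J^2 = r$:\n\\begin{equation}\n\\notag\n{\\rm det}(x) = \\prod_{i=1}^r \\lambda_i = \\left( \\prod_{i=1}^r \\lambda_i^2 \\right)^{\\frac{1}{2}} \\leq \\left( \\frac{1}{r} \\sum_{i=1}^r \\lambda_i^2 \\right)^{\\frac{r}{2}} = \\left( \\frac{\\|x\\|_J^2}{r} \\right)^{\\frac{r}{2}} = 1,\n\\end{equation}\nwith equality if and only if $\\lambda_1 = \\cdots = \\lambda_r = 1$, i.e., $x = e$. 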
Then, they prove that $\\delta (Q_v \\left( \\mbox{Ker} \\mathcal{A} \\right) \\cap \\mbox{int} \\mathcal{K}) \\geq 1.5 \\cdot \\delta (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K})$ holds if the parameters are appropriately set, and derive an upper bound on the number of scaling steps, i.e., the number of iterations, required to obtain $\\delta (Q_v \\left( \\mbox{Ker} \\mathcal{A} \\right) \\cap \\mbox{int} \\mathcal{K}) = 1$.\n}\n\n\\modifySecond{\nIn the following, we obtain an upper bound for the number of iterations of the proposed method using the index $\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right) := \\max_x \\left \\{ {\\rm det}(x) \\mid x \\in \\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} , \\|x\\|_J^2 = 1 \\right \\}$. Note that $\\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right) = r^{\\frac{r}{2}} \\cdot \\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right)$. In fact, if $x^*$ is the point giving the maximum value of $\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right)$, then the point giving the maximum value of $\\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right)$ is $\\sqrt{r} x^*$. 
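Indeed, since the determinant in a rank-$r$ Euclidean Jordan algebra is homogeneous of degree $r$, we have\n\\begin{equation}\n\\notag\n{\\rm det} \\left( \\sqrt{r} x^* \\right) = \\left( \\sqrt{r} \\right)^r {\\rm det} (x^*) = r^{\\frac{r}{2}} \\cdot \\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right) \\ \\ \\mbox{and} \\ \\ \\left\\| \\sqrt{r} x^* \\right\\|_J^2 = r \\left\\| x^* \\right\\|_J^2 = r.\n\\end{equation} 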
Also, if $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$, then $\\delta^{{\\rm supposed}} (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) \\in (0,1\/ r^{\\frac{r}{2}} ]$, and if $\\frac{1}{\\sqrt{r}} e \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K}$, then $\\delta^{{\\rm supposed}} (\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}) =1\/ r^{\\frac{r}{2}}$.\n}\n\n\\modifySecond{\nThe outline of this section is as follows: First, we show that a lower bound for $\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right)$ can be derived using the index value $\\delta^{{\\rm supposed}} \\left( Q_{g^{-1}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap \\mbox{int} \\mathcal{K} \\right)$ for the problem after scaling (Proposition \\ref{prop:compare scaling}). Then, using this result, we derive an upper bound for the number of operations required to obtain $\\delta^{{\\rm supposed}} \\left( Q_{g^{-1}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap \\mbox{int} \\mathcal{K} \\right) = 1\/ r^{\\frac{r}{2}}$ (Proposition \\ref{pro:compare iteration}). Finally, we compare the proposed method with Pena and Soheili's method.\n}\n\n\\modifySecond{\nTo prove Proposition \\ref{prop:compare tr} used in the proof of Proposition \\ref{prop:compare scaling}, we use the following propositions from \\cite{JNW1934}.\n}\n\n\\modifySecond{\n\\begin{proposition}[Theorem 3 of~\\cite{JNW1934}]\n\\label{JNW_Th3}\nLet $c \\in \\mathbb{E}$ be an idempotent and $N_\\lambda (c)$ be the set such that $N_\\lambda (c) = \\{ x \\in \\mathbb{E} \\mid c \\circ x = \\lambda x\\}$.\nThen $N_\\lambda(c)$ is a linear manifold, but if $\\lambda \\neq 0, \\frac{1}{2}$, and $1$, then $N_\\lambda (c)$ consists of zero alone. 
\\\\\nEach $x \\in \\mathbb{E}$ can be represented in the form\n\\begin{equation}\n\\notag\n\\begin{array}{lccc}\nx = u + v + w,\t&u \\in N_0(c),\t&v \\in N_{\\frac{1}{2}}(c),\t&w \\in N_1(c),\n\\end{array}\n\\end{equation}\nin one and only one way.\n\\end{proposition}\n}\n\n\\modifySecond{\n\\begin{proposition}[Theorem 11 of~\\cite{JNW1934}]\n\\label{JNW_Th11}\n$c \\in \\mathbb{E}$ is a primitive idempotent if and only if $N_1(c) =\\{ x \\in \\mathbb{E} \\mid c \\circ x = x\\} = \\mathbb{R} c$.\n\\end{proposition}\n}\n\n\\modifySecond{\n\\begin{proposition}\n\\label{prop:compare tr}\nLet $c \\in \\mathbb{E}$ be a primitive idempotent.\nThen, for any $x \\in \\mathbb{E}$, $\\langle x, Q_c(x) \\rangle = \\left( \\langle x,c \\rangle \\right)^2$ holds.\n\\end{proposition}\n\\begin{proof}\nFrom Propositions \\ref{JNW_Th3} and \\ref{JNW_Th11}, for any $x \\in \\mathbb{E}$, there exist a real number $\\lambda \\in \\mathbb{R}$ and elements $u \\in N_0(c)$ and $v \\in N_{\\frac{1}{2}}(c)$ such that $x =u + v + \\lambda c $. \\\\\nFirst, we show that $\\langle x,c \\rangle = \\lambda$. For $v \\in N_\\frac{1}{2} (c)$, we see that $\\langle v,c \\rangle = \\langle v, c \\circ c \\rangle = \\langle v \\circ c, c \\rangle = \\langle c \\circ v, c \\rangle = \\frac{1}{2} \\langle v, c \\rangle$, which implies that $\\langle v,c \\rangle =0$. 
Thus, since $u \\in N_0(c)$ implies $u \\circ c = 0$ and hence $\\langle u,c \\rangle = \\langle u , c \\circ c \\rangle = \\langle u \\circ c , c \\rangle = 0$, $\\langle x,c \\rangle$ is given by\n\\begin{equation}\n\\notag\n\\langle x,c \\rangle = \\langle u + v + \\lambda c , c \\rangle = \\langle u,c \\rangle + \\langle v,c \\rangle + \\lambda \\langle c,c \\rangle = 0 + 0 + \\lambda = \\lambda.\n\\end{equation}\nOn the other hand, by using the facts $x = u + v + \\lambda c$, $c^2 = c \\circ c = c$, $c \\circ u =0$ and $c \\circ v =\\frac{1}{2} v$ repeatedly, we have \n\\begin{align*}\n\\langle x, Q_c(x) \\rangle \n&= \\langle x, 2 c \\circ (c \\circ x) - c^2 \\circ x \\rangle \\\\\n&=\\langle x, 2 c \\circ (c \\circ (u + v + \\lambda c)) - c \\circ (u + v + \\lambda c)\\rangle \\\\\n&= \\langle x, 2 c \\circ (\\frac{1}{2} v + \\lambda c) - ( \\frac{1}{2}v + \\lambda c ) \\rangle \\\\\n&= \\langle x, (\\frac{1}{2} v + 2 \\lambda c) - ( \\frac{1}{2}v + \\lambda c ) \\rangle \\\\\n&= \\langle x, \\lambda c \\rangle = \\lambda^2.\n\\end{align*}\nThus, we have shown that $\\langle x, Q_c(x) \\rangle = \\left( \\langle x,c \\rangle \\right)^2$ holds.\n\\end{proof}\n}\n\n\\modifySecond{\n\\begin{remark}\n\\label{remark:pena}\nIt should be noted that the proof of Proposition 3 in \\cite{Pena2017} is not correct since equation (14) does not necessarily hold.\nThe above Proposition \\ref{prop:compare tr} also gives a correct proof of Proposition 3 in \\cite{Pena2017}. 
See the computation $\\langle y , Q_{g^{-2}} (y) \\rangle$ in the proof of Proposition \\ref{prop:compare scaling}.\n\\end{remark}\n}\n\n\\modifySecond{\n\\begin{proposition}\n\\label{prop:compare scaling}\nSuppose that $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$ and that, for a given nonempty index set $H \\subseteq \\{1, \\dots, r\\}$, Jordan frame $c_1, \\dots, c_r$, and $0 < \\xi < 1$,\n\\begin{equation}\n\\notag\n\\langle c_i , x \\rangle \\leq \\xi \\ ( i \\in H), \\ \\ \\langle c_i , x \\rangle \\leq 1 \\ ( i \\notin H)\n\\end{equation}\nholds for any $x \\in F_{{\\rm P}_{S_{\\infty}}(\\mathcal{A})}$.\nDefine $g \\in \\mbox{\\rm int} \\mathcal{K}$ as\n\\begin{equation}\n\\notag\ng := \\sqrt{\\xi} \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h \\ \\ \\ \\ \\mbox{i.e.,} \\ \\ \\ g^{-1} = \\frac{1}{\\sqrt{\\xi}} \\sum_{h \\in H} c_h + \\sum_{h \\notin H} c_h.\n\\end{equation}\nThen, the following inequality holds:\n\\begin{equation}\n\\notag\n\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right) > \\xi \\cdot \\delta^{{\\rm supposed}} \\left( Q_{g^{-1}} \\left(\\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap {\\rm int} \\mathcal{K} \\right).\n\\end{equation}\n\\end{proposition}\n}\n\n\\begin{proof}\n\\modifySecond{\nFor simplicity of discussion, let $|H|=1$, i.e., $H =\\{i\\}$.\nLet us define the points $x^*$, $y^*$, and $\\bar{x}^*$ as follows:\n\\begin{equation}\n\\begin{array}{l}\nx^*= \\argmax \\left\\{ \\det (x) \\mid x \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K}, \\| x \\|^2_J = 1 \\right\\}, \\\\\ny^* = \\argmax \\left\\{ \\det (y) \\mid y \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K}, \\| Q_{g^{-1}} (y)\\|^2_J = 1 \\right\\}, \\\\\n\\bar{x}^* = \\argmax \\left\\{ \\det (\\bar{x}) \\mid \\bar{x} \\in Q_{g^{-1}} \\left(\\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap {\\rm int} \\mathcal{K}, \\| \\bar{x} \\|^2_J = 1 
\\right\\}.\n\\end{array}\n\\notag\n\\end{equation}\nNote that the feasible region with respect to $y$ is the set of solutions whose norm is $1$ after scaling. \n\\medskip \\\\\nFirst, we show that $\\|y\\|_J^2 < 1$, and then $\\det (x^*) > \\det (y^*)$.\nProposition \\ref{q-det-relation} ensures that $\\|Q_{g^{-1}} (y)\\|_J^2 = \\langle Q_{g^{-1}} (y) , Q_{g^{-1}} (y) \\rangle = \\langle y , Q_{g^{-2}} (y) \\rangle$.\nTo expand $Q_{g^{-2}} (y)$, we expand the following equations by letting $a = \\frac{1}{\\sqrt{\\xi}} - 1 $:\n\\begin{align*}\ng^{-2} &= e + (2a + a^2) c_i, \\\\\ng^{-4} &= e + \\left( 2(2a + a^2) + (2a+a^2)^2 \\right) c_i \\\\\ng^{-2} \\circ y &= y + (2a+a^2) c_i \\circ y, \\\\\ng^{-2} \\circ ( g^{-2} \\circ y) &= y + 2(2a+a^2) c_i \\circ y + (2a+a^2)^2 c_i \\circ (c_i \\circ y), \\\\\ng^{-4} \\circ y &= y + \\left( 2(2a + a^2) + (2a+a^2)^2 \\right) c_i \\circ y.\n\\end{align*}\nThus, $Q_{g^{-2}} (y)$ turns out to be\n\\begin{align*}\nQ_{g^{-2}} (y)\n&= 2 g^{-2} \\circ ( g^{-2} \\circ y) - g^{-4} \\circ y \\\\\n&= y + 2(2a+a^2) c_i \\circ y + 2(2a+a^2)^2 c_i \\circ (c_i \\circ y) - (2a+a^2)^2 c_i \\circ y \\\\\n&= y + 2(2a+a^2) c_i \\circ y + (2a+a^2)^2 Q_{c_i} (y), \\\\\n\\end{align*}\nand hence, we obtain $\\|Q_{g^{-1}} (y)\\|_J^2$ as \n\\begin{align*}\n\\langle y , Q_{g^{-2}} (y) \\rangle \n&= \\|y\\|_J^2 + 2(2a+a^2) \\langle y, c_i \\circ y \\rangle + (2a+a^2)^2 \\langle y, Q_{c_i} (y) \\rangle \\\\\n&= \\|y\\|_J^2 + 2(2a+a^2) \\langle y \\circ y, c_i \\rangle + (2a+a^2)^2 \\left( \\langle y, c_i \\rangle \\right)^2 \\\\\n\\end{align*}\nwhere the second equality follows from Proposition \\ref{prop:compare tr}. 
\nHere, $y \\in {\\rm int} \\mathcal{K}$ and $c_i \\in \\mathcal{K}$ imply that $\\langle y,c_i \\rangle >0$, and $y \\circ y = y^2 \\in {\\rm int} \\mathcal{K}$ implies $\\langle y \\circ y,c_i \\rangle > 0$.\nNoting that $a > 0$ and $\\|Q_{g^{-1}} (y)\\|_J^2 = 1$, $\\|y\\|_J^2 < 1$ must hold, and hence $\\frac{1}{\\|y^*\\|_J} >1$, which implies that $\\det \\left(\\frac{1}{\\|y^*\\|_J} y^* \\right) > \\det(y^*)$.\nSince $\\left\\| \\frac{1}{\\|y^*\\|_J} y^* \\right\\|_J^2 = 1$ holds, we find that $\\det(x^*) > \\det(y^*)$.\n}\n\n\\modifySecond{\nNext, we describe the lower bound for $\\det (y^*)$ using $\\det (\\bar{x}^*)$.\nSince the largest eigenvalue of $\\bar{x}$ satisfying $\\| \\bar{x} \\|_J^2 =1$ is less than $1$, by Proposition \\ref{prop:scaling}, we have:\n\\begin{equation}\n\\left\\{ Q_g(\\bar{x}) \\in \\mathbb{E} \\mid \\bar{x} \\in Q_{g^{-1}} \\left(\\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap \\mathcal{K}, \\| \\bar{x} \\|^2_J = 1 \\right\\} \\subseteq \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mathcal{K}. \\notag\n\\end{equation}\nThis implies $\\det (y^*) \\geq \\det \\left( Q_g (\\bar{x}^*) \\right)$, and by Proposition \\ref{q-det-relation}, we have $\\det (y^*) \\geq \\det (g)^2 \\det (\\bar{x}^*) = \\xi^{|H|} \\det (\\bar{x}^*) = \\xi \\det (\\bar{x}^*)$.\nThus, $\\det(x^*) > \\det (y^*) \\geq \\xi \\det (\\bar{x}^*)$ holds, and we can conclude that \n\\begin{equation}\n\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right) > \\xi \\cdot \\delta^{{\\rm supposed}} \\left( Q_{g^{-1}} \\left(\\mbox{{\\rm Ker}} \\mathcal{A} \\right) \\cap {\\rm int} \\mathcal{K} \\right). 
\\notag\n\\end{equation}\n}\n\\end{proof}\n\n\\modifySecond{\nNext, using Proposition \\ref{prop:compare scaling}, we derive the maximum number of iterations until the proposed method finds $x \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K}$ by using $\\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right)$ as in Pena and Soheili's method.\n\\begin{proposition}\n\\label{pro:compare iteration}\nSuppose that $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$ holds.\nAlgorithm \\ref{main algorithm} returns $x \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K}$ after at most $\\log_\\xi \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right)$ iterations.\n\\end{proposition}\n\\begin{proof}\nLet $\\mbox{{\\rm Ker}} \\bar{\\mathcal{A}}$ be the linear subspace at the start of $k$ iterations of Algorithm \\ref{main algorithm} and suppose that $\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\bar{\\mathcal{A}} \\cap {\\rm int} \\mathcal{K} \\right) = 1\/r^{\\frac{r}{2}}$ holds.\nThen, from Proposition \\ref{prop:compare scaling}, we find that \n\\begin{equation}\n\\notag\n\\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right) > \\frac{\\xi^k}{r^{\\frac{r}{2}}}.\n\\end{equation}\nThis implies that $\\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right) > \\xi^k$ since $\\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right) = r^{\\frac{r}{2}} \\cdot \\delta^{{\\rm supposed}} \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right)$ holds.\nBy taking the logarithm base $\\xi$, we obtain $\\log_\\xi \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K} \\right ) > k$.\n\\end{proof}\n}\n\n\\modifySecond{\nFrom here on, using the above results, we will compare the computational 
complexities of the methods in the case that $\\mathcal{K}$ is simple and $\\mbox{{\\rm Ker}} \\mathcal{A} \\cap \\mbox{{\\rm int}} \\mathcal{K} \\neq \\emptyset$ holds. Table \\ref{compare-PS-MA} summarizes the upper bounds on the number of iterations of the main algorithm (UB\\#iterations) of the two methods and the computational costs required per iteration (CC\/iteration). As in the previous section, the main algorithm requires $\\mathcal{O} (m^3+m^2d)$ to compute the projection $\\mathcal{P}_\\mathcal{A}$. Here, BP denotes the computational cost of the basic procedure in each method.\n\\begin{table}[H]\n\\caption{\\modifySecond{Comparison of the proposed method and Pena and Soheili's method in the main algorithm}}\n\\label{compare-PS-MA}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{c|cc}\\toprule\nMethod\t& UB\\#iterations\t\t& CC\/iteration \\\\ \\midrule\nProposed method \t&$\\log_\\xi \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right)$\t&$m^3+m^2d+$ BP \\\\\nPena and Soheili's method &$- \\log_{1.5} \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right)$\t&$m^3+m^2d+$ BP\t \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\nThe upper bound on the number of iterations of the main algorithm of the proposed method is given by\n\\begin{equation}\n\\notag\n\\log_\\xi \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right) = \\frac{\\log_{1.5} \\delta \\left( \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K} \\right)}{\\log_{1.5} \\xi},\n\\end{equation}\nwhere we should note that $0 < \\xi < 1$.\nSince $0 < \\frac{1}{-\\log_{1.5} \\xi} \\leq 1$ when $\\xi \\leq \\frac{2}{3}$, in that case the upper bound on the number of iterations of the main algorithm of the proposed method is smaller than that of the main algorithm of Pena and Soheili's method.}\n\n\\modifySecond{\nNext, Table \\ref{compare-PS-BP} summarizes 
upper bounds on the number of iterations of basic procedures in the proposed method (UB\\#iterations) and Pena and Soheili's method and the computational cost required per iteration (CC\/iteration). In particular, it shows cases of using the von Neumann scheme and the smooth perceptron in each method (corresponding to Algorithm \\ref{basic procedure} and Algorithm \\ref{bp-alg-sp} in the proposed method). As in the previous section, $ C^{\\rm sd}$ denotes the computational cost required for spectral decomposition, and $C^{\\min}$ denotes the computational cost required to compute only the minimum eigenvalue and the corresponding primitive idempotent.\n\\begin{table}[H]\n\\caption{\\modifySecond{Comparison of the proposed method and Pena and Soheili's method in the basic procedure}}\n\\label{compare-PS-BP}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{c|cc|cc}\\toprule\n\t\t&\\multicolumn{2}{c|}{von Neumann scheme}\t&\\multicolumn{2}{c}{smooth perceptron} \\\\\nMethod\t& UB\\#iterations\t\t& CC\/iteration & UB\\#iterations\t\t& CC\/iteration \\\\ \\midrule\nProposed method\t&$\\frac{r^2}{\\xi^2}$\t&$\\max ( C^{\\rm sd} , md)$\t&$\\frac{2 \\sqrt{2}r}{\\xi} -1$\t&$\\max ( C^{\\rm sd} , md)$ \\\\\nPena and Soheili's method &$16r^4$\t&$\\max (C^{\\min} , md)$\t&$8 \\sqrt{2} r^2-1$\t\t&$\\max ( C^{\\rm sd} , md)$ \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\nNote that by setting $\\xi = \\frac{1}{4r}$, the upper bounds on the number of iterations of the basic procedure of the two methods are the same. If $\\xi = \\frac{1}{4r}$, then $\\frac{1}{-\\log_{1.5} \\xi} = \\frac{1}{\\log_{1.5} 4r} \\leq \\frac{1}{\\log_{1.5} 4} = 0.292$, and the upper bound of the number of iterations of the main algorithm of the proposed method is less than $0.3$ times the upper bound of the number of iterations of the main algorithm of Pena and Soheili's method, which implies that the larger the value of $r$ is, the smaller the ratio of those bounds becomes. 
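For example, with $r = 10$ and hence $\\xi = \\frac{1}{40}$, the ratio of the two upper bounds is\n\\begin{equation}\n\\notag\n\\frac{1}{\\log_{1.5} 4r} = \\frac{1}{\\log_{1.5} 40} \\simeq 0.11,\n\\end{equation}\nso the upper bound on the number of iterations of the main algorithm of the proposed method is about one-ninth of that of Pena and Soheili's method. 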
\\\\ \nFrom the discussion in Section \\ref{sec: CSD and Cmin}, we can assume $\\mathcal{O}(C^{\\rm sd}) = \\mathcal{O}(C^{\\min})$, and Table \\ref{compare-PS-BP} shows that the proposed method is superior for finding a point $x \\in \\mbox{{\\rm Ker}} \\mathcal{A} \\cap {\\rm int} \\mathcal{K}$.\n}\n\n\\subsection{Computational costs of $C^{\\rm sd}$ and $C^{\\min}$}\n\\label{sec: CSD and Cmin}\n\n\\modifyEditor{\nThis section discusses the computational cost required for spectral decomposition $C^{\\rm sd}$ and the computational cost required to compute only the minimum eigenvalue and the corresponding primitive idempotent $C^{\\min}$.}\n\n\\modifyEditor{\nEigenvalue algorithms are classified into so-called direct and iterative methods, briefly described on pp.139-140 of \\cite{Demmel1997}.\n(Note that, as remarked there, no direct method exists in the strict sense for eigenvalue computation, since finding eigenvalues is mathematically equivalent to finding zeros of polynomials.)\n}\n\n\\modifyEditor{\nIn general, when using a direct method of cost $\\mathcal{O}(n^3)$, we see that $C^{\\rm sd}=\\mathcal{O}(n^3)$ and $C^{\\min}=\\mathcal{O}(n^3)$. The Lanczos algorithm is a typical iterative algorithm used for sparse matrices. Its cost per iteration, which computes one matrix-vector product, is $\\mathcal{O}(n^2)$. Suppose the number of iterations at which we obtain a sufficiently accurate solution is constant with respect to the matrix size. In that case, the overall computational cost of the algorithm is $\\mathcal{O}(n^2)$. Corollary 10.1.3 in \\cite{Golub2013} discusses the number of iterations that yields sufficient accuracy. It shows that we can expect fewer iterations if the value of \"the difference between the smallest and second smallest eigenvalues \/ the difference between the second smallest and largest eigenvalue\" is larger. 
However, it is generally difficult to assume that the above value does not depend on the matrix size and is sufficiently large. Thus, even in this case, we cannot take advantage of the condition that we only need the minimum eigenvalue, and we conclude that it is reasonable to consider that $\\mathcal{O}(C^{\\rm sd})=\\mathcal{O}(C^{\\min})$.\n}\n\n\n\n\\section{Numerical experiments}\n\\label{sec: numerical experiments}\n\\subsection{Outline of numerical implementation}\n\\label{sec: outline of numerical implementation}\n\nNumerical experiments were performed\n\\modifyThird{using the authors' implementations of the algorithms}\non a positive semidefinite optimization problem with one positive semidefinite cone $\\mathcal{K} = \\mathbb{S}^n_{+}$ of the form \n\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm P} (\\mathcal{A})& \\mbox{find} & X \\\\\n\t&\\mbox{s.t.} & \\mathcal{A} (X) = \\bm{0} \\in \\mathbb{R}^m \\\\\n\t& & X \\in \\mathbb{S}^n_{++} \\notag\n\\end{array}\n\\end{equation}\nwhere $\\mathbb{S}^n_{++}$ denotes the interior of $\\mathcal{K} = \\mathbb{S}^n_{+}$.\nWe created instances of the following three types:\n\n\\begin{itemize}\n\\item \\modifyFirst{Strongly feasible ill-conditioned instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{++} \\neq \\emptyset$ and $X \\in \\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{++}$ has positive but small eigenvalues.}\n\\item Weakly feasible instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{++} = \\emptyset$, but $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{+} \\setminus \\{O\\} \\neq \\emptyset$.\n\\item Infeasible instances, i.e., $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{+} = \\{ O \\}$.\n\\end{itemize}\n\nWe explain how to generate each type of instance in Section \\ref{sec: instances}.\n\nIn what follows, we refer to Louren\\c{c}o et al.'s method~\\cite{Bruno2019} as Louren\\c{c}o (2019), and Pena and Soheili's method~\\cite{Pena2017} as Pena (2017).\n\nWe set the termination 
parameter as $\\xi = \\frac{1}{4}$ in our basic procedure, i.e., Algorithm \\ref{basic procedure}.\n\\modifySecond{The reason for setting $\\xi=1\/4$ is that $\\sqrt{\\xi}=1\/2$ is then exactly representable (rather than an infinite decimal), while the upper bound on the number of iterations of the basic procedure does not become too large.}\nWe also set the accuracy parameter as $\\varepsilon$ = 1e-12, both in our main algorithm (Algorithm \\ref{main algorithm}, Algorithm \\ref{main algorithm 2}) and in Louren\\c{c}o (2019) and determined whether ${{\\rm P}_{S_\\infty} (\\mathcal{A})}$ or ${{\\rm P}_{S_1} (\\mathcal{A})}$ has a solution whose minimum eigenvalue is greater than or equal to $\\varepsilon$.\nHere, we call a solution whose minimum eigenvalue is $\\varepsilon$ or more an {\\em $\\varepsilon$-feasible solution}.\n\nNote that \\cite{Pena2017} proposed various update methods for the basic procedure.\n\\modifySecond{In our numerical experiments, all methods employed the modified von Neumann scheme (Algorithm \\ref{bp-alg-mvn}) with the identity matrix as the initial point and the smooth perceptron scheme (Algorithm \\ref{bp-alg-sp}).}\nThis implies that the basic procedures used in the three methods differ only in the termination conditions for moving to the main algorithm and that all other steps are the same.\n\nAll executions were performed using MATLAB R2022a on an Intel (R) Core (TM) i7-6700 CPU @ 3.40GHz machine with 16GB of RAM. \\modifySecond{Note that we computed the projection $\\mathcal{P}_\\mathcal{A}$ using the MATLAB function for the singular value decomposition. The projection $\\mathcal{P}_\\mathcal{A}$ was given by $\\mathcal{P}_\\mathcal{A} = I - A^\\top (AA^\\top)^{-1} A$ using the matrix $A \\in \\mathbb{R}^{m \\times d}$ which represents the linear operator $\\mathcal{A}(\\cdot)$ and the identity matrix $I$. 
Here, suppose that the singular value decomposition of a matrix $A$ is given by \n\\begin{equation}\n\\notag\nA = U \\Sigma V^\\top\n=\nU\n(\\Sigma_m \\ O)\nV^\\top\n\\end{equation}\nwhere $U \\in \\mathbb{R}^{m \\times m}$ and $V \\in \\mathbb{R}^{d \\times d}$ are orthogonal matrices, and $\\Sigma_m \\in \\mathbb{R}^{m \\times m}$ is a diagonal matrix with $m$ singular values on the diagonal. \nSubstituting this decomposition into $A^\\top (AA^\\top)^{-1} A$, we have \n\\begin{align*}\nA^\\top (AA^\\top)^{-1} A\n&= A^\\top (U \\Sigma \\Sigma^\\top U^\\top)^{-1} A \\\\\n&= A^\\top U^{-\\top} (\\Sigma_m^2)^{-1} U^{-1} A \\\\\n&= V \\Sigma^\\top \\Sigma_m^{-2} \\Sigma V^\\top \\\\\n&= V\n\\begin{pmatrix}\nI_m & O \\\\\nO & O\n\\end{pmatrix}\nV^\\top \\\\\n&= V_{:, 1:m} V_{:,1:m}^\\top,\n\\end{align*}\nwhere $V_{:, 1:m}$ represents the submatrix from column $1$ to column $m$ of $V$.\nThus, for any $x \\in \\mathbb{E}$, we can compute $\\mathcal{P}_\\mathcal{A}(x) = x - V_{:, 1:m} V_{:,1:m}^\\top x$.\n}\n\n\nFor each method, we observed the total number of iterations of the basic procedure, the number of iterations of the main algorithm, and the total CPU time.\nWe also examined the violation degrees of the output-result, as defined below, and the residual of the constraints for the output-result.\n\nWe classified the output-results into five types: A: an interior feasible solution is found; B: no interior feasible solution is found (ver.1); C: no $\\varepsilon$-feasible solution is found (only for Louren\\c{c}o (2019) and our method); D: no interior feasible solution is found (ver.2; only for Pena (2017)); E: Out-of-time. 
In what follows, we briefly explain how output-result type D for Pena (2017) differs from output-result type B.\n\n\\cite{Pena2017} pointed out that if ${\\rm P}(\\mathcal{A})$ has no interior feasible solution, then the main algorithm of Pena (2017), applied only to ${\\rm P}(\\mathcal{A})$, does not stop within a finite number of iterations. To overcome this problem, Pena et al. constructed the main algorithm so that it applies not only to ${\\rm P}(\\mathcal{A})$ but also to problem ${\\rm Q}(\\mathcal{A})$:\n\n\\begin{equation}\n\\begin{array}{lll}\n{\\rm Q}(\\mathcal{A})&\t\\mbox{find}\t& X \\\\\n&\\mbox{s.t.}\t& X \\in \\mbox{range} \\mathcal{A}^*, \\\\\n&\t\t& X \\in \\mathbb{S}^n_{++}. \\notag\n\\end{array}\n\\end{equation}\n\nAccordingly, we defined output-result type B as the case where a feasible solution of ${\\rm D} (\\mathcal{A})$ is obtained by applying the main algorithm to ${\\rm P}(\\mathcal{A})$ and output-result type D as the case where a feasible solution of ${\\rm Q}(\\mathcal{A})$ is obtained by applying the main algorithm to ${\\rm Q}(\\mathcal{A})$.\n\n\nIn what follows, $\\bar{X} \\in \\mathbb{S}^n$ denotes the output obtained from the main algorithm and $X^*$ denotes this output scaled as a solution of the original problem ${\\rm P}(\\mathcal{A})$ (or ${\\rm D} (\\mathcal{A})$ or ${\\rm Q} (\\mathcal{A})$), i.e., multiplied by a real number so that its maximum eigenvalue is 1. We defined the violation degree of the output-result as follows:\n\n\\begin{itemize}\n\\item For output-result type A, the violation degree was defined as the number of eigenvalues of $X^*$ (i.e., the solution of ${\\rm P}(\\mathcal{A})$) whose value is less than or equal to $\\varepsilon$. 
\n\\item For output-result type B, the violation degree was the number of eigenvalues of $X^*$ (i.e., the solution of ${\\rm D}(\\mathcal{A})$) whose value is less than or equal to $0$.\n\\item For output-result type D, the violation degree was the number of eigenvalues of $X^*$ (i.e., the solution of ${\\rm Q}(\\mathcal{A})$) whose value is less than or equal to $\\varepsilon$. \n\\end{itemize}\nWhen output-result type A was obtained, we defined the residual of the constraints as the value of $\\| \\mathcal{A}(X^*) \\|_2$.\n\n\\modifySecond{\nWe also solved the following pair of problems with a commercial code, Mosek~\\cite{Mosek}, and compared the results with the outputs of Chubanov's methods:\n\\begin{equation}\n\\begin{array}{cccccc}\n{\\rm (P)}\t&\\min\t\t&0\t&{\\rm s.t.}\t&\\mathcal{A} (X) = \\bm{0},\t&X \\in \\mathbb{S}^n_{+}.\t\\\\\n{\\rm (D)}\t&\\max\t&\\bm{0}^\\top y\t&{\\rm s.t.}\t&-\\mathcal{A}^* y \\in \\mathbb{S}^n_{+}.\t\n\\end{array}\n\\notag\n\\end{equation}\nHere, Mosek solves the self-dual embedding model by using a path-following interior-point method, so if we obtain a solution $(X^*, y^*)$, then $X^*$ and $-\\mathcal{A}^* y^*$ lie in the (approximate) relative interior of the primal feasible region and the dual feasible region, respectively~\\cite{Handbook}. That is, $X^*$ obtained by solving a strongly feasible problem with Mosek is in $\\mathbb{S}^n_{++}$, $X^*$ obtained by solving a weakly feasible problem is in $\\mathbb{S}^n_+ \\setminus \\mathbb{S}^n_{++}$, and $X^*$ obtained by solving an infeasible problem is $X^*=O$ (i.e., $-\\mathcal{A}^* y^* \\in \\mathbb{S}^n_{++}$). As with Chubanov's methods, we computed $\\| \\mathcal{A}(X^*) \\|_2$ for the solution obtained by Mosek after scaling so that the largest eigenvalue of $X^*$ would be 1.\n}\n\n\\modifySecond{\nNote that (P) and (D) do not simultaneously have feasible interior points; i.e., Slater's condition cannot hold for both the primal and the dual problems at the same time. 
In general, it is difficult to solve such problems stably by interior-point methods, but since strong complementarity holds between (P) and (D), they can nevertheless be expected to be solved stably. By applying Lemma 3.4 of~\\cite{Lourenco2021}, we can generate a problem in which both the primal and dual problems have feasible interior points and from which it can be determined whether (P) has a feasible interior point. However, since there was no significant difference between the solution obtained by solving the problem generated by applying Lemma 3.4 of~\\cite{Lourenco2021} and the solution obtained by solving the above (P) and (D), we show only the results of solving (P) and (D) above.\n}\n\n\n\\subsection{How to generate instances}\n\\label{sec: instances}\nHere, we describe how the strongly feasible instances, weakly feasible instances, and infeasible instances were generated.\n\n\\modifyFirst{\nNote that, due to the rounding errors of the numerical computation, the weakly (ill-conditioned strongly) feasible instances generated in this experiment may not have been weakly (ill-conditioned strongly) feasible but rather ``pseudo weakly (pseudo ill-conditioned strongly) feasible.''\n }\n\nIn what follows, for any natural numbers $m,n$, $\\mbox{rand}(n)$ is a function that returns an $n$-dimensional real vector whose elements are uniformly distributed in the open segment $(0,1)$, and $\\mbox{rand}(m,n)$ is a function that returns an $m \\times n$ real matrix whose elements are uniformly distributed in the open segment $(0,1)$. 
Furthermore, for any $x \\in \\mathbb{R}^n$ and $X \\in \\mathbb{R}^{m\\times n}$, $\\mbox{diag}(x) \\in \\mathbb{R}^{n \\times n}$ is a function that returns a diagonal matrix whose diagonal elements are the elements of $x$, and $\\mbox{vec}(X) \\in \\mathbb{R}^{mn}$ is a function that returns a vector obtained by stacking the $n$ column vectors of $X$.\n\n\n\\subsubsection{Strongly feasible instances}\n\\label{sec: strongly feasible instances}\n\nThe strongly feasible instances were generated by extending the method of generating ill-conditioned strongly feasible instances proposed in~\\cite{Pena2019} to the symmetric cone case.\n\n\\begin{proposition}\n\\label{prop: strongly_feasible}\nSuppose that $\\bar{x} \\in {\\rm int}\\, \\mathcal{K}$ with $\\| \\bar{x} \\|_\\infty \\leq 1$ and $\\bar{u} \\in \\mathcal{K}$ with $\\| \\bar{u} \\|_1 = r$ satisfy $\\langle \\bar{x} , \\bar{u} \\rangle = r$.\nDefine the linear operator $\\mathcal{A} : \\mathbb{E} \\rightarrow \\mathbb{R}^m$ as $\\mathcal{A}(x) = ( \\langle a_1, x \\rangle , \\langle a_2, x \\rangle , \\dots ,\\langle a_m, x \\rangle)^T$, where $a_1 = \\bar{u} - \\bar{x}^{-1}$ and $\\langle a_j , \\bar{x} \\rangle = 0$ holds for any $j = 2,\\dots,m$.\nThen, \n\\begin{equation}\n\\bar{x} = \\argmax_x \\left \\{ \\det(x) : x \\in \\mathcal{K} \\cap \\mbox{\\rm ker}\\mathcal{A}, \\| x \\|_\\infty = 1 \\right \\}. 
\n\\label{eq:0}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nFirst, note that the assertion (\\ref{eq:0}) is equivalent to\n\\begin{equation}\n\\bar{x} = \\argmax_{x \\in \\mathcal{F}} \\left \\{ \\log \\det(x) \\right \\} \n\\ \\mbox{where} \\ \n\\mathcal{F} := \\left \\{ x \\in \\mathcal{K} \\cap \\mbox{ker}\\mathcal{A} : \\| x \\|_\\infty \\leq 1 \\right \\}.\n\\label{eq:1}\n\\end{equation}\nFrom the assumptions, we see that $\\bar{x} \\in \\mathcal{K}$, $\\| \\bar{x} \\|_\\infty \\leq 1$ and $\\langle a_1 , \\bar{x} \\rangle = \\langle \\bar{u} - \\bar{x}^{-1} , \\bar{x} \\rangle = r - r = 0$; thus, $\\mathcal{A}(\\bar{x}) = 0$ and $\\bar{x} \\in \\mathcal{F}$. Since $\\nabla \\log \\det (x) = x^{-1}$, if $\\bar{x}$ satisfies \n\\begin{equation}\n\\langle x - \\bar{x} , \\bar{x}^{-1} \\rangle \\leq 0 \\ \\mbox{for any} \\ x \\in \\mathcal{F}, \\label{eq:2}\n\\end{equation}\nwe can conclude that (\\ref{eq:1}) holds. In what follows, we show that (\\ref{eq:2}) holds.\n\nFor any $x \\in \\mathcal{F}$, $x \\in \\mbox{ker}\\mathcal{A}$ and hence $\\langle a_1 , x \\rangle = \\langle \\bar{u} - \\bar{x}^{-1} , x \\rangle = 0$, i.e., $\\langle \\bar{u} , x \\rangle = \\langle \\bar{x}^{-1} , x \\rangle$.\nThus, we obtain\n\\begin{align*}\n\\langle x - \\bar{x} , \\bar{x}^{-1} \\rangle &= \\langle \\bar{u} , x \\rangle - r \\\\\n&\\leq \\langle \\bar{u} , x \\rangle - \\| \\bar{u} \\|_1 \\| x \\|_\\infty \\hspace{1cm}(\\mbox{by} \\ \\| \\bar{u} \\|_1 = r \\ \\mbox{and} \\ \\| x \\|_\\infty \\leq 1) \\\\\n&\\leq 0 \\hspace{1cm}(\\mbox{by} \\ \\langle \\bar{u} , x \\rangle \\leq \\| \\bar{u} \\|_1 \\| x \\|_\\infty),\n\\end{align*}\nwhich completes the proof.\n\\end{proof}\n\nProposition \\ref{prop: strongly_feasible} guarantees that we can generate a linear operator $\\mathcal{A}$ satisfying $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{++} \\neq \\emptyset$ by determining an appropriate value $\\mu = 
\\displaystyle \\max_{ X \\in \\mathcal{F}} \\det(X)$, where $\\mathcal{F} = \\{ X \\in \\mathbb{S}^n : X \\in \\mathbb{S}^n_{++} \\cap \\mbox{Ker} \\mathcal{A} , \\|X\\|_\\infty = 1\\}$.\n\nThe details on how to generate the strongly feasible instances are given in Algorithm \\ref{strongly feasible problem}. The input consists of the rank of the semidefinite cone $n$, the number of constraints $m$, an arbitrary orthogonal matrix $P$, \\modifyFirst{and the parameter $\\tau \\in \\mathbb{R}_{++}$ that determines the value of $\\mu$. We made instances for which the value of $\\mu$ satisfies $1e-\\tau \\leq \\mu \\leq 1e-(\\tau-1)$. In the experiments, we set $\\tau \\in \\{50,100,150, 200, 250 \\}$ so that $\\mu$ would vary around 1e-50, 1e-100, 1e-150, 1e-200, and 1e-250; i.e., strongly feasible, but ill-conditioned instances.}\n\n\\modifyFirst{\nNote that Algorithm \\ref{strongly feasible problem} generates instances using an $\\bar{x}$ that has a natural eigenvalue distribution. For example, let $n-1=3$ and consider two $X$s: one has three eigenvalues of about 1e-2, and the other has one eigenvalue each of about 1e-1, 1e-2, and 1e-3. $\\det(X) \\simeq $1e-6 is obtained for both $X$s, but the latter has the more natural eigenvalue distribution. In our experiment, we generated ill-conditioned instances by using $X$ having a natural eigenvalue distribution as follows:\n\\begin{enumerate}\n\\item Find an integer $s$ that satisfies 1e-$s$ $\\leq l^{\\frac{1}{n-1}} \\leq u^{\\frac{1}{n-1}} \\leq$ 1e-$(s-1)$.\n\\item Generate $t = 2s-1$ eigenvalue classes.\n\\item Decide how many eigenvalues to generate for each class.\n\\end{enumerate}\n}\n\n\\modifyFirst{\nFor example, when $n=13$ and $\\tau = 30$, Algorithm \\ref{strongly feasible problem} yields $s=3$, $t=5$, $a=2$ and $b=2$, and since $b$ is even, we have $num = (2,3,2,3,2)^\\top$. The $t=5$ eigenvalue classes are shown in Table \\ref{example1} below. 
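The class counts used in this construction can be reproduced with a short plain-Python sketch (ours, not the authors' MATLAB code; the function name is hypothetical). The two assertions reproduce the worked examples for $n=13$ and $n=14$ with $\\tau = 30$:

```python
import math

def eigenvalue_classes(n, tau):
    # Distribute the n-1 eigenvalues over t = 2s-1 classes so that their
    # geometric mean falls within the s-th class (steps 1-3 above).
    s = math.ceil(tau / (n - 1))
    t = 2 * s - 1
    b = (n - 1) % t              # remainder after an even split
    a = (n - 1 - b) // t
    num = [a] * t
    if b % 2 == 1:
        bb = (b - 1) // 2
        extra = range(s - bb, s + bb + 1)                          # classes s-bb, ..., s+bb
    else:
        bb = b // 2
        extra = [i for i in range(s - bb, s + bb + 1) if i != s]   # skip the middle class s
    for i in extra:
        num[i - 1] += 1
    return s, t, num

# the worked examples from the text
assert eigenvalue_classes(13, 30) == (3, 5, [2, 3, 2, 3, 2])
assert eigenvalue_classes(14, 30) == (3, 5, [2, 3, 3, 3, 2])
```

In both cases the counts sum to $n-1$, matching the frequencies listed in Table \\ref{example1}.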
Note that $\\left( l^{\\frac{1}{n-1}} \\cdot 10^{s-i} \\right) \\cdot \\left( l^{\\frac{1}{n-1}} \\cdot 10^{s-(t-i+1)} \\right)= l^{\\frac{2}{n-1}}$, $\\left( u^{\\frac{1}{n-1}} \\cdot 10^{s-i} \\right) \\cdot \\left( u^{\\frac{1}{n-1}} \\cdot 10^{s-(t-i+1)} \\right)= u^{\\frac{2}{n-1}}$ holds for the $i$-th and $t-i+1$-th classes. This implies that we obtain $1e-\\tau \\leq \\mu = \\det(X) \\leq 1e-(\\tau-1)$ both when generating $n-1$ eigenvalues in the $s$th class and when generating $n-1$ eigenvalues of $X$ according to $num$. When $n=14, \\tau = 30$, Algorithm \\ref{strongly feasible problem} gives $s=3$, $t=5$, $a=2$, and $b=3$, and since $b$ is an odd number, we have $num = (2,3,3,3,2)^\\top$. Thus, Algorithm \\ref{strongly feasible problem} generates the instances by controlling the frequency so that the geometric mean of the $n-1$ eigenvalues of $X$ falls within the $s$-th class width. \n\\begin{table}[H]\n\\caption{\\modifyFirst{Frequency distribution table of eigenvalues of $X$ generated by Algorithm \\ref{strongly feasible problem} when $n=13$ or $n=14$, $\\tau=30$}}\n\\label{example1}\n\\begin{center}\n\\modifyFirst{\n\\begin{tabular}{ccccc} \\toprule\nClass\t&\\multicolumn{2}{c}{Class width of eigenvalues of $\\bar{x}$}\t\t\t\t\t&\\multicolumn{2}{c}{Frequency($num$)}\\\\ \\midrule\n\t& Lower bound \t\t& Upper bound\t\t\t\t\t\t&$n=13$\t\t&$n=14$\t\\\\ \\midrule\n1\t&$ l^{\\frac{1}{n-1}} \\cdot 10^{2}$\t&$u^{\\frac{1}{n-1}} \\cdot 10^{2}$\t&2\t\t\t&2\\\\\n2\t&$ l^{\\frac{1}{n-1}} \\cdot 10^{1}$\t&$u^{\\frac{1}{n-1}} \\cdot 10^{1}$\t&3\t\t\t&3\\\\\n3\t&$ l^{\\frac{1}{n-1}}$\t\t\t&$u^{\\frac{1}{n-1}}$\t\t\t&2\t\t\t&3\\\\\n4\t&$ l^{\\frac{1}{n-1}} \\cdot 10^{-1}$&$u^{\\frac{1}{n-1}} \\cdot 10^{-1}$\t&3\t\t\t&3\\\\\n5\t&$ l^{\\frac{1}{n-1}} \\cdot 10^{-2}$&$u^{\\frac{1}{n-1}} \\cdot 10^{-2}$\t&2\t\t\t&2\\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n}\n\n\\begin{algorithm}[H]\n\\caption{\\modifyFirst{Strongly feasible 
instance}}\n\\label{strongly feasible problem}\n\\begin{algorithmic}[1]\n \\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n \\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n\n \\STATE \\algorithmicrequire $n, m, \\tau, P$\n \\STATE \\algorithmicensure $A$ \n \\STATE $l \\leftarrow 1{\\rm e}-\\tau$ and $u \\leftarrow 1{\\rm e}-(\\tau-1)$\n \\STATE $s \\leftarrow \\lceil \\frac{\\tau}{n-1} \\rceil$\n \\STATE $t \\leftarrow 2s-1$\n \\STATE $b \\leftarrow (n-1) \\bmod t$ and $a \\leftarrow \\frac{(n-1) - b}{t}$\n \\STATE $num \\leftarrow a \\cdot \\bm{1} \\in \\mathbb{R}^t$\n \\IF {$b$ is odd}\n \\STATE $\\bar{b} \\leftarrow \\frac{b-1}{2}$\n \\STATE $num_i \\leftarrow num_i + 1$ such that $s - \\bar{b} \\leq i \\leq s + \\bar{b}$\n \\ELSE \n \\STATE $\\bar{b} \\leftarrow \\frac{b}{2}$\n \\STATE $num_i \\leftarrow num_i + 1$ such that $s - \\bar{b} \\leq i < s$ or $s < i \\leq s + \\bar{b}$\n \\ENDIF\n \\STATE $d_1 \\leftarrow 1$\n \\STATE $k \\leftarrow 2$\n \\FOR {$i = 1$ \\mbox{to} $t$}\n \\FOR {$j=1$ \\mbox{to} $num_i$}\n \\STATE $dl \\leftarrow l^{\\frac{1}{n-1}} \\cdot 10^{s-i}$ and $du \\leftarrow u^{\\frac{1}{n-1}} \\cdot 10^{s-i}$\n \\STATE $d_k \\leftarrow dl + \\left( du-dl \\right) \\mbox{rand} \\left( 1 \\right)$ \n \\STATE $k \\leftarrow k+1$\n \\ENDFOR\n \\ENDFOR\n \\STATE $D' \\leftarrow \\mbox{diag}(d)$ and then compute $C \\leftarrow PD'P^T $ and $c \\leftarrow \\mbox{vec}(C) $\n \\STATE $u \\leftarrow (n , 0_{n-1}^T )^T$ where $0_{n-1}$ denotes the $(n-1)$-dimensional vector of zeros \n \\STATE $U \\leftarrow P(\\mbox{diag}(u) - {D'}^{-1})P^T$\n \\STATE $A' \\leftarrow \\mbox{vec}(U)$\n \\STATE $R \\leftarrow I - \\frac{1}{\\|c\\|_2^2} cc^T $\n \\FOR {$ i = 1 \\mbox{ to } m-1 $}\n \\STATE $A_i' \\leftarrow \\mbox{rand}(n,n)$ and $A_i \\leftarrow \\left( A_i' + (A_i')^T \\right)\/2 $\n \\STATE $A' \\leftarrow \n\\begin{pmatrix}\nA' \\\\\n\\mbox{vec}(A_i)^T \\\\\n\\end{pmatrix}$\n \\ENDFOR\n \\STATE $\\bar{A} \\leftarrow A'R$\n \\STATE $A 
\\leftarrow \\begin{pmatrix}\n\\mbox{vec}(U)^T \\\\\n\\bar{A}\n\\end{pmatrix}$\n\\end{algorithmic} \n\\end{algorithm}\n\n\n\\begin{proposition}\nFor any $A \\in \\mathbb{R}^{m \\times n^2}$ returned from Algorithm \\ref{strongly feasible problem}, there exists $X \\in \\mathbb{S}^n_{++}$ satisfying $A\\left( \\mbox{vec} (X) \\right)=\\bm{0}$.\n\\end{proposition}\n\n\\begin{proof}\nWe can see that the matrix $C\\in \\mathbb{S}^n_{++}$ computed in Algorithm \\ref{strongly feasible problem} satisfies \n\\begin{equation}\nA \\mbox{vec}(C) = Ac = \n\\begin{pmatrix}\n\\mbox{vec}(U)^Tc \\\\\n\\bar{A}c\n\\end{pmatrix}= \n\\begin{pmatrix}\nn-n \\\\\nA'Rc\n\\end{pmatrix}= \\bm{0}.\\notag\n\\end{equation}\n\\end{proof}\n\n\n\\subsubsection{Weakly feasible instances}\n\\label{sec: weakly feasible instances}\n\nThe weakly feasible instances were generated by Algorithm \\ref{weakly feasible instance}.\n\n\\begin{algorithm}[H]\n\\caption{Weakly feasible instance}\n\\label{weakly feasible instance}\n\\begin{algorithmic}[1]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n\n \\STATE \\algorithmicrequire $n , m$, $A' = [ \\ \\ ]$\n \\STATE \\algorithmicensure $A$ \n \\STATE $B \\leftarrow \\mbox{rand} (n,n)$ \\hspace{3.3cm}\/\/ $B$ must not be $O$\n \\STATE $C \\leftarrow \\frac{B+B^T}{2}$ \\hspace{3cm}\/\/ $C \\neq O$ must be neither $C \\succeq O$ nor $C \\preceq O$\n \\STATE $C_+ \\leftarrow \\mathcal{P}_{\\mathbb{S}^n_+} (C)$ \\hspace{3.5cm}\/\/ $C_+ \\neq O$ since $C \\neq O$ is not negative semidefinite.\n \\STATE $C_- \\leftarrow -\\mathcal{P}_{\\mathbb{S}^n_+} (-C)$ \\hspace{3cm}\/\/ $C_- \\neq O$ since $C \\neq O$ is not positive semidefinite.\n \\STATE $c_+ \\leftarrow \\mbox{vec} (C_+) $ and $R \\leftarrow I - \\frac{1}{\\|c_+\\|_2^2} c_+c_+^T $\n \\FOR {$ i = 1 \\mbox{ to } m-1 $}\n \\STATE $A_i' \\leftarrow \\mbox{rand}(n,n)$ and $A_i \\leftarrow \\left( A_i' + (A_i')^T \\right)\/2 $\n \\STATE $A' 
\\leftarrow \n\\begin{pmatrix}\nA' \\\\\n\\mbox{vec}(A_i)^T \\\\\n\\end{pmatrix}$ \n \\ENDFOR\n \\STATE $A \\leftarrow \n\\begin{pmatrix}\n\\mbox{vec}(C_-)^T \\\\\nA'R\n\\end{pmatrix}$\n\\end{algorithmic} \n\\end{algorithm}\n\n\\begin{proposition}\nFor any $A \\in \\mathbb{R}^{m \\times n^2}$ returned by Algorithm \\ref{weakly feasible instance}, no $X \\in \\mathbb{S}^n_{++}$ exists that satisfies $A\\left( \\mbox{\\rm vec} (X) \\right)= \\bm{0}$, but an $X \\in \\mathbb{S}^n_{+} \\setminus \\{ O \\}$ exists that satisfies $A\\left( \\mbox{\\rm vec} (X) \\right)= \\bm{0}$.\n\\end{proposition}\n\\begin{proof}\nFirst, we show that an $X \\in \\mathbb{S}^n_{+} \\setminus \\{ O \\}$ exists that satisfies $A\\left( \\mbox{vec} (X) \\right)= \\bm{0}$. For the matrix $C_+ \\in \\mathbb{S}^n_{+}$ computed on line 5 of Algorithm \\ref{weakly feasible instance}, we see that $C_+ \\neq O$ and the following holds:\n\\begin{equation}\nA\\left( \\mbox{vec} (C_+) \\right)= Ac_+ = \n\\begin{pmatrix}\n\\mbox{vec}(C_-)^T \\\\\nA'R\n\\end{pmatrix} c_+ \n=\n\\begin{pmatrix}\n\\mbox{vec}(C_-)^T c_+ \\\\\nA'R c_+\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n\\bm{0} \\\\\nA'( c_+ - c_+)\n\\end{pmatrix}\n= \\bm{0}. \\notag\n\\end{equation}\nNext, we show by contradiction that no $X \\in \\mathbb{S}^n_{++}$ exists that satisfies $A\\left( \\mbox{vec} (X) \\right)= \\bm{0}$. Suppose that an $X \\in \\mathbb{S}^n_{++}$ satisfies $A\\left( \\mbox{vec} (X) \\right)= \\bm{0}$. Since the first row of $A$ is $\\mbox{vec}(C_-)^T$, if $A\\left( \\mbox{vec} (X) \\right)= \\bm{0}$ holds, then $\\mbox{vec}(C_-)^T \\mbox{vec}(X) = 0$, i.e., \n\\begin{align*}\n\\mbox{vec}(C_-)^T \\mbox{vec}(X) &= \\langle C_- , X \\rangle = \\langle PDP^T, QEQ^T \\rangle \\\\\n&= \\langle D ,P^TQEQ^TP \\rangle = \\sum_{i=1}^n D_{ii} \\left( P^TQEQ^TP \\right)_{ii} = 0,\n\\end{align*}\nwhere $C_-=PDP^T$ and $X = QEQ^T$, $P$ and $Q$ are orthogonal matrices, and $D$ and $E$ are diagonal matrices. 
Here, $X \\in \\mathbb{S}^n_{++}$ implies $\\left( P^TQEQ^TP \\right)_{ii} > 0$ for any $ i \\in \\{ 1 , \\dots, n \\}$ and hence, $D$ should be $O$ so that $\\sum_{i=1}^n D_{ii} \\left( P^TQEQ^TP \\right)_{ii} = 0$, but this contradicts $C_- \\neq O$. Thus, no $X \\in \\mathbb{S}^n_{++}$ exists satisfying $A\\left( \\mbox{vec} (X) \\right)= \\bm{0}$.\n\\end{proof}\n\n\\subsubsection{Infeasible instances}\n\\label{sec: infeasible instances}\n\nThe infeasible instances were generated by Algorithm \\ref{infeasible instance}.\n\nIf we define the linear operator $\\mathcal{A} : \\mathbb{S}^n \\rightarrow \\mathbb{R}^m$ as $\\mathcal{A}(X) = ( \\langle A_1, X \\rangle , \\dots ,\\langle A_m, X \\rangle)^T$, then by choosing $A_1 \\in \\mathbb{S}^n_{++}$, we obtain $\\mathcal{A}$ such that $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{+} = \\{ O \\} $. On the basis of this observation, by introducing a parameter $\\alpha > 0$, we generated a positive definite matrix $A_1$ whose minimum eigenvalue is a uniformly distributed random number in $(0,\\alpha)$. We chose $\\alpha \\in \\{1e-1, 1e-2, 1e-3, 1e-4, 1e-5 \\}$. 
The input of Algorithm \\ref{infeasible instance} consisted of the rank of the semidefinite cone $n$, the number of constraints $m$, an arbitrary orthogonal matrix $P$, and the parameter $\\alpha > 0$.\n\n\\begin{algorithm}[H]\n\\caption{Infeasible instance}\n\\label{infeasible instance}\n\\begin{algorithmic}[1]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input: }}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output: }}\n\n \\STATE \\algorithmicrequire $n, m, \\alpha, P$, $A' = [ \\ \\ ]$\n \\STATE \\algorithmicensure $A$ \n \\STATE $B \\leftarrow \\mbox{rand} (n,n)$\n \\STATE $B' \\leftarrow \\frac{B+B^T}{2}$ and then compute an orthogonal matrix $Q$ and diagonal matrix $E$ such that $B' = QEQ^T$\n \\STATE $E_+ \\leftarrow \\mbox{rand} \\left( 1 \\right) \\times \\alpha I + \\mathcal{P}_{\\mathbb{S}^n_+} (E)$ \n \\STATE $d \\leftarrow \\mbox{rand} (n)$ and $D \\leftarrow \\mbox{diag}(d)$\n \\STATE $B_+ \\leftarrow QE_+Q^T $ and $C \\leftarrow PDP^T $ \n \\STATE $c \\leftarrow \\mbox{vec}(C) $ and $R \\leftarrow I - \\frac{1}{\\|c\\|_2^2} cc^T $\n \\FOR {$ i = 1 \\mbox{ to } m-1 $}\n \\STATE $A_i' \\leftarrow \\mbox{rand} (n,n)$ and $A_i \\leftarrow \\left( A_i' + (A_i')^T \\right)\/2 $\n \\STATE $A' \\leftarrow \n\\begin{pmatrix}\nA' \\\\\n\\mbox{vec}(A_i)^T \\\\\n\\end{pmatrix}$\n \\ENDFOR\n \\STATE $A \\leftarrow \n\\begin{pmatrix}\n\\mbox{vec} (B_+)^T \\\\\nA'R\n\\end{pmatrix}$\n\\end{algorithmic} \n\\end{algorithm}\n\nNote that the first row of the matrix $A$ returned by Algorithm \\ref{infeasible instance} is ${\\mbox{vec}(B_+)}^T$. Since $B_+ \\in \\mathbb{S}^n_{++}$, we see that ${\\mbox{vec}(B_+)}^T \\mbox{vec}(X) > 0$ for any positive definite matrix $X \\in \\mathbb{S}^n_{++}$. 
Thus, there is no $X \\in \\mathbb{S}^n_{++}$ satisfying $A \\left( \\mbox{vec}(X) \\right)=\\bm{0}$, which implies that the generated instance is infeasible.\n\n\\subsection{Numerical results and observations}\n\\label{sec: numerical results and observations}\n\nWe set the size of the positive semidefinite matrix to $n = 50$, \\modifySecond{so that the computational experiments could be performed in a reasonable period of time. To eliminate bias in the experimental results, we generated instances in which the number of constraints $m$ was controlled using the parameter $\\nu$ for the number $\\frac{n(n+1)}{2}$ of variables in the symmetric matrix of order $n$. Specifically,} the number of constraints $m$ was obtained by rounding the value of $\\frac{n(n+1)}{2} \\nu$ to an integer, where $\\nu \\in \\{0.1, 0.3, 0.5, 0.7, 0.9\\}$.\n\nFor each $\\nu \\in \\{0.1, 0.3, 0.5, 0.7, 0.9\\}$, we generated five instances, i.e., 25 instances for each of five strongly feasible cases (corresponding to five patterns of $\\mu \\simeq \\mbox{1e-50}, \\dots, \\mu \\simeq \\mbox{1e-250}$, see section \\ref{sec: strongly feasible instances} for details), 25 instances for a weakly feasible case, and 25 instances for each of five infeasible cases (corresponding to five patterns of $\\alpha = \\mbox{1e-1}, \\dots, \\alpha = \\mbox{1e-5}$, see section \\ref{sec: infeasible instances} for details). Thus, we generated 125 strongly feasible instances, 25 weakly feasible instances, and 125 infeasible instances, totalling 275 instances. We set the upper limit of the execution time to 2 hours and compared the performance of our method (Algorithms \\ref{main algorithm}, \\ref{main algorithm 2}) with those of Louren\\c{c}o (2019) and Pena (2017).\n\n\\modifySecond{\nWe compared the results of the proposed method, the two existing Chubanov-type methods, and Mosek. 
When using Mosek, we set the primal feasibility tolerance (MSK\\_DPAR\\_INTPNT\\_CO\\_TOL\\_PFEAS) to 1e-12.\n}\n\n\\modifySecond{Tables \\ref{strongly-feasible-MVN-SP1} and \\ref{strongly-feasible-MVN-SP2} list} the results for the (ill-conditioned) strongly feasible case. The ``CO-ratio'' column shows the ratio of correct outputs, the ``time(s)'' column shows the average CPU time of the method, the ``M-iter'' column shows the average iteration number of each main algorithm, \\modifySecond{and the $\\|\\mathcal{A}(X^*)\\|_2$ column shows the residual of the constraints. The ``BP'' column shows which scheme (the modified von Neumann (MVN) or the smooth perceptron (SP)) was used in the basic procedure.}\nThe values in parentheses in row $\\mu \\simeq \\mbox{1e-100}$ are the average values excluding instances for which the method ended up running out of time. \n\n\\modifySecond{\nFirst, we compare the results when using MVN or SP as the basic procedure in each method.\nFrom Table \\ref{strongly-feasible-MVN-SP1}, we can see that for strongly feasible problems, using SP as the basic procedure yields a shorter average execution time than using MVN.\nNext, we compare the results of each method.\nFor $\\mu \\simeq \\mbox{1e-50}$, there was no significant difference in performance among the three methods.\nFor $\\mu \\leq \\mbox{1e-100}$, the results in the rows BP=MVN show that our method and Louren\\c{c}o (2019) obtained interior feasible solutions for all problems, while Pena (2017) ended up running out of time for 99 instances.}\nThis is because Pena (2017) needs to call its basic procedure to find a solution in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{++}$. Comparing our method with Louren\\c{c}o (2019), we see that our method is superior in terms of CPU time. This is probably because it employs a more efficient scaling at each iteration, which will be described in detail in section \\ref{sec: comparisons}. 
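The SVD-based computation of the projection $\\mathcal{P}_\\mathcal{A} = I - A^\\top (AA^\\top)^{-1} A$ described at the beginning of this section can be checked numerically; the following is a minimal numpy sketch of ours (not the authors' MATLAB code), assuming $A$ has full row rank:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 12
A = rng.standard_normal((m, d))          # assumed to have full row rank

# direct formula: P_A = I - A^T (A A^T)^{-1} A
P_direct = np.eye(d) - A.T @ np.linalg.solve(A @ A.T, A)

# SVD-based formula: P_A(x) = x - V_m V_m^T x, where V_m holds the first
# m right singular vectors of A
Vt = np.linalg.svd(A, full_matrices=True)[2]
Vm = Vt[:m].T

x = rng.standard_normal(d)
proj = x - Vm @ (Vm.T @ x)

assert np.allclose(P_direct @ x, proj)   # both formulas agree
assert np.allclose(A @ proj, 0)          # the result lies in ker(A)
```

The SVD route avoids explicitly forming and inverting $AA^\\top$.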
\n\n\\modifySecond{\nFinally, we compare the results for each value of $\\mu$. As $\\mu$ becomes smaller, i.e., as the problem becomes more ill-conditioned, the number of scaling times (M-iter) and the execution time increase, and the accuracy of the obtained solution ($\\| \\mathcal{A} (X^*) \\|_2$) gets worse.\n}\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Results for ill-conditioned strongly feasible instances (Correct input (CO-) ratio and CPU time)}}\n\\label{strongly-feasible-MVN-SP1}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{ll|rr|rr|rr} \\toprule\n\t\t\t&\t&\\multicolumn{2}{c|}{Algorithm \\ref{main algorithm}} & \\multicolumn{2}{c|}{Louren\\c{c}o (2019)} & \\multicolumn{2}{c}{Pena (2017)} \\\\\nInstance\t\t&BP\t&CO-ratio&time(s)&CO-ratio&time(s)&CO-ratio&time(s)\\\\ \\midrule\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-50}\t\t&MVN\t&25\/25\t&7.81\t\t&25\/25\t&25.94\t&25\/25\t&3.60 \\\\\n\t\t\t\t\t\t\t&SP\t\t&25\/25\t&0.75\t\t&25\/25\t&10.12\t&25\/25\t&0.80 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-100}\t&MVN\t&25\/25\t&51.62\t&25\/25\t&448.05\t&1\/1\t\t&(4513.59) \\\\\n\t\t\t\t\t\t\t&SP\t\t&25\/25\t&32.11\t&25\/25\t&256.24\t&25\/25\t&123.65 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-150}\t&MVN\t&25\/25\t&99.39\t&25\/25\t&888.25\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&25\/25\t&76.98\t&25\/25\t&520.73\t&25\/25\t&781.88 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-200}\t&MVN\t&25\/25\t&144.48\t&25\/25\t&1328.68\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&25\/25\t&118.06\t&25\/25\t&789.29\t&25\/25\t&1874.20 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-250}\t&MVN\t&25\/25\t&188.11\t&25\/25\t&1827.24\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&25\/25\t&162.67\t&25\/25\t&1074.07\t&25\/25\t&3308.35 \\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Results for ill-conditioned strongly feasible instances (M-iter and 
$\\|\\mathcal{A}(X^*)\\|_2$)}}\n\\label{strongly-feasible-MVN-SP2}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{ll|rr|rr|rr} \\toprule\n\t\t\t&\t&\\multicolumn{2}{c|}{Algorithm \\ref{main algorithm}} & \\multicolumn{2}{c|}{Louren\\c{c}o (2019)} & \\multicolumn{2}{c}{Pena (2017)} \\\\\nInstance\t\t&BP\t&M-iter&$\\| \\mathcal{A} (X^*)\\|_2$&M-iter&$\\| \\mathcal{A} (X^*)\\|_2$&M-iter&$\\| \\mathcal{A} (X^*)\\|_2$\\\\ \\midrule\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-50}\t\t&MVN\t&3.28\t\t&1.24e-11\t&14.48\t&7.64e-12\t&1.00\t&1.27e-11 \\\\\n\t\t\t\t\t\t\t&SP\t\t&1.00\t\t&1.23e-11\t&14.08\t&8.22e-12\t&1.00\t&1.23e-11 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-100}\t&MVN\t&53.12\t&9.98e-12\t&329.04\t&1.26e-11\t&(2.00)\t&(1.07e-8) \\\\\n\t\t\t\t\t\t\t&SP\t\t&36.04\t&4.18e-11\t&365.76\t&1.10e-11\t&23.32\t&5.38e-9 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-150}\t&MVN\t&118.12\t&1.96e-10\t&728.68\t&4.29e-10\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&91.96\t&2.21e-10\t&756.36\t&5.60e-10\t&117.32\t&6.31e-9 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-200}\t&MVN\t&185.40\t&1.51e-8\t&1145.40\t&4.76e-8\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&151.44\t&1.09e-8\t&1150.20\t&3.81e-8\t\t&236.44\t&1.72e-8 \\\\ \\hline\n\\multirow{2}{*}{$\\mu \\simeq $ 1e-250}\t&MVN\t&251.44\t&9.51e-7\t&1601.20\t&2.58e-6\t&-\t&- \\\\\n\t\t\t\t\t\t\t&SP\t\t&215.12\t&1.72e-6\t&1564.80\t&3.35e-6\t\t&376.24\t&1.73e-6 \\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\\modifySecond{\nTable \\ref{strongly-feasible-Mosek} summarizes the results of our experiments using Mosek to solve strongly feasible ill-conditioned instances. Mosek sometimes returned the error message `'rescode = 10006'' for the $\\mu \\leq 1e-200$ instances. This error message means that \"the optimizer is terminated due to slow progress.\" In this case, the obtained solution is not guaranteed to be optimal, but it may have sufficient accuracy as a feasible solution. 
Therefore, we counted an output as correct when the residual $\\|\\mathcal{A}(X^*)\\|_2$ was less than or equal to 1e-5. We set the threshold to 1e-5 because the maximum value of $\\|\\mathcal{A}(X^*)\\|_2$ among the $X^*$ values obtained for the strongly feasible ill-conditioned instances by the three methods, Algorithm \\ref{main algorithm}, Louren\\c{c}o (2019) and Pena (2017), was less than 1e-5. On the other hand, for the $\\mu \\leq \\mbox{1e-200}$ instances, the Chubanov methods had higher CO-ratios. That is, when the problem was quite ill-conditioned, the solution obtained by each of the Chubanov methods had a smaller value of $\\| \\mathcal{A}(X^*)\\|_2$ than the solution obtained by Mosek, which implies that the accuracy of the solution obtained by each of the Chubanov methods was higher than that of Mosek.\n}\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Results for ill-conditioned strongly feasible instances with Mosek}}\n\\label{strongly-feasible-Mosek}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{l|rrr} \\toprule\nInstance\t\t\t&CO-ratio\t&time(s)\t&$\\| \\mathcal{A} (X^*)\\|_2$\t\\\\ \\midrule\n$\\mu \\simeq $ 1e-50\t&25\/25 \t&1.96\t\t&8.73e-13 \\\\\n$\\mu \\simeq $ 1e-100\t&25\/25 \t&3.18\t\t&1.87e-12 \\\\\n$\\mu \\simeq $ 1e-150\t&25\/25 \t&3.72\t\t&2.48e-10 \\\\\n$\\mu \\simeq $ 1e-200\t&21\/25 \t&6.56 \t\t&2.58e-7 \\\\\n$\\mu \\simeq $ 1e-250\t&1\/25 \t&6.88 \t\t&2.57e-7 \\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\\modifySecond{Table \\ref{infeasible} summarizes} the results for infeasible instances.\nSimilarly to Table \\ref{strongly-feasible-MVN-SP1}, the ``CO-ratio'' and ``time(s)'' columns respectively show the ratio of correct outputs and the average CPU time of each method (the values in parentheses in rows $\\alpha = \\mbox{1e-4}$ and $\\alpha = \\mbox{1e-5}$ are the average CPU times of each method excluding the instances for which the method ended up running out of time). 
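Before turning to the results, a quick numerical sanity check (ours, not the experimental code) of the construction in section \\ref{sec: infeasible instances}: since the first row of $A$ is ${\\mbox{vec}(B_+)}^T$ with $B_+ \\in \\mathbb{S}^n_{++}$, every nonzero $X \\in \\mathbb{S}^n_+$ satisfies $\\langle B_+ , X \\rangle \\geq \\lambda_{\\min}(B_+)\\, {\\rm tr}(X) > 0$, so $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_+ = \\{ O \\}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 6, 1e-3

# B_+ = Q E_+ Q^T with E_+ = rand(1)*alpha*I + P_{S+}(E): positive definite
B = rng.random((n, n))
w, Q = np.linalg.eigh((B + B.T) / 2)
E_plus = rng.random() * alpha * np.eye(n) + np.diag(np.maximum(w, 0.0))
B_plus = Q @ E_plus @ Q.T
assert np.linalg.eigvalsh(B_plus).min() > 0   # B_+ is positive definite

# any nonzero PSD matrix X has a strictly positive inner product with B_+
M = rng.standard_normal((n, n))
X = M @ M.T                                   # random nonzero PSD matrix
assert np.trace(B_plus @ X) > 0               # hence A vec(X) != 0
```

The smaller $\\alpha$ is, the closer $B_+$ can come to being singular, which is what makes detecting infeasibility harder for small $\\alpha$.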
\n\n\\modifySecond{When using MVN as the basic procedure,} our method and Louren\\c{c}o (2019) found an element of $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_+$ for all instances, whereas Pena (2017) ended up running out of time for one instance each for $\\alpha=\\mbox{1e-4}$ and $\\alpha=\\mbox{1e-5}$. \n\n\\modifySecond{\nFrom the results for infeasible instances, we can observe the following three points. First, our method obtained correct outputs for every instance in a short execution time. This is likely because it employed an efficient scaling and found an element of $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_+$. Second, the method of Pena (2017) obtained better results when SP was used as the basic procedure. As shown in Table \\ref{infeasible}, the method of Pena (2017) using SP as the basic procedure solved all problems and had shorter execution times than the method using MVN. Since Pena's (2017) method calls the basic procedure not only to find points in $\\mbox{Ker} \\mathcal{A} \\cap \\mathbb{S}^n_{++}$ but also to find points in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{++}$, using SP, which updates the basic procedure more efficiently, is better than using MVN in terms of execution time. Third, it is not always possible to detect infeasibility (i.e., to find a point in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{+}$) in a shorter execution time when using SP than when using MVN. In fact, for Louren\\c{c}o (2019), the execution time was shorter when using MVN as the basic procedure than when using SP. SP is a more efficient update method than MVN in terms of satisfying the termination criterion (the criterion for moving to scaling) of the basic procedure. On the other hand, from the point of view of finding points in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{+}$, it is not possible to determine whether SP or MVN is more suitable. 
Pena (2017) used SP to significantly reduce the execution time, which is the result of updating the basic procedure for finding points in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{++}$ more efficiently than MVN.\n} \n\\modifySecond{\nMosek obtained a point in $\\mbox{range} \\mathcal{A}^* \\cap \\mathbb{S}^n_{++}$ as a feasible solution to the dual problem for all instances. From the viewpoint of execution time, Mosek was superior to the other methods.\n}\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Results for infeasible instances}}\n\\label{infeasible}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{ll|rr|rr|rr|rr} \\toprule\n\t\t\t&\t&\\multicolumn{2}{c|}{Algorithm \\ref{main algorithm}} & \\multicolumn{2}{c|}{Louren\\c{c}o (2019)} &\\multicolumn{2}{c|}{Pena (2017)} &\\multicolumn{2}{c}{Mosek} \\\\\nInstance\t\t&BP\t&CO-ratio&time(s)&CO-ratio&time(s)&CO-ratio&time(s)\t&CO-ratio&time(s)\\\\ \\midrule\n\\multirow{2}{*}{$\\alpha= $ 1e-1}\t&MVN\t&25\/25\t&1.23\t&25\/25\t&2.37\t\t&25\/25\t&0.79\t&\\multirow{2}{*}{25\/25}\t&\\multirow{2}{*}{1.22} \\\\\n\t\t\t\t\t\t&SP\t\t&25\/25\t&1.01\t&25\/25\t&21.46\t&25\/25\t&0.61 \t\\\\ \\hline\n\\multirow{2}{*}{$\\alpha= $ 1e-2}\t&MVN\t&25\/25\t&4.39\t&25\/25\t&37.93\t&25\/25\t&25.99 &\\multirow{2}{*}{25\/25}\t&\\multirow{2}{*}{1.25}\\\\\n\t\t\t\t\t\t&SP\t\t&25\/25\t&3.87\t&25\/25\t&62.92\t&25\/25\t&1.05 \\\\ \\hline\n\\multirow{2}{*}{$\\alpha= $ 1e-3}\t&MVN\t&25\/25\t&5.38\t&25\/25\t&61.61\t&25\/25\t&61.55 &\\multirow{2}{*}{25\/25}\t&\\multirow{2}{*}{1.25}\\\\\n\t\t\t\t\t\t&SP\t\t&25\/25\t&5.34\t&25\/25\t&84.08\t&25\/25\t&2.08 \\\\ \\hline\n\\multirow{2}{*}{$\\alpha= $ 1e-4}\t&MVN\t&25\/25\t&7.81\t&25\/25\t&88.32\t&24\/24\t&(20.80) &\\multirow{2}{*}{25\/25}&\\multirow{2}{*}{1.24}\\\\\n\t\t\t\t\t\t&SP\t\t&25\/25\t&7.40\t&25\/25\t&98.79\t&25\/25\t&33.48 \\\\ \\hline\n\\multirow{2}{*}{$\\alpha= $ 1e-5}\t&MVN\t&25\/25\t&9.08\t&25\/25\t&76.17\t&24\/24\t&(9.47) 
&\\multirow{2}{*}{25\/25}\t&\\multirow{2}{*}{1.24}\\\\\n\t\t\t\t\t\t&SP\t\t&25\/25\t&8.00\t&25\/25\t&91.88\t&25\/25\t&55.42 \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\nFor the weakly feasible instances, we compared our method (Algorithm \\ref{main algorithm}), a modified version with an alternative criterion for $\\varepsilon$-feasibility (Algorithm \\ref{main algorithm 2}), Louren\\c{c}o (2019), and Pena (2017). The results are summarized in Table \\ref{weakly-feasible-output}.\n\nAs described in section \\ref{sec: outline of numerical implementation}, we classified the output-results into type A: an interior feasible solution is found; type B: no interior feasible solution is found (ver.1); type C: no $\\varepsilon$-feasible solution is found (only for Louren\\c{c}o (2019) and our methods); type D: no interior feasible solution is found (ver.2; only for Pena (2017)); type E: out-of-time. \n\nNote that $\\mbox{B}^*$ indicates that the output was B, but when we converted the obtained solution to a solution of ${\\rm D} (\\mathcal{A})$, it contained a negative eigenvalue and violated the SDP constraint.\n\nFrom Table \\ref{weakly-feasible-output}, we can observe the following:\n\\begin{itemize}\n\\item \\modifySecond{\nFor all the methods, the average execution time was shorter when SP was used as the basic procedure than when MVN was used.\n}\n\\item \\modifySecond{\nAll methods except Algorithm \\ref{main algorithm 2} \n} sometimes obtained output type A (an interior feasible solution is found), \\modifySecond{ and Pena (2017) returned output-result D}, even though the obtained solution had $0 \\sim 5$ negative eigenvalues (about -1e-16) and more than 20 positive eigenvalues (less than 1e-12) when we converted it into a solution of ${\\rm P} (\\mathcal{A})$.\n\\item Louren\\c{c}o (2019) obtained output type $\\mbox{B}^*$ (no interior feasible solution is found) but when we converted the obtained solution into a solution of ${\\rm D} (\\mathcal{A})$, it 
contained $1 \\sim 3$ negative eigenvalues (about -1e-6) and thus violated the SDP constraint.\n\\item Our modified method (Algorithm \\ref{main algorithm 2}) was able to correctly determine whether an $\\varepsilon$-feasible solution exists for all instances. This implies that, \\modifyThird{at least for this specific set of weakly feasible instances,} the criterion used in Algorithm \\ref{main algorithm 2}, which focuses on the sum of the eigenvalues, is more suitable than the criterion focusing on the product of all the eigenvalues.\n\\end{itemize}\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Output types for weakly feasible instances}}\n\\label{weakly-feasible-output}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{cc|ccccc|r}\\toprule\nMethod\t\t\t\t\t\t\t\t\t&BP\t\t&$\\nu=0.1$&$\\nu=0.3$&$\\nu=0.5$&$\\nu=0.7$&$\\nu=0.9$\t&time(s)\\\\ \\midrule\n\\multirow{2}{*}{Algorithm \\ref{main algorithm}}\t&MVN\t&AAAAA\t&AAAAA\t&AAAAA &AAAAA &BBBBB\t&414.42\\\\\n\t\t\t\t\t\t\t\t\t\t&SP\t\t&AAAAA\t&AAAAA\t&AAAAA &AAAAA &ABABB\t&226.25 \\\\ \n\\multirow{2}{*}{Algorithm \\ref{main algorithm 2}} &MVN\t&CCCCC\t&CCCCC\t&CCCCC &CCCCC &CCCCC\t&301.97\\\\\n\t\t\t\t\t\t\t\t\t\t&SP\t\t&CCCCC\t&CCCCC\t&CCCCC &CCCCC &CCCCC\t&179.72\\\\ \n\\multirow{2}{*}{Louren\\c{c}o (2019)}\t\t\t&MVN\t&AAAAA & B$\\mbox{B}^*$$\\mbox{B}^*$$\\mbox{B}^*$B & ABAAA & ABA$\\mbox{B}^*$$\\mbox{B}^*$ &BBBBB\t&3512.78\\\\\n\t\t\t\t\t\t\t\t\t\t&SP\t\t&AAAAA &AAAAA\t&AAAAA\t&AAAAA\t&BBBBB\t&1550.76 \\\\ \n\\multirow{2}{*}{Pena (2017)}\t\t\t\t\t&MVN\t&EEEEE\t&EEEEE\t&EEEEE\t&EEEEE\t&EEEEE\t&\\\\ \n\t\t\t\t\t\t\t\t\t\t&SP\t\t&AAAAA\t&DAAAD\t&AAAAA\t&AAAAA\t&DDDDD\t&3239.12 \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\\modifySecond{\nTable \\ref{weakly-feasible-output-Mosek} summarizes the results obtained by Mosek.\nThe error message ``rescode = 
10006'' was obtained for 22 instances, similar to the results for the strongly feasible ill-conditioned instances.\nNote that we assumed that feasible solutions were obtained for all instances since the constraint residual $\\|\\mathcal{A}(X^*)\\|_2$ was as small as 1.1e-7 or less for all obtained solutions.\nThere were three instances for which we obtained a feasible solution with a minimum eigenvalue larger than 1e-12 (These three instances are all included in the instance set of $\\nu = 0.9$).\nWe set the CO-ratio to $22\/25$ considering that it is difficult to determine whether such a solution satisfies $X \\in \\mathbb{S}^n_{++}$ or $X \\in \\mathbb{S}^n_+$.\n}\n\n\\modifySecond{\nNote that for all problems with $0.1 \\leq \\nu \\leq 0.7$, Algorithm \\ref{main algorithm} and Louren\\c{c}o (2019) using SP for the basic procedure returned output A, i.e., a feasible solution to the original problem. Table \\ref{weakly feasible compare} summarizes the accuracies of the solutions obtained with Algorithm \\ref{main algorithm}, Louren\\c{c}o (2019), and Mosek for all instances with $0.1 \\leq \\nu \\leq 0.7$. 
\nChubanov's methods sometimes returned incorrect output (output-result type A) for weakly feasible instances, but Table \\ref{weakly feasible compare} shows that the average accuracy of feasible solutions obtained by Chubanov's methods was better than that of Mosek.\n}\n\n\\begin{table}[H]\n\\caption{\\modifySecond{Output types for weakly feasible instances with Mosek}}\n\\label{weakly-feasible-output-Mosek}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{l|rrr} \\toprule\nInstance\t\t\t&CO-ratio\t&time(s)\t\t&$\\| \\mathcal{A} (X^*)\\|_2$\t\\\\ \\midrule\nweakly feasible\t&22\/25 \t\t&4.86 \t\t&1.28e-9 \\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\n\\begin{table}[H]\n\\caption{\\modifySecond{\nAverage of the constraint residuals $\\|\\mathcal{A}(X^*)\\|_2$ of the solution $X^*$ obtained for the weakly feasible instances\n}}\n\\label{weakly feasible compare}\n\\begin{center}\n\\modifySecond{\n\\begin{tabular}{l|ccc} \\toprule\nValue of $\\nu$\t&Algorithm \\ref{main algorithm}\t&Louren\\c{c}o (2019)\t&Mosek \\\\ \\midrule\n$\\nu=0.1$ &1.28e-13\t&5.51e-14\t&1.45e-12\t\\\\\n$\\nu=0.3$ &1.56e-13\t&7.04e-14\t&2.53e-10\t\\\\\n$\\nu=0.5$ &1.40e-13\t&1.05e-13\t&1.29e-9\t\\\\\n$\\nu=0.7$ &3.44e-13\t&1.09e-13\t&3.75e-9\t\\\\\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\nThe detailed numerical results are in Appendix \\ref{sec: detailed numerical results}.\n\n\\section{More comparisons of the basic procedures}\n\\label{sec: comparisons}\n\nIn section \\ref{sec: complexity of MA vs Lourenco}, we showed that the bound of the computational cost of our method is lower than that of Louren\\c{c}o et al. 
when $\\mathcal{K}$ is the $n$-dimensional nonnegative orthant $\\mathbb{R}^n_+$ or a Cartesian product of simple second-order cones, and that their bounds on their costs are equivalent when $\\mathcal{K}$ is a simple positive semidefinite cone under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are the same for an $n \\times n$ symmetric matrix. In this section, we make more detailed comparisons of these algorithms in terms of \\modifyThird{the performance of the cut obtained from the basic procedure} and the detectability of an $\\varepsilon$-feasible solution. Similarly to section \\ref{sec: numerical experiments}, we will refer to Louren\\c{c}o et al.'s method \\cite{Bruno2019} as Louren\\c{c}o (2019) throughout this section.\n\n\\subsection{\\modifyThird{Performance comparison} of the two basic procedures for the simple case}\n\nHere, for the sake of simplicity, we will focus on the case where the symmetric cone is simple, i.e., $p=1$. Let $\\mathbb{E}$ be the Euclidean space corresponding to the symmetric cone $\\mathcal{K}$. For any given $w, v \\in \\mathbb{E}$, Louren\\c{c}o et al. \\cite{Bruno2019} defined $\\mbox{\\rm vol} (w,v)$ as the volume of the intersection $H(w,v) \\cap \\mathcal{K}$, where $H(w,v)$ is the half space given by\n\\begin{equation}\nH(w,v) = \\{ x \\in \\mathbb{E} \\ \\ | \\ \\ \\langle w , x \\rangle \\leq \\langle w,v \\rangle \\}. \\notag\n\\end{equation}\n\n\\modifyThird{\nIn this section, we first identify the half-space $H(w,v)$ that will be transferred to the half-space $H(e, e\/r)$ after scaling and then find the constant $\\mbox{\\textbf{rate}} \\in \\mathbb{R}$ that satisfies $\\mbox{vol} (w,v) \\leq \\mbox{\\textbf{rate}} \\times \\mbox{vol}(e,e\/r)$, so that we can compare the proposed method and Louren\\c{c}o (2019). The proposed method and Louren\\c{c}o (2019) use the basic procedure results to narrow down the original problem's feasible region. 
It can be interpreted that the algorithm becomes more efficient as the constant $\\mbox{\\textbf{rate}} \\in \\mathbb{R}$ (indicating how much $\\mbox{vol}(w,v)$ is reduced compared with $\\mbox{vol}(e,e\/r)$) gets smaller. In what follows, we call the constant $\\mbox{\\textbf{rate}} \\in \\mathbb{R}$ the reduction rate.\n}\n\n\\modifyThird{\nSection \\ref{sec:reduction 1} derives the reduction rate of the proposed method and section \\ref{sec:reduction 2} that of Louren\\c{c}o (2019). The results in these sections are summarized in Table \\ref{table:reduction}, where the ``UB\\#iterations'' column shows the upper bound on the number of iterations required in the basic procedure. The ``UB\\#iterations'' of Louren\\c{c}o (2019) comes from Proposition 14 of \\cite{Bruno2019} (where the authors showed their result by substituting $\\rho = 2$), whereas that of Algorithm \\ref{basic procedure} comes from Proposition \\ref{prop:bp2} with $\\ell = 1$. The ``Reduction rate'' of Louren\\c{c}o (2019) comes from Theorem \\ref{theo:reduction 2}, whereas that of Algorithm \\ref{basic procedure} comes from (\\ref{eq:reduction 1}) with $(w,v) = (Q_{g^{-1}}(e) , Q_g (e) \/r ) $.\n}\n\n\\modifyThird{\n\\begin{table}[H]\n\\caption{Comparison of reduction rates of the two algorithms: Theoretical results}\n\\label{table:reduction}\n\\begin{center}\n\\begin{tabular}{c|c|c} \\hline\n Basic procedure & UB\\#iterations & Reduction rate \\\\ \\hline\nLouren\\c{c}o (2019) & $\\rho^2 r_{\\max}^2$ & $ \\mbox{\\rm vol} (w,v) = \\left( \\frac{r^r}{\\det w} \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e,e\/r) \\leq \\left( e^{- \\varphi (\\rho)} \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e,e\/r) $\\\\ \nAlgorithm \\ref{basic procedure} & $\\cfrac{r_{\\max}^2}{\\xi^2}$ & $ \\mbox{\\rm vol} (w, v) = \\left( \\xi^N \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e,e\/r) $ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n}\n\n\\modifyThird{\nBy setting $\\rho=2$ and $\\xi = \\frac{1}{2}$, the two 
bounds in ``UB\\#iterations'' have the same value; in this case, the reduction rates turn out to be\n\\begin{align*}\n\\mbox{Louren\\c{c}o (2019):} \\ \\left( \\frac{r^r}{\\det w} \\right)^{\\frac{d}{r}} \\leq \\left( e^{- \\varphi (2)} \\right)^{\\frac{d}{r}} \\simeq (0.918)^{\\frac{d}{r}}, \\hspace{1cm}\n\\mbox{Algorithm \\ref{basic procedure}: } \\left( \\xi^N \\right) ^{\\frac{d}{r}} \\leq \\left( \\frac{1}{2} \\right)^{\\frac{d}{r}}.\n\\end{align*}\nThe above comparison indicates that Algorithm \\ref{basic procedure} is superior to the basic procedure in \\cite{Bruno2019} in terms of the reduction rate of the feasible region.\n}\n\n\\subsubsection{\\modifyThird{Theoretical reduction rate} of Algorithm \\ref{basic procedure}}\n\\label{sec:reduction 1}\n\nSuppose that Algorithm \\ref{basic procedure} returns a result such that there exists a nonempty index set $I \\subseteq \\{1 ,\\dots , r\\}$ with $|I| = N $ for which \n\\begin{equation}\n\\langle c_i , x \\rangle \\leq \n\\begin{cases}\n\\xi & i \\in I \\\\\n1 & i \\notin I\n\\end{cases} \\label{eq:I}\n\\end{equation}\nholds for any feasible solution $x$ of ${\\rm P}_{S_\\infty(A)}$, where $\\{c_1 , \\dots , c_r\\}$ are primitive idempotents that make up a Jordan frame.\n\nNote that Algorithm \\ref{basic procedure} employs the scaling $\\bar{x} = Q_{g^{-1}} (x)$ with $g^{-1} = \\frac{1}{\\sqrt{\\xi}} \\sum_{i \\in I} c_i + \\sum_{i \\notin I} c_i$.\nLet us find $w , v \\in \\mathbb{E}$ which satisfy\n\\begin{equation}\nH(e , e\/r) = Q_{g^{-1}} \\left( H(w,v) \\right). 
\\label{appendix-C-1}\n\\end{equation}\nSince (\\ref{appendix-C-1}) and the scaling $\\bar{x} = Q_{g^{-1}} (x)$ imply that\n\\begin{align*}\nH(w,v) &= Q_g \\left( H(e , e\/r) \\right) \\\\\n&= \\{ Q_g (\\bar{x}) \\in \\mathbb{E} \\ \\ | \\ \\ \\langle \\bar{x} , e \\rangle \\leq 1 \\} \\\\\n&= \\{ Q_g (\\bar{x}) \\in \\mathbb{E} \\ \\ | \\ \\ \\langle Q_g (\\bar{x}) , Q_{g^{-1}}(e) \\rangle \\leq 1 \\} \\\\\n&= \\{ x \\in \\mathbb{E} \\ \\ | \\ \\ \\langle x , Q_{g^{-1}}(e) \\rangle \\leq \\langle Q_{g^{-1}}(e) , Q_g (e) \/r \\rangle = 1\\} ,\n\\end{align*}\nby setting $w = Q_{g^{-1}}(e)$ and $v = Q_g (e) \/r$, we find that the half space $H(w,v)$ is transformed to $H(e,e\/r)$ after the scaling.\nSince $Q_{g^{-1}}(e) \\in \\mbox{int} \\mathcal{K}$, we can apply the following proposition to $w = Q_{g^{-1}}(e)$.\n\n\\begin{proposition}[Proposition 6 of~\\cite{Bruno2019}]\nSuppose that $w \\in \\mbox{\\rm int} \\mathcal{K}$. Then, \n\\begin{align*}\nQ_{w^{-1\/2} \\sqrt{\\langle w , v \\rangle}} \\left( H(e , e\/r )\\right) &= H(w,v) , \\\\\n\\mbox{\\rm vol} (w,v) &= \\left( \\frac{\\langle w , v \\rangle}{\\sqrt[r]{\\det w}} \\right)^d \\mbox{\\rm vol} (e,e\/r) .\n\\end{align*}\n\\end{proposition}\n\nUsing the above proposition and the assumption $|I| = N$ for the set $I$ in (\\ref{eq:I}), we can see how the volume $\\mbox{\\rm vol} (Q_{g^{-1}}(e) , Q_g (e) \/r)$ of $H (Q_{g^{-1}}(e) , Q_g (e) \/r) \\cap \\mathcal{K}$ decreases compared with $\\mbox{\\rm vol} (e , e\/r)$:\n\\begin{align}\n\\notag\n\\mbox{\\rm vol} (Q_{g^{-1}}(e) , Q_g (e) \/r ) &= \\left( \\frac{1}{\\sqrt[r]{\\det Q_{g^{-1}}(e) }} \\right)^d \\mbox{\\rm vol} (e,e\/r) \\\\\n\\notag\n&= \\left( \\frac{1}{\\sqrt[r]{ \\frac{1}{\\xi^N} }} \\right)^d \\mbox{\\rm vol} (e,e\/r) \\\\\n\\label{eq:reduction 1}\n&= \\left( \\xi^N \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e,e\/r) .\n\\end{align}\n\n\\subsubsection{\\modifyThird{Theoretical reduction rate} of the basic procedure of Louren\\c{c}o (2019) 
}\n\\label{sec:reduction 2}\n\nThe following theorem gives the reduction rate of the basic procedure of Louren\\c{c}o (2019).\n\n\\begin{theorem}[Theorem 10 of~\\cite{Bruno2019}]\n\\label{theo:reduction 2}\nLet $\\rho>1$ and $y \\in \\mathcal{K} \\setminus \\{ 0 \\}$ be such that $F_{{\\rm P}_{S_1}(A)} \\subseteq H(y , e\/\\rho r)$.\nLet \n\\begin{align*}\n\\beta &= r - \\left( \\frac{1}{\\rho} - \\frac{1}{\\sqrt{\\rho(3\\rho-2)}} \\right) , \\\\\nw &= \\frac{r-\\beta}{\\langle y,e \\rangle} \\rho r y + \\beta e , \\\\\nv &= w^{-1} .\n\\end{align*}\nThen, the following hold:\n\\begin{enumerate}\n\\item $F_{{\\rm P}_S(A)} \\subseteq H(y , e \/ \\rho r) \\cap H(e , e \/ r) \\subseteq H(w,v)$\n\\item $Q_{\\sqrt{r} w^{-1 \/ 2}} \\left( H(e , e\/r) \\right) = H(w,v) $\n\\item \n\\begin{equation}\n\\mbox{\\rm vol} (w,v) = \\left( \\frac{r^r}{\\det w} \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e , e\/r) \n\\leq \\left( \\exp \\left( - \\varphi(\\rho) \\right) \\right)^{\\frac{d}{r}} \\mbox{\\rm vol} (e , e\/r) \\notag\n\\end{equation}\nwhere $\\varphi(\\rho) = 2-\\frac{1}{\\rho} - \\sqrt{3-\\frac{2}{\\rho}} $.\n\nIn particular, if $\\rho \\geq 2$, we have $\\mbox{\\rm vol} (w ,v) < (0.918)^{\\frac{d}{r}} \\mbox{\\rm vol} (e , e\/r)$.\n\n\\end{enumerate}\n\n\\end{theorem}\n\n\n\\subsubsection{Comparison of reduction rates of the two algorithms in numerical experiments}\n\nTo confirm whether similar reduction rates are observed numerically, we conducted an experiment where we used our method (Algorithms \\ref{bp-alg-mvn} and \\ref{main algorithm 2}) with $\\xi = 1\/2$ and Louren\\c{c}o (2019) with modified von Neumann scheme to solve a weakly feasible instance with $\\nu=0.1$. At each iteration of the main algorithms, we recorded the value of $\\frac{r^r}{\\det w} $ of Louren\\c{c}o (2019) and the value of $\\xi^N$ of our method and computed the reduction rates of the search region. 
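Concretely, the per-call reduction factors recorded in this experiment can be sketched as follows. This is a minimal illustration: the per-iteration cut counts fed in are made up, while $\varphi(\rho)$, the factors $r^r/\det w$ and $\xi^N$, and the cumulative product follow the formulas above.

```python
import math

XI = 0.5  # xi = 1/2, as used in the experiment

def phi(rho):
    # phi(rho) = 2 - 1/rho - sqrt(3 - 2/rho), as in the theorem above
    return 2.0 - 1.0 / rho - math.sqrt(3.0 - 2.0 / rho)

def lourenco_rate(r, det_w):
    # per-call reduction factor r^r / det(w), bounded above by exp(-phi(rho))
    return r ** r / det_w

def ours_rate(N, xi=XI):
    # per-call reduction factor xi^N, where N is the number of cuts found
    return xi ** N

# theoretical per-call bound exp(-phi(2)) ~ 0.918, versus xi = 1/2 for ours
print(round(math.exp(-phi(2.0)), 3))  # 0.918

# made-up per-iteration cut counts; the final rate is the running product
final_rate = math.prod(ours_rate(N) for N in [1, 2, 1])  # = xi^(1+2+1)
print(final_rate)  # 0.0625
```

The "Final reduction rate" reported in the table is exactly such a running product taken over all main-algorithm iterations.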
The results are summarized in Table \\ref{table:reduction2}.\n\n\\begin{table}[H]\n\\caption{\\modifyFirst{Comparison of reduction rates of the two algorithms: Numerical results}}\n\\label{table:reduction2}\n\\begin{center}\n\\begin{tabular}{c|c|c|c|c} \\hline\nAlgorithm & \\#iterations of M-A & Output & Average reduction rate & Final reduction rate \\\\ \\hline\nLouren\\c{c}o (2019) : BP = MVN & 3060 & A &0.864 &3.86e-195 \\\\ \nAlgorithms \\ref{bp-alg-mvn} and \\ref{main algorithm 2} &618 & C & 0.357 & 9.11e-305 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nThe ``\\#iterations of M-A'' column shows the number of iterations of the main algorithm.\nThe ``Average reduction rate'' column shows the average value of $\\frac{r^r}{\\det w} $ for Louren\\c{c}o (2019) and the average value of $\\xi^N$ for our method (Algorithms \\ref{bp-alg-mvn} and \\ref{main algorithm 2}). The ``Final reduction rate'' column shows the value\n\\begin{equation}\n\\frac{r^{kr}}{\\det w(1) \\times \\det w(2) \\times \\dots \\times \\det w(k)} \\notag\n\\end{equation}\nfor Louren\\c{c}o (2019), where $w(k)$ denotes $w$ computed from the result of the basic procedure at the $k$-th iteration of the main algorithm, or the value \n\\begin{equation}\n\\xi^{N_1 + \\dots + N_k} \\notag\n\\end{equation}\nfor our method (Algorithms \\ref{bp-alg-mvn} and \\ref{main algorithm 2}), where $N_k$ denotes the number of cuts obtained from the basic procedure at the $k$-th iteration of the main algorithm.\n\nHere, we observed that our method (Algorithms \\ref{bp-alg-mvn} and \\ref{main algorithm 2}) terminated at the 618-th iteration of the main algorithm with a reduction rate of 9.11e-305, while Louren\\c{c}o (2019) attained a reduction rate of 5.88e-40 at the same iteration of the main algorithm.\n\n\n\\subsection{Detection of an $\\varepsilon$-feasible solution}\n\nHere, we discuss the capabilities of our method and Louren\\c{c}o (2019) at detecting an $\\varepsilon$-feasible solution. 
Both methods terminate their main algorithms by detecting the existence of an $\\varepsilon$-feasible solution. We compared them by computing the reduction in $\\log \\left( \\lambda_{\\min} (x_\\ell) \\right)$ per iteration for parameter settings in which the maximum numbers of iterations of the basic procedures would be the same (i.e., $\\rho=2$ in Louren\\c{c}o (2019) and $\\xi = \\frac{1}{2}$ in our method).\n\nIn \\cite{Bruno2019}, for each block $\\ell$, Lemma 16 ensures that $\\log \\left( \\lambda_{\\min} (x_\\ell) \\right)$ is bounded from above by $\\epsilon_\\ell$, and Theorem 17 ensures that $\\epsilon_\\ell$ decreases by at least $\\frac{\\varphi(\\rho)}{r_\\ell}>0$ if a {\\it good} iteration is obtained for the block $\\ell$.\n\nFor our method, Proposition \\ref{prop:lambda-min-upper} ensures that $\\log \\left( \\lambda_{\\min} (x_\\ell) \\right)$ is bounded from above by $\\frac{\\mbox{num}_\\ell}{r_\\ell} \\log \\xi$ and Proposition \\ref{prop:iteration-num-ma} ensures that $\\frac{\\mbox{num}_\\ell}{r_\\ell} \\log \\xi$ decreases by $-\\frac{1}{r_\\ell} \\log \\xi > 0 $ in the same situation.\n\nBy substituting $\\rho=2$ and $\\xi = \\frac{1}{2}$ into $\\varphi(\\rho)$ and $- \\log \\xi$ so that the upper bounds for the numbers of iterations of the basic procedures are the same, we obtain\n\\begin{align*}\n\\varphi(2) &= 2 - \\frac{1}{2} - \\sqrt{2} \\simeq 0.085786 , \\\\\n-\\log \\frac{1}{2} &= \\log 2 \\simeq 0.693147\n\\end{align*}\nwhich implies that the rate of reduction in the upper bound $\\log \\left( \\lambda_{\\min} (x_\\ell) \\right)$ of our method is greater than that of Louren\\c{c}o (2019).\n\n\n\n\n\n\\section{Concluding remarks}\n\\label{sec: concluding remarks}\n\nIn this study, we proposed a new version of Chubanov's method for solving the feasibility problem over the symmetric cone by extending Roos's method \\cite{Roos2018} for the feasibility problem over the nonnegative orthant, and we conducted comprehensive numerical experiments on 
the problem over the positive semidefinite cone to compare the performances of our method and the existing ones \\cite{Bruno2019,Pena2017}.\n\nOur method has the following features:\n\\begin{itemize}\n\\item It considers the feasibility problem ${\\rm P}_{S_\\infty} (\\mathcal{A})$, which is equivalent to ${\\rm P} (\\mathcal{A})$, and uses a rescaling focusing on the upper bound for the sum of eigenvalues of any feasible solution of ${\\rm P}_{S_\\infty} (\\mathcal{A})$. \n\\item Using the norm $\\| \\cdot \\|_\\infty$ in problem ${\\rm P}_{S_\\infty} (\\mathcal{A})$ makes it possible to (i) calculate the upper bound for the minimum eigenvalue of any feasible solution of ${\\rm P}_{S_\\infty} (\\mathcal{A})$, (ii) quantify the feasible region of ${\\rm P} (\\mathcal{A})$, and hence (iii) determine whether there exists a feasible solution of ${\\rm P} (\\mathcal{A})$ whose minimum eigenvalue is greater than $\\epsilon$ as in \\cite{Bruno2019}.\n\\item In terms of the computational bound, our method is (i) equivalent to Roos's original method \\cite{Roos2018} and superior to Louren\\c{c}o et al.'s method \\cite{Bruno2019} when the symmetric cone is the nonnegative orthant, (ii) superior to Louren\\c{c}o et al.'s when the symmetric cone is a Cartesian product of second-order cones, (iii) equivalent to Louren\\c{c}o et al.'s when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix, and (iv) superior to Pena and Soheili's method~\\cite{Pena2017} for any simple symmetric cones under the assumption that ${\\rm P} (\\mathcal{A})$ is feasible.\n\\end{itemize}\n\nWe conducted comprehensive numerical experiments comparing our method with the methods of Chubanov and Mosek. 
We generated three types of instances: (i) strongly (but ill-conditioned) feasible instances by using Algorithm \\ref{strongly feasible problem}, (ii) weakly feasible instances by using Algorithm \\ref{weakly feasible instance}, and (iii) infeasible instances by using Algorithm \\ref{infeasible instance}. Our numerical results showed that\n\\begin{itemize}\n\\item Our method (Algorithms \\ref{main algorithm} and \\ref{main algorithm 2}) is superior to the methods proposed in \\cite{Bruno2019,Pena2017} in terms of accuracy and execution time.\n\\item It is considerably faster than the existing methods on ill-conditioned strongly feasible instances.\n\\item A modified version of our method (Algorithm \\ref{main algorithm 2}) can exactly determine whether the instance has no feasible solution whose minimum eigenvalue is greater than $\\varepsilon = \\mbox{1e-12}$ for all weakly feasible instances (i.e., having no interior feasible solution), which is in contrast to Louren\\c{c}o et al.'s method, which sometimes returns a solution that does not satisfy the conic constraint of ${\\rm P} (\\mathcal{A})$ or ${\\rm D}(\\mathcal{A})$ and is affected by the large number of iterations of its main algorithm. \n\\item \\modifySecond{\nMosek was better than Chubanov's methods in terms of execution time. On the other hand, in terms of the accuracy of the solution (the value of $\\|\\mathcal{A}(X^*)\\|_2$), all of Chubanov's methods were better than Mosek. In particular, we observed such results for the strongly feasible, severely ill-conditioned ($\\mu \\simeq 1e-250$) and weakly feasible instances.\n}\n\\end{itemize}\n\nOn the basis of the above numerical results, we further examined the number of iterations of Louren\\c{c}o et al.'s method and our method. As a result, we found that the basic procedure of our method is superior to the one of Louren\\c{c}o et al. 
in terms of both the constant rate of reduction in the volume of the detecting region and the upper bound for the minimum eigenvalue of any feasible solution.\n\nNote that Chubanov's method can find an $x \\in \\mbox{int} \\mathcal{K}$ satisfying $\\mathcal{A} (x) = \\bm{0}$, but not $x \\in \\mathcal{K}$ close to the boundary and satisfying $\\mathcal{A} (x) = \\bm{0}$, and it can determine the feasibility of ${\\rm P} (\\mathcal{A})$ in a finite number of iterations, but not a feasible solution of ${\\rm D}(\\mathcal{A})$ in such a way. \n\nOn the other hand, once we find that ${ \\rm P}_\\infty (\\mathcal{A})$ and ${ \\rm P}_{1,\\infty} (\\mathcal{A})$ have no feasible solution whose minimum eigenvalue is greater than $\\varepsilon$, the next issue is to find an $x \\in \\mathcal{K}\\setminus \\mbox{int} \\mathcal{K}$ satisfying $\\mathcal{A} (x) = \\bm{0}$, or to find the smallest dimensional symmetric cone $\\mathcal{K}_{\\min}$ satisfying $\\mbox{Ker} \\mathcal{A} \\cap \\mbox{int} \\mathcal{K}_{\\min} \\neq \\emptyset$ and $\\mathcal{K}_{\\min} \\subsetneq \\mathcal{K}$. It has been shown that the smallest dimensional symmetric cone $\\mathcal{K}_{\\min}$ can be detected by using a feasible solution of ${\\rm D}(\\mathcal{A})$~\\cite{Waki2013}, and several algorithms have been proposed to find a feasible solution of ${\\rm D}(\\mathcal{A})$ in a finite number of iterations~\\cite{Chubanov2017,Muramatsu2018}.\n\nIt remains as future work to explore whether it is possible to modify Chubanov's method so it can find $x \\in \\mathcal{K}$ close to the boundary and satisfying $\\mathcal{A} (x) = \\bm{0}$ directly or find a feasible solution of ${\\rm D}(\\mathcal{A})$ in a finite number of iterations.\n\n\n\\section*{Acknowledgments}\nWe would like to express our deep gratitude to the reviewers and editors for their many valuable comments. 
\nTheir comments significantly enriched the content of this paper, especially sections \\ref{sec: extension}, \\ref{sec: main algorithm}, \\ref{sec: compare}, and \\ref{sec: numerical experiments}.\nWe also would like to sincerely thank Daisuke Sagaki for essential ideas on the proof of Proposition \\ref{prop:compare tr}, and Yasunori Futamura for helpful information about the computational cost of the eigenvalue calculation in Section \\ref{sec: CSD and Cmin}.\nWe could not complete this paper without their support.\nThis work was supported by JSPS KAKENHI Grant Numbers (B)19H02373 and JP 21J20875.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{INTRO}\n\n\n\\noindent\nThe Dyson-Schwinger (DS) approach \n\\cite{ForExample,Alkofer:2000wg,Roberts:2000hi,Roberts:2000aa,Holl:2006ni,Fischer:2006ub} \nto QCD and its modeling is the chirally well-behaved bound-state approach. Thus, it is \nthe most suitable bound-state approach to treat the light pseudoscalar mesons \n(those composed of the $u, d$ and $s$ quarks), for which \ndynamical chiral symmetry breaking (DChSB) is essential.\nOne solves the (``gap\") DS equations (DSEs) \n${S}_q^{-1} = (S^{\\scriptstyle {\\rm free}}_{q})^{-1} - \\Sigma_q$ \nfor the dynamically dressed quark propagators \n${S}_q$, where $q$ is the quark flavor ($q=u,d,s, ...$),\n$S^{\\scriptstyle {\\rm free}}_q$ is the {free} quark propagator, and \n$\\Sigma_q$ is the quark self-energy.\nThese dressed quark propagator solutions are then employed in \nBethe-Salpeter equations (BSEs)\nfor the bound-state vertex $\\Gamma_{q{\\bar q}'}$ of\nthe meson composed of the quark of the flavor $q$ and antiquark\nof the flavor $q'$:\n\\begin{equation}\n[\\Gamma_{q{\\bar q}'}]_{ef} = \\int [S_q \\Gamma_{q{\\bar q}'} S_{q'} ]_{gh} \n[K]_{ef}^{hg} \\, \\, ,\n\\label{BSE}\n\\end{equation}\nwhere $e,f,g,h$ {\\it schematically} represent spinor, color\nand flavor indices, integration is meant over loop momenta,\nand $K$ 
is the interaction kernel. Solving Eq. (\\ref{BSE}) \nfor $\\Gamma_{q{\\bar q}'}$ also yields $M_{q\\bar q'}$, \nthe mass eigenvalue of the $q{\\bar q}'$ meson.\n\nTo obtain the chiral behavior as in QCD, DS and BS equations\nmust be solved in a consistent approximation. The rainbow-ladder \napproximation (RLA), where DChSB is well-understood, is still \nthe most usual approximation in phenomenological applications.\nThis also entails that in both DSE and BSE (\\ref{BSE}) we employ \nthe same effective interaction kernel, \n\\begin{equation}\n[K(k)]_{ef}^{hg} = {\\rm i} \\,\n g^2 D_{\\mu\\nu}^{ab}(k)_{\\mbox{\\rm\\scriptsize eff}} \\,\n[\\frac{\\lambda^a}{2}\\,\\gamma^{\\mu}]_{eg} \\,\n[\\frac{\\lambda^a}{2}\\,\\gamma^{\\nu}]_{hf} \\, ,\n\\label{RLAkernel}\n\\end{equation}\nso that the quark self energy in the gap DSE is\n\\begin{eqnarray}\n \\label{DS-equation}\n\\Sigma_q(p) = \n- \\int \\!\\!\\frac{d^4\\ell}{(2\\pi)^4} \\,\n g^2 D_{\\mu\\nu}^{ab}(k)_{\\mbox{\\rm\\scriptsize eff}} \\, \n\\frac{\\lambda^a}{2}\\,\\gamma^{\\mu}\nS_q(\\ell) \\frac{\\lambda^b}{2}\\,\\gamma^{\\nu}~. \n\\end{eqnarray}\nIn Eqs. (\\ref{RLAkernel}) and (\\ref{DS-equation}), \n$D_{\\mu\\nu}^{ab}(k)_{\\mbox{\\rm\\scriptsize eff}}$ is an effective gluon \npropagator. For example, for renormalization-group improved (RGI) \ninteractions (e.g., in Refs. \n\\cite{jain93b,Klabucar:1997zi,Kekez:2000aw,Kekez:2003ri,Kekez:2005ie}), \nit has the form \n\\begin{equation}\ng^2 D_{\\mu\\nu}^{ab}(k)_{\\mbox{\\rm\\scriptsize eff}}\n= \n4\\pi\\alpha_{\\mbox{\\rm\\scriptsize eff}}(k^2)\nD_{\\mu\\nu}^{ab}(k)_{\\scriptstyle {\\rm free}}\n\\end{equation}\nwhere $D_{\\mu\\nu}^{ab}(p)_{\\scriptstyle {\\rm free}}$ is the free gluon propagator,\nand $\\alpha_{\\mbox{\\rm\\scriptsize eff}}(k^2)$ is an {\\it effective} running \ncoupling. 
For large spacelike momenta ($k^2 \\gg 1$ GeV$^2$), \n$\\alpha_{\\mbox{\\rm\\scriptsize eff}}(k^2)$ approaches the perturbative \nQCD running coupling $\\alpha_{\\mbox{\\rm\\scriptsize s}}(k^2)$ known from the\nQCD renormalization group analysis, although it must be modeled at\nlow momenta.\n\n\nConcretely, in the present paper we recall and utilize the results obtained \n{\\it i)} in Refs. \\cite{Klabucar:1997zi,Kekez:2000aw} by using the \nRGI of Jain and Munczek \\cite{jain93b}, {\\it ii)} in Ref. \\cite{Kekez:2005ie} \nby using the RGI gluon condensate-induced interaction \\cite{Kekez:2003ri}, \nand {\\it iii)} in Refs. \\cite{Horvatic:2007wu,Horvatic:2007qs} by using \nthe separable interaction \\cite{Blaschke:2000gd}. In any case,\nsuch effective interactions must be modeled at least in the low-energy, \nnonperturbative regime in order to be phenomenologically successful --\nwhich above all means to be sufficiently strong in the low-momentum \ndomain to yield DChSB. \nIn the chiral limit (and {\\it close} to it), light pseudoscalar ($P$) \nmeson $q\\bar q$ bound states ($P=\\pi^{0,\\pm}, K^{0,\\pm}, \\eta$) \nthen {simultaneously} manifest themselves also as \n\\mbox{({\\it quasi-})}Goldstone bosons of DChSB.\nThis enables one to work with the mesons as \nexplicit $q\\bar q$ bound states, \nwhile reproducing \nthe results of the Abelian axial anomaly for the light pseudoscalars, \ni.e., the amplitudes for $P\\rightarrow\\gamma\\gamma$ and \n$\\gamma^\\star \\rightarrow P^0 P^+ P^-$.\nThis is unique among the bound state approaches -- e.g., see\nRefs. \\cite{Roberts:2000hi,Kekez:1998xr,Alkofer:1995jx,Bistrovic:1999dy} \nand references therein. 
Nevertheless, one keeps the advantage of bound-state \napproaches that from the $q\\bar q$ substructure one can calculate many \nimportant quantities (such as the pion, kaon and $s\\bar s$ pseudoscalar \ndecay constants: $f_\\pi$, $f_K$ and $f_{s\\bar s}$) which are just parameters \nin most other chiral approaches to the light-quark sector. The treatment \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2001ph,Kekez:2005ie}\nof the $\\eta$-$\\eta'$ complex is remarkable in that it is \nvery successful in spite of the limitations of RLA.\n(Very recently, during the work on the present paper,\nthe first and still simplified DS treatments of $\\eta$ and \n$\\eta'$ beyond RLA appeared \\cite{Lakhina:2007vm,Bhagwat:2007ha}. \nHowever, RLA treatments \nwill probably long retain their usefulness in applications\nwhere simple modeling is desirable, as in the computationally\ndemanding finite-temperature calculations \\cite{Horvatic:2007qs}.)\nThe RLA treatments of the $\\eta$-$\\eta'$ complex at first determined \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2001ph} the anomalous $\\eta_0$ mass \nparameter by fitting the empirical $\\eta$ and $\\eta'$ masses. More recently,\nthe treatment was improved by avoiding this fitting while retaining the \nphenomenologically successful description \\cite{Kekez:2005ie,Horvatic:2007qs}. \nNamely, the anomalous $\\eta_0$ mass was no longer a free parameter \nbut determined from the lattice results (on the Yang-Mills topological \nsusceptibility) through the Witten-Veneziano (WV) relation\n\\cite{Witten:1979vv,Veneziano:1979ec}. 
However, Shore achieved\n\\cite{Shore:2006mm,Shore:2007yn} what can be considered a\ngeneralization of the WV relation, and the purpose of the\npresent paper is to explore the use of\nthis generalization in the DS context.\n\n\nThe paper is organized as follows: in the next section, we recapitulate \nthe procedures and results of our previous treatments \n\\cite{Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs} relying on \nthe WV relation (\\ref{WittenVenez}), and present in Table \\ref{3WV+Shore} also \ntheir extension to the scheme of the four decay constants (and two \nmixing angles) of $\\eta$ and $\\eta'$. \nIn Section \\ref{ShoresRelations}, we explain the use of the\npertinent equations of Shore \\cite{Shore:2006mm,Shore:2007yn}\nin the context of the DS approach. The last section concludes \nafter giving the results of solving the pertinent equations.\n\n\n\\section{$\\eta$-$\\eta'$ mass matrix from Witten-Veneziano relation}\n\\label{massMatrixAndWVrelation}\n\n\nAll $q\\bar q'$ model masses $M_{q\\bar q'}$ ($q, q' = u,d,s$)\nused in the present paper, and corresponding $q\\bar q'$ bound-state \namplitudes, were obtained in Refs. \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Blaschke:2007ce,Horvatic:2007wu,Horvatic:2007qs}\nin RLA, i.e., \nwith an interaction kernel which \n(irrespective of how one models the dynamics) cannot possibly \ncapture the effects of the non-Abelian, gluon axial anomaly. 
\nThus, when we form the $\\eta$-$\\eta'$ mass matrix \n\\begin{equation}\n{\\hat M}^2_\\mathrm{NA} =\n\\left[ \\begin{array}{cl} M_{88}^2 & M_{80}^2\\\\\n M_{08}^2 & M_{00}^2\n \\end{array} \\right]~,\n\\label{M2NA}\n\\end{equation}\nin this case in the octet-singlet basis $\\eta_8$-$\\eta_0$\nof the (broken) flavor-SU(3) states of isospin zero, \n\\begin{equation}\n \\eta_8\n =\n \\frac{1}{\\sqrt{6}}(u\\bar{u} + d\\bar{d} -2 s\\bar{s}),\n\\quad \n \\eta_0\n =\n \\frac{1}{\\sqrt{3}}(u\\bar{u} + d\\bar{d} + s\\bar{s}),\n\\label{etasdef}\n \\end{equation}\nthis matrix (\\ref{M2NA}), consisting of our calculated $q\\bar q$ masses,\n\\begin{equation}\nM_{88}^2 \\equiv \\langle \\eta_8 | {\\hat M}^2_\\mathrm{NA} |\\eta_8 \\rangle\n = \\frac{2}{3}\\, (M_{s\\bar{s}}^2 + \\frac{1}{2}M_{u\\bar{u}}^2) \\, ,\n\\end{equation}\n\\begin{equation}\nM_{80}^2 \\equiv \\langle \\eta_8 | {\\hat M}^2_\\mathrm{NA} |\\eta_0 \\rangle\n= M_{08}^2 = \\frac{\\sqrt{2}}{3} ( M_{u\\bar{u}}^2 - M_{s\\bar{s}}^2 )\n< 0,\n\\end{equation}\n\\begin{equation}\nM_{00}^2 \\equiv \\langle \\eta_0| {\\hat M}^2_\\mathrm{NA} |\\eta_0 \\rangle\n= \\frac{2}{3}\\, (\\frac{1}{2}M_{s\\bar{s}}^2 + M_{u\\bar{u}}^2) \\, ,\n\\end{equation}\nis purely non-anomalous ($\\mathrm{NA}$), vanishing in the chiral limit. In \nthe isospin limit, to which we adhere throughout, the pion is strictly \ndecoupled from the gluon anomaly and $M_{u\\bar{u}} = M_{d\\bar{d}}$ is \nexactly our model pion mass $M_{\\pi}$.\nAlso the unphysical $s\\bar s$ quasi-Goldstone's mass $M_{s\\bar s}$ \nresults from RLA BSE and does not include the contribution from the \ngluon anomaly. 
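As a quick numerical cross-check of these matrix elements, one can verify that ${\hat M}^2_\mathrm{NA}$ is just the rotated form of $\mathrm{diag}[M_{u\bar u}^2, M_{s\bar s}^2]$, i.e., that its eigenvalues are exactly $M_{u\bar u}^2$ and $M_{s\bar s}^2$. A minimal sketch, with input masses that are merely illustrative (of the size obtained in the cited models):

```python
import numpy as np

# Illustrative quasi-Goldstone masses (MeV), of the size found in the cited DS models
M_uu, M_ss = 135.0, 700.0

# Matrix elements of the non-anomalous mass matrix in the octet-singlet basis
M88 = (2.0 / 3.0) * (M_ss**2 + 0.5 * M_uu**2)
M80 = (np.sqrt(2.0) / 3.0) * (M_uu**2 - M_ss**2)   # = M08 < 0
M00 = (2.0 / 3.0) * (0.5 * M_ss**2 + M_uu**2)

M2_NA = np.array([[M88, M80],
                  [M80, M00]])

# Diagonalization must return M_uu^2 and M_ss^2 exactly
eigvals = np.sort(np.linalg.eigvalsh(M2_NA))
```

The smaller eigenvalue is the pion mass squared; the larger one, $M_{s\bar s}^2$, is the anomaly-free mass squared of the unphysical $s\bar s$ quasi-Goldstone.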
This is consistent with the fact that due to the \nDashen-Gell-Mann-Oakes-Renner (DGMOR) relation, it is in a good \napproximation \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs} \ngiven by\n\\begin{equation}\nM_{s\\bar s}^2 = 2 M_K^2 - M_{\\pi}^2 \\, ,\n\\label{MssbarMKMpi}\n\\end{equation}\ni.e., by the kaon and pion masses protected from the anomaly\nby strangeness and\/or isospin. \n\nIn our previous DS studies \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Blaschke:2007ce,Horvatic:2007wu,Horvatic:2007qs},\nto which we refer for all model details, the phenomenology of the\nnon-anomalous sector was successfully reproduced, e.g., $f_\\pi$, \n$f_K$, as well as the empirical masses $M_{\\pi}$ and \n$M_K$ (see the upper part of Table \\ref{3WV+Shore}), \nyielding a strongly non-diagonal ${\\hat M}^2_\\mathrm{NA}$ (\\ref{M2NA}). \nIts diagonalization leads to the eigenstates known as the \nnonstrange-strange ($\\mathrm{NS}$-$\\mathrm{S}$) basis,\n\\begin{equation}\n\\eta_\\mathrm{NS} = \\frac{1}{\\sqrt{2}} (u\\bar{u} + d\\bar{d})~,\n\\qquad\n\\eta_\\mathrm{S} = s\\bar{s} \\, ,\n\\label{NS-Sbasis}\n\\end{equation}\nand to \n${\\hat M}^2_\\mathrm{NA}={\\rm diag}[M_{\\pi}^2,M_{s\\bar s}^2]$. \nIn contrast to these mass-squared eigenvalues, the experimental \nmasses are such that $(M_{\\pi}^2)_{exp} \\ll (M_{\\eta}^2)_{exp}$, \nand $\\eta'$ is too heavy, $(M_{\\eta'})_{exp} = 958$ MeV,\nto be considered even as the $s\\bar s$ quasi-Goldstone boson.\nThis is the well-known $U_A(1)$ problem, resolved by the\nfact that the {\\it complete} $\\eta$-$\\eta'$ mass matrix \n${\\hat M}^2$ must contain the anomalous ($A$) part ${\\hat M}^2_\\mathrm{A}$.\nThat is, ${\\hat M}^2 = {\\hat M}^2_\\mathrm{NA} + {\\hat M}^2_\\mathrm{A}$. \n\nHowever, ${\\hat M}^2_\\mathrm{A}$ is inaccessible to RLA\nwhich yields our Goldstone pseudoscalars. In Refs. 
\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007wu,Horvatic:2007qs},\n${\\hat M}^2_\\mathrm{A}$ was extracted from lattice data through the WV relation\n[the second equality in Eq. (\\ref{WittenVenez})]. \nThe purpose of the present paper, instead, is to approach \n$\\eta$ and $\\eta'$ through Shore's \\cite{Shore:2006mm,Shore:2007yn}\nrecent generalization of that relation. \n\nBefore that, however, we review the usage of the WV relation in Refs.\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007wu,Horvatic:2007qs}.\nThe expansion in the large number of colors, $N_c$, indicates that \nthe leading approximation in that expansion describes the bulk\nof main features of QCD.\nThe gluon anomaly is suppressed as $1\/N_c$ and can be viewed as a \nperturbation in the large $N_c$ expansion. In the SU(3) limit\n[compare Eqs. (\\ref{M2A}) and (\\ref{M2AqqX})], it is coupled {\\it only} \nto the singlet combination $\\eta_0$ (\\ref{etasdef}); only the $\\eta_0$ \nmass receives, from the gluon anomaly, a contribution which, unlike \nquasi-Goldstone masses $M_{q\\bar q'}$'s comprising ${\\hat M}^2_\\mathrm{NA}$, \ndoes {\\it not} vanish in the chiral limit.\nAs discussed in Refs. \\cite{Klabucar:1997zi,Kekez:2005ie}, \nin the present bound-state context it is thus meaningful to \ninclude the effect of the gluon anomaly just on the level of\na mass shift for the $\\eta_0$ as the lowest-order effect, and retain\nthe $q\\bar q$ bound-state amplitudes and the corresponding \nmass eigenvalues $M_{q\\bar q}$ as calculated by solving DSEs and\nBSEs with kernels in RLA. \n\nReferences \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007wu,Horvatic:2007qs}\nthus break the $U_A(1)$\nsymmetry, and avoid the $U_A(1)$ problem, by shifting the $\\eta_0$ \n(squared) mass by an amount denoted by $3\\beta$ (in the notation of \nRefs. \\cite{Kekez:2000aw,Kekez:2005ie}). 
The complete mass matrix \n${\\hat M}^2 = {\\hat M}^2_\\mathrm{NA} + {\\hat M}^2_\\mathrm{A}$ then contains\nthe anomalous part\n\\begin{equation}\n{\\hat M}^2_\\mathrm{A} = \\mbox{\\rm diag}[0, 3\\beta]~,\n\\label{M2A}\n\\end{equation}\nwhere the anomalous $\\eta_0$ mass shift $3\\beta$ \nis related to the topological susceptibility of the vacuum,\nbut in the present approach must be treated as a parameter to \nbe determined outside of our RLA model, i.e., fixed by\nphenomenology or taken from the lattice calculations \\cite{Alles:1996nm}.\n(The possibility of employing an additional microscopic model for the \ngluon anomaly contribution, such as the one of Ref. \\cite{Blaschke:1996dp},\nis presently not considered.)\n\n\n\nThe SU(3) flavor symmetry breaking and its interplay with the\ngluon anomaly modifies \\cite{Kekez:2005ie} \n${\\hat M}^2_\\mathrm{A}$ (\\ref{M2A}) to \n\\begin{equation}\n{\\hat M}^2_\\mathrm{A} = \n\\beta \\left[ \\begin{array}{cl}\n \\,\\, \\frac{2}{3}(1-X)^2 & \\frac{\\sqrt{2}}{3}(1-X)(2+X) \\\\\n \\frac{\\sqrt{2}}{3}(1-X)(2+X) & \\,\\,\\,\\,\\, \\quad \\frac{1}{3}(2+X)^2\n \\end{array} \\right]\n \\, ~,\n\\label{M2AqqX}\n\\end{equation}\nwhere $X$ is the flavor symmetry breaking parameter. It is most often\nestimated as $X = f_\\pi\/f_{s\\bar s} \\sim 0.7-0.8$ (see, e.g., Refs. \n\\cite{FeldmannKrollStech98PRD,Feldmann99IJMPA,Kekez:2000aw,Kekez:2005ie},\nalthough there are some other \\cite{Kekez:2000aw}, of course related, \nestimates of $X$). 
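Three properties of the flavor-broken anomalous matrix (\ref{M2AqqX}) are easy to verify numerically: it reduces to $\mathrm{diag}[0, 3\beta]$ of Eq. (\ref{M2A}) for $X \to 1$; it has rank one for any $X$ (the anomaly still feeds a single state); and its trace is $\beta\,(2 + X^2)$, the combination that enters the Witten-Veneziano relation (\ref{WittenVenez}). A sketch with illustrative values of $\beta$ (in GeV$^2$) and $X$:

```python
import numpy as np

def M2_A(beta, X):
    """Anomalous eta-eta' mass matrix in the octet-singlet basis,
    with flavor-symmetry-breaking parameter X (Eq. (M2AqqX) of the text)."""
    return beta * np.array(
        [[2.0 / 3.0 * (1 - X)**2,               np.sqrt(2.0) / 3.0 * (1 - X) * (2 + X)],
         [np.sqrt(2.0) / 3.0 * (1 - X) * (2 + X), 1.0 / 3.0 * (2 + X)**2]])
```

The illustrative values `beta = 0.27` and `X = 0.75` used in the checks below are of the size quoted in the text, not fitted model outputs.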
Presently we also adopt $X = f_\\pi\/f_{s\\bar s}$, \nwhich means that $X$ is a calculated quantity in our approach.\nThe employed models achieved good agreement with phenomenology \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs}, \ne.g., fitted the experimental \nvalue of $M_\\eta^2 + M_{\\eta'}^2$ for $\\beta$ around 0.26 -- 0.28 GeV$^2$.\nThe anomaly contribution ${\\hat M}^2_\\mathrm{A}$ then brings the complete $M^2$ \nrather close to a diagonal form for all considered models\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs}; \nthat is, to diagonalize $M^2$, only a relatively small rotation \n($|\\theta|\\sim 13^\\circ \\pm 2^\\circ$) \nof the $\\eta_{8}$-$\\eta_{0}$ basis states,\n\\begin{equation}\n\\eta = \\cos\\theta \\, \\eta_{8}\n - \\sin\\theta \\, \\eta_0~,\n\\,\\,\\,\\,\\,\\,\\,\n\\eta^\\prime = \\sin\\theta \\, \\eta_8\n + \\cos\\theta \\, \\eta_0~,\n\\label{MIXtheta}\n\\end{equation}\nis needed to align them with the mass eigenstates, i.e.,\nwith the physical $\\eta$ and $\\eta^\\prime$.\nIn contrast to this,\nthe $\\eta$-$\\eta^\\prime$ mass matrix in the $\\mathrm{NS}$-$\\mathrm{S}$\nbasis (\\ref{NS-Sbasis}),\n\\begin{eqnarray}\n\\label{M2_NS-S}\n{\\hat M}^2 &=&\n \\pipsq{\n \\begin{array}{ll}\n M_{\\eta_{\\mathrm{NS}}}^2 & M_{\\eta_{\\mathrm{S}}\\eta_{\\mathrm{NS}}}^2 \\\\\n M_{\\eta_{\\mathrm{NS}}\\eta_{\\mathrm{S}}}^2 & M_{\\eta_{\\mathrm{S}}}^2\n \\end{array}\n } \\\\\n &=&\n \\pipsq{\n \\begin{array}{ll}\n M_{\\pi}^2 + 2 \\beta & \\quad \\sqrt{2} \\beta X \\\\\n \\, \\sqrt{2} \\beta X & M_{s\\bar{s}}^2 + \\beta X^2\n \\end{array}\n }\n\\begin{array}{c} \\vspace{-2mm} \\longrightarrow \\\\ \\phi \\end{array}\n \\pipsq{\n \\begin{array}{ll}\n M_\\eta^2 & \\,\\, 0 \\\\\n \\, 0 & M_{\\eta'}^2\n \\end{array}\n },\n\\nonumber\n\\end{eqnarray}\nis then strongly off-diagonal.\nThe indicated diagonalization, given by\n\\begin{equation}\n\\eta = \\cos\\phi \\, \\eta_{\\mathrm{NS}}\n - \\sin\\phi \\, 
\\eta_\\mathrm{S}~,\n\\,\\,\\,\\,\\,\\,\\,\n\\eta^\\prime = \\sin\\phi \\, \\eta_\\mathrm{NS}\n + \\cos\\phi \\, \\eta_\\mathrm{S}~,\n\\label{MIXphi}\n\\end{equation}\nis thus achieved for a large $\\mathrm{NS}$-$\\mathrm{S}$ state-mixing angle\n$\\phi \\sim 42^\\circ \\pm 2^\\circ$. Of course, this is again in agreement \nwith phenomenological requirements \\cite{Kekez:2000aw,Kekez:2005ie}, \nsince $\\phi$ is fixed to the $\\eta_8$-$\\eta_0$ state-mixing angle \n$\\theta$ by the relation \n$\\phi = \\theta + \\arctan\\sqrt{2} = \\theta + 54.74^\\circ$.\nThe masses are\n\\begin{eqnarray}\n\\label{Meta}\n\\nonumber\nM_{\\eta}^2 &=& \\frac{1}{2} \\left[ M_{\\eta_{\\mathrm{NS}}}^2 + M_{\\eta_{\\mathrm{S}}}^2\n - \\sqrt{(M_{\\eta_{\\mathrm{NS}}}^2 - M_{\\eta_{\\mathrm{S}}}^2)^2 + 8 \\beta^2 X^2} \\right],\n\\\\\nM_{\\eta'}^2 &=& \\frac{1}{2} \\left[ M_{\\eta_{\\mathrm{NS}}}^2 + M_{\\eta_{\\mathrm{S}}}^2\n + \\sqrt{(M_{\\eta_{\\mathrm{NS}}}^2 - M_{\\eta_{\\mathrm{S}}}^2)^2 + 8 \\beta^2 X^2} \\right].\n\\nonumber\n\\label{MetaPrime}\n\\end{eqnarray}\n\nThe invariant trace of the mass matrix (\\ref{M2_NS-S}), together\nwith Eq. (\\ref{MssbarMKMpi}), \ngives the first equality in \n\\begin{equation}\n \\beta \\, (2 + X^2) = M_\\eta^2 + M_{\\eta'}^2 - 2 M_K^2 =\n\\frac{6}{f_\\pi^2} \\, \\chi_{\\rm YM}\n \\, .\n\\label{WittenVenez}\n\\end{equation}\nThe second equality is the Witten-Veneziano (WV) relation \n\\cite{Witten:1979vv,Veneziano:1979ec} between the $\\eta$, \n$\\eta'$ and kaon masses and $\\chi_{\\rm YM}$, the topological \nsusceptibility of the pure gauge, Yang-Mills theory.\nThus, $\\beta$ does not need to be a free parameter, but can be\ndetermined from lattice results on $\\chi_{\\rm YM}$, so that \nno fitting parameters are introduced. 
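The chain just described, fixing $\beta$ from $\chi_{\rm YM}$ via Eq. (\ref{WittenVenez}) and then diagonalizing the $\mathrm{NS}$-$\mathrm{S}$ matrix (\ref{M2_NS-S}), can be reproduced in a few lines. The inputs below are illustrative round numbers of the size quoted in Table \ref{3WV+Shore}, not the precise model values, so the outputs only indicate the expected ballpark:

```python
import numpy as np

# Illustrative non-anomalous inputs (MeV), of the size quoted in Table I
M_pi, M_K = 138.0, 495.7
f_pi, f_ss = 92.4, 135.0

chi_YM = 175.7**4                    # weighted lattice average, Eq. (chiYMaverage)
X = f_pi / f_ss                      # flavor-breaking parameter
beta = 6.0 * chi_YM / (f_pi**2 * (2.0 + X**2))   # WV relation, Eq. (WittenVenez)

M_ss2 = 2.0 * M_K**2 - M_pi**2       # DGMOR, Eq. (MssbarMKMpi)
# Complete mass matrix in the NS-S basis, Eq. (M2_NS-S)
M2 = np.array([[M_pi**2 + 2.0 * beta,   np.sqrt(2.0) * beta * X],
               [np.sqrt(2.0) * beta * X, M_ss2 + beta * X**2]])

m2_eta, m2_etap = np.sort(np.linalg.eigvalsh(M2))
M_eta, M_etap = np.sqrt(m2_eta), np.sqrt(m2_etap)
# NS-S mixing angle from tan(2 phi) = 2 M2[0,1] / (M2[1,1] - M2[0,0])
phi = np.degrees(0.5 * np.arctan2(2.0 * M2[0, 1], M2[1, 1] - M2[0, 0]))
```

With these inputs one lands near $\beta \approx 0.27$ GeV$^2$, $M_\eta \approx 560$ MeV, $M_{\eta'} \approx 920$ MeV, and $\phi \approx 43^\circ$, in the range of the model results quoted in the text.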
For the three models\n\\cite{jain93b,Kekez:2003ri,Blaschke:2000gd} utilized in our treatments \n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs} of\n$\\eta$ and $\\eta'$, the bare quark mass parameters and the \ninteraction parameters were fixed already in the non-anomalous sector, \nby requiring good pion and kaon phenomenology.\n(See the $\\pi$ and $K$ masses and decay constants in\nthe uppermost part of Table \\ref{3WV+Shore}.)\nThen, following Refs. \\cite{Kekez:2005ie,Horvatic:2007qs} in\nadopting the central value of the weighted average of the recent \nlattice results on Yang-Mills topological susceptibility \n\\cite{Lucini:2004yh,DelDebbio:2004ns,Alles:2004vi}, \n\\begin{equation}\n\\chi_{\\rm YM} = (175.7 \\pm 1.5 \\, \\rm MeV)^4 \\, ,\n\\label{chiYMaverage}\n\\end{equation}\nwe have obtained \ngood descriptions of the $\\eta$-$\\eta'$ phenomenology\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs}, \nexemplified by the first three columns (one for each DS model used) \nof the middle part of Table \\ref{3WV+Shore}, giving the predictions for \nthe $\\eta$ and $\\eta'$ masses and for the $\\mathrm{NS}$-$\\mathrm{S}$ mixing angle $\\phi$.\n \nThe lowest part of the table, below the second horizontal dividing line, \ncontains the results on the quantities ($\\theta_0$, $\\theta_8$, etc.) 
\ndefined in the scheme with four $\\eta$ and $\\eta^\\prime$ decay constants \nand two mixing angles, introduced and explained in the following \nSection~\\ref{ShoresRelations}.\nTable \\ref{3WV+Shore} also compares\nthese results of ours (in the first three columns) with the corresponding \nresults of Shore's approach~\\cite{Shore:2006mm,Shore:2007yn}, in which \nthe {\\it experimental} values of the meson masses $M_\\pi$, $M_K$, $M_\\eta$, \nand $M_{\\eta^\\prime}$, as well as the decay constants $f_\\pi$ and $f_K$ \n(in contrast to our $q\\bar q$ bound-state model predictions for these \nquantities) are used as inputs enabling the calculation of various decay \nconstants in the $\\eta$-$\\eta'$ complex and the two mixing angles $\\theta_0$ \nand $\\theta_8$ (corresponding to $\\phi = 38.24^\\circ$ in our approach).\n\n\\begin{table}[b]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nfrom & \\cite{Kekez:2000aw} & \\cite{Kekez:2005ie} & \\cite{Horvatic:2007qs} & Shore & \\\\\nRef. 
& \\& WV & \\& WV & \\& WV & \\cite{Shore:2006mm,Shore:2007yn} & Experiment \\\\\n\\hline\n$M_\\pi$ & 137.3 & 135.0 & 140.0 & & $(138.0)^{\\mathrm{isospin}}_{\\mathrm{average}}$ \\\\\n$M_K$ & 495.7 & 494.9 & 495.0 & & $(495.7)^{\\mathrm{isospin}}_{\\mathrm{average}}$\\\\\n$M_{s\\bar s}$ & 700.7 & 722.1 & 684.8 & & \\\\\n$f_\\pi$ & 93.1 & 92.9 & 92.0 & & $92.4 \\pm 0.3 $ \\\\\n$f_K$ & 113.4 & 111.5 & 110.1 & & $113.0 \\pm 1.0$ \\\\\n$f_{s\\bar s}$ & 135.0 & 132.9 & 119.1 & & \\\\\n\\hline\n$M_\\eta$ & 568.2 & 577.1 & 542.3 & & $547.75\\pm 0.12$ \\\\\n$M_{\\eta^\\prime}$ & 920.4 & 932.0 & 932.6 & & $957.78\\pm 0.14$ \\\\\n$\\phi$ & $41.42^\\circ$ & $39.56^\\circ$ & $40.75^\\circ$ & $(38.24^\\circ)$ & \\\\\n\\hline\n$\\theta_0$ & $-2.86^\\circ$ & $-5.12^\\circ$ & $-6.80^\\circ$ & $-12.3^\\circ$ & \\\\\n$\\theta_8$ & $-22.59^\\circ$ & $-24.14^\\circ$ & $-20.58^\\circ$ & $-20.1^\\circ$ & \\\\\n$f_0$ & 108.8 & 107.9 & 101.8 & 106.6 & \\\\\n$f_8$ & 122.6 & 121.1 & 110.7 & 104.8 & \\\\\n$f_\\eta^0$ & 5.4 & 9.6 & 12.1 & 22.8 & \\\\\n$f_{\\eta^\\prime}^0$& 108.7 & 107.5 & 101.1 & 104.2 & \\\\\n$f_\\eta^8$ & 113.2 & 110.5 & 103.7 & 98.4 & \\\\\n$f_{\\eta^\\prime}^8$& -47.1 & -49.5 & -38.9 & -37.6 & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The results of employing the WV relation (\\ref{WittenVenez}) \nin our DS approach for the three dynamical models used in Refs. \n\\cite{Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs},\ncompared with the results of Shore's analysis \\cite{Shore:2006mm,Shore:2007yn}\nand with the experimental results.\nThe first column was obtained by the WV-recalculation of the results of \nRef. \\cite{Kekez:2000aw}, which in turn used the Jain-Munczek \n{\\em Ansatz} for the gluon propagator \\cite{jain93b}.\nColumn 2: the results based on Ref. \\cite{Kekez:2005ie}, which used \nthe OPE-inspired, gluon-condensate-enhanced gluon propagator \\cite{Kekez:2003ri}.\nColumn 3: the results based on Ref. 
\\cite{Horvatic:2007qs}, which utilized the \nseparable {\\em Ansatz} for the dressed gluon propagator \\cite{Blaschke:2000gd}.\nColumn 4: The results of Shore \\cite{Shore:2006mm,Shore:2007yn}, who used the \nlattice result $\\chi_{\\mbox{\\rm\\scriptsize YM}} = (191\\, \\rm MeV)^4$ of \nRef.~\\cite{DelDebbio:2004ns}, and not the weighted average (\\ref{chiYMaverage}),\nin contrast to us.\nColumn 5: the experimental values. All masses and decay constants are in \nMeV, and angles are in degrees. For more details, see text.\n}\n\\label{3WV+Shore}\n\\end{table}\n\n \n\n\\section{Usage of Shore's equations in DS approach}\n\\label{ShoresRelations}\n\nThe WV relation was derived in the lowest-order approximation\nin the large $N_c$ expansion. \nHowever, considerations by Shore \\cite{Shore:2006mm,Shore:2007yn}\ncontain what amounts to the generalization of the WV relation, \nwhich is valid to all orders in $1\/N_c$. Among the relations he\nderived through the inclusion of the gluon anomaly in DGMOR relations, \nthe following are pertinent for the present paper:\n\\begin{eqnarray}\n\\!\\! (f^0_{\\eta'})^2 M_{\\eta'}^2 + (f^0_{\\eta})^2 M_\\eta^2 =\n{1\\over3} \\bigl(f_\\pi^2 M_\\pi^2 + 2 f_K^2 M_K^2\\bigr) + 6 A~, \n\\quad\n\\label{eq:bn}\\\\\n\\nonumber\\\\\n\\!\\! f^0_{\\eta'} f^8_{\\eta'} M_{\\eta'}^2 + f^0_{\\eta} f^8_{\\eta} M_{\\eta}^2 =\n{2\\sqrt2\\over3}\\bigl(f_\\pi^2 M_\\pi^2 - f_K^2 M_K^2\\bigr)~, \\qquad\n\\label{eq:bo}\\\\\n\\nonumber\\\\\n\\!\\! 
(f^8_{\\eta'})^2 M_{\\eta'}^2 + (f^8_{\\eta})^2 M_{\\eta}^2 =\n-{1\\over3}\\bigl(f_\\pi^2 M_\\pi^2 - 4 f_K^2 M_K^2\\bigr)~, \\qquad\n\\label{eq:bp}\n\\end{eqnarray}\nwhere $A$ is the full QCD topological charge parameter, and\n$f^0_{\\eta'}, f^0_{\\eta}, f^8_{\\eta'}, f^8_{\\eta}$ are the {\\it four}\ndecay constants \\cite{Gasser:1984gg,Leutwyler98,KaiserLeutwyler98}\nassociated with the two isoscalar pseudoscalars $\\eta$ and $\\eta'$.\n\nThe nonperturbative parameter $A$ is related to the\nQCD topological susceptibility, quark condensates and\nquark masses \\cite{Shore:2006mm,Shore:2007yn}.\nAt large $N_c$, it should be well-approximated by the topological susceptibility;\nmore precisely, it reduces to the YM topological susceptibility\nin the large $N_c$ limit: $A = \\chi_{\\rm YM} + {\\cal O}({1}\/{N_c})$,\nbut at present it is not known more precisely than that, as there are still\nno lattice data on this nonperturbative QCD parameter.\nTherefore, in his own phenomenological analysis,\nShore himself had to approximate $A$ by a value of\n$\\chi_{\\rm YM}$ \\cite{Shore:2006mm,Shore:2007yn}. In that sense,\nbecause of this crucial assumption based on the lowest-order ${1}\/{N_c}$\napproximation, even his analysis was not (and, because of the lack of the\ncorresponding lattice data, could not be) carried out {\\it numerically}\nconsistently in the orders of ${1}\/{N_c}$, even though\nhis {\\it formulas} are valid in all orders in the ${1}\/{N_c}$ expansion.\n\nWhile the present bound-state DS approach clearly cannot improve on the\nconsistency aspect, it offers the possibility of a phenomenological analysis\nentirely different from Shore's.\nNamely, in addition to $A \\approx \\chi_{\\rm YM}$, Shore used\nthe experimentally known quantities (pion, kaon, $\\eta$ and $\\eta'$ masses,\nas well as the pion and kaon decay constants) as inputs in\nEqs. 
(\\ref{eq:bn})-(\\ref{eq:bp}) to obtain the $\\eta$ and $\\eta'$ decay\nconstants $f^0_{\\eta'}, f^0_{\\eta}, f^8_{\\eta'}, f^8_{\\eta}$.\nOn the other hand, the predictive power of our bound-state DS approach is\nmuch greater: not only are pion and kaon masses and decay constants\ncalculated quantities, predicted from the $q\\bar q$ substructure, but once we\nformulate the incorporation of Shore's generalization within the bound-state\nDS approach, it will become obvious that these four $\\eta$ and $\\eta'$\ndecay constants {\\it and} the masses $M_\\eta$ and $M_{\\eta'}$ also come out\nas pure predictions.\nSuch a phenomenological analysis, complementary to Shore's, motivates us\nto formulate and perform the treatment based on Shore's generalization,\ninstead of the original WV relation (or fitting the anomalous $\\eta_0$\nmass shift) as in our earlier references \\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2001ph,Kekez:2005ie,Horvatic:2007qs}.\n\nAdding Eqs. (\\ref{eq:bn}) and (\\ref{eq:bp}), one gets the relation\n\\begin{eqnarray}\n(f^0_{\\eta'} )^2 M_{\\eta'}^2 &+& (f^0_{\\eta} )^2 M_\\eta^2 \n+ (f^8_{\\eta})^2 M_\\eta^2 \\nonumber\n\\\\\n&+& (f^8_{\\eta'})^2 M_{\\eta'}^2 - 2f_K^2 M_K^2 = 6A\n\\end{eqnarray}\nwhich is the analogue of the standard WV formula (\\ref{WittenVenez}), \nto which it reduces in the large $N_c$ limit where $A\\to \\chi_{\\rm YM}$, \nthe $f^0_{\\eta '}, f^8_{\\eta }, f_K \\to f_\\pi$ limit, and the limit of\nvanishing subdominant decay constants (since $\\eta$ and $\\eta'$ are \ndominantly $\\eta_8$ and $\\eta_0$, respectively), i.e.,\n$f^0_{\\eta }, f^8_{\\eta'} \\to 0$. However, we will need to use\nnot just this single equation, but the three equations \n(\\ref{eq:bn})-(\\ref{eq:bp}) from Shore's generalization. 
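The solution strategy can be sketched as follows: once the four decay constants are expressed through $f_\mathrm{NS}$, $f_\mathrm{S}$ and a single angle $\phi$ [the FKS-scheme form of Eq. (\ref{matrixEqLast}) below], Eqs. (\ref{eq:bo}) and (\ref{eq:bp}) are linear in $M_\eta^2$ and $M_{\eta'}^2$, so they can be solved for any trial $\phi$, and $\phi$ is then adjusted until Eq. (\ref{eq:bn}) is also satisfied. A numerical sketch, not the authors' actual code, with illustrative inputs of the size produced by the model of Ref. \cite{Kekez:2005ie}:

```python
import numpy as np

# Illustrative non-anomalous inputs (MeV), of the size the cited DS models produce
M_pi, M_K = 135.0, 494.9
f_pi, f_K = 92.9, 111.5
f_NS, f_S = 92.9, 132.9        # f_NS = f_pi and f_S = f_{s sbar} in this approach
A = 175.7**4                   # full-QCD parameter A approximated by chi_YM

s3, s23 = 1.0 / np.sqrt(3.0), np.sqrt(2.0 / 3.0)
R_bn = (f_pi**2 * M_pi**2 + 2.0 * f_K**2 * M_K**2) / 3.0 + 6.0 * A
R_bo = 2.0 * np.sqrt(2.0) / 3.0 * (f_pi**2 * M_pi**2 - f_K**2 * M_K**2)
R_bp = -(f_pi**2 * M_pi**2 - 4.0 * f_K**2 * M_K**2) / 3.0

def residual(phi):
    """For a trial phi, build the four decay constants (FKS scheme), solve the
    two equations linear in M_eta^2, M_eta'^2, and return the mismatch in the
    remaining (anomalous) equation."""
    c, s = np.cos(phi), np.sin(phi)
    f8e, f0e = f_NS * c * s3 + f_S * s * s23, f_NS * c * s23 - f_S * s * s3
    f8p, f0p = f_NS * s * s3 - f_S * c * s23, f_NS * s * s23 + f_S * c * s3
    lhs = np.array([[f0e * f8e, f0p * f8p],
                    [f8e**2,    f8p**2]])
    M2e, M2p = np.linalg.solve(lhs, [R_bo, R_bp])
    return f0e**2 * M2e + f0p**2 * M2p - R_bn, M2e, M2p

# Bisection on the one remaining unknown, phi
lo, hi = np.radians(40.0), np.radians(52.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid)[0] < 0 else (lo, mid)
phi = 0.5 * (lo + hi)
_, M2_eta, M2_etap = residual(phi)
M_eta, M_etap, phi_deg = np.sqrt(M2_eta), np.sqrt(M2_etap), np.degrees(phi)
```

With these inputs the procedure lands close to the corresponding column of Table \ref{DGMOR:tab:eta-etap-mixing-all3m-forArticle}: $\phi \approx 46^\circ$, $M_\eta \approx 483$ MeV, $M_{\eta'} \approx 818$ MeV.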
\n\n\n\nThese four $\\eta$ and $\\eta'$ decay constants are often \nparameterized in terms of two decay constants, $f_8$ and $f_0$,\nand two mixing angles, $\\theta_8$ and $\\theta_0$: \n\\begin{equation}\n\\label{def:f8etaf0eta}\nf^8_\\eta = \\cos\\theta_8\\, f_8~, \\qquad \\,\\, f^0_\\eta = -\\sin\\theta_0\\, f_0~,\n\\end{equation}\n\\begin{equation}\nf^8_{\\eta'} = \\sin\\theta_8\\, f_8~, \\qquad f^0_{\\eta'} = \\cos\\theta_0\\, f_0~.\n\\label{def:f8eta'f0eta'}\n\\end{equation}\nThis is the so-called two-angle mixing scheme, which shows explicitly \nthat it is inconsistent to assume that the mixing of the decay constants\nfollows the pattern (\\ref{MIXtheta}) of the mixing of the states \n$\\eta_8$ and $\\eta_0$\n\\cite{Gasser:1984gg,Schechter:1992iz,Leutwyler98,KaiserLeutwyler98,FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA}.\n\nThe advantage of our model is that, as we shall see, we are able to calculate\nthe $f_8$ and $f_0$ parts of the physical decay constants\n(\\ref{def:f8etaf0eta})-(\\ref{def:f8eta'f0eta'}) from the $q\\bar q$ substructure.\nHowever, we cannot keep the full generality of Shore's approach, which \nallows for the mixing with the gluonic pseudoscalar operators, and \ntherefore employs the definition \\cite{Shore:2006mm,Shore:2007yn} \nof the decay constants which, in general, due to the gluonic contribution, \ndiffers from the following standard definition \nthrough the matrix elements of the axial currents $A^{a\\,\\mu}(x)$:\n\\begin{equation}\n\\!\\! \\langle 0|A^{a\\,\\mu}(x)|P(p)\\rangle = if^a_P\\, p^\\mu e^{-ip\\cdot x},\n\\,\\,a=8,0;\\,\\,P=\\eta,\\eta^\\prime~.\n\\label{def2angSch}\n\\end{equation}\nNevertheless, Shore's definition \\cite{Shore:2006mm,Shore:2007yn}\ncoincides with the above standard one in the \nnon-singlet channel, where there cannot be any admixture of the pseudoscalar \ngluonic component. Similarly, since our BS solutions (from Refs. 
\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs}) \nare the pure $q\\bar q$ states, without any gluonic components, using Shore's\ndefinition would not help us calculate the gluon anomaly influence on the \ndecay constants. We thus employ the standard definitions (\\ref{def2angSch}), \nalso used by, e.g., Gasser, Leutwyler, and Kaiser \n\\cite{Gasser:1984gg,Leutwyler98,KaiserLeutwyler98},\nas well as by Feldmann, Kroll, and Stech (FKS)\n\\cite{FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA}.\n\nIn Eqs. (\\ref{def:f8etaf0eta})-(\\ref{def:f8eta'f0eta'}), the angles are\nchosen so \\cite{Feldmann99IJMPA} that $\\theta_8 = \\theta_0 = \\theta = 0$ \nin the limit of the exact SU(3) flavor symmetry, since only then are there \njust two decay constants, purely octet $f^8_{\\eta} = f_8$ and purely\nsinglet $f^0_{\\eta'} = f_0$, while the off-diagonal decay constants\nvanish, $f^0_{\\eta} = 0 = f^8_{\\eta'}$.\nOtherwise, all four decay constants (\\ref{def2angSch}) are different from \nzero due to the breaking of the SU(3) flavor symmetry, since this leads to \n$\\theta \\neq 0$ and gives both $\\eta$ and $\\eta'$ both components \n$\\eta_8$ and $\\eta_0$. [In the parameterization \n(\\ref{def:f8etaf0eta})-(\\ref{def:f8eta'f0eta'}), the angles $\\theta_8$\nand $\\theta_0$ differ from $\\theta$ since also \n$\\langle 0|A_\\mu^8|\\eta_0\\rangle\\neq 0 \\neq \\langle 0|A_\\mu^0|\\eta_8\\rangle$.]\nThus, although not $\\eta_8$ but $\\eta_0$ couples to the gluon anomaly, \nthe octet-channel constants $f^8_{\\eta}$ and $f^8_{\\eta'}$ are influenced by the \ngluon anomaly through its interplay with the SU(3) flavor symmetry breaking\n[similarly to the anomalous mass matrix (\\ref{M2AqqX}) having nonvanishing\n88, 08 and 80 elements when $X \\neq 1$]. \n\n\nEquivalently to $f^0_{\\eta'}, f^8_{\\eta }, f^0_{\\eta }$, and $f^8_{\\eta'}$, \ndefined by Eq. 
(\\ref{def2angSch}), one has four related but different constants\n$f^{\\mathrm{NS}}_{\\eta'}, f^{\\mathrm{NS}}_{\\eta }, f^\\mathrm{S}_{\\eta }$, and $f^\\mathrm{S}_{\\eta'}$, if instead \nof octet and singlet axial currents ($a=8,0$) in Eq. (\\ref{def2angSch}) \none uses the nonstrange-strange axial currents ($a=\\mathrm{NS},\\mathrm{S}$)\n\\begin{eqnarray}\nA_\\mathrm{NS}^{\\mu}(x) &=& \\frac{1}{\\sqrt{3}} A^{8\\,\\mu}(x)\n + \\sqrt{\\frac{2}{3}} A^{0\\,\\mu}(x)\n\\nonumber\n\\\\\n &=& \\frac{1}{2} \\left[\n \\bar{u}(x)\\gamma^\\mu\\gamma_5 u(x)\n +\n \\bar{d}(x)\\gamma^\\mu\\gamma_5 d(x)\n \\right] ,\n\\end{eqnarray}\n\\begin{equation}\nA_\\mathrm{S}^{\\mu}(x) = - \\sqrt{\\frac{2}{3}} A^{8\\,\\mu}(x)\n + \\frac{1}{\\sqrt{3}} A^{0\\,\\mu}(x)\n= \\frac{1}{\\sqrt{2}}\\bar{s}(x)\\gamma^\\mu\\gamma_5 s(x)~.\n\\end{equation}\nThe relation between the two equivalent sets is thus\n\\begin{equation}\n\\left[ \\begin{array}{cc}\nf^\\mathrm{NS}_\\eta &\n f^S_\\eta\n\\\\\nf^\\mathrm{NS}_{\\eta^\\prime} &\n f^S_{\\eta^\\prime}\n \\end{array}\n\\right]\n=\n\\left[ \\begin{array}{cc}\nf^8_\\eta &\n f^0_\\eta\n\\\\\nf^8_{\\eta^\\prime} &\n f^0_{\\eta^\\prime}\n \\end{array}\n\\right]\n\\left[ \\begin{array}{cc}\n\\frac{1}{\\sqrt{3}} & -\\sqrt{\\frac{2}{3}}\n\\\\\n\\sqrt{\\frac{2}{3}} & \\frac{1}{\\sqrt{3}}\n \\end{array}\n\\right]~.\n\\label{TwoMixingAngles:sns-etatap}\n\\end{equation}\nOf course, this other quartet of $\\eta$ and $\\eta'$ decay constants\ncan also be parameterized in terms of other two constants and two other\nmixing angles:\n\\begin{equation}\n\\label{def:fNSetafSeta}\nf^\\mathrm{NS}_\\eta = \\cos\\phi_\\mathrm{NS}\\, f_\\mathrm{NS}~, \\qquad \\,\\, f^\\mathrm{S}_\\eta = -\\sin\\phi_\\mathrm{S}\\, f_\\mathrm{S}~,\n\\end{equation}\n\\begin{equation}\nf^\\mathrm{NS}_{\\eta'} = \\sin\\phi_\\mathrm{NS}\\, f_\\mathrm{NS}~, \\qquad f^\\mathrm{S}_{\\eta'} = \\cos\\phi_\\mathrm{S}\\, f_\\mathrm{S}~,\n\\label{def:fNSeta'fSeta'}\n\\end{equation}\nwhere 
$f_\\mathrm{NS}$ and $f_\\mathrm{S}$ are given by the matrix elements\n\\begin{equation}\n\\langle 0| A_\\mathrm{NS}^{\\mu}(x) |\\eta_\\mathrm{NS}(p)\\rangle\n=\ni f_\\mathrm{NS}\\, p^\\mu e^{-ip\\cdot x}~,\n\\end{equation}\n\\begin{equation}\n\\langle 0| A_\\mathrm{S}^{\\mu}(x) |\\eta_\\mathrm{S}(p)\\rangle\n=\ni f_\\mathrm{S}\\, p^\\mu e^{-ip\\cdot x}~,\n\\end{equation}\nwhile $\\langle 0| A_\\mathrm{NS}^{\\mu}(x) |\\eta_\\mathrm{S}(p)\\rangle = 0\n= \\langle 0| A_\\mathrm{S}^{\\mu}(x) |\\eta_\\mathrm{NS}(p)\\rangle$.\n\nIn the $\\mathrm{NS}$-$\\mathrm{S}$ basis, it is possible to recover a scheme with a single mixing\nangle $\\phi$ through the application of the Okubo-Zweig-Iizuka (OZI) rule \n\\cite{FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA}. \nFor example, $f_\\mathrm{NS} f_\\mathrm{S} \\sin(\\phi_\\mathrm{NS}-\\phi_\\mathrm{S})$ \ndiffers from zero just by an OZI-suppressed term \\cite{Feldmann99IJMPA}. \nNeglecting this term thus implies $\\phi_\\mathrm{NS} = \\phi_\\mathrm{S}$.\n(Refs. 
\n\\cite{FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA}\ndenote $f_\\mathrm{NS}, f_\\mathrm{S}, \\phi_\\mathrm{NS}, \\phi_\\mathrm{S}$ by, respectively,\n$f_q, f_s, \\phi_q, \\phi_s$.)\nIn general, neglecting the OZI-suppressed terms, i.e., \napplication of the OZI rule, leads to the so-called FKS scheme\n\\cite{FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA},\nwhich exploits a big practical difference between the (in principle\nequivalent) parameterizations (\\ref{def:f8etaf0eta})-(\\ref{def:f8eta'f0eta'}) \nand (\\ref{def:fNSetafSeta})-(\\ref{def:fNSeta'fSeta'}):\nwhile $\\theta_8$ and $\\theta_0$ differ a lot from each other and\nfrom the octet-singlet {\\it state} mixing angle \n$\\theta \\approx (\\theta_8 + \\theta_0)\/2$, \nthe $\\mathrm{NS}$-$\\mathrm{S}$ decay-constant mixing angles are very close to each other\nand both can be approximated by the state mixing angle:\n$\\phi_\\mathrm{NS} \\approx \\phi_\\mathrm{S} \\approx \\phi$. Therefore one\ncan deal with only this one angle, $\\phi$, and express\nthe physical $\\eta$-$\\eta'$ decay constants as\n\\begin{equation}\n\\! \\! \\left[ \\begin{array}{cc}\nf^8_\\eta &\n f^0_\\eta\n\\\\\nf^8_{\\eta^\\prime} &\n f^0_{\\eta^\\prime}\n \\end{array}\n\\right]\n\\! 
= \\!\n\\left[ \\begin{array}{cr}\nf_\\mathrm{NS} \\cos\\phi & -f_\\mathrm{S} \\sin\\phi \n\\\\\nf_\\mathrm{NS} \\sin\\phi & f_\\mathrm{S} \\cos\\phi\n \\end{array}\n\\right]\n\\!\n\\left[ \\begin{array}{cc}\n\\frac{1}{\\sqrt{3}} & \\sqrt{\\frac{2}{3}}\n\\\\\n-\\sqrt{\\frac{2}{3}} & \\frac{1}{\\sqrt{3}}\n \\end{array}\n\\right].\n\\label{matrixEqLast}\n\\end{equation}\nThis relation is valid also in our approach, where $\\eta$ and $\\eta'$ are \nthe simple $\\eta_\\mathrm{NS}$-$\\eta_\\mathrm{S}$ mixtures (\\ref{MIXphi}).\nThe FKS relations \n\\cite{FeldmannKrollStech98PRD,FeldmannKrollStech99PLB,Feldmann99IJMPA}\n\\begin{equation}\nf_8 = \\sqrt{\\frac{1}{3} f_\\mathrm{NS}^2\n + \\frac{2}{3} f_\\mathrm{S}^2 }~,\n\\label{f_8}\n\\quad\n\\theta_8 = \\phi - {\\arctan}\\left(\\frac{\\sqrt{2} f_\\mathrm{S}}\n {f_\\mathrm{NS}} \\right)~,\n\\end{equation}\n\\begin{equation}\n\\! f_0 = \\sqrt{\\frac{2}{3} f_\\mathrm{NS}^2\n + \\frac{1}{3} f_\\mathrm{S}^2 }~,\n\\,\\,\\,\\,\n\\theta_0 = \\phi - \\mbox{\\rm arctan}\\left(\\frac{\\sqrt{2} f_\\mathrm{NS}}\n {f_\\mathrm{S}} \\right)~,\n\\label{f_0}\n\\end{equation}\nequivalent to Eq. (\\ref{matrixEqLast}), were also shown \\cite{Kekez:2000aw}\nto hold in our DS approach.\n\n\nIn our present DS approach, mesons are pure $q\\bar q$\nBS solutions, without any gluonium admixtures, which are \nprominent possible sources of OZI violations. \nTherefore, our decay constants are calculated quantities,\n$f_\\mathrm{NS}=f_{u\\bar u}=f_{d\\bar d}=f_\\pi$ and $f_\\mathrm{S}=f_{s\\bar s}$,\nin agreement with the OZI rule. \nOur DS approach is thus naturally compatible with the FKS scheme,\nand we can use the $\\eta$ and $\\eta'$ decay constants \n(\\ref{matrixEqLast}) with our calculated $f_\\mathrm{NS}=f_\\pi$ and \n$f_\\mathrm{S}=f_{s\\bar s}$ in Shore's equations (\\ref{eq:bn})-(\\ref{eq:bp}).\n\n\n\\begin{table}[b]\n\\begin{center}\n\\begin{tabular}{|c||c|c||c|c||c|c||}\n\\hline\nInputs: & \\multicolumn{2}{|c||}{from Ref. 
\\cite{Kekez:2000aw}} & \\multicolumn{2}{|c||}{from Ref. \\cite{Kekez:2005ie}} & \\multicolumn{2}{|c||}{from Ref. \\cite{Horvatic:2007qs} } \\\\\n\\hline\n$\\chi_{\\mbox{\\rm\\scriptsize YM}}^{1\/4}$ & $175.7$ & $191$ & $175.7$ & $191$ & $175.7$ & $191$ \\\\\n\\hline\n$M_\\eta$ & 485.7 & 499.8 & 482.8 & 496.7 & 507.0 & 526.2 \\\\\n$M_{\\eta^\\prime}$ & 815.8 & 931.4 & 818.4 & 934.9 & 868.7 & 983.2 \\\\\n$\\phi$ & $46.11^\\circ$ & $52.01^\\circ$ & $46.07^\\circ$ & $51.85^\\circ$ & $40.86^\\circ$ & $47.23^\\circ$ \\\\\n\\hline\n$\\theta_0$ & $1.84^\\circ$ & $7.74^\\circ$ & $1.39^\\circ$ & $7.17^\\circ$ & $-6.69^\\circ$ & $-0.33^\\circ$ \\\\\n$\\theta_8$ & $-17.90^\\circ$& $-12.00^\\circ$ & $-17.6^\\circ$ & $-11.85^\\circ$ & $-20.47^\\circ$ & $-14.11^\\circ$ \\\\\n$f_0$ & 108.8 & 108.8 & 107.9 & 107.9 & 101.8 & 101.8 \\\\\n$f_8$ & 122.6 & 122.6 & 121.1 & 121.1 & 110.7 & 110.7 \\\\\n$f_\\eta^0$ & -3.5 & -14.7 & -2.6 & -13.5 & 11.9 & 0.6 \\\\\n$f_{\\eta^\\prime}^0$ & 108.8 & 107.9 & 107.9 & 107.1 & 101.1 & 101.8 \\\\\n$f_\\eta^8$ & 116.7 & 119.9 & 115.4 & 118.5 & 103.7 & 107.4 \\\\\n$f_{\\eta^\\prime}^8$ & -37.7 & -25.5 & -37.6 & -24.9 & -38.7 & -27.0 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{The results of the three DS models\nobtained through Shore's equations (\\ref{eq:bn})-(\\ref{eq:bp})\nfor the two values of $\\chi_{\\rm YM}$ approximating $A$:\n$(175.7\\rm MeV)^4$ and $(191\\rm MeV)^4$.\nColumns 1 and 2: The results when the non-anomalous inputs for\nEqs. (\\ref{eq:bn})-(\\ref{eq:bp}), namely $M_\\pi, M_K, f_\\pi=f_\\mathrm{NS},\nf_{s\\bar s}=f_\\mathrm{S}$ and $f_K$, are taken from Ref. \\cite{Kekez:2000aw},\nwhich uses Jain--Munczek {\\em Ansatz} interaction \\cite{jain93b}.\nColumns 3 and 4: The results for the non-anomalous\ninputs from Ref. \\cite{Kekez:2005ie}\nusing OPE-inspired interaction nonperturbatively\ndressed by gluon condensates \\cite{Kekez:2003ri}.\nColumns 5 and 6: The results for the\ninputs from Ref. 
\\cite{Horvatic:2007qs}\nusing the separable {\\em Ansatz} interaction \\cite{Blaschke:2000gd}.\nAll masses and decay constants, as well as\n$\\chi_{\\mbox{\\rm\\scriptsize YM}}^{1\/4}$, are in MeV,\nand angles are in degrees.\n}\n\\label{DGMOR:tab:eta-etap-mixing-all3m-forArticle}\n\\end{table}\n\n\n\\section{Results and conclusions}\n\\label{ResultsAndConclusions}\n\nAll quantities appearing on the right-hand side of Eqs. (\\ref{eq:bn})-(\\ref{eq:bp}), \nnamely $M_\\pi$, $M_K$, $f_\\pi$, and $f_K$, are calculated in our DS approach \n\\cite{Kekez:2000aw,Kekez:2005ie,Horvatic:2007qs} (for the three dynamical\nmodels \\cite{jain93b,Kekez:2003ri,Blaschke:2000gd}), {\\it except} the full \nQCD topological charge parameter $A$. Since it is at present\nunfortunately not yet known, we follow Shore and approximate it\nby the Yang-Mills topological susceptibility $\\chi_{\\rm YM}$.\n\n\nOn the left-hand side of Eqs. (\\ref{eq:bn})-(\\ref{eq:bp}), \nthe model results for $f_\\mathrm{NS}=f_\\pi$ and $f_\\mathrm{S}=f_{s\\bar s}$ \nand Eq.~(\\ref{matrixEqLast}) reduce the unknown part of the four \n$\\eta$ and $\\eta'$ decay constants $f_\\eta^0$, $f_{\\eta^\\prime}^0$, \n$f_\\eta^8$, and $f_{\\eta^\\prime}^8$, down to the mixing angle $\\phi$.\nThe three Shore's equations (\\ref{eq:bn})-(\\ref{eq:bp}) can then\nbe solved for $\\phi$, $M_\\eta$ and $M_{\\eta^\\prime}$, providing us\nwith the upper three lines of \nTable \\ref{DGMOR:tab:eta-etap-mixing-all3m-forArticle}.\nFor each of the three different dynamical models\nwhich we used in our previous DS studies\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Blaschke:2007ce,Horvatic:2007wu,Horvatic:2007qs},\nthese results are displayed\nfor $\\chi_{\\rm YM} = (175.7\\, \\rm MeV)^4$\nas in Refs. 
\\cite{Kekez:2005ie,Horvatic:2007qs}\nand for $\\chi_{\\rm YM} = (191\\, \\rm MeV)^4$ \\cite{DelDebbio:2004ns}\n(adopted by Shore \\cite{Shore:2006mm,Shore:2007yn}).\nThe lower part of the table, displaying various additional results,\nis then readily obtained through Eq. (\\ref{matrixEqLast}) and\/or \nthe useful relations (\\ref{f_8})-(\\ref{f_0}) which give $f_8$, $f_0$,\n$\\theta_8-\\phi$ and $\\theta_0-\\phi$ in terms of $f_\\mathrm{NS} =f_\\pi$ \nand $f_\\mathrm{S} = f_{s\\bar s}$.\nThus, unlike the mixing angles, $f_0$ and $f_8$ do not result from solving \nEqs. (\\ref{eq:bn})-(\\ref{eq:bp}), but are the calculated predictions of a\nconcrete dynamical DS model, independently of Shore's equations.\n\n\nFor all three quite different (RGI \\cite{jain93b,Kekez:2003ri} and\nnon-RGI \\cite{Blaschke:2000gd}) dynamical models\nwhich we used in our previous DS studies\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Blaschke:2007ce,Horvatic:2007wu,Horvatic:2007qs},\nthe results turn out to be rather similar.\nSimilar results from various models mean that the usage of Shore's\ngeneralization in conjunction with the DS approach does not help one\nto discriminate between various dynamical models and thus to draw conclusions\nabout the dynamics. This is not surprising, as it has been established \n\\cite{ForExample,Alkofer:2000wg,Roberts:2000hi,Roberts:2000aa,Holl:2006ni,Fischer:2006ub}\nthat while a successful reproduction of static properties and other low-energy\nmeson phenomenology requires interaction modeling at low momenta, it is possible\nto achieve a satisfactory description of low-energy phenomenology for many forms\nof model interactions as long as their integrated strength at low momenta\n($p^2 < 1$ GeV$^2$) is sufficient to generate a realistic DChSB. 
On the other hand, \nthis similarity of our results from the very different models has the\nadvantage that our conclusions further below are not sensitive to the changes\nof the model dynamics.\n\nThe most conspicuous feature of our results is that the $\\eta$ and \n$\\eta'$ masses are both much too low when the weighted\naverage $\\chi_{\\rm YM} = (175.7 \\pm 1.5 \\, \\rm MeV)^4$ of Refs.\n\\cite{Lucini:2004yh,DelDebbio:2004ns,Alles:2004vi} is used, in contrast to\nthe results from the standard WV relation, displayed in Table \\ref{3WV+Shore}.\nIf we single out just the highest of these values, $(191 \\, \\rm MeV)^4$\n\\cite{DelDebbio:2004ns}, the masses improve somewhat. However,\nother results are spoiled -- e.g., the mixing angle $\\phi$\nbecomes too high to enable agreement with the experimental results on\n$\\eta, \\eta' \\to \\gamma\\gamma$ decays, which require $\\phi \\sim 40^\\circ$\n\\cite{Kekez:2005ie}.\n\nWhen we turn to the lower parts of Tables \\ref{3WV+Shore} and\n\\ref{DGMOR:tab:eta-etap-mixing-all3m-forArticle}, where the\nresults for the $\\eta$ and $\\eta'$ decay constants, and the\ncorresponding two mixing angles $\\theta_0$ and $\\theta_8$,\nare given, we notice a feature common to all our results,\nas well as Shore's (also given in Table \\ref{3WV+Shore}).\nThe diagonal ones, $f_{\\eta^\\prime}^0$ and $f_\\eta^8$, are\nall of the order of $f_\\pi$, being larger by some 10\\% to\n30\\%. The off-diagonal ones, $f_{\\eta^\\prime}^8$ and $f_\\eta^0$,\nare, on the other hand, in general strongly suppressed.\nThis is expected, as $\\eta^\\prime$ is mostly singlet,\nand $\\eta$ is mostly octet. 
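As a purely arithmetic cross-check of these patterns in the table, the lower entries follow from the upper ones via Eq. (\ref{matrixEqLast}) and the relations (\ref{f_8})-(\ref{f_0}). The minimal sketch below inverts $f_8$, $f_0$ of column 1 of Table \ref{DGMOR:tab:eta-etap-mixing-all3m-forArticle} to reconstruct $f_\mathrm{NS}$ and $f_\mathrm{S}$ (these two inputs are our reconstruction, not quoted in the table) and then reproduces the remaining column-1 entries by both routes, the angle parameterization and the matrix equation:

```python
import math

# FKS relations (f_8)-(f_0): octet/singlet constants and angles
# from f_NS, f_S and the state mixing angle phi.
def fks_constants(f_ns, f_s, phi):
    f8 = math.sqrt((f_ns**2 + 2.0 * f_s**2) / 3.0)
    f0 = math.sqrt((2.0 * f_ns**2 + f_s**2) / 3.0)
    theta8 = phi - math.atan(math.sqrt(2.0) * f_s / f_ns)
    theta0 = phi - math.atan(math.sqrt(2.0) * f_ns / f_s)
    return f8, f0, theta8, theta0

# Physical eta/eta' constants, route 1: standard angle parameterization
# (f^8_eta, f^0_eta, f^8_eta', f^0_eta').
def from_angles(f8, f0, theta8, theta0):
    return (f8 * math.cos(theta8), -f0 * math.sin(theta0),
            f8 * math.sin(theta8), f0 * math.cos(theta0))

# Route 2: the NS-S rotation times the fixed octet-singlet matrix,
# i.e. the matrix product of Eq. (matrixEqLast), written out.
def from_matrix(f_ns, f_s, phi):
    c, s = math.cos(phi), math.sin(phi)
    a, b = 1.0 / math.sqrt(3.0), math.sqrt(2.0 / 3.0)
    return (f_ns * c * a + f_s * s * b, f_ns * c * b - f_s * s * a,
            f_ns * s * a - f_s * c * b, f_ns * s * b + f_s * c * a)

# Column 1 of the table: f_8 = 122.6 MeV, f_0 = 108.8 MeV, phi = 46.11 deg.
# Inverting the relations for f_8 and f_0 gives the reconstructed inputs:
f_ns = math.sqrt(2.0 * 108.8**2 - 122.6**2)   # ~ 93 MeV, ~ f_pi
f_s = math.sqrt(2.0 * 122.6**2 - 108.8**2)    # ~ 135 MeV
phi = math.radians(46.11)

f8, f0, theta8, theta0 = fks_constants(f_ns, f_s, phi)
route1 = from_angles(f8, f0, theta8, theta0)
route2 = from_matrix(f_ns, f_s, phi)
```

Both routes agree identically, and the outputs match the tabulated $\theta_8$, $\theta_0$, $f_\eta^8$, $f_\eta^0$, $f_{\eta^\prime}^8$, $f_{\eta^\prime}^0$ within the rounding of the table.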
\n\nTo understand the dependence \nof the decay constants on the topological susceptibility \n$\\chi_{\\rm YM}$ (approximating $A$), it is important to note \nthat our $f_0$, which in a full QCD bound-state calculation\nwould be influenced by the gluon anomaly, is presently not, \nsince it is calculated (just like $f_8$) from the modeled\nmeson $q\\bar q$ substructure relying on RLA. \nIn Tables I and II one therefore sees that not only $f_8$ but also $f_0$ \nis $\\chi_{\\rm YM}$-independent, since the difference between $f_0$ \nand $f_8$ is presently generated only by their different $\\mathrm{NS}$\nand $\\mathrm{S}$ quark content. This feature is not only consistent with \nthe FKS scheme, but is in fact a general characteristic of this scheme.\nNamely, due to the neglect of OZI-violating contributions\n\\cite{Feldmann99IJMPA}, in the SU(3) flavor symmetry limit \none would have $f_\\mathrm{NS} = f_\\mathrm{S}$ and $f_0 = f_8$.\n(The DS approach will be able to obtain the \ngluon anomaly dependence of $f_0$ only when it manages to go beyond RLA,\nwhich is presently achieved only with schematic, very simplified\n$\\delta$-function-type interactions \\cite{Bhagwat:2007ha}.)\nThus, not only in the present DS calculation, but in fact in any \napplication of the FKS scheme, the $\\chi_{\\rm YM}$-dependence of the \nfour physical $\\eta$-$\\eta'$ decay constants \n(\\ref{def:f8etaf0eta})-(\\ref{def:f8eta'f0eta'})\nstems exclusively from the\n$\\chi_{\\rm YM}$-dependence of the mixing angles $\\phi, \\theta_8$ and \n$\\theta_0$. Its origin, as explained in the previous section, is in \nthe interplay of the anomaly with the flavor symmetry breaking. 
\nIn fact, the FKS scheme is based on the assumption that the \nflavor symmetry breaking is significantly more important than \nthe OZI-violating contributions (arising beyond RLA in the DS approach).\n\nThe feature that may be surprising is that Shore's results\n(which, to be sure, were obtained \\cite{Shore:2006mm,Shore:2007yn}\nin quite a different way from ours) are more similar to our results\nobtained through the standard WV relation than to our results\nobtained through Shore's Eqs. (\\ref{eq:bn})-(\\ref{eq:bp}).\n\n\nTo summarize: the present paper has explored a modification of \nthe DS treatments of the $\\eta$-$\\eta'$ complex employed in \nRefs. \\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2001ph,Kekez:2005ie}.\nIn Refs. \\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2001ph}, the\nvalue of the anomalous $\\eta_0$ mass shift was obtained by fitting, \nbut Ref. \\cite{Kekez:2005ie} improved the treatment by obtaining \nit from the lattice through the WV relation. A generalization of this\nrelation was recently proposed \\cite{Shore:2006mm,Shore:2007yn}, \nand the purpose of the present paper is to test the usage thereof \nin the bound-state, DS context, and compare the results with those\nfrom the standard WV relation.\n\n\nAll in all, inspection and comparison of the results \nin Table \\ref{DGMOR:tab:eta-etap-mixing-all3m-forArticle}\nwith the results (in Table \\ref{3WV+Shore}) from the\nanalogous calculations but using the standard WV relation to construct\nthe complete $\\eta$-$\\eta'$ mass matrix, leads to the conclusion that\nthe DS approach with the standard WV relation (\\ref{WittenVenez}) is \nphenomenologically more successful, yielding the masses closer to \nthe experimental ones. 
This may seem surprising,\nbut one must be aware that we do not\nyet have at our disposal the full QCD topological charge parameter\n$A$, and that we (along with Shore) had to use its lowest ${1}\/{N_c}$\napproximation, $\\chi_{\\rm YM}$.\nThis in general precludes a {\\it consistently} improved $1\/N_c$ treatment in\nspite of the usage of Shore's relations. The problems with inconsistencies\nin the $1\/N_c$ counting may well spoil the results, especially in an\napproach such as ours, where the $\\eta$ and $\\eta'$ masses are not inputs,\nbut predicted quantities.\nOur results thus add a new argument to the motivation for undertaking\nlattice calculations proposed by Shore \\cite{Shore:2007yn} and aimed\nat properly determining the quantity $A$.\nAlso, we should recall from Sections \\ref{INTRO} and\n\\ref{massMatrixAndWVrelation} that the very usage of the RLA\nassumed that the anomaly is implemented on the level of\nthe anomalous mass only, as a lowest order $1\/N_c$ correction\n\\cite{Klabucar:1997zi,Kekez:2000aw,Kekez:2005ie,Horvatic:2007wu,Horvatic:2007qs}.\nThus, with respect to the orders in ${1}\/{N_c}$,\nusing Shore's generalization in the present formulation of our DS approach\nmay be less consistent than using the standard WV relation, which\nmay well be the cause of its lesser phenomenological success.\n\nIn spite of its lesser phenomenological success (compared to the standard WV relation) \nin the present context of bound-state DS calculations at zero temperature, the \npresently exposed usage of Shore's generalization will likely find its application \nin finite-temperature calculations in the DS context. Namely, there it may help \nalleviate the difficulties met due to the usage of the standard WV relation in \nthe DS approach at $T>0$, as discussed in Ref. \\cite{Horvatic:2007qs}.\n\n\n\n{\\bf Acknowledgments} \n\nD.H.~and~D.Kl. acknowledge the support of the project No.~119-0982930-1016 \nof MSES of Croatia. D.Kl. 
also acknowledges the hospitality and support through a\nsenior associateship of the International Centre for Theoretical \nPhysics at Trieste, Italy, where the present paper was started. \nD.Kl. also thanks the LIT of JINR for \nits hospitality in Dubna, Russia, in August 2007. D.Ke. acknowledges \nthe support of the Croatian MSES project No.~098-0982887-2872.\nYu.K. acknowledges support from Deutsche Forschungsgemeinschaft (DFG) \nunder grant No. BL 324\/3-1, and the work of D.B. was supported by \nthe Polish Ministry of Science and Higher Education under contract \nNo. N N202 0953 33.\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOrigin of instabilities, patterns, and chaos in Rayleigh-B\\'enard convection (RBC) is an interesting problem in fluid dynamics~\\cite[]{chandra:book,busse:TAP_1981,jkb:book,manneville:book, ahlers:RMP_2009}. Prandtl number, $P$ (ratio of kinematic viscosity $\\nu$ and thermal diffusivity $\\kappa$) and Rayleigh number, $R$ (ratio of buoyancy and dissipative terms) are the two critical parameters for RBC. Some of the key and difficult problems in this field are related to the onset of convection for zero-Prandtl number (zero-P) and low-Prandtl number (low-P) fluids. For zero-P and low-P convection, the nonlinear term of the Navier-Stokes equation plays an important role, and it generates vertical vorticity resulting in three-dimensional patterns and the associated secondary oscillatory patterns~\\cite[]{busse:JFM_1972, clever:JFM_1974, busse:jfm_1984}. In the present paper we will study various patterns and chaos for zero-Prandtl number convection using bifurcation analysis. 
We use direct numerical simulations and a low-dimensional model for this purpose.\n\nLow-Prandtl number fluids, for example, mercury ($P \\approx 0.02$), liquid sodium ($P \\approx 0.01$), solar plasma in the convective zone ($P \\sim 10^{-8}$), exhibit interesting convective patterns and chaos \\cite[]{croquette:CP_1989_a, croquette:CP_1989_b,cross:RMP_1993, bodenschatz:ARFM_2000, das:PRE_2000, stein:solar}. In terrestrial experiments, the Prandtl number cannot go below that of liquid sodium ($\\sim 0.01$). In addition, the visualization of flow patterns inside low-P fluids like mercury and sodium is quite difficult. Due to these limitations of experiments, numerical simulations are very significant in the study of low-P and zero-P convection. The numerical work of \\cite{thual:zeroP_1992} indicates that the properties of low-P convection as $P \\rightarrow 0$ are quite close to those of zero-P convection. Hence low-P convection as $P \\rightarrow 0$ appears to approach zero-P convection as a limiting case. Therefore, zero-P convection is very useful for understanding the properties of low-P convection. Even though numerical analysis of zero-P convection is quite tricky due to its inherent instabilities near the onset, it provides certain advantages. For $P=0$ the thermal modes are slaved to the velocity modes, hence the number of independent variables is smaller than that required for low-P analysis. Also, the time steps for numerical simulations of very low-P convection could be very small due to the stiffness of the equations, which is not a limitation for zero-P convection \\cite[]{thual:zeroP_1992}. These features led us to investigate zero-P convection in detail for understanding the convective patterns and chaos in low-P fluids.\n\n\\cite{thual:zeroP_1992} was one of the first to simulate zero-P convection for free-slip and no-slip boundary conditions. 
He reported various supercritical oscillatory instabilities, regular and chaotic patterns, etc.\\ in his simulations. The patterns observed by Thual are two-dimensional rolls, periodic and quasiperiodic rolls, squares, travelling waves, etc. For low-P fluids, \\cite{meneguzzi:jfm_1987} performed numerical simulations for $P = 0.2$ under stress-free boundary conditions, and for $P = 0.025$ under no-slip boundary conditions, and observed similar patterns. \\cite{clever:pf_1990} found travelling wave solutions for low-P convection under no-slip boundary conditions. These observations indicate that zero-P convection contains pertinent features of low-P convection. Also, free-slip and no-slip boundary conditions exhibit similar convective flow patterns. \n\n\nThe oscillatory instabilities and the regular and chaotic patterns found in numerical simulations have also been observed in convection experiments on mercury, air, and liquid sodium \\cite[]{Rossby:lowP, krishnamurti:JFM_1970, krishnamurti:JFM_1973, maurer:JPL_1980,libchaber:JPL_1982, libchaber:physica_1983}. Some of the most commonly observed patterns in experiments are stationary, periodic, quasiperiodic, chaotic, and travelling rolls, as well as squares, asymmetric squares, and phase locked states. Chaotic rolls have been observed to appear through both period doubling and quasiperiodic routes in some of these experiments. \n\nConvection experiments of \\cite{willis:pf_1967}, \\cite{krishnamurti:JFM_1970, krishnamurti:JFM_1973}, \\cite{busse:JFM_1974}, and numerical simulations of \\cite{lipps:jfm_1976}, \\cite{bolton:JFM_1985}, \\cite{clever:pf_1990}, \\cite{meneguzzi:jfm_1987}, \\cite{thual:zeroP_1992}, \\cite{ozoe:NHT_1995} indicate the presence of oscillatory instability of the two-dimensional convective rolls and the resultant stable wavy rolls. 
Busse and coworkers~\\cite[]{busse:JFM_1972, clever:JFM_1974, busse:jfm_1984} showed using perturbative analysis that the two-dimensional rolls become unstable to oscillatory three-dimensional disturbances when the amplitude of the convective motion exceeds a finite critical value. According to \\cite{busse:JFM_1972}, the condition for the instability takes the form\n\\begin{equation}\n\\frac{R_t}{R_c} -1 \\approx 0.310 P^2\n\\end{equation}\nwhere $R_c$ is the critical Rayleigh number for the onset of convection, and $R_t$ is the Rayleigh number where oscillatory instability starts. In addition, the time period of the oscillations measured in the units of $d^2\/\\nu$ (viscous time scale) is independent of $P$ \\cite[Eq.~(5.2) of][]{busse:JFM_1972}. In a related work, \\cite{fauve:1987} investigated the origin of instabilities in low-P convection using phase dynamical equations and argued that the instability always saturates into travelling waves. In the present paper we will explore the origin of oscillatory instability using numerical simulations and bifurcation theory.\n\nThe origin of the plethora of convective patterns observed in convection experiments and numerical simulations can be quite intricate. Each run of a convective simulation takes a significantly long time, so it is not possible to scan the parameter space minutely for deciphering the detailed bifurcation scenario. Experiments too have their complexities and limitations. The large number of modes present in simulations and experiments tends to obscure the underlying dynamics. These difficulties are circumvented by a powerful and complementary approach in which the system is analyzed using an appropriately constructed low-dimensional model. The relatively low computational cost of running low-dimensional models and the ease of constructing bifurcation diagrams are some of the distinct advantages of these models compared to the experiments and simulations. 
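Taken at face value, the Busse condition quoted earlier in this section places the oscillatory instability of 2D rolls extremely close to the onset for low-P fluids. A minimal numerical illustration, using the Prandtl numbers of mercury and liquid sodium quoted in the introduction:

```python
# Busse's condition for the onset of oscillatory instability of 2D rolls:
# R_t / R_c - 1 ~ 0.310 * P^2 (valid for low Prandtl numbers).
def reduced_instability_threshold(P):
    """Return r_t = R_t / R_c at which 2D rolls turn oscillatory."""
    return 1.0 + 0.310 * P**2

# For low-P fluids the threshold is barely above the onset of convection:
r_mercury = reduced_instability_threshold(0.02)  # mercury, P ~ 0.02
r_sodium = reduced_instability_threshold(0.01)   # liquid sodium, P ~ 0.01
r_zero_p = reduced_instability_threshold(0.0)    # zero-P limit: r_t -> 1
```

For mercury this gives $r_t \approx 1.000124$, i.e., the oscillatory instability sets in essentially at onset, consistent with the zero-P limit where the threshold collapses onto $r=1$.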
However, care must be taken to include all the relevant modes of the system when constructing the low-dimensional models. \n\nThe number of active modes near the onset of convection is not very large, so low-dimensional models consisting of these active modes are very useful for analyzing this regime. \\cite{kumar:JP_1996} showed using a 6-mode model of zero-P convection that the growth of the 2D rolls saturates through the generation of vertical vorticity (wavy nature). \\cite{kumar:burst_2006} observed critical bursting in the above model during the saturation. \\cite{Pal:PRE_2002} explained the mechanism of selection of the square patterns using a 15-mode model of the zero-P RBC. Recently \\cite{Pal:EPL_2009} constructed a 13-mode model using the energetic modes of DNS, and performed a bifurcation analysis using the model and simulation results. Using the bifurcation diagram, \\cite{Pal:EPL_2009} could explain the origin of squares (SQ), asymmetric squares (ASQ), oscillating asymmetric squares (OASQ), relaxation oscillations with an intermediate square regime (SQOR), and three kinds of chaotic attractors. In addition, the above patterns were observed in both DNS and models. Earlier \\cite{thual:zeroP_1992} had observed SQ and SQOR patterns in his simulations. \\cite{jenkins:1984} studied the transition from 2D rolls to square patterns in RBC for different Prandtl numbers using analytical tools. \\cite{knobloch:jp_1992} studied the stability of the SQ patterns using a complementary procedure called the amplitude equations. \n\n\nA major limitation of the 13-mode model of \\cite{Pal:EPL_2009} was the absence of wavy rolls near the onset. This limitation is overcome by extending this model to a 27-mode model by incorporating the corresponding dominant modes. 
In the present paper we perform a bifurcation analysis of this model and explore the origin of the various convective patterns of zero-P convection with special emphasis on the oscillatory instability and related wavy roll patterns. All the features of the 13-mode model are reproduced in the 27-mode model by construction. We will show in our discussion that properties of the wavy rolls of the 27-mode model matches reasonably well with those observed in experiments and simulations. \n\nThe organization of the paper is as follows: In section \\ref{sec:hydrosystem}, we describe the basic hydrodynamic system considered for the study. The low dimensional model will be derived in section \\ref{sec:lowdmodel}. Section \\ref{sec:model_results} contains the results of the numerical simulations and the low-dimensional model. Various bifurcation diagrams are described in this section. A brief study of the wavy rolls observed in the low-dimensional model is presented in section \\ref{sec:wavy_rolls}. We finally conclude in section \\ref{conclusion}.\n\n\\section{Governing equations and direct numerical simulations}\\label{sec:hydrosystem}\n\nThe RBC system consists of a conducting fluid of kinematic viscosity\n$\\nu$, thermal diffusivity $\\kappa$, and coefficient of volume\nexpansion $\\alpha$ confined between two conducting plates separated\nby a distance $d$ and heated from below. 
In the zero-P limit the\nequations under Boussinesq approximation take the\nform~\\cite[]{spiegel,thual:zeroP_1992}\n\\begin{eqnarray}\n\\partial_t(\\nabla^2 v_3) &=& \\nabla^4 v_3 + R \\nabla^2_H \\theta - \\hat{\\bf e}_3\\cdot\\nabla\\times\n \\left[(\\mbox{\\boldmath $\\omega$}{\\cdot}\\nabla){\\bf v}\n -( {\\bf v}{\\cdot}\\nabla)\\mbox{\\boldmath $\\omega$} \\right],\n\\label{eq:v3}\\\\\n\\partial_t \\omega_3 &=& \\nabla^2 \\omega_3\n +\\left[(\\mbox{\\boldmath $\\omega$}{\\cdot}\\nabla) v_3\n -({\\bf v}{\\cdot}\\nabla)\\omega_3\\right],\n\\label{eq:omega3}\\\\\n {\\nabla}^2 \\theta&=& - v_3,\\label{eq:theta}\\\\\n\\nabla{\\cdot}{\\bf v} &=& 0, \\label{eq:continuity}\n\\end{eqnarray}\nwhere ${\\bf v}(x,y,z,t) \\equiv (v_1,v_2,v_3)$ is the velocity field,\n$\\theta(x,y,z,t)$ is the deviation in the temperature field from the\nsteady conduction profile, $\\mbox{\\boldmath $\\omega$} \\equiv\n(\\omega_1, \\omega_2, \\omega_3) \\equiv \\nabla\\times{\\bf v}$ is the\nvorticity field, $\\hat{\\bf e}_3 $ is the vertically\ndirected unit vector, and $ \\nabla_H^2 = \\partial_{xx} +\n\\partial_{yy} $ is the horizontal Laplacian. The equations have\nbeen nondimensionalized using $d$ as the length scale, $d^2\/\\nu$ as\nthe time scale, and $\\nu \\beta d\/\\kappa$ as the temperature scale,\nwhere $\\beta$ is the uniform temperature gradient. The two\nnondimensional parameters in the equations are the Rayleigh number $R =\n\\alpha \\beta g d^4\/{\\nu\\kappa}$ and the Prandtl number $P = \\nu\/\\kappa$,\nwhere $g$ is the acceleration due to gravity. In the following\ndiscussions we will also use the reduced Rayleigh number $r =\nR\/R_{c}$ as a parameter.\n\n\nWe consider perfectly conducting boundary conditions for the top and\nbottom plates along with the free-slip boundary condition for the\nvelocity field. Consequently\n\\begin{equation}\nv_3 = \\partial_{3}v_1 = \\partial_{3}v_2 = \\theta = 0 ,\\quad\n\\mbox{at}\\quad z = 0, 1. 
\\label{eq:bc}\n\\end{equation}\nWe assume periodic boundary conditions along the horizontal directions.\n\nEquations~(\\ref{eq:v3}-\\ref{eq:continuity}) are numerically solved\nusing direct numerical simulations (DNS) under the above boundary conditions (Eqs.~\\ref{eq:bc}). DNS were performed using a pseudo-spectral code TARANG~\\cite[]{canuto,tarang} in a box with aspect ratio $\\Gamma_x = 2\\sqrt{2}, \\Gamma_y = 2\\sqrt{2}$. Various grid resolutions, $32\\times 32\\times 32$, $64\\times 64\\times 64$, have been used for the simulations. These grids provide well-resolved simulations near the onset of convection. We used the fourth-order Runge-Kutta scheme (RK4) for time advancement. Each run was carried out until the system reached a steady state. The DNS runs were performed for $ 0.98 \\le r \\le 1.25$.\n\nWe performed around 200 simulation runs, yet these are insufficient to construct the bifurcation diagram of the system in detail. For this purpose we construct a low-dimensional model, which will be described in the next section.\n\n\\section{Low-dimensional model}\\label{sec:lowdmodel}\n\n We identify 27 Fourier modes that have significant energy (approximately 1\\% or more of the total energy) in the DNS runs and create a low-dimensional model. The modes of the model are shown in Fig.~\\ref{fig:modes_lowdim} and they account for approximately 98\\% of the total energy. The triangles represent the interacting triads. Note that only some of the interacting triads of the model have been shown in the figure. Care has been taken to include a sufficient number of modes so that the model reproduces many features found in experiments and simulations. Also, we ensure that the model results are reasonably close to the simulation results. 
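The RK4 time advancement used in the DNS above can be sketched generically. The stepper below is a textbook single-step RK4, not the TARANG implementation, and is checked on the harmonic oscillator, whose exact period is $2\pi$:

```python
import math

# Generic single-step fourth-order Runge-Kutta (RK4) for y' = f(t, y),
# with y a list of state variables (a textbook sketch, not the DNS code).
def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, [yi + 0.5 * dt * ki for yi, ki in zip(y, k1)])
    k3 = f(t + 0.5 * dt, [yi + 0.5 * dt * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt * (a + 2 * b + 2 * c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Sanity check on the harmonic oscillator y'' = -y, solution cos(t):
n = 1000
dt = 2.0 * math.pi / n
y = [1.0, 0.0]                      # state (y, y')
for i in range(n):
    y = rk4_step(lambda s, u: [u[1], -u[0]], i * dt, y, dt)
# After one full period the state should return to (1, 0) up to O(dt^4).
```

The fourth-order global accuracy is what makes RK4 a common default for the non-stiff time stepping needed here.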
The vertical velocity field ($v_{3}$) and vertical vorticity field ($\\omega_{3}$) in terms of the chosen modes are\n\\begin{eqnarray}\nv_3 &=& W_{101}(t)\\cos(k x)\\sin(\\pi z) + W_{011}(t)\\cos(k y)\\sin(\\pi z)\\nonumber\\\\\n& &+W_{202}(t)\\cos(2 k x)\\sin(2 \\pi z) + W_{022}(t)\\cos(2 k y)\\sin(2 \\pi z)\\nonumber\\\\\n& &+W_{103}(t)\\cos(k x)\\sin(3 \\pi z) + W_{013}(t)\\cos(k y)\\sin(3 \\pi z)\\nonumber\\\\\n& &+W_{301}(t)\\cos(3 k x)\\sin(\\pi z) + W_{031}(t)\\cos(3 k y)\\sin(\\pi z)\\nonumber\\\\\n& &+W_{121}(t)\\cos(k x)\\cos(2 k y)\\sin(\\pi z)+W_{211}(t)\\cos(2 k x)\\cos(k y)\\sin(\\pi z) \\nonumber\\\\\n& &+W_{112}(t)\\cos(k x)\\cos(k y)\\sin(2\\pi z)+W_{111}(t)\\sin(k x)\\sin(k y)\\sin(\\pi z)\\\\\n\\omega_3 &=& Z_{100}(t)\\cos(k x) + Z_{010}\\cos(k y)\\nonumber\\\\\n& & + Z_{110}(t)\\sin(k x)\\sin(k y) + Z_{112}(t) \\sin(k x)\\sin(k y) \\cos(2 \\pi z) \\nonumber\\\\\n& & + Z_{310}(t)\\sin(3 k x)\\sin(k y) + Z_{130}(t)\\sin(k x)\\sin(3 k y)\\nonumber\\\\\n& & + Z_{120}(t)\\cos(k x)\\cos(2 k y) + Z_{210}(t)\\cos(2 k x)\\cos(k y)\\nonumber\\\\\n& & + Z_{102}(t)\\cos(k x)\\cos(2 \\pi z) + Z_{012}(t)\\cos(k y)\\cos(2 \\pi z)\\nonumber\\\\\n& & + Z_{201}(t)\\cos(2 k x)\\cos(\\pi z) + Z_{021}(t)\\cos(2 k y)\\cos(\\pi z)\\nonumber\\\\\n& & + Z_{111}(t)\\cos(k x)\\cos(k y)\\sin(\\pi z) + Z_{121}(t)\\sin(k x)\\sin(2 k y)\\cos(\\pi z)\\nonumber\\\\\n& &+Z_{211}(t)\\sin(2 k x)\\sin(k y)\\cos(\\pi z)\n\\end{eqnarray}\nwhere $W_{lmn}$ and $Z_{lmn}$ are the Fourier amplitudes of the vertical velocity and vertical vorticity modes respectively with the three subscripts ($l, m, n$) indicating the wavenumber components along the $x$, $y$, and $z$ directions respectively. The modes $(1,0,1)$ and $(0,1,1)$ are the most important modes of our model, and they represent the rolls along $y$ and $x$ directions respectively. For the square pattern, the most important participating triad is $\\{ (1,0,1), (0,1,1), (1,1,2) \\}$. 
Note that the wavenumbers of the interacting triad satisfy ${\\bf k}={\\bf p}+{\\bf q}$.\n\n\nThe horizontal components of the velocity field can be computed using the incompressibility condition of the velocity field (Eq.~(\\ref{eq:continuity})), and the temperature field $\\theta$ can be computed using Eq.~(\\ref{eq:theta}). A Galerkin projection of the RBC equations~(\\ref{eq:v3}-\\ref{eq:omega3}) on the above modes provides a $27$-dimensional system of coupled first-order nonlinear ordinary differential equations for the amplitudes of the above Fourier modes. On this 27-mode model, we perform a detailed bifurcation analysis that we describe in the subsequent sections. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{modes_lowdim.eps}\n\\end{center}\n\\caption{The interacting modes of the 27-mode model. Interactions corresponding to the 13-mode model are shown in blue, whereas new interactions unique to the 27-mode model are shown in red. The modes in green are active in the wavy roll flow patterns.} \n\\label{fig:modes_lowdim}\n\\end{figure}\n\nOur 27-mode model is a superset of the 13-mode model of \\cite{Pal:EPL_2009}. The nonlinear interactions of the 13-mode model are indicated by blue curves in Fig.~\\ref{fig:modes_lowdim}. Additional interactions induced by the new modes of the 27-mode model are represented by red curves in the figure. A primary motivation for the 27-mode model is to be able to generate wavy rolls. The triads $ \\{ (0,1,0), (1,0,1), (1,1,1) \\}$ and $ \\{ (1,0,0), (0,1,1), (1,1,1) \\}$ play a critical role in inducing wavy rolls along the $y$ and $x$ axes respectively. In the present paper we will investigate the dynamics of these wavy rolls in RBC using numerical simulations and the 27-mode model along with other features that are generated by the inclusion of these modes.\n\nWe numerically solve the 27-mode model by employing accurate ODE solvers of MATLAB. 
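The triad closure condition ${\bf k}={\bf p}+{\bf q}$ can be checked directly for the triads named in this section. The sketch below treats the mode index triples $(l,m,n)$ as signed wavevector components, glossing over the sign conventions of the sine/cosine expansion, which is an assumption of this illustration:

```python
# Check triad closure k = p + q for the interacting triads named in the
# text, treating index triples (l, m, n) as wavevector components.
def closes(k, p, q):
    return all(ki == pi + qi for ki, pi, qi in zip(k, p, q))

triads = [
    ((1, 1, 2), (1, 0, 1), (0, 1, 1)),  # square-pattern triad
    ((1, 1, 1), (0, 1, 0), (1, 0, 1)),  # wavy rolls along y
    ((1, 1, 1), (1, 0, 0), (0, 1, 1)),  # wavy rolls along x
]
results = [closes(k, p, q) for k, p, q in triads]
```

All three triads close, while an arbitrary unordered combination of the same modes does not, which is why the ordering within each triad matters.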
As a result we observe a variety of convective patterns: squares (SQ), asymmetric squares (ASQ), oscillating asymmetric squares (OASQ), relaxation oscillations with an intermediate square regime (SQOR), wavy rolls, chaotic squares, etc. We have illustrated three snapshots each of OASQ in Fig.~\\ref{fig:pattern_OASQ}, SQOR in Fig.~\\ref{fig:pattern_SQOR}, and wavy rolls in Fig.~\\ref{fig:pattern_wavyroll}. For dynamics of these patterns as well as other patterns mentioned in this paper refer to the accompanying videos. Note that all the above patterns were also found in our DNS. Earlier \\cite{thual:zeroP_1992} in his DNS of zero-P convection had shown the existence of SQ, SQOR, oscillatory quasihexagons (SQOS), chaotic squares (SQCH), and chaotic quasihexagon (HXCH). Thual observed the oscillatory and chaotic quasi-hexagons for Rayleigh numbers beyond the range investigated in this paper.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{OASQ_flow_r1p1.eps}\n\\end{center}\n\\caption{Oscillating asymmetric square (OASQ) pattern in the mid-plane of the convection box in zero-P RBC. The pattern is obtained from the 27-mode model at $r=1.1$. Snapshots at: (a) t = 0, (b) t = T\/4, and (c) t = T\/2, where T is the time period of oscillation.} \\label{fig:pattern_OASQ}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{SQOR_flow_r1p05.eps}\n\\end{center}\n\\caption{Relaxation oscillations with an intermediate square regime (SQOR) pattern observed in the 27-mode model at $r=1.05$. Snapshots at: (a) t = 0, (b) t = T\/4, and (c) t = T\/2.} \\label{fig:pattern_SQOR}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{wavyroll_flow_r1p15.eps}\n\\end{center}\n\\caption{Wavy roll pattern in the 27-mode model at $r=1.15$. 
\\break Snapshots at: (a) t = 0, (b) t = T\/4, and (c) t = T\/2.} \\label{fig:pattern_wavyroll}\n\\end{figure}\n\n\nWe investigate the origin of various convective flow patterns from the bifurcation diagrams generated using the low-dimensional model. To generate the bifurcation diagram, we start with the fixed points of the system. We compute the fixed points using the Newton-Raphson method for a given $r$, and these fixed points are subsequently continued using a fixed arc-length based continuation scheme for the neighbouring $r$ values~\\cite[]{nanda,wahi}. The stability of the fixed points is ascertained through an eigenvalue analysis of the Jacobian. New branches of fixed points and limit cycles are born when the eigenvalue(s) become zero (pitchfork) or purely imaginary (Hopf), respectively. This process is continued on the new branch. For aperiodic and chaotic solutions, we resort to numerical integration and report the extremum values of the important modes. We use our own MATLAB code as well as MATCONT~\\cite[]{dhooge:matcont} for the analysis. \n\n\n\\section{Bifurcation analysis using model and simulation results }{\\label{sec:model_results}}\n\nIn the present section we numerically solve Eqs.~(\\ref{eq:v3}-\\ref{eq:continuity}) using DNS and the 27-mode model in the range $0.98 \\leq r \\leq 1.25$. This range of $r$ values is near the onset of convection. We will present the bifurcation diagrams associated with the different attractors using the low-dimensional model followed by a detailed comparison of the model results with those obtained from DNS. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=15cm]{3d_bif1_diagram.eps}\n\\end{center}\n\\caption{Three dimensional view of the bifurcation diagram exhibiting the fixed points only. \nSolid and dashed curves represent the stable and unstable fixed points respectively. 
Black,\nblue, and cyan curves represent stationary squares (SQ), asymmetric stationary squares\n(ASQ), and the conduction state, respectively. Solid purple lines on the $W_{101}$ and $W_{011}$ axes represent the neutrally stable 2D roll solutions and chained purple lines represent unstable 2D rolls. CB1 (black diamond), CB2 (red dots), and CB3 (green squares) are the codimension-2 bifurcation points on the $r=1$ plane. The critical eigenvalues at CB1, CB2, and CB3 are denoted by $\\lambda_{c}$ in the figure.}\n\\label{3d_bif}\n\\end{figure}\n\n\\subsection{Fixed points of the system}{\\label{sec:fixed_points}}\nFixed points of a dynamical system and their bifurcations provide important clues about the\nsystem dynamics. Therefore, we start our analysis by locating all\nthe fixed points of the 27-mode model. In Fig.~\\ref{3d_bif} we\ndisplay the projection of these fixed points on the\n$W_{101}$-$W_{011}$ plane as a function of $r$. For $r<1$, the only stable fixed point of the system is the origin, which\ncorresponds to the pure conduction state (cyan curve). At $r=1$,\nthe conduction state loses stability, and neutrally stable pure roll\nsolutions (purple curves of Fig.~\\ref{3d_bif}) and four unstable branches of symmetric square solutions (dotted black curves of Fig.~\\ref{3d_bif}) satisfying $|W_{101}| = |W_{011}|$ are born. Note\nthat the stability matrix at the origin (CB1) has double zero eigenvalues,\nand hence CB1 is a codimension-2 bifurcation point. \n\nThe neutrally stable 2D roll solutions of the $r=1$ plane become unstable\nthrough another codimension-2 bifurcation at CB2 \\{($W_{101} \\simeq \\pm 13.44$, $W_{011}=0$); ($W_{011} \\simeq \\pm 13.44$, $W_{101}=0$)\\}, which is represented by large red dots on the $r=1$ plane. The stability matrix at CB2 has a zero eigenvalue and a purely imaginary pair ($\\lambda_c = (0, \\pm i \\omega)$ with $\\omega \\approx 14.2$). 
As a consequence of the complex eigenvalues of CB2, for $r>1$ periodic solutions are born, and the 2D rolls lose their stability (chained purple lines of Fig.~\\ref{3d_bif}). These\nperiodic solutions are also unstable, akin to the symmetric square fixed points associated with CB1. We will show later that the wavy rolls are associated with CB2. Note that CB2 and its associated attractors are absent in the 13-mode model~\\cite[]{Pal:EPL_2009}. \n\nThe unstable roll solutions which persist at $r=1$ with amplitudes greater than $13.44$ subsequently undergo yet another codimension-2\nbifurcation at CB3 \\{($W_{101}=\\pm 26.94, W_{011}=0$); ($W_{101}=0, W_{011}=\\pm 26.94$)\\}, which are represented as large green\nsquares on the $r=1$ plane. The stability matrix at CB3 has double zero eigenvalues ($\\lambda_c=0,0$). Consequently, unstable asymmetric square solutions with $|W_{101}| \\neq |W_{011}|$ (dotted blue curves) are born. This bifurcation is also present in the 13-mode model~\\cite[]{Pal:EPL_2009}, and the attractors associated with this bifurcation in the 13-mode model carry over to the 27-mode model as well. Note that the ASQ solutions of the 27-mode model have two pairs of unstable eigenvalues as opposed to one unstable pair in the 13-mode model due to the presence of CB2. \n\nFor low-P convection \\cite{mishra:EPl_2010} have earlier observed that the 2D rolls undergo a pitchfork bifurcation followed by a Hopf bifurcation. In the limiting case of zero-P, the Hopf bifurcation point merges with the pitchfork bifurcation point at $r=1$, and the critical eigenvalues at this bifurcation point are $(0,0)$, giving rise to a codimension-2 bifurcation CB3. 
With an increase in $r$, the double zero eigenvalues split into an unstable complex conjugate pair, giving rise to a scenario very similar to the Takens-Bogdanov bifurcation \\cite[]{guckenheimer:book,kuznetsov:book}.\n\n\n\n\nAs shown in Fig.~\\ref{3d_bif}, on the projection onto the $W_{101}$-$W_{011}$ plane, there are $13$ unstable fixed points for $r>1$: one corresponding to the pure conduction state, four satisfying $|W_{101}| = |W_{011}|$ (SQ), and the remaining eight satisfying $|W_{101}| \\ne |W_{011}|$ (ASQ). After two successive inverse Hopf bifurcations, to be described later, the unstable ASQ fixed points become stable. These\nstable fixed points are shown by the solid blue curves in\nFig.~\\ref{3d_bif}. Subsequently, at $r\\simeq 1.1690$, these stable\nASQs merge (via a pitchfork bifurcation) with the symmetric square\nsolutions that originate from CB1 and stabilize them (solid black\ncurves in Fig.~\\ref{3d_bif}). Note that there is a small difference in the values\nof $r$ corresponding to the stabilization of the ASQ solutions and\nSQ solutions for the 13-mode model and the 27-mode model. \nIn summary, the 3D figure (Fig.~\\ref{3d_bif}) is\nqualitatively similar to the corresponding figure of the 13-mode\nmodel~\\cite[]{Pal:EPL_2009} except for the codimension-2 bifurcation at CB2, which is responsible for the wavy rolls. In the following discussions we will describe the bifurcation diagrams including limit cycles, chaotic attractors, etc.\n\nBifurcation diagrams of the 27-mode model are quite complex. They include six different types of chaotic attractors, various types of fixed points, and periodic solutions. The model also has multiple coexisting attractors for a given value of $r$. To disentangle its complexity, we present the bifurcation diagrams as four separate diagrams, ``Bif-13M'', ``Bif-A'', ``Bif-B'', ``Bif-C'', that highlight different features of the dynamics. 
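The fixed-point machinery underlying these diagrams (Newton-Raphson location of a fixed point followed by an eigenvalue analysis of its Jacobian, see \S\ref{sec:model_results}) can be sketched minimally; this is our own Python illustration on a toy one-dimensional pitchfork normal form, not the 27-mode system or the MATCONT continuation code:

```python
import numpy as np

# Hedged sketch: Newton-Raphson for a fixed point of dy/dt = f(y), a
# finite-difference Jacobian, and its eigenvalues for the stability test
# (a zero eigenvalue signals a pitchfork-type branch point, a purely
# imaginary pair a Hopf). The toy f below is a supercritical pitchfork
# normal form, dy/dt = mu*y - y^3 with mu = 1, not the 27-mode model.
def jacobian(f, y, eps=1e-7):
    n = len(y)
    J = np.zeros((n, n))
    f0 = f(y)
    for j in range(n):
        yp = y.copy(); yp[j] += eps
        J[:, j] = (f(yp) - f0) / eps
    return J

def newton_fixed_point(f, y0, tol=1e-12, maxit=50):
    y = np.asarray(y0, float)
    for _ in range(maxit):
        dy = np.linalg.solve(jacobian(f, y), -f(y))
        y += dy
        if np.linalg.norm(dy) < tol:
            return y
    raise RuntimeError("Newton iteration did not converge")

f = lambda y: np.array([1.0 * y[0] - y[0] ** 3])
yfix = newton_fixed_point(f, [1.5])
lam = np.linalg.eigvals(jacobian(f, yfix))
print(yfix, lam)   # converges to y* = 1 with eigenvalue ~ -2 (stable branch)
```

In the full analysis the converged fixed point is then continued in $r$ by an arc-length scheme, with the eigenvalues monitored along the branch.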
First we draw the bifurcation diagram ``Bif-13M'' associated with the 13-mode model~\\cite[]{Pal:EPL_2009}, which is a subset of the 27-mode model. Later, we will contrast the bifurcation diagram of the 27-mode model (denoted by ``Bif-A'') with the diagram of the 13-mode model.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{bifurcation_13mode.eps}\n\\end{center}\n\\caption{Bifurcation diagram Bif-13M of the 13-mode model for $0.98 \\leq r\n\\leq 1.25$~\\cite[adapted from][]{Pal:EPL_2009}. The stable branches corresponding to SQ and ASQ are represented by solid black and blue lines, respectively. Red, green, and brown points represent the extrema of OASQ, SQOR, and chaotic solutions, respectively. A zoomed view of the bifurcation diagram for the chaotic regime is shown in the inset. Branches corresponding to the unstable fixed points are represented by dashed lines. The cyan line represents the conduction state.}\n\\label{fig:bifur13}\n\\end{figure}\n\n\\subsection{Bifurcation diagram of the 13-mode model}{\\label{sec:bif13M}}\n\nAs illustrated by Fig.~\\ref{fig:modes_lowdim}, the 13-mode model is a subset of the 27-mode model. If we force only the modes of the 13-mode model to be nonzero, and the others to be zero, the bifurcation diagram corresponding to the 13-mode model is naturally reproduced. \\cite{Pal:EPL_2009} contains a detailed discussion of this diagram and the associated flow patterns (both from DNS and the model). Here we only provide a brief description. \n\nThe bifurcation diagram for the 13-mode model is illustrated in Fig.~\\ref{fig:bifur13} in which we plot the positive value of $|W_{101}|_{extremum}$ as a function of $r$. For zero-P convection, chaos is observed at the onset itself. Therefore, \\cite{Pal:EPL_2009} start their analysis at $r=1.4$ where symmetric square (SQ) patterns are observed. These states are represented by the solid black curve of Fig.~\\ref{fig:bifur13} (here the diagram is shown only for $r\\le 1.25$). 
At $r \\approx 1.2201$, SQ branches bifurcate to ASQ solutions through a supercritical pitchfork bifurcation. The solid blue curves of Fig.~\\ref{fig:bifur13} correspond to ASQ patterns. The ASQ branches bifurcate to OASQ solutions (the solid red curves) through a Hopf bifurcation. The limit cycles thus generated grow in size and touch the saddles (dashed line of the SQ branch) to create a very narrow window of homoclinic chaos. After this, the system again becomes periodic (SQOR) with the merger of the limit cycles. The SQOR patterns transform to a chaotic attractor {\\em Ch1} through a homoclinic bifurcation. {\\em Ch1} turns to {\\em Ch2} and subsequently to {\\em Ch3} through ``crisis''. \\cite{Pal:EPL_2009} observed these patterns in both model and DNS. Earlier, \\cite{thual:zeroP_1992} had observed SQ, ASQ, and SQOR patterns in his DNS runs.\n\nIn the subsequent subsections we will describe the bifurcation scenario for the 27-mode model.\n\n\\subsection{Bifurcation diagram Bif-A of the 27-mode model}{\\label{sec:Bif-A}}\n\nThe square pattern described above is also observed in the 27-mode model. However, for SQ in the 27-mode model, the twelve modes $W_{111}$, $Z_{110}$, $Z_{112}$, $Z_{100}$, $Z_{010}$, $Z_{111}$, $Z_{210}$, $Z_{120}$, $Z_{201}$, $Z_{021}$, $Z_{102}$, and $Z_{012}$ still remain zero. When we continue the SQ branch of the 27-mode model, we obtain a new bifurcation diagram called ``Bif-A\" shown in Fig.~\\ref{fig:bifur27A}. The\nbifurcation diagram Bif-A is qualitatively similar to Bif-13M except in a narrow window of $ 1.116 \\le r \\le 1.128$ where additional bifurcations are observed. The ASQ branch of solutions in Bif-A has seventeen active modes with the modes $W_{111}$, $Z_{100}$, $Z_{010}$, $Z_{111}$, $Z_{210}$, $Z_{120}$, $Z_{201}$, $Z_{021}$, $Z_{102}$, and $Z_{012}$ as zeros.\n\nThe new features of Bif-A are as follows. 
At $r = 1.1260$, the ASQ branch undergoes a supercritical Hopf bifurcation (H1, see Fig.~\\ref{fig:r_w111_projection}) resulting in a time-periodic convective flow, as illustrated in Fig.~\\ref{fig:H1_NS1}(a,b), where we show a projection of the limit cycle obtained from the DNS and the model on the $W_{111}$-$Z_{010}$ plane. All the 27 modes are active for these periodic flow patterns.\n\nAs $r$ is reduced further, at $r=1.1257$ a new frequency incommensurate with the original frequency is born through a Neimark-Sacker bifurcation (NS1) and the limit cycle becomes unstable. Here, a pair of complex Floquet multipliers crosses the unit\ncircle outwards as illustrated in Fig.~\\ref{fig:Floquet_NS1_NS2}(a). The phase space\ntrajectory of the system on the $W_{111}$-$Z_{010}$ plane is therefore\nquasiperiodic as demonstrated in Fig.~\\ref{fig:H1_NS1}(c,d) for\nthe DNS and the model, respectively. The unstable limit cycle \ncontinues till $r=1.0651$, where it meets the unstable limit cycles\nof the wavy rolls, which will be discussed in~\\S~\\ref{sec:Bif-C}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{bifurcation1_diagram_2.eps}\n\\end{center}\n\\caption{Bifurcation diagram Bif-A of the 27-mode model with the same color convention as Bif-13M shown in Fig.~\\ref{fig:bifur13}. This diagram is qualitatively similar to Bif-13M. New features of Bif-A are the H1, NS1, H2, and NS2 bifurcations shown in the boxed region ($1.116 < r < 1.128$), whose zoomed view is shown in the inset.} \n\\label{fig:bifur27A}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{H1_LC_w111_projection.eps}\n\\end{center}\n\\caption{Plot of $W_{111}$ vs. $r$ near the first Hopf bifurcation (H1) of the\nASQ branch. 
The solid brown curve represents the limit cycles generated after the first Hopf (H1).}\n\\label{fig:r_w111_projection}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{H1_NS1_DNS_model.eps}\n\\end{center}\n\\caption{Projection of the phase space on the $W_{111}$ - $Z_{010}$\nplane. The limit cycles born through H1 in (a) DNS at $r=1.138$ and (b)\nmodel at $r=1.1258$. Quasiperiodic attractor born through NS1 in (c)\nDNS at $r=1.131$ and (d) model at $r=1.1245$.} \n\\label{fig:H1_NS1}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{NS1_NS2.eps}\n\\end{center}\n\\caption{Movement of Floquet multipliers in the complex plane during NS1 and NS2 bifurcations. (a) At NS1, a pair of complex Floquet multipliers move out of the unit circle (red dots changing to blue). (b) At NS2, a pair of complex Floquet multipliers enter the unit circle (red dots again becoming blue).} \n\\label{fig:Floquet_NS1_NS2}\n\\end{figure}\n\nOn further reduction of $r$, at $r= 1.1226$, another Hopf bifurcation\n(H2) takes place on the unstable ASQ branch. The limit cycle born\nfrom H2 is, however, unstable. At $r = 1.1181$, this limit cycle becomes stable via\nan inverse Neimark-Sacker bifurcation (NS2) wherein the unstable\nFloquet multiplier pair enters the unit circle as evident in\nFig.~\\ref{fig:Floquet_NS1_NS2}(b). The resulting stable limit cycle\nis the oscillatory asymmetric square (OASQ) solution of the 13-mode\nmodel. Figure \\ref{fig:NS2} shows a projection of the limit cycle\ncorresponding to the OASQ solution obtained from the DNS and\nthe model on the $W_{101}$-$W_{011}$ plane. The\nquasiperiodic solutions exist only in the range\n$r=1.1181$-$1.1257$, i.e., between NS1 and NS2, and disappear after NS2. 
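The Floquet diagnostics used to identify NS1 and NS2 amount to computing the monodromy matrix of the limit cycle (the fundamental solution of the variational equations over one period) and tracking its eigenvalues. A hedged sketch, our own Python illustration on the Hopf normal form, whose limit cycle $x^2+y^2=1$ has period $2\pi$ and known multipliers, rather than on the 27-mode model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the state together with the variational matrix Phi,
# dPhi/dt = J(x, y) @ Phi, over one period; the final Phi is the
# monodromy matrix and its eigenvalues are the Floquet multipliers.
def rhs(t, u):
    x, y = u[:2]
    r2 = x * x + y * y
    f = [x - y - x * r2, x + y - y * r2]          # Hopf normal form
    J = np.array([[1 - 3 * x * x - y * y, -1 - 2 * x * y],
                  [1 - 2 * x * y, 1 - x * x - 3 * y * y]])
    Phi = u[2:].reshape(2, 2)
    return np.concatenate((f, (J @ Phi).ravel()))

T = 2 * np.pi                                     # known period of the toy cycle
u0 = np.concatenate(([1.0, 0.0], np.eye(2).ravel()))
sol = solve_ivp(rhs, (0, T), u0, rtol=1e-10, atol=1e-12)
M = sol.y[2:, -1].reshape(2, 2)                   # monodromy matrix
mu = np.linalg.eigvals(M)
print(np.sort(np.abs(mu)))  # trivial multiplier 1; second multiplier
                            # e^{-4*pi} ~ 3.5e-6, deep inside the unit circle
```

A Neimark-Sacker bifurcation is flagged when a complex pair of these multipliers crosses the unit circle, as in Fig.~\ref{fig:Floquet_NS1_NS2}.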
Note that the attractors between H1 and NS2 contain all the 27 modes, but beyond NS2 the ten modes $W_{111}$, $Z_{100}$, $Z_{010}$, $Z_{111}$, $Z_{210}$, $Z_{120}$, $Z_{201}$, $Z_{021}$, $Z_{102}$, and $Z_{012}$ again become zero. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{NS2_phase_DNS_model.eps}\n\\end{center}\n\\caption{Projections of the two coexisting OASQ solutions on the $W_{101}$ -$W_{011}$ plane just after the NS2 bifurcation point: (a) in DNS at $r=1.0768$ and (b) in the model at $r=1.095$.}\n\\label{fig:NS2}\n\\end{figure}\n\nBeyond NS2, the patterns and associated bifurcations from OASQ to SQOR\nto the chaotic attractors {\\em Ch1, Ch2} and {\\em Ch3} in the decreasing $r$\ndirection are exactly the same as for Bif-13M. The ranges of $r$ corresponding to these patterns are approximately $r=1.086$--$1.1181$ for OASQ, $r=1.0046$--$1.086$ for SQOR, $r=1.0034$--$1.0046$ for {\\em Ch1}, $r=1.0025$--$1.0034$ for {\\em Ch2}, and $r=1$--$1.0025$ for {\\em Ch3}. Comparison with the 13-mode model indicates that the ranges of $r$ for the above patterns as well as those for SQ and ASQ are different. This is due to the fact that more than 13 modes are active in the present model for Bif-A. The 27-mode model reproduces the ranges of the patterns obtained from DNS more accurately. Note that the modes absent for SQ, ASQ, OASQ, SQOR, {\\em Ch1} to {\\em Ch3} flow patterns are active for the wavy rolls that will be described in~\\S\\ref{sec:Bif-C}. \n\n\n\\subsection{Bifurcation diagram Bif-B of the 27-mode model}{\\label{sec:Bif-B}}\n\nWhen we start from an arbitrary initial condition for $r>1$ near the onset, most often the system tends to another attractor {\\em Ch4}, which differs from {\\em Ch1, Ch2}, and {\\em Ch3}. The codimension-2 bifurcation point CB3 of Fig.~\\ref{3d_bif} generates the chaotic attractor {\\em Ch4} as well. The bifurcation diagram ``Bif-B\" shown in Fig.~\\ref{fig:bifur27B} contains the attractor {\\em Ch4}. 
The attractor {\\em Ch4} coexists with the chaotic attractors {\\em Ch1, Ch2}, {\\em Ch3}, and SQOR for $r=1$--$1.056$. This feature is illustrated in Fig.~\\ref{fig:multi_attrac} where phase space projections (both from the model and DNS) for two different initial conditions at $r = 1.0342$ yield the SQOR (green curve) and the {\\em Ch4} (grey trajectory) attractors. Clearly, the trajectories of {\\em Ch4} explore all the four quadrants of the $W_{101}$-$W_{011}$ plane. The qualitative behavior of the {\\em Ch4} attractor is similar to {\\em Ch2}, but its size is larger than {\\em Ch2} \\cite[compare with Fig. 5 of][]{Pal:EPL_2009}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{bifurcation1_diagram.eps}\n\\end{center}\n\\caption{Bifurcation diagram Bif-B with the same color convention as Bif-13M (Fig.~\\ref{fig:bifur13}). A large chaotic attractor {\\em Ch4}, represented by gray dots, is born at CB3. The chaotic attractor {\\em Ch4} coexists with {\\em Ch1}, {\\em Ch2}, {\\em Ch3} (shown in the inset using brown dots), and SQOR.}\n\\label{fig:bifur27B}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{multiple_attractor.eps}\n\\end{center}\n\\caption{Phase space projection of coexisting SQOR (green) and chaotic attractor {\\em Ch4} (gray) on the $W_{101}$-$W_{011}$ plane at $r = 1.0342$: (a) obtained from model and (b) obtained from DNS. }\n\\label{fig:multi_attrac}\n\\end{figure}\n\nNote that there are two complex pairs of unstable eigenvalues associated with ASQ. The first one carries over from CB2, and the second one is due to the splitting of the double zero eigenvalues at CB3 into a complex pair (see~\\S\\ref{sec:fixed_points}). The {\\em Ch4} attractor is associated with the first pair, while the {\\em Ch3} attractor is related to the second pair. 
The chaotic attractor {\\em Ch4} possibly disappears through a boundary crisis \\cite[]{hilborn:book} wherein an unstable periodic\nsolution hits the basin boundaries of {\\em Ch4}. This unstable periodic\nsolution possibly has connections with the unstable periodic\nsolutions originating from the branch point CB2 of Fig.~\\ref{3d_bif}. \n\nIn the next subsection, we will discuss the bifurcations \nassociated with the solutions arising from the branch point CB2, whose flow patterns resemble the wavy rolls.\n\n\n\\subsection{Bifurcation diagram Bif-C of the 27-mode model}{\\label{sec:Bif-C}}\n\n\nThe 27-mode model has another chaotic attractor near the onset that originates from the bifurcation point CB2 of Fig.~\\ref{3d_bif}. Recall from \\S\\ref{sec:fixed_points} that CB2 is a codimension-2 bifurcation point whose stability matrix has a simple zero eigenvalue and a purely imaginary pair, $\\lambda_c = (0,\\pm i \\omega)$. As a consequence, an unstable limit cycle is generated as $r$ is increased beyond 1. The attractors from this branch yield periodic, quasiperiodic, and chaotic wavy rolls. See Fig.~\\ref{fig:pattern_wavyroll} for an illustration of a periodic wavy roll. The diagram Bif-C (Fig.~\\ref{fig:bifur3}) illustrates the bifurcation scenario for this type of solution. The limit cycles generated through this bifurcation are unstable and have an unstable torus associated with them, as discussed in Section 7.4 of \\cite{guckenheimer:book}. As a result, four chaotic attractors named {\\em Ch5} are born. A phase space projection of one of the {\\em Ch5} attractors is shown in Fig.~\\ref{fig:roll_QP_model}(a). Its chaotic nature is ascertained by the broad-band power spectrum exhibited in Fig.~\\ref{fig:roll_QP_model}(b). As $r$ is increased further, the size of {\\em Ch5} increases till $r \\simeq 1.009$, after which a single large chaotic attractor {\\em Ch6} is generated through an ``attractor-merging crisis''~\\cite[]{hilborn:book}. 
A phase space projection and power spectrum of {\\em Ch6} are shown in Fig.~\\ref{fig:roll_QP_model}(c,d), respectively. The chaotic attractors {\\em Ch5} and {\\em Ch6} are illustrated in the bifurcation diagram Bif-C. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{bif_wavy_roll_lhs.eps}\n\\end{center}\n\\caption{Bifurcation diagram Bif-C of the wavy roll solutions. The blue, red, and brown points represent periodic, quasiperiodic and chaotic ({\\em Ch5} and {\\em Ch6}) solutions respectively. }\n\\label{fig:bifur3}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{QP_route_wavyroll_model.eps}\n\\end{center}\n\\caption{Phase space projections on the $W_{101}$-$W_{111}$ plane for the wavy rolls and their corresponding power spectra obtained from the model: (a,b) the chaotic attractor {\\em Ch5} at $r = 1.005$; (c,d) the chaotic attractor {\\em Ch6} at $r = 1.05$; (e,f) frequency-locked state with $f_1\/f_2 \\approx 5$ at $r = 1.078$; (g,h) quasiperiodic state with $f_1\/f_2\\approx 6.33$ at $r = 1.10$; (i,j) periodic state at $r = 1.15$ ($f_1=4.3$). }\n\\label{fig:roll_QP_model}\n\\end{figure}\n\n As $r$ is increased further, we observe a series of phase-locked and stable quasiperiodic solutions. Phase space projections and power spectra of the phase-locked ($r=1.078$) and quasiperiodic ($r=1.10$) states are shown in Fig.~\\ref{fig:roll_QP_model}(e,f) and Fig.~\\ref{fig:roll_QP_model}(g,h), respectively. The phase-locked and quasiperiodic states are also evident in the bifurcation diagram Bif-C as banded and filled regions. Further increase in $r$ transforms the quasiperiodic states to a limit cycle through an inverse Neimark-Sacker bifurcation. A phase space projection and power spectrum of a limit cycle generated for $r=1.15$ are shown in Fig.~\\ref{fig:roll_QP_model}(i,j). 
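The power-spectrum classification used above (a line spectrum for periodic states, lines at two incommensurate frequencies for quasiperiodic states, and a broad band for chaos) can be illustrated with a short sketch; the two-frequency test signal below is synthetic, with $f_1 = 4.3$ chosen merely to echo the periodic state of Bif-C:

```python
import numpy as np

# Hedged illustration (not the authors' code): extract the dominant,
# well-separated spectral peaks of a signal. For a quasiperiodic signal
# these are two incommensurate frequencies; chaos would instead show a
# broad-band spectrum with no isolated peaks.
dt = 1e-3
t = dt * np.arange(2 ** 16)
f1, f2 = 4.3, 4.3 * np.sqrt(2.0)        # incommensurate pair
x = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

def dominant_freqs(x, dt, k=2, sep=50):
    """Return the k strongest peaks, at least `sep` bins apart."""
    P = np.abs(np.fft.rfft((x - x.mean()) * np.hanning(len(x)))) ** 2
    f = np.fft.rfftfreq(len(x), dt)
    chosen = []
    for i in np.argsort(P)[::-1]:       # bins by decreasing power
        if all(abs(i - j) > sep for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return np.sort(f[chosen])

print(dominant_freqs(x, dt))            # ~[4.30, 6.08]
```

The Hann window suppresses spectral leakage so the weaker incommensurate line is not masked by the sidelobes of the stronger one.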
Note that the transformations of phase-locked, quasiperiodic, and periodic states to one another can be understood by the movement of the Floquet multipliers of the underlying limit cycles (see \\S \\ref{sec:Bif-A}). The above states have also been observed in DNS. For example, Fig.~\\ref{fig:roll_QP_DNS} illustrates chaotic, quasiperiodic, and periodic states obtained in DNS for $r=1.05, 1.09$, and $1.10$, respectively.\n \nWhen we examine the active modes of Bif-C, we find that only the green-colored modes of Fig.~\\ref{fig:modes_lowdim} are active for these convective patterns. The bifurcation diagram Bif-C has been generated by setting these modes as nonzero and all other modes as zero. The most important among the active modes of Bif-C are $(1,0,1)$, $(0,1,0)$, and $(1,1,1)$, which are instrumental for the generation of wavy rolls along the $y$ axis. Naturally, the wavy rolls along the $x$ axis will have the complementary set of modes, e.g., $(1,0,0)$ instead of $(0,1,0)$, etc. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=8cm]{QP_route_wavyroll_DNS.eps}\n\\end{center}\n\\caption{Phase space projections on the $W_{101}$-$W_{111}$ plane\ncorresponding to the wavy rolls obtained from DNS: (a)\nthe chaotic attractor {\\em Ch6} at $r=1.05$; (b) quasiperiodic state at $r=1.09$; (c) periodic state at $r=1.10$. } \n\\label{fig:roll_QP_DNS}\n\\end{figure}\n\nNote that the above sets of solutions (Bif-A, Bif-B, Bif-C) are observed for different sets of initial conditions. For both DNS and the model, a random initial condition generally produces Bif-B, whose basin of attraction appears to be rather large. If we keep only the green-colored modes of Fig.~\\ref{fig:modes_lowdim} as nonzero, we obtain Bif-C, which corresponds to the wavy rolls. In fact, Bif-C is generated by first constructing a limit cycle for $r = 1.25$, and then continuing the solution for lower $r$ values. 
The bifurcation diagram Bif-A is generated by starting from the SQ pattern for $r = 1.25$ for which all the modes except $W_{111}$, $Z_{110}$, $Z_{112}$, $Z_{100}$, $Z_{010}$, $Z_{111}$, $Z_{210}$, $Z_{120}$, $Z_{201}$, $Z_{021}$, $Z_{102}$, and $Z_{012}$ are nonzero. Continuation of the above solution, however, generates ASQ solutions followed by H1, NS1, H2, and NS2 bifurcations where all the 27 modes are present. \n\nWavy rolls are one of the most studied convective patterns in experiments and numerical simulations. The bifurcation diagram Bif-C provides a clear explanation for the origin of this pattern. In the next section we will provide a quantitative comparison of the bifurcation results related to the wavy rolls with those from the experiments and previous simulations.\n\n\\section{Wavy rolls: a quantitative study}\\label{sec:wavy_rolls}\n\n In this section we will quantitatively analyze the time scales of the wavy rolls of Bif-C, and compare these values with some of the experimental and numerical results. At the bifurcation point CB2, the stability matrix has a zero eigenvalue and a purely imaginary pair, $(0,\\pm i\\omega)$ with $\\omega \\approx 14.2$. As a result, the unstable limit cycle originating from CB2 has a time period around $2 \\pi\/\\omega \\approx 0.44$ in units of $d^2\/\\nu$ (viscous time scale). Subsequent periodic, quasiperiodic, and chaotic time series have time scales comparable to the above value since their origin is closely connected to the bifurcation CB2 at $r=1$ in Bif-C. Our preliminary calculations indicate that the time period of oscillations for these patterns is within a factor of 10 of this value for $r=1$--$1.25$.\n \n Earlier, \\cite{krishnamurti:JFM_1973} observed time-dependent wavy rolls in her convection experiments on mercury ($P\\approx 0.02$). She observed multiple peaks with the time period ranging from 0.1 to 1 in the time units of $d^2\/\\nu$ \\cite[see Fig.~3 of][]{krishnamurti:JFM_1973}. 
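The time-scale estimate above is a one-line arithmetic check:

```python
import math

# Check of the period estimate quoted above: omega ~ 14.2 (in units of
# nu/d^2) at CB2 gives a limit-cycle period 2*pi/omega ~ 0.44 viscous
# time units, inside the 0.1-1 range reported by Krishnamurti.
omega = 14.2
T = 2.0 * math.pi / omega
print(round(T, 2))   # 0.44
assert 0.1 <= T <= 1.0
```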
Krishnamurti's experimental value for the time-scale of the chaotic wavy rolls is in the same range as our theoretical time-scale estimated above. \\cite{willis:jfm_1970} and \\cite{croquette:wavyroll_1989}, using their experimental data for air ($P=0.7$), reported the time period of the oscillatory rolls to be around 1 in the units of $d^2\/\\nu$. Using numerical simulations, \\cite{lipps:jfm_1976} observed time periods of oscillatory rolls to be around $0.24$--$0.27$ in the units of $d^2\/\\kappa$ for $P=0.7$. \\cite{meneguzzi:jfm_1987} found the period of the wavy oscillations to be around 0.065 viscous time units for $P=0.025$. These results are in general agreement (within a factor of 10) with our theoretical finding based on the bifurcation analysis. Note that \\cite{busse:JFM_1972} reported that the time period of the oscillatory instability in the units of $d^2\/\\nu$ is independent of the Prandtl number. Hence the time-scales are not expected to vary appreciably even when we change the Prandtl number by an order of magnitude, which is consistent with the results for mercury ($P\\approx 0.02$) and air ($P\\approx 0.7$). Therefore, a comparison of our results for $P=0$ with those obtained for finite $P$ is also justified. \n \nOscillatory instabilities and their saturation through critical bursting have been studied by Kumar and coworkers \\cite[]{kumar:JP_1996,kumar:burst_2006} using several low-dimensional models. They show that the growth of the mode $W_{101}$ is saturated by the vorticity mode $Z_{010}$. In Fig.~\\ref{fig:wavy_roll_energy}(a,b) we plot the time series of $\\langle v_1^2 + v_3^2 \\rangle$ (sum of the kinetic energies along the $x$ and $z$ axes) and $\\langle v_2^2 \\rangle$ (kinetic energy along the $y$ axis) computed from our 27-mode model for $r=1.05$. These results are in general agreement with the results of Kumar and coworkers. 
Panel (c) of Fig.~\\ref{fig:wavy_roll_energy} shows the time series of the modes $W_{101}$, $W_{111}$, and $Z_{010}$, illustrating their growth and subsequent breakdowns (critical bursting). Note that the time-scales of oscillations for the modes $W_{111}$ and $Z_{010}$ are around 0.1, which is in the same range as the theoretical time-scale derived above using the bifurcation analysis.\n\nThe above arguments strongly suggest that the origin of the wavy rolls or the oscillatory instabilities is intimately related to the purely imaginary pair of eigenvalues at CB2 and the limit cycles that originate from it. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=!,width=12cm]{Ch6_wavyroll_r1p05_energy.eps}\n\\end{center}\n\\caption{Time series of $\\langle v_1^2 + v_3^2 \\rangle \/ 2$ (panel (a)) and $\\langle v_2^2 \\rangle \/ 2$ (panel (b)) obtained from the model at $r=1.05$. Panel (c) shows the time series of the modes $W_{101}$, $W_{111}$, and $Z_{010}$ during critical bursting.} \\label{fig:wavy_roll_energy}\n\\end{figure}\n\n\\section{Conclusion}\\label{conclusion}\n\nIn conclusion, we explored various flow patterns of zero-P convection and performed a detailed bifurcation analysis near the onset using direct numerical simulation and a 27-mode low-dimensional model. The low-dimensional model was constructed using the most energetic modes of DNS. The results of the DNS and the low-dimensional model are in good agreement with each other. Several new chaotic attractors and windows of periodic and quasiperiodic rolls have been reported for the first time for zero-P convection. The origin and dynamics of all the observed patterns have been explained successfully using the bifurcation diagrams.\n\nThe RBC system for $P=0$ is chaotic at the onset itself. The stability analysis of the 27-mode model indicates three codimension-2 bifurcation points that play critical roles in the dynamics of convection near the onset. 
The chaotic attractors {\\em Ch1}, {\\em Ch2}, and {\\em Ch3}, described earlier by \\cite{Pal:EPL_2009}, and {\\em Ch4} are all related to the bifurcation point CB3. Beyond {\\em Ch1} and {\\em Ch4}, we observe SQOR, OASQ, ASQ, SQ, etc., which are common to Pal \\etal's 13-mode model. The other codimension-2 bifurcation point, CB2, generates chaotic attractors {\\em Ch5} and {\\em Ch6}, and the subsequent periodic, quasiperiodic, and phase-locked convective states, which correspond to the wavy roll patterns of RBC observed earlier in experiments and simulations. In addition, we find that the frequency of the wavy rolls is connected to the imaginary eigenvalues of the stability matrix at the CB2 bifurcation point. Thus, the bifurcation analysis presented in the paper provides useful insights into the origin of the wavy rolls of RBC. \n\nInterestingly, the bifurcation diagram of the 30-mode model of \\cite{mishra:EPl_2010} for $P=0.0002$ closely matches Bif-A of our model. This reinforces earlier observations that zero-P convection is a valid limit of low-P convection as $P \\rightarrow 0$ \\cite[]{thual:zeroP_1992}. The extension of the present study to low-P convection in relation to wavy rolls will be very valuable for understanding the experimental and numerical findings near the onset.\n\n\\begin{acknowledgments}\nWe are thankful to Krishna Kumar and Pankaj K. Mishra for useful discussions. 
This work is supported by the Swarnajayanti fellowship grant to MKV by Department of Science and Technology, India.\n\\end{acknowledgments}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfwxd b/data_all_eng_slimpj/shuffled/split2/finalzzfwxd new file mode 100644 index 0000000000000000000000000000000000000000..1023fca31f27675acd17ac14e9e64b6de4595c1a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfwxd @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nTheoretical developments and novel experiments in the area of\nferroelectrics have rapidly evolved over the last ten years,\nallowing further progress in the understanding of this remarkable\nphenomenon. In particular, ``nanoscale\" ferroelectrics have\nattracted considerable\nattention~\\cite{ahn04_1,lee05,was04,chu04,nau04}. The question of\nthe existence of a critical thickness, in other words whether or\nnot ferroelectricity can be maintained at reduced dimensions, is\namongst the most exciting topics of the field today, with very\nactive experimental~\\cite{tyb99,str02,fon04,lic05} and theoretical\nefforts~\\cite{gho00,mey01,jun03_1}.\n\nProbing ferroelectricity in thin films and nanostructures is a\ndifficult task, which requires advanced techniques. Among these,\nscanning probe characterization based on piezoelectric microscopy\nhas allowed a ferroelectric ground state to be identified down to\n40 \\AA\\ in thin Pb(Zr$_{0.2}$Ti$_{0.8}$)O$_3$ films~\\cite{tyb99},\nand x-ray studies on PbTiO$_3$ films suggested that 28 \\AA\\ films\nare ferroelectric~\\cite{lic05}. Dielectric and pyroelectric\nresponse measurements have allowed ferroelectricity to be\nidentified in polymer films down to 10 \\AA\\ (two unit\ncells)~\\cite{bun98}. 
More recently, ultrahigh vacuum scanning\nprobe characterization based on electrostatic force microscopy was\nused to study ferroelectricity in barium titanate nanowires with\ndiameters as small as 10 nm~\\cite{yun02}. On insulating\nsubstrates, lateral periodicity was observed via x-ray diffraction\nin thin PbTiO$_3$ films down to 12 \\AA\\ and attributed to\nalternately polarized domains~\\cite{str02,fon04}. In all these\nstudies, however, properties {\\it averaged} over the complete\nferroelectric structure were measured and no {\\it local}\ninformation on the atomic displacements was obtained.\n\nIn contrast, the photoemission-based photoelectron diffraction\n(XPD) used in our study presents two interesting characteristics:\nit is naturally surface sensitive, due to an electron escape depth\nof approximately 20 \\AA\\ (at the energy used here); and atomic\ndisplacements within the unit cell can be directly probed,\nallowing the non-centro-symmetric and tetragonal nature of\nthe crystal lattice to be directly demonstrated. This turns\nout to be crucial for studying ultrathin films, and for\ndiscriminating the behavior of the surface from that of the body of\nthe film.\n\nThe paper is organized as follows. In Section II, we characterize the ferroelectric distortion of PbTiO$_3$ and contrast it to the natural atomic relaxation appearing at the surface and interface. In Sec. III A, we discuss aspects of the measurement methods while, in Sec. III B, we show the non-centro-symmetry of a 20 \\AA\\ thick film using XPD. Sec. III C is dedicated to the tetragonality measurements as a function of film thickness with XPD using Pb as emitter, down to 4 \\AA\\ (one unit cell). We summarize and conclude in Sec. IV.\n\n\\section{Ferroelectric distortion versus surface relaxation}\n\nThe studies were carried out on c-axis oriented perovskite\nPbTiO$_3$ ultrathin films epitaxially grown on conducting\nNb-SrTiO$_3$ substrates. 
Above 490$^\\circ$C, bulk PbTiO$_3$ is a\nparaelectric insulator with a simple cubic perovskite structure\nand a lattice parameter of 3.96 \\AA\\ (``para\"-state, see\nFig.~\\ref{despontfig1}(a)). In this structure, the Ti and O atoms\nare in perfectly centro-symmetric positions with respect to the\nsurrounding Pb cage. At lower temperature, the material becomes\ntetragonal and ferroelectric with a- and c-axis parameters of 3.90\n\\AA\\ and 4.17 \\AA\\\/, respectively~\\cite{nel85,jon97}, as\nillustrated in Fig.~\\ref{despontfig1}(b). The ferroelectric phase is\ncharacterized by a non-centro-symmetric structure where the O and\nTi atoms are unequally shifted with respect to Pb. In a unit cell\nwith the polar c-axis along the $z$-direction, O and Ti move\neither upwards or downwards (with a larger O displacement),\nresulting, respectively, in a ``down\"- and ``up\"-polarized state, as\ndrawn in Fig.~\\ref{despontfig1}(b).\n\nIn the surface region (five top unit cells) that is probed by the\nXPD technique, evidence of such a polar atomic distortion could be\nthe signature of a ferroelectric ``up\"- or ``down\"-state but\nmay also arise from the natural atomic relaxation at the film\nsurface and interface already present in the paraelectric phase. A\nproper interpretation of our data therefore requires independent\nquantification of both effects. To that end, a {\\it reference}\n configuration (``para-unrelaxed'') is defined in Fig.~\\ref{despontfig2}(a):\nit corresponds to the truncated bulk paraelectric structure of\nPbTiO$_3$ with the in-plane lattice constant constrained to that\nof SrTiO$_{3}$ ($a_{STO} = 3.90$ \\AA) and a consequent\ntetragonality $c_0\/a_{STO} = 1.03$~\\cite{lic05}.\nFig.~\\ref{despontfig2}(b) (``ferro-unrelaxed'') shows the atomic\ndistortion of the ``up\"-state as determined for bulk tetragonal\nPbTiO$_3$ by Nelmes and Kuhs in Ref. \\cite{nel85} with $c = 4.17$\n\\AA . 
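As a quick back-of-the-envelope check of the lattice numbers quoted above (our own arithmetic, not part of the original analysis), the bulk ferroelectric tetragonality and the strained paraelectric reference value $c_0$ follow directly:

```python
# Illustrative arithmetic only; lattice constants are the bulk values quoted in the text.
a_bulk, c_bulk = 3.90, 4.17   # bulk ferroelectric PbTiO3 a- and c-axis (Angstrom)
a_STO = 3.90                  # SrTiO3 in-plane lattice constant (Angstrom)

print(round(c_bulk / a_bulk, 3))   # bulk tetragonality c/a, about 1.069

# The "para-unrelaxed" reference has tetragonality c0/a_STO = 1.03, hence
print(round(1.03 * a_STO, 2))      # c0 of about 4.02 Angstrom
```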
In order to quantify the surface relaxation, density\nfunctional theory calculations~\\cite{abinit} were performed within\nthe local density approximation (LDA) using the\nABINIT~\\cite{gon02} package. Two different supercells were\nconsidered: a thick PbTiO$_{3}$ slab in vacuum\n(Fig.~\\ref{despontfig2}(c)) and a SrTiO$_3$\/(one unit cell)\nPbTiO$_3$\/vacuum stack~\\cite{unitcell} (Fig.~\\ref{despontfig2}(d)).\nInsulating SrTiO$_3$ was considered in our simulations since Nb\ndoping is not presently affordable at the first-principles\nlevel~\\cite{STO}. To reproduce the substrate clamping effect, the\nin-plane lattice constant was fixed to the relaxed a-axis value of\nbulk SrTiO$_{3}$~\\cite{acell}. The atomic positions were then\nrelaxed until the maximum residual atomic force was smaller than\n40 meV\/\\AA\\\/. Our calculations were restricted to $(1 \\times 1)$\nsurface periodicity and did not allow for a possible\nantiferrodistortive (AFD) $c(2 \\times 2)$\nreconstruction~\\cite{bun05,sep05}. The latter is not excluded but,\nas discussed later, was not observed in our films at room\ntemperature. The distortions in the upper half of each supercell\nare reported in Fig.~\\ref{despontfig2}(c), (d) (``para-relaxed\"\nstate). For easy comparison with the experiment, because of the\ntypical LDA underestimate of the lattice constant~\\cite{acell},\nthe values are given as a percentage of $c_0$. The magnitudes of\nthe ferroelectric and surface relaxation effects can now be\ncompared. First, the cation-oxygen displacements due to\nferroelectricity (11.6\\% - 8.3\\% of $c_0$,\nFig.~\\ref{despontfig2}(b)) are significantly larger than the\ndisplacements due to surface relaxation\/rumpling (3.4\\% - 1.4\\% of\n$c_0$, Fig.~\\ref{despontfig2} (c) and 3.3\\% - 1.5\\% of $c_0$,\nFig.~\\ref{despontfig2}(d)). 
Second, the mean layer displacements for\nthe ``up\"-state (Fig.~\\ref{despontfig2}(b)) and for the surface\nrelaxation (Figs.~\\ref{despontfig2}, (c) and (d)) are opposite in sign. Third,\nthe surface relaxation and rumpling effects are globally\nunaffected by the film thickness (Figs.~\\ref{despontfig2}, (c) and\n(d)) and their amplitude decays very quickly in the interior of the\nfilm: they are already negligible two unit cells below the\nsurface. This implies that the XPD measurements will be probing\nboth the narrow relaxed surface region and a few unit cells below\nit, which are essentially unaffected by the surface relaxation.\n\n\\section{Experimental results and discussion}\n\\subsection{Experimental details}\n\nThe samples used in this study are epitaxial, c-axis oriented\nPbTiO$_3$ thin films grown on conducting (001) Nb-SrTiO$_3$\nsubstrates using off-axis radio-frequency magnetron\nsputtering~\\cite{eom90,lic04}. Topographic measurements using\natomic force microscopy (AFM) showed that these films are\nessentially atomically smooth, with a root-mean-square roughness\nbetween 2 and 6 \\AA\\ over a 10 $\\times$ 10 $\\mu$m$^2$ area. Room\ntemperature x-ray diffraction measurements, for films with\nthickness $\\geq$ 28 \\AA\\, allowed us to precisely determine the\nthickness and the c-axis parameter of the films, and to confirm\ntheir epitaxial ``cube-on-cube\" growth.\n\nAfter growth and characterization, the films were transferred\n\\emph{ex-situ} to a modified Vacuum Generators ESCALAB Mk II\nphotoelectron spectrometer. 
The XPD measurement system comprises a\nhemispherical electron energy analyzer with a three-channel\ndetector, an x-ray photon source with two possible energies\n($h\\nu$ = 1253.6 eV and 1740 eV for MgK$\\alpha$ and SiK$\\alpha$\nradiation, respectively), and a computer-controlled two-axis\ngoniometer capable of rotating the photoelectron emission angle\nover the full hemisphere above the surface~\\cite{ost91,nau93_1}.\n\nThe local geometry around a selected atom can be probed by performing an intensity versus emission-angle\n scan of a chosen photoemission line.\nBecause of the chemical sensitivity of photoemission, a given atom\ntype is then chosen by selecting one of its core levels. The\noutgoing photoemitted electrons exhibit a strongly anisotropic\nangular intensity distribution. This angular distribution is due\nto the interference of the directly emitted photoelectron wave\nwith the scattered electron waves. The analysis of the\ninterference (or diffraction) patterns is facilitated by the\nso-called ``forward focusing\" effect taking place for photoelectron\nkinetic energies greater than $\\approx$ 0.5 keV. When considering\na row of atoms, scattering at the first few atoms along this row\nfocuses the electron flux in the emitter-scatterer direction (for\na review see Refs.~\\cite{ege90,fad90}). This enhancement of the\nintensity in the emitter-scatterer direction is schematically\nillustrated by the green curve in Fig.~\\ref{despontfig1}(a) (right\npart) for the centro-symmetric ``para\"-state (continuous line) and\nthe ``up\"-state (dotted line). The forward focusing effect is\nfurther amplified for electron scattering by heavy atoms. In a\nsemi-classical picture, this can be understood as the focusing of\nthe electron wave by the high number of protons in high atomic\nnumber atoms~\\cite{ege90}. 
Note that, despite the forward focusing\neffect, the experimentally measured angles are sensitive to\nmultiple interferences, refraction and possible anisotropic atom\nvibrations at the surface. In the present case of PbTiO$_3$, Pb\nscattering is highly dominant compared to the scattering by other\nelements~\\cite{des05}. As a first step, in order to probe the\nnon-centro-symmetry, O was chosen as the emitter atom (O 1s core\nlevel, E$_{kin}$ = 724.1 eV), since it has the largest\ndisplacement~\\cite{nel85} and has Pb scatterers as nearest\nneighbors (see Fig.~\\ref{despontfig3}). However, the O\ncontribution from the Nb-SrTiO$_{3}$ substrate becomes\nnon-negligible for films thinner than the photoelectron inelastic\nmean free path, making the study of films thinner than 20 \\AA\\\nmore difficult. As a second step, in order to probe the\ntetragonality of the films, i.e., the $c\/a$ ratio of the Pb\nlattice (related to the polarization via the polarization-strain\ncoupling as discussed below and in detail in Ref.~\\cite{lic05}),\nPb was chosen as the emitter atom (Pb 4f$_{7\/2}$ core level, E$_{kin}$\n= 1115.5 eV), and Pb-Pb forward focusing directions were used.\nSince Pb atoms are absent from the substrate, this study can be\ndone down to a\nmonolayer of ferroelectric material.\n\n\\subsection{Non-centro-symmetric position of oxygen atoms}\n\nFirst, considering oxygen as the emitter atom, the fully automated\ncomputer code for calculating electron diffraction in atomic\nclusters (EDAC) via multiple scattering~\\cite{gar01}, based on the\nmuffin-tin potential approximation~\\cite{pen74}, was used to\ncalculate the XPD pattern. Fig.~\\ref{despontfig3}(a) shows four O 1s\ncore level emission (E$_{kin}$ = 724.1 eV) interference patterns.\nOne is the measurement made on a 20 \\AA\\ thin film while the three\nothers are multiple scattering EDAC calculations of the ``up\"-,\n``down\"- and ``para\"-state. 
Intensities are plotted in a\nstereographic projection with the center corresponding to normal\nemission (polar angle $\\theta = 0^\\circ$) and the outer\n border corresponding to grazing emission ($\\theta = 70^\\circ$).\nThe strongest intensities (surrounded by red ellipses) correspond\nto the scattering of O 1s photoelectrons by Pb nearest neighbors\n(see Fig.~\\ref{despontfig3}(b)). The white circle is a guide to the\neye indicating the polar angle of maximum intensity for the\nmeasured interference pattern. It is evident that the polar angle\nposition of this peak, which is directly linked to\n the O-Pb directions, is perfectly reproduced by the ``up\"-state calculation, while the ``para\"- and\nthe ``down\"-state simulations predict a different position.\nThe ``up\"-state (down-shifted O position) corresponds to a\nsmaller polar emission angle ($\\theta_{up}$ in\nFig.~\\ref{despontfig3}(b)) appearing closer to normal emission\n(center of the interference pattern in Fig.~\\ref{despontfig3}(a)).\nSuch measurements have also been performed on films with\nthicknesses of $\\sim$ 500, 200, 100, 60, 44 and 28 \\AA\\\/, and all\nperfectly reflect the characteristics of the\n``up\"-state \\footnote{The presence of alternating 180$^{\\circ}$\ndomains in our films can be ruled out from the XPD measurement\nbecause the two characteristic ``up\" and ``down\"-state ``forward\nfocusing\" peaks are not observed simultaneously in the\nexperimental diffractogram, Fig.~\\ref{despontfig3}(a). This\ncontrasts with the results of Fong et al. \\cite{fon04} on\ninsulating substrates.}.\n\nThese conclusions, drawn from visual inspection of the\ninterference patterns locally around the intensity maximum\n(Fig.~\\ref{despontfig3}(a)), are confirmed by a global matching\napproach using a reliability (R)-factor to evaluate the quality of\nthe fit between\n the complete experimental interference pattern data and theory (Fig.~\\ref{despontfig3}(b)). 
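The geometric origin of this angular shift can be illustrated with elementary trigonometry: moving the O emitter downwards increases its vertical distance to the upper Pb corners of the cage, so the O-Pb forward-focusing direction tilts closer to the surface normal. The sketch below is our own idealized illustration (point atoms, no refraction or multiple scattering; the 0.47 \\AA\\ shift is only an example value, not a fitted parameter):

```python
import math

def polar_angle_to_upper_pb(a, c, dz_O):
    """Polar angle (measured from the surface normal z) of the direction from
    an O emitter at a face centre, (a/2, 0, c/2 + dz_O), to an upper Pb corner
    at (0, 0, c). dz_O < 0 means the O atom is shifted downwards ('up'-state)."""
    dx = a / 2.0
    dz = c / 2.0 - dz_O          # vertical separation grows when O moves down
    return math.degrees(math.atan2(dx, dz))

a, c = 3.90, 4.17                           # bulk tetragonal lattice constants
theta_para = polar_angle_to_upper_pb(a, c, 0.0)    # centro-symmetric O position
theta_up = polar_angle_to_upper_pb(a, c, -0.47)    # O shifted down (example)
print(theta_up < theta_para)  # True: the 'up'-state peak sits closer to normal emission
```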
The c-axis lattice\nconstant value and the O and Ti shifts are the adjustable\nstructural parameters. A cut in the (100) plane, containing Pb and\nO atoms, is shown to facilitate the discussion. In the\ncalculation, O and Ti atoms\n are moved together and the dipole is continuously changed from the ``down\"-state to the ``up\"-state,\ncrossing over the ``para\"-state. The best fit corresponds to the\nminimal R-factor value, which is reached when the O and Ti atoms are\nshifted below the centro-symmetric position (parameters used for\nthe ``up\"-state simulation in Fig.~\\ref{despontfig3}(a)), with an\nR-factor value of $\\approx$ 0.34. In comparison, for the same\nc-axis parameter but the opposite O and Ti atom shifts (parameters\nused for the ``down\"-state simulation in Fig.~\\ref{despontfig3}(a)),\nthe calculation gives a much higher R-factor of $\\approx 0.47$.\nIn between (zero O shift), in the centro-symmetric ``para\"-state,\nthe R-factor is $\\approx 0.45$ (parameters used for the\n``para\"-state simulation in Fig.~\\ref{despontfig3}(a)).\n\nIt is important to note that surface relaxation and rumpling,\nneglected here, cannot weaken our conclusions; indeed they would\ngive a picture resembling a ``down\"-state, the corresponding O-Pb\natoms being shifted in the opposite direction to what is\nobserved (see Fig.~\\ref{despontfig2}(c)). Also, the possibility of a\nsurface AFD reconstruction was explored without finding evidence\nfor it in our room temperature experiments.\n\n\nThis R-factor analysis therefore quantitatively confirms the\nobservations made in Fig.~\\ref{despontfig3}(a), namely that the\nmeasured interference pattern is best simulated with the ``up\"-\nstate. 
This demonstrates unambiguously that, for a film as thin as\n20 \\AA\\, the O atoms have a non-centro-symmetric position in the\nPb cage corresponding to a non-vanishing spontaneous polarization.\n\nLet us emphasize that piezoelectric AFM measurements performed\nafter the XPD experiments on the thickest films ($\\sim$ 500 \\AA )\nconfirmed the uniform ``up\"-state polarization, while a uniform\n``down\"-state polarization had been initially found for the same\nfilms just after growth. This confirms the {\\it monodomain}\ncharacter of the as-grown sample and also indicates that the films\nare uniformly switched from ``down\" to ``up\"-state by exposure to\nour conventional x-ray source, attesting to the switchable\ncharacter of the polarization. The details behind the switching\nare presently not known, but we believe that it occurs at the\ninitial stage of the experiment while the measurement itself is\nessentially done in zero field. In fact, our results do not depend\non the x-ray intensity, proving that the films are in an equilibrium\nstate during the measurements \\footnote{The influence of the x-ray\nintensity on the tetragonality was measured by investigating \ndifferent x-ray powers. A modification of the\ntetragonality would have indicated, via the polarization-strain\ncoupling, a variation of the spontaneous polarization. However, no\nsuch modification was found.}. As discussed below, the\nagreement between the tetragonality deduced from x-ray diffraction\nand XPD also suggests that the measurements are performed under\nsimilar conditions.\n\n\\subsection{Tetragonality via lead emission}\n\nIn a second step, considering Pb as the emitter atom, XPD was used\nto determine the tetragonality. As demonstrated in\nRef.~\\cite{lic05}, below 200 \\AA\\ the tetragonality decreases as\nthe film thickness decreases. This decrease is a consequence of\nthe strong polarization-strain coupling in PbTiO$_3$ and a\nsignature of a reduced polarization in thin films. 
In\nRef.~\\cite{lic05}, this polarization reduction was attributed to\nimperfect screening of the depolarizing field~\\cite{jun03_1}. With\nXPD, using Pb as emitter, the tetragonality was measured down to\nthe unit cell level as shown in Fig.~\\ref{despontfig4}. The\nabsolute values of $c\/a$, deduced from the forward focusing\nangles, are particularly large. This might reflect\n a strong enhancement of the polarization in the\nprobed surface region (of the order of $80\\% $ for $c\/a = 1.15$,\nfrom the PbTiO$_3$ polarization-strain coupling), even larger than in\nthe theoretical prediction of Ref.~\\cite{gho00}. However, as\npreviously stated, we are not necessarily measuring the precise\natom-atom directions and the anomalously large forward focusing $c\/a$ might\nalso be related to other effects (anisotropic atom\nvibrations at the surface, refraction and multiple scattering\ninterferences). Therefore, a comparison\n with x-ray diffraction (XRD)~\\cite{lic05} must be done at the relative level (Fig.~\\ref{despontfig4},\nleft and right scale).\n\nTo study the evolution of the tetragonality as a function\nof the film thickness, the measured XPD values are compared to the $c\/a$ values obtained by XRD.\n The XPD measurement in Fig.~\\ref{despontfig4}\nconfirms the evolution of $c\/a$ obtained from x-ray measurements in\nRef.~\\cite{lic05} and agrees with the theoretical prediction\n(dashed curve) relying on the suppression of polarization due to\nimperfect screening of the depolarizing field~\\cite{lic05}. The\nsimilar thickness dependence for the XPD (very surface sensitive)\nand the x-ray measurements (averaged over the whole film) implies\nthat the polarization evolves at the surface in the same way as in\nthe interior of the film and that there is no thick paraelectric\ndead layer at the surface. In addition, the XPD tetragonality\nmeasurement shows a continuous decrease of tetragonality down to\nthe thickness of one unit cell~\\cite{unitcell}. 
Two ribbons are\ndrawn in Fig.~\\ref{despontfig4}, labeled 1 and 2. They\nindicate the regions within which $c\/a$ values of 1.03 and 1.01\nare crossed with respect to both $c\/a$ scales. For film thicknesses\nabove two unit cells, the $c\/a$ values are larger than 1.03, the\nvalue expected at the bulk level for the paraelectric phase\n(resulting from the mechanical constraint imposed by the\nsubstrate, see also Fig.~\\ref{despontfig2}(a)). This observation\ndirectly implies, via the polarization-strain coupling, that the\nfilms still have a finite, although progressively reduced,\nspontaneous polarization. At thicknesses of one or two unit\ncells, as can be seen in Fig.~\\ref{despontfig4}, $c\/a$ drops even\nmore, reaching a value close to 1.01 for the one unit cell thick\nfilm~\\cite{unitcell}. This further decrease highlights that\nmacroscopic elasticity no longer applies at such thicknesses, where\nthe interlayer atomic distances are affected by surface relaxation\nand rumpling as shown by the \\emph{ab-initio} calculations\n(Fig.~\\ref{despontfig2}(d)). The measured tetragonality agrees with\nthe computed value of 1.01 for the one unit-cell thick relaxed\nparaelectric film, suggesting the absence of any additional\nferroelectric distortion at this thickness.\n\n\\section{Conclusion}\n\nThis study thus directly demonstrates non-centro-symmetry,\nunambiguously a result of ferroelectricity in PbTiO$_3$ thin films\ndown to 20 \\AA . The measurements of the tetragonality, with a\ncontinuous decrease down to the bare substrate, show that even\nextremely thin films (3 unit cells) have a $c\/a$ value larger than\n1.03, attesting to the presence of a non-vanishing spontaneous\npolarization at this thickness scale. 
As the film thickness is\nreduced to a single unit cell, the experiments, together with\ncalculations, strongly suggest that both non-centro-symmetry and\ntetragonality are governed by surface effects, giving rise, for our\ngeometry, to a polar relaxed structure without a switchable\nferroelectric distortion.\n\n\n\\section*{Acknowledgements}\nWe would like to thank M. A. Van Hove and C. Battaglia for helpful discussions, P. Paruch for careful reading of the manuscript, and the whole Neuch\\^atel workshop and electric engineering team for efficient technical support. This project has been supported by the Swiss National Science Foundation through the National Center of Competence in\nResearch ``Materials with Novel Electronic Properties-MaNEP\", the European Network of Excellence FAME and the VolkswagenStiftung.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Acceleration. Proofs of \\cref{thm:acceleration_quasiquasarconvexity} and \\cref{thm:riemannian_acceleration}} \\label{app:acceleration}\n\\cite{diakonikolas2017approximate} developed the \\textit{approximate duality gap technique}, which provides a structure for designing and analyzing first-order methods and their guarantees for the optimization of convex problems. We take inspiration from these ideas and apply them to the non-convex problem at hand, \\cref{thm:acceleration_quasiquasarconvexity}, as sketched in \\cref{subsec:sketch_of_my_axgd_proof}. 
We start with two basic definitions.\n\\begin{definition}\\label{def:bregman_divergence}\n Given two points $\\tilde{x}, \\tilde{y}$, we define the Bregman divergence with respect to $\\psi(\\cdot)$ as\n \\[\n D_{\\psi} (\\tilde{x}, \\tilde{y}) \\defi \\psi(\\tilde{x})-\\psi(\\tilde{y}) - \\innp{\\nabla \\psi(\\tilde{y}), \\tilde{x}-\\tilde{y}}.\n \\]\n\\end{definition}\n\n\\begin{definition} \\label{def:fenchel_dual}\n Given a closed convex set $Q$ and a function $\\psi:Q \\to \\R$, we define the convex conjugate of $\\psi$, also known as its Fenchel dual, as the function\n \\[\n \\psi^\\ast(\\tilde{z}) = \\max_{\\tilde{x}\\in Q}\\{ \\innp{\\tilde{z}, \\tilde{x}}-\\psi(\\tilde{x})\\}.\n \\]\n\\end{definition}\nFor simplicity, we will use $\\psi(\\tilde{x}) = \\frac{1}{2}\\norm{\\tilde{x}}^2$ in \\cref{alg:accelerated_gconvex}, but any strongly convex map works. The gradient of the Fenchel dual of $\\psi(\\cdot)$ is $\\nabla\\psi^\\ast(\\tilde{z}) = \\argmin_{\\tilde{z}'\\in Q}\\{\\norm{\\tilde{z}'-\\tilde{z}}\\}$, that is, the Euclidean projection $\\Pi_Q(\\tilde{z})$ of the point $\\tilde{z}$ onto $Q$. Note that when we apply \\cref{thm:acceleration_quasiquasarconvexity} to \\cref{thm:riemannian_acceleration} our constraint $\\X$ will be a ball centered at $0$ of radius $\\tilde{R}$, so the projection of a point $\\tilde{z}$ outside of $\\X$ will be the vector normalization $\\tilde{R}\\tilde{z}\/\\norm{\\tilde{z}}$. Any continuously differentiable strongly convex $\\psi$ would work, provided that $\\nabla\\psi^\\ast(\\tilde{z})$ is easily computable, preferably in closed form. 
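For the choice $\\psi(\\tilde{x}) = \\frac{1}{2}\\norm{\\tilde{x}}^2$ with a ball constraint, this gradient-of-the-dual map is exactly the Euclidean projection described above. A minimal numerical sketch (function names are ours, not from the paper):

```python
import numpy as np

def grad_psi_star_ball(z, R):
    """For psi(x) = 0.5*||x||^2 over the ball Q = {||x|| <= R},
    grad psi*(z) = argmin_{x in Q} ||x - z|| is the Euclidean projection:
    z itself if it lies in Q, otherwise the rescaling R*z/||z||."""
    nz = np.linalg.norm(z)
    return z if nz <= R else (R / nz) * z

R = 2.0
print(grad_psi_star_ball(np.array([1.0, 0.0]), R))  # inside Q: unchanged
print(grad_psi_star_ball(np.array([6.0, 8.0]), R))  # outside: rescaled to [1.2, 1.6]
```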
Note that by the Fenchel-Moreau theorem we have for any such map that $\\psi^{\\ast\\ast} = \\psi$.\n\nWe recall we assume that $f$ satisfies\n\\begin{align} \\label{eq:appendix_quasiquasarconvexity}\n \\begin{aligned}\n f(\\tilde{x}) + \\frac{1}{\\gamman}\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}\\leq f(\\tilde{y}) &\\ & & {\\text{ if } \\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}}\\leq 0, \\\\\n f(\\tilde{x})+\\gammap\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\leq f(\\tilde{y}) &\\ & & {\\text{ if } \\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}} \\geq 0.\n \\end{aligned}\n\\end{align}\n\nLet $\\alpha_t$ be an increasing function of time $t$. We want to work with continuous and discrete approaches in a unified way, so we use Lebesgue-Stieltjes integration. Thus, when $\\alpha_t$ is a discrete measure, we have that $\\alpha_t = \\sum_{i=1}^\\infty a_i \\delta(t-(t_0+i-1))$ is a weighted sum of Dirac delta functions. We define $A_t = \\int_{t_0}^t d\\alpha_\\tau = \\int_{t_0}^t \\dott{\\alpha}_\\tau d\\tau$. In discrete time, it is $A_t = \\sum_{i=1}^{t-t_0+1} a_{i}$. In the continuous case, note that we have $\\alpha_t - A_t = \\alpha_{t_0}$.\n\n\nWe start by defining a continuous method that we discretize with an approximate implementation of the implicit Euler method. Let $\\tilde{x}_t$ be the solution obtained by the algorithm at time $t$. We define the duality gap $G_t\\defi U_t-L_t$ as the difference between a differentiable upper bound $U_t$ on the function at the current point and a lower bound on $f(\\tilde{x}^\\ast)$. Since in our case $f$ is differentiable, we use $U_t \\defi f(\\tilde{x}_t)$. The idea is to enforce the invariant $\\frac{d}{dt}(\\alpha_t G_t) = 0$, so we have at any time $f(\\tilde{x}_t)-f(\\tilde{x}^\\ast)\\leq G_t = G_{t_0}\\alpha_{t_0}\/\\alpha_t$. 
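As a sanity check on condition \\eqref{eq:appendix_quasiquasarconvexity} (a toy example of ours, not from the paper): any differentiable convex function satisfies it with $\\gamman = \\gammap = 1$, since both cases then reduce to the usual first-order convexity inequality. The sketch below verifies this numerically for $f(\\tilde{x}) = \\frac{1}{2}\\norm{\\tilde{x}}^2$:

```python
import numpy as np

def satisfies_condition(f, grad, x, y, g_n, g_p, tol=1e-9):
    """Check the two-sided condition at a pair (x, y): the branch is chosen
    by the sign of <grad f(x), y - x>; tol absorbs floating-point error."""
    ip = grad(x) @ (y - x)
    if ip <= 0:
        return f(x) + ip / g_n <= f(y) + tol
    return f(x) + g_p * ip <= f(y) + tol

f = lambda x: 0.5 * (x @ x)
grad = lambda x: x

rng = np.random.default_rng(1)
ok = all(
    satisfies_condition(f, grad, rng.normal(size=4), rng.normal(size=4), 1.0, 1.0)
    for _ in range(1000)
)
print(ok)  # True: convex f meets the condition with gamma_n = gamma_p = 1
```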
\n\nNote that for a global minimum $\\tilde{x}^\\ast$ of $f$ and any other point $\\tilde{x}\\in Q$, we have $\\innp{\\nabla f(\\tilde{x}), \\tilde{x}^\\ast-\\tilde{x} }\\leq 0$. Otherwise, we would obtain a contradiction since by \\eqref{eq:appendix_quasiquasarconvexity} we would have \n\\[\nf(\\tilde{x}) < f(\\tilde{x}) + \\gammap\\innp{\\nabla f(\\tilde{x}), \\tilde{x}^\\ast-\\tilde{x}} \\leq f(\\tilde{x}^\\ast),\n\\]\ncontradicting the global optimality of $\\tilde{x}^\\ast$. Moreover, the inner product $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda} - \\tilde{x}_{i}}$ takes values of opposite sign at the two endpoints $\\lambda_1$ and $\\lambda_2$, for $\\lambda_1 = \\hat{\\Gamma}_i^{-1}(1\/\\gamman)$ and $\\lambda_2 = \\hat{\\Gamma}_i^{-1}(\\gammap)$. Therefore, by continuity, there is $\\lambda^\\ast \\in [\\lambda_1, \\lambda_2]$ such that $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda^\\ast}), \\tilde{x}_{i+1}^{\\lambda^\\ast} - \\tilde{x}_{i}}=0$. The continuity condition is easy to prove; we omit the argument because it follows from the Lipschitzness condition that we prove below. Such a point satisfies \\eqref{eq:approximate_multiplied_convexity} for $\\hat{\\epsilon}_i=0$. We will prove that the function $G_i:[\\frac{a_{i+1}}{A_{i+1}}, \\frac{a_{i+1}\/\\gamman}{A_i\\gammap+a_{i+1}\/\\gamman}]\\to\\R$, defined as \n \\begin{equation}\\label{def:G}\n G_i(\\lambda) \\defi -\\hat{\\Gamma}_i(\\lambda) \\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}+ (f(\\tilde{x}_{i+1}^{\\lambda})-f(\\tilde{x}_i)),\n \\end{equation}\n is Lipschitz, so we can guarantee that \\eqref{eq:approximate_multiplied_convexity} holds in an interval around $\\lambda^\\ast$. 
Finally, we will be able to perform a binary search to efficiently find a point in such an interval, or in another interval around another point at which the inner product is $0$.\n\n We have \n \\begin{align}\\label{lipschitzness_of_G}\n \\begin{aligned}\n \\abs{G_i(\\lambda) - G_i(\\lambda')} &\\leq \\abs{f(\\tilde{x}_{i+1}^{\\lambda})-f(\\tilde{x}_{i+1}^{\\lambda'})} \\\\\n &\\quad+ \\abs{\\hat{\\Gamma}_i(\\lambda')}\\cdot\\abs{\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda'}), \\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_i}-\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}} \\\\\n &\\quad+\\abs{\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}}\\cdot\\abs{\\hat{\\Gamma}_i(\\lambda')-\\hat{\\Gamma}_i(\\lambda)} \n \\end{aligned}\n \\end{align}\n We have used the triangle inequality and the inequality \n \\begin{equation}\\label{simple_inequality_from_triangular}\n \\abs{\\alpha_1 \\beta_1 -\\alpha_2\\beta_2} \\leq \\abs{\\alpha_1}\\abs{\\beta_1-\\beta_2}+\\abs{\\beta_2}\\abs{\\alpha_1-\\alpha_2},\n \\end{equation} \n which is a direct consequence of the triangle inequality, after adding and subtracting $\\alpha_1\\beta_2$ in the $\\abs{\\cdot}$ on the left-hand side. 
We bound each of the three summands of the previous inequality separately, but first we bound the following quantity, which will be useful for our other bounds: \n\\begingroup\n\\allowdisplaybreaks\n \\begin{align} \\label{eq:bound_on_x_i_plus_1}\n \\begin{aligned}\n \\norm{\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i+1}^{\\lambda}} & \\circled{1}[=] \\norm{(\\lambda'\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda'})+(1-\\lambda')\\tilde{x}_i)-(\\lambda\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda})+(1-\\lambda)\\tilde{x}_i)} \\\\\n & \\circled{2}[\\leq] \\norm{\\nabla \\psi^\\ast(\\tilde{\\zeta}_i^\\lambda)-\\tilde{x}_i}\\abs{\\lambda'-\\lambda} + \\norm{\\lambda'\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda'}) -\\lambda'\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda})} \\\\\n & \\circled{3}[\\leq] 2\\tilde{R}\\abs{\\lambda-\\lambda'} +\\norm{\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda'})-\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda})} \\\\\n & \\circled{4}[\\leq] 2\\tilde{R}\\abs{\\lambda-\\lambda'} + \\frac{1}{\\gamman\\sigma} \\norm{\\nabla f(\\tilde{\\chi}_i^{\\lambda})-\\nabla f(\\tilde{\\chi}_i^{\\lambda'})} \\\\\n & \\circled{5}[\\leq] 2\\tilde{R}\\abs{\\lambda-\\lambda'} + \\frac{\\tilde{L}}{\\gamman\\sigma}\\norm{\\tilde{\\chi}_i^{\\lambda}-\\tilde{\\chi}_i^{\\lambda'}} \\\\\n & \\circled{6}[\\leq] \\left(2\\tilde{R}+\\frac{2\\tilde{L}\\tilde{R}}{\\gamman\\sigma}\\right)\\abs{\\lambda-\\lambda'}\n \\end{aligned}\n\\end{align}\n\\endgroup\n Here, $\\circled{1}$ uses the definition of $\\tilde{x}_{i+1}^\\lambda$ as a convex combination of $\\tilde{x}_i$ and $\\nabla\\psi^\\ast(\\tilde{\\zeta}_i^{\\lambda})$. $\\circled{2}$ adds and subtracts $\\lambda'\\nabla \\psi^\\ast(\\tilde{\\zeta}_i^\\lambda)$, groups terms and uses the triangle inequality. In $\\circled{3}$ we use the fact that the diameter of $Q$ is $2\\tilde{R}$ and bound $\\lambda' \\leq 1$ and $\\abs{\\lambda} \\leq 1$. 
$\\circled{4}$ uses the $\\frac{1}{\\sigma}$ smoothness of $\\nabla \\psi^\\ast(\\cdot)$, which is a consequence of the $\\sigma$-strong convexity of $\\psi(\\cdot)$. $\\circled{5}$ uses the smoothness of $f$. In $\\circled{6}$, from the definition of $\\tilde{\\chi}_{i}^{\\lambda}$ we have that $ \\norm{\\tilde{\\chi}_i^{\\lambda}-\\tilde{\\chi}_i^{\\lambda'}}\\leq \\norm{\\tilde{x}_i-\\tilde{z}_i}\\abs{\\lambda-\\lambda'}$. We bounded this further using the diameter of $ Q$.\n\nNote that $f$ is Lipschitz over $Q$. By the existence of $\\tilde{x}^\\ast$, $\\tilde{L}$-smoothness, and the diameter of $Q$ we obtain that the Lipschitz constant $L_\\mathtt{p}$ satisfies $L_\\mathtt{p}\\leq 2\\tilde{R}\\tilde{L}$. Now we can proceed and bound the three summands of \\eqref{lipschitzness_of_G}. The first one reduces to the inequality above after using Lipschitzness of $f(\\cdot)$: \n \\begin{equation}\\label{eq:first_summand}\n \\abs{f(\\tilde{x}_{i+1}^{\\lambda})-f(\\tilde{x}_{i+1}^{\\lambda'})} \\leq L_\\mathtt{p}\\norm{\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i+1}^{\\lambda}}.\n \\end{equation}\n In order to bound the second summand, we note that \n \\begin{equation}\\label{eq:lipschitzness_of_Gamma}\n \\abs{(\\hat{\\Gamma}_i^{-1})'(\\mathtt{\\tilde{x}})} = \\absadj{\\frac{A_ia_{i+1}\/\\gamman}{(A_i\\mathtt{\\tilde{x}}+a_{i+1}\/\\gamman)^2}} \\geq \\frac{\\gamman A_ia_{i+1}}{A_{i+1}^2},\n \\end{equation}\n so $\\hat{\\Gamma}_i(\\lambda')$, appearing in the first factor, is bounded by $A_{i+1}^2\/(\\gamman A_i a_{i+1})$. We used $\\mathtt{\\tilde{x}}\\in[\\gammap, 1\/\\gamman]$ for the bound. For the second factor, we add and subtract $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}),\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i}}$ and use the triangle inequality and then Cauchy-Schwarz. 
Thus, we obtain\n \\begin{align}\\label{eq:second_summand}\n \\begin{aligned}\n \\abs{&\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda'}), \\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_i}-\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}} \\\\\n &\\leq \\norm{\\nabla f(\\tilde{x}_{i+1}^\\lambda)}\\cdot \\norm{\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i+1}^{\\lambda}} +\\norm{\\nabla f(\\tilde{x}_{i+1}^{\\lambda'})-\\nabla f(\\tilde{x}_{i+1}^{\\lambda})}\\cdot\\norm{\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i}} \\\\\n &\\circled{1}[\\leq] (2L_\\mathtt{p} +2\\tilde{L}\\tilde{R})\\norm{\\tilde{x}_{i+1}^{\\lambda'}-\\tilde{x}_{i+1}^{\\lambda}}.\n \\end{aligned}\n\\end{align}\nIn $\\circled{1}$, we used Lipschitzness to bound the first factor. We also used the diameter of $ Q$ to bound the last factor and the smoothness of $f(\\cdot)$ to bound the first factor of the second summand.\n\nFor the third summand, we will bound the first factor using Cauchy-Schwarz, smoothness of $f(\\cdot)$ and the diameter of $ Q$. We just proved in \\eqref{eq:lipschitzness_of_Gamma} that $\\hat{\\Gamma}_{i}$ is Lipschitz, so we use this property for the second factor. 
The result is the following:\n \\begin{align}\\label{eq:third_summand}\n \\begin{aligned}\n \\abs{\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}}\\cdot\\abs{\\hat{\\Gamma}_i(\\lambda')-\\hat{\\Gamma}_i(\\lambda)} \\leq 4\\tilde{L}\\tilde{R}^2 \\frac{A_{i+1}^2}{\\gamman A_i a_{i+1}}\\abs{\\lambda'-\\lambda}.\n \\end{aligned}\n\\end{align}\n\nApplying the bounds of the three summands \\eqref{eq:first_summand}, \\eqref{eq:lipschitzness_of_Gamma}, \\eqref{eq:second_summand}, \\eqref{eq:third_summand} into \\eqref{lipschitzness_of_G} we obtain the inequality $\\abs{G_i(\\lambda') - G_i(\\lambda)} \\leq \\hat{L}\\abs{\\lambda'-\\lambda}$ for \n\\begin{align*}\n \\begin{aligned}\n \\hat{L} = \\left(2\\tilde{R}+\\frac{2\\tilde{L}\\tilde{R}}{\\gamman\\sigma}\\right)\\left(L_\\mathtt{p} + (2L_\\mathtt{p}+2\\tilde{L}\\tilde{R})\\frac{A_{i+1}^2}{\\gamman A_i a_{i+1}}\\right)+ 4\\tilde{L}\\tilde{R}^2 \\frac{A_{i+1}^2}{\\gamman A_i a_{i+1}}.\n \\end{aligned}\n\\end{align*}\n\nWe will use the following to bound $\\hat{L}$. If we use the learning rates prescribed in \\cref{thm:accelerated_axgd_modified}, namely $a_{i} = \\frac{i\\sigma \\gamman^2\\gammap}{2\\tilde{L}}$ and thus $A_i=\\frac{i(i+1)\\sigma\\gamman^2\\gammap}{4\\tilde{L}}$, we can bound $A^2_{i+1}\/(A_ia_{i+1}) \\leq 3(i+2)$, using that $i\\geq 1$. In our setting, by smoothness and the existence of $\\tilde{x}^\\ast\\in Q$ such that $\\nabla f(\\tilde{x}^\\ast)=0$, we have that $L_\\mathtt{p}\\leq 2\\tilde{R}\\tilde{L}$. Recall we assume $\\sigma=O(1)$. In \\cref{alg:accelerated_gconvex} we use $\\sigma=1$.\n\nRecall that we denote by $\\lambda^\\ast$ a value such that $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda^\\ast}), \\tilde{x}_{i+1}^{\\lambda^\\ast} - \\tilde{x}_{i}}=0$, so that $G_i(\\lambda^\\ast) \\leq 0$. 
Lipschitzness of $G$ implies that if $G_i(\\lambda^\\ast) \\leq 0$ then $G_i(\\lambda) \\leq \\hat{\\epsilon}_i$ for $\\lambda \\in [\\lambda^\\ast-\\frac{\\hat{\\epsilon}_i}{\\hat{L}}, \\lambda^\\ast+\\frac{\\hat{\\epsilon}_i}{\\hat{L}}]\\cap[\\hat{\\Gamma}_i^{-1}(1\/\\gamman), \\hat{\\Gamma}_i^{-1}(\\gammap)]$. If the extremal points $\\hat{\\Gamma}_i^{-1}(1\/\\gamman), \\hat{\\Gamma}_i^{-1}(\\gammap)$ do not satisfy \\eqref{property_result_of_line_search_approximate}, then this interval is of length $\\frac{2\\hat{\\epsilon}_i}{\\hat{L}}$ and a point in this interval, or in another interval around another point $\\bar{\\lambda}^\\ast$ that satisfies $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\bar{\\lambda}^\\ast}), \\tilde{x}_{i+1}^{\\bar{\\lambda}^\\ast}-\\tilde{x}_i}=0$, can be found with a binary search in at most\n\\[\n O\\left(\\log\\left(\\frac{\\hat{L}}{\\hat{\\epsilon}_i}\\right)\\right) \\circled{1}[=] O\\left(\\log\\left(\\frac{\\tilde{L}\\tilde{R}}{\\gamman\\hat{\\epsilon}_i}\\cdot i\\right)\\right)\n\\]\niterations, provided that at each step we can ensure we halve the size of the search interval. The bounds of the previous paragraph are applied in $\\circled{1}$. The binary search can be done easily: we start with $[\\hat{\\Gamma}_i^{-1}(1\/\\gamman), \\hat{\\Gamma}_i^{-1}(\\gammap)]$ and assume the extremes do not satisfy \\eqref{property_result_of_line_search_approximate}, so the sign of $\\innp{\\nabla f(\\tilde{x}_{i+1}^\\lambda), \\tilde{x}_{i+1}^\\lambda-\\tilde{x}_i}$ is different at each extreme. Each iteration of the binary search queries the midpoint of the current working interval and, if \\eqref{property_result_of_line_search_approximate} is not satisfied, we keep the half of the interval whose extremes still have different signs of $\\innp{\\nabla f(\\tilde{x}_{i+1}^\\lambda), \\tilde{x}_{i+1}^\\lambda-\\tilde{x}_i}$, ensuring that there is a point at which this expression evaluates to $0$ and thus keeping the invariant. 
We include the pseudocode of this binary search in \\cref{alg:bin_search}.\n\\end{proof}\n\nWe proceed to prove \\cref{thm:acceleration_quasiquasarconvexity}, which is an immediate consequence of the previous results.\n\n\\begin{proof}[Proof of \\cref{thm:acceleration_quasiquasarconvexity}]\n The proof follows from \\cref{thm:accelerated_axgd_modified}, provided that we can find $\\hat{\\gamma}_i$ satisfying \\eqref{eq:approximate_multiplied_convexity}. \\cref{lemma:binary_search} shows that this is possible after performing a logarithmic number of queries to the gradient oracle. Note that given our choice of $\\hat{\\epsilon}_i$, $t$ and $a_i$, the number of queries to the gradient oracle \\cref{lemma:binary_search} requires is no more than $O(\\log(\\tilde{L}R\/\\gamman\\epsilon))$ for any $i \\leq t$. So we find an $\\epsilon$-minimizer of $f$ after $\\bigotilde{\\sqrt{\\tilde{L}\/(\\gamman^2\\gammap\\epsilon)}}$ queries to the gradient oracle.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:riemannian_acceleration}]\n Given the function to optimize $F:\\M\\to\\R$ and the geodesic map $h$, we define $f=F\\circ h^{-1}$. Using \\cref{lemma:smoothness_of_transformed_function} we know that $f$ is $\\tilde{L}$-smooth, with $\\tilde{L}=O(L)$. \\cref{prop:bounding_hyperplane} proves that $f$ satisfies \\eqref{eq:appendix_quasiquasarconvexity} for constants $\\gamman$ and $\\gammap$ depending on $R$. So \\cref{thm:acceleration_quasiquasarconvexity} applies and the total number of queries to the oracle needed to obtain an $\\epsilon$-minimizer of $f$ is $\\bigotilde{\\sqrt{\\tilde{L}\/(\\gamman^2\\gammap\\epsilon)}} = \\bigotilde{\\sqrt{L\/\\epsilon}}$. The result follows, since $f(\\tilde{x}_t) - f(\\tilde{x}^\\ast) = F(x_t)-F(x^\\ast)$.\n\\end{proof}\n\nWe recall a few concepts that were assumed in \\cref{sec:algorithm} to better interpret \\cref{thm:riemannian_acceleration}. We work in the hyperbolic space or an open hemisphere. 
The aim is to minimize a smooth and g-convex function defined on any of these manifolds, or a subset of them. The existence of a point $x^\\ast$ that satisfies $\\nabla F(x^\\ast)=0$ is assumed. Starting from an arbitrary point $x_0$, we let $R$ be a bound of the distance between $x_0$ and $x^\\ast$, that is, $R\\geq d(x_0,x^\\ast)$. We let $\\M=\\expon_{x_0}(\\bar{B}(0,R))$ so that $x^\\ast\\in\\M$. We assume $F:\\M'\\to\\R$ is a differentiable function, where $\\M' = \\expon_{x_0}(B(0,R'))$ and $R'>R$. We define $F$ on $\\M'$ only for simplicity, to avoid the use of subdifferentials. $\\M$ has constant sectional curvature $K$. If $K$ is positive, we restrict $R<\\pi\/(2\\sqrt{K})$ so that $\\M$ is contained in an open hemisphere and is uniquely geodesic. We define a geodesic map $h$ from the hyperbolic space or an open hemisphere onto a subset of $\\R^d$ and define the function $f:h(\\M)\\to\\R$ as $f = F\\circ h^{-1}$. We optimize this function in an accelerated way up to constants and log factors, where the constants appear as an effect of the deformation of the geometry and depend on $R$ and $K$ only. Note that the assumption of the existence of $x^\\ast$ such that $\\nabla F(x^\\ast)=0$ is not necessary, since $\\argmin_{x\\in\\expon_{x_0}(\\bar{B}(0,R))}\\{F(x)\\}$ also satisfies the first inequality in \\eqref{eq:appendix_quasiquasarconvexity}, so the lower bounds $L_i$ can be defined in the same way as we did. In that case, if we want to perform constrained optimization, one needs to use the Lipschitz constant of $F$, when restricted to $\\expon_{x_0}(\\bar{B}(0,R))$, for the analysis of the binary search. 
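The interval-halving invariant behind the binary search of \\cref{lemma:binary_search} can also be sketched in executable form. The snippet below is only an illustration: \\texttt{inner\\_product\\_term} is a hypothetical, one-dimensional stand-in for $\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}$; the search only relies on its sign change over the working interval.

```python
import math

# Toy stand-in for the inner product <grad f(x_{i+1}^lam), x_{i+1}^lam - x_i>;
# any continuous scalar function with a sign change on the interval works.
def inner_product_term(lam: float) -> float:
    return math.cos(3.0 * lam) - 0.5  # hypothetical; unique root at pi/9 in (0, 1)

def binary_line_search(lo: float, hi: float, tol: float) -> float:
    """Halve the working interval while keeping a sign change between its
    endpoints, which is exactly the invariant used in the proof."""
    f_lo = inner_product_term(lo)
    assert f_lo * inner_product_term(hi) < 0, "endpoints must have opposite signs"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = inner_product_term(mid)
        if f_lo * f_mid < 0:
            hi = mid                # sign change is in [lo, mid]
        else:
            lo, f_lo = mid, f_mid   # sign change is in [mid, hi]
    return 0.5 * (lo + hi)

lam_star = binary_line_search(0.0, 1.0, 1e-10)
```

In the actual algorithm the loop additionally stops as soon as $G_i(\\lambda) \\leq \\hat{\\epsilon}_i$, which is what yields the $O(\\log(\\hat{L}\/\\hat{\\epsilon}_i))$ iteration bound.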
\n\n\\begin{algorithm}\n \\caption{BinaryLineSearch$(\\tilde{x}_i, \\tilde{z}_i, f, \\X, a_{i+1}, A_{i}, \\epsilon, t, \\tilde{L}, \\gamman, \\gammap)$}\n \\label{alg:bin_search}\n \\begin{algorithmic}[1]\n \\REQUIRE Points $\\tilde{x}_i$, $\\tilde{z}_i$, function $f$, domain $\\X$, learning rate $a_{i+1}$, accumulated learning rate $A_i$, final target accuracy $\\epsilon$, final number of iterations $t$, smoothness constant $\\tilde{L}$, constants $\\gamman, \\gammap$.\n Define $\\hat{\\epsilon}_i \\gets (A_t\\epsilon)\/(2(t-1)A_i)$ as in \\cref{thm:accelerated_axgd_modified}, i.e. with $A_t=t(t+1)\\gamman^2\\gammap\/(4\\tilde{L})$. $\\hat{\\Gamma}_i$ is defined as in \\eqref{def:Gamma} and $G_i$ as in \\eqref{def:G}, i.e.\n \\[\n G_i(\\lambda) \\defi -\\hat{\\Gamma}_i(\\lambda) \\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i}+ (f(\\tilde{x}_{i+1}^{\\lambda})-f(\\tilde{x}_i)),\n \\]\n for $\\tilde{x}_{i+1}^\\lambda$ being the result of method \\eqref{appendix_general_rule_modified_quasar_axgd} when $\\hat{\\gamma}_i = \\hat{\\Gamma}_i(\\lambda)$.\n \\ENSURE $\\lambda = \\frac{a_{i+1}\/\\gamman}{A_i\\hat{\\gamma}_i+a_{i+1}\/\\gamman}$ for $\\hat{\\gamma}_i$ such that $G_i(\\hat{\\Gamma}_i^{-1}(\\hat{\\gamma}_i)) \\leq \\hat{\\epsilon}_i$.\n \\IF{$G_i(\\hat{\\Gamma}_i^{-1}(1\/\\gamman)) \\leq \\hat{\\epsilon}_i$} $\\lambda = \\hat{\\Gamma}_i^{-1}(1\/\\gamman)$\n \\ELSIF{$G_i(\\hat{\\Gamma}_i^{-1}(\\gammap)) \\leq \\hat{\\epsilon}_i$} $\\lambda = \\hat{\\Gamma}_i^{-1}(\\gammap)$\n \\ELSE\n \\State $\\varl \\gets \\hat{\\Gamma}_i^{-1}(1\/\\gamman)$\n \\State $\\varr \\gets \\hat{\\Gamma}_i^{-1}(\\gammap)$\n \\State $\\lambda \\gets (\\varl + \\varr)\/2$\n \\WHILE{$G_i(\\lambda) > \\hat{\\epsilon}_i$}\n \\IF{$\\innp{\\nabla f(\\tilde{x}_{i+1}^{\\lambda}), \\tilde{x}_{i+1}^{\\lambda}-\\tilde{x}_i} <0$} $\\varr \\gets \\lambda$\n \\ELSE $\\ \\varl \\gets \\lambda$\n \\ENDIF\n \\State $\\lambda \\gets (\\varl + \\varr)\/2$\n \\ENDWHILE\n 
\\ENDIF\n \\State \\textbf{return} $\\lambda$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Auxiliary lemmas}\nThe following are classical lemmas of convex optimization that we used in this section and that we add for completeness.\n\n\\begin{fact}\\label{grad_of_fenchel_dual}\n Let $\\psi:Q\\to\\R$ be a differentiable strongly-convex function. Then\n \\[\n \\nabla \\psi^\\ast(\\tilde{z}) = \\argmax_{\\tilde{x}\\in Q}\\{\\innp{\\tilde{z}, \\tilde{x}}-\\psi(\\tilde{x})\\}.\n \\] \n\\end{fact}\n\n\\begin{lemma}[Duality of Bregman Div.] \\label{prop:duality_of_bregman_divergences}\n $D_{\\psi}(\\nabla \\psi^\\ast(\\tilde{z}), \\tilde{x}) = D_{\\psi^\\ast}(\\nabla \\psi(\\tilde{x}), \\tilde{z})$ for all $\\tilde{z}, \\tilde{x}$.\n\\end{lemma}\n\n\\begin{proof}\n From the definition of the Fenchel dual \\eqref{def:fenchel_dual} and \\eqref{grad_of_fenchel_dual} we have\n \\[\n \\psi^\\ast(\\tilde{z}) = \\innp{\\nabla \\psi^\\ast(\\tilde{z}), \\tilde{z}} - \\psi(\\nabla \\psi^\\ast(\\tilde{z})) \\text{ for all } \\tilde{z}.\n \\] \n Since by the Fenchel-Moreau Theorem we have $\\psi^{\\ast\\ast}=\\psi$, it holds\n \\[\n \\psi(\\tilde{x}) = \\innp{\\nabla \\psi(\\tilde{x}), \\tilde{x}} - \\psi^\\ast(\\nabla \\psi(\\tilde{x})), \\text{ for all } \\tilde{x}.\n \\] \n Using the definition of Bregman divergence \\eqref{def:bregman_divergence} and \\eqref{grad_of_fenchel_dual}:\n \\begin{align*}\n \\begin{aligned}\n D_{\\psi}(\\nabla \\psi^\\ast(\\tilde{z}), \\tilde{x}) &= \\psi(\\nabla \\psi^\\ast(\\tilde{z})) - \\psi(\\tilde{x})-\\innp{\\nabla \\psi(\\tilde{x}), \\nabla\\psi^\\ast(\\tilde{z})-\\tilde{x}} \\\\\n &= \\psi(\\nabla \\psi^\\ast(\\tilde{z}))+\\psi^\\ast(\\nabla \\psi(\\tilde{x})) - \\innp{\\nabla\\psi(\\tilde{x}), \\nabla\\psi^\\ast(\\tilde{z})} \\\\\n & = \\psi^\\ast(\\nabla \\psi(\\tilde{x}))-\\psi^\\ast(\\tilde{z}) - \\innp{\\nabla\\psi^\\ast(\\tilde{z}), \\nabla \\psi(\\tilde{x})-\\tilde{z}} \\\\\n & = D_{\\psi^\\ast}(\\nabla\\psi(\\tilde{x}), 
\\tilde{z}).\n \\end{aligned}\n \\end{align*}\n\\end{proof}\n\n\\begin{lemma}[Triangle inequality of Bregman Divergences] \\label{prop:triangle_inequality_of_bregman_div}\n For all $\\tilde{x}, \\tilde{y}, \\tilde{z} \\in Q$ we have \n \\[\n D_{\\psi^\\ast} (\\tilde{x}, \\tilde{y}) = D_{\\psi^\\ast} (\\tilde{z}, \\tilde{y}) + D_{\\psi^\\ast} (\\tilde{x}, \\tilde{z}) + \\innp{\\nabla \\psi^\\ast(\\tilde{z})-\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{x}-\\tilde{z}}.\n \\] \n\\end{lemma}\n\n\\begin{proof}\n\\begin{align*}\n \\begin{aligned}\n D_{\\psi^\\ast}& (\\tilde{z}, \\tilde{y}) + D_{\\psi^\\ast} (\\tilde{x}, \\tilde{z}) + \\innp{\\nabla \\psi^\\ast(\\tilde{z})-\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{x}-\\tilde{z}} \\\\\n &= (\\psi^\\ast(\\tilde{z})-\\psi^\\ast(\\tilde{y}) -\\innp{\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{z}-\\tilde{y}}) \\\\\n & \\quad + (\\psi^\\ast(\\tilde{x})-\\psi^\\ast(\\tilde{z}) -\\innp{\\nabla \\psi^\\ast(\\tilde{z}), \\tilde{x}-\\tilde{z}}) \\\\\n &\\quad + \\innp{\\nabla \\psi^\\ast(\\tilde{z})-\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{x}-\\tilde{z}} \\\\\n & = \\psi^\\ast(\\tilde{x}) -\\psi^\\ast(\\tilde{y}) -\\innp{\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{z}-\\tilde{y}}+ \\innp{-\\nabla \\psi^\\ast(\\tilde{y}), \\tilde{x}-\\tilde{z}} \\\\\n & = D_{\\psi^\\ast} (\\tilde{x}, \\tilde{y}).\n \\end{aligned}\n \\end{align*}\n\\end{proof} \n\n\\begin{lemma}\\label{prop:bounding_breg_div_by_norm_of_difference}\n Given a $\\sigma$-strongly convex function $\\psi(\\cdot)$ the following holds:\n \\[\n D_{\\psi^\\ast}(\\tilde{z}_1, \\tilde{z}_2) \\geq \\frac{\\sigma}{2}\\norm{\\nabla \\psi^\\ast(\\tilde{z}_1)-\\nabla \\psi^\\ast(\\tilde{z}_2)}^2.\n \\] \n\\end{lemma}\n\n\\begin{proof}\n Using the first order optimality condition of the Fenchel dual and \\eqref{grad_of_fenchel_dual} we obtain\n \\[\n \\innp{\\nabla\\psi(\\nabla\\psi^\\ast(\\tilde{z}_1))-\\tilde{z}_1, \\nabla \\psi^\\ast(\\tilde{z}_2)-\\nabla \\psi^\\ast(\\tilde{z}_1)} \\geq 0.\n \\] 
\n Using $\\sigma$-strong convexity of $\\psi$ and the previous inequality we have\n \\begin{align*}\n \\begin{aligned}\n D_{\\psi^\\ast}(\\tilde{z}_1, \\tilde{z}_2) &= \\psi(\\nabla \\psi^\\ast(\\tilde{z}_2)) -\\psi(\\nabla\\psi^\\ast(\\tilde{z}_1)) - \\innp{\\tilde{z}_1, \\nabla \\psi^\\ast(\\tilde{z}_2)-\\nabla \\psi^\\ast(\\tilde{z}_1)} \\\\\n &\\geq \\frac{\\sigma}{2} \\norm{\\nabla \\psi^\\ast(\\tilde{z}_1)-\\nabla \\psi^\\ast(\\tilde{z}_2)}^2 + \\innp{\\nabla \\psi(\\nabla\\psi^\\ast(\\tilde{z}_1))-\\tilde{z}_1, \\nabla\\psi^\\ast(\\tilde{z}_2)-\\nabla\\psi^\\ast(\\tilde{z}_1)} \\\\\n &\\geq \\frac{\\sigma}{2} \\norm{\\nabla \\psi^\\ast(\\tilde{z}_1)-\\nabla \\psi^\\ast(\\tilde{z}_2)}^2.\n \\end{aligned}\n \\end{align*}\n\\end{proof} \n\n\\section{Reductions. Proofs of results in \\cref{sec:reductions}.} \\label{app:reductions}\n\n\\begin{proof}[Proof of \\cref{thm:reduction_to_g_convex}]\n Let $\\mathcal{A}_{\\operatorname{ns}}$ be the algorithm in the statement of the theorem. By strong g-convexity of $F$ and the assumptions on $\\mathcal{A}_{\\operatorname{ns}}$ we have that $\\hat{x}_T$ satisfies\n \\[\n \\frac{\\mu}{2}d(\\hat{x}_T, x^\\ast)^2 \\leq F(\\hat{x}_T)-F(x^\\ast) \\leq \\frac{\\mu}{2}\\frac{d(x_0,x^\\ast)^2}{2},\n \\]\n after $T=\\timens(L, \\mu, R)$ queries to the gradient oracle. This implies $d(\\hat{x}_T, x^\\ast)^2 \\leq d(x_0,x^\\ast)^2\/2$. 
Then, by repeating this process $r\\defi \\lceil\\log (\\mu \\cdot d(x_0,x^\\ast)^2\/\\epsilon)-1\\rceil$ times, using the previous output as input for the next round, we obtain a point $\\hat{x}_T^r$ that satisfies\n \\[\n F(\\hat{x}_T^r)-F(x^\\ast) \\leq \\frac{\\mu \\cdot d(\\hat{x}_T^{r-1},x^\\ast)^2}{4} \\leq \\cdots \\leq \\frac{\\mu \\cdot d(x_0,x^\\ast)^2}{4 \\cdot 2^{r-1}} \\leq \\epsilon.\n \\]\nThe total running time is then $\\timens(L, \\mu, R)\\cdot r = O(\\timens(L, \\mu, R)\\log(\\mu \\cdot d(x_0,x^\\ast)^2\/\\epsilon))=O(\\timens(L, \\mu, R)\\log(\\mu\/\\epsilon))$.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{coroll:acceleration_st_g_convex}]\n Let $R$ be an upper bound on the distance between the initial point $x_0$ and an optimum $x^\\ast$, i.e. $d(x_0, x^\\ast)\\leq R$. Note that $\\norm{\\tilde{x}_0-\\tilde{x}^\\ast}\/R$ is bounded by a constant depending on $R$ by \\cref{lemma:deformations}.a). Note that $\\gamman$ and $\\gammap$ are constants depending on $R$ by \\cref{prop:bounding_hyperplane}. As any g-strongly convex function is g-convex, by using \\cref{thm:accelerated_axgd_modified} and \\cref{lemma:binary_search} with $\\epsilon=\\mu \\frac{R^2}{4}$ we obtain that \\cref{alg:accelerated_gconvex} computes a $\\mu \\frac{R^2}{4}$-minimizer in at most \n\\[\n T = O\\left(\\frac{\\norm{\\tilde{x}_0-\\tilde{x}^\\ast}}{R}\\sqrt{\\frac{L}{\\mu\\gamman^2\\gammap}}\\log\\left(\\frac{\\norm{\\tilde{x}_0-\\tilde{x}^\\ast}}{R}\\sqrt{\\frac{L}{\\mu\\gamman^2\\gammap}}\\right)\\right) = O\\left(\\sqrt{L\/\\mu}\\log(L\/\\mu)\\right)\n\\] \n queries to the gradient oracle. Subsequent stages, i.e. calls to \\cref{alg:accelerated_gconvex}, need a time at most equal to this. The analysis is the same, but we start at the previous output point and take into account that the initial distance to the optimum has decreased. 
Using the reduction \\cref{thm:reduction_to_g_convex} we conclude that, given $\\epsilon>0$ and running \\cref{alg:accelerated_gconvex} in stages, we obtain an $\\epsilon$-minimizer of $F$ in \n \\[\n O(\\sqrt{L\/\\mu}\\log(L\/\\mu)\\log(\\mu \\cdot d(x_0, x^\\ast)^2\/\\epsilon)) = \\bigoast{\\sqrt{L\/\\mu}\\log(\\mu\/\\epsilon)},\n \\] \n queries to the gradient oracle.\n\n As anticipated in the main paper, each of the stages of the algorithm resulting from combining \\cref{thm:reduction_to_g_convex} and \\cref{coroll:acceleration_st_g_convex} reduces the distance to $x^\\ast$ by a factor of $1\/\\sqrt{2}$. This means that subsequent stages can be run using a geodesic map centered at the new starting point, and with the new parameter $R$ being the previous one reduced by a factor of $1\/\\sqrt{2}$. This reduces the constants incurred by the deformation of the geometry, which ultimately reduces the overall constant in the rate. Note that in order to perform the method with the recentering steps, we need the function $F$ to be defined over at least $\\expon_{x_0}(\\bar{B}(0, R\\cdot(1+2^{-1\/2})))$, since subsequent centers are only guaranteed to be within $R\/\\sqrt{2}$ of $x^\\ast$, and they could get slightly farther from $x_0$.\n\\end{proof}\n\n\\subsection{Proof of \\cref{thm:reduction_to_g_st_convex}}\n The algorithm is the following. We successively regularize the function with strongly g-convex regularizers, $F^{(\\mu_i)}(x) \\defi F(x)+ \\frac{\\mu_i}{2}d(x, x_0)^2$ for $i \\geq 0$. For each $i \\geq 0$, we use the algorithm $\\mathcal{A}$ on the function $F^{(\\mu_i)}$ for the time in the statement of the theorem and obtain a point $\\hat{x}_{i+1}$, starting from point $\\hat{x}_i$, where $\\hat{x}_0=x_0$. The regularizers are decreased exponentially, $\\mu_{i+1} = \\mu_i\/2$, until we reach roughly $\\mu_T=\\epsilon\/R^2$; see below for the precise value. Let's see how this algorithm works. 
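Before the analysis, the outer loop just described can be sketched in a Euclidean toy setting. Here plain gradient descent is a hypothetical stand-in for $\\mathcal{A}$ and the quadratic $F$ is a placeholder objective, so this only illustrates the halving schedule $\\mu_{i+1}=\\mu_i\/2$, not the actual Riemannian method.

```python
# Euclidean toy sketch of the reduction: repeatedly minimize the regularized
# objective F(x) + (mu/2) * (x - x0)^2 with the halving schedule mu_{i+1} = mu_i / 2.
# Plain gradient descent plays the role of the inner algorithm A, and
# F(x) = (x - 3)^2 is a placeholder objective with minimizer x* = 3.
def reduction_by_regularization(grad_F, x0, mu0, num_stages, inner_steps, step):
    x, mu = x0, mu0
    for _ in range(num_stages):
        for _ in range(inner_steps):
            # gradient of the regularized objective F^{(mu)}(x)
            x = x - step * (grad_F(x) + mu * (x - x0))
        mu = mu / 2.0  # decrease the regularizer exponentially
    return x

x_hat = reduction_by_regularization(lambda x: 2.0 * (x - 3.0),
                                    x0=0.0, mu0=1.0,
                                    num_stages=20, inner_steps=200, step=0.1)
```

Each stage warm-starts at the previous iterate, mirroring the role of $\\hat{x}_i$ as the starting point when minimizing $F^{(\\mu_i)}$.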
We first state the following fact, which says that indeed $\\frac{\\mu_i}{2}d(x, x_0)^2$ is a strongly g-convex regularizer. Let $D$ be the diameter of $\\M$. We define the following quantities\n\\[\n \\distorp \\defi \n \\begin{cases} \n {1} & {\\text{if } K_{\\max } \\leq 0} \\\\\n {\\sqrt{K_{\\max }} D \\cot (\\sqrt{K_{\\max }} D)} & {\\text{if } K_{\\max }>0}\n \\end{cases}\n\\]\n\\[\n \\distorn \\defi \n \\begin{cases} \n {\\sqrt{-K_{\\min }} D \\operatorname{coth}(\\sqrt{-K_{\\min }} D)} & {\\text{if } K_{\\min }<0} \\\\\n {1} & {\\text{if } K_{\\min } \\geq 0}\n \\end{cases}\n\\]\nHere $K_{\\max}$ and $K_{\\min}$ are the upper and lower bounds on the sectional curvature of the manifold $\\mathcal{M}$. In \\cref{thm:reduction_to_g_st_convex}, it is $D=2R$.\n\n\\begin{fact}\\label{strong_convexity_and_smoothness_of_ell_2}\n Let $\\M=\\expon_{x_0}(\\bar{B}(0,R))$ be a manifold with sectional curvature bounded below and above by $K_{\\min}$ and $K_{\\max}$, respectively, where $x_0\\in\\M$ is a point. The function $f:\\M\\to\\R$ defined as $f(x) = \\frac{1}{2}d(x, x_0)^2$ is $\\distorp$-g-strongly convex and $\\distorn$-smooth.\n\\end{fact}\n\nThe result regarding strong convexity can be found, for instance, in \\cite{alimisis2019continuous} and it is a direct consequence of the following inequality, which can also be found in \\cite{alimisis2019continuous}:\n\\[\n d(y,x_0)^2 \\geq d(x,x_0)^2-2\\innp{\\expon^{-1}_x(x_0), y-x} + \\distorp d(x,y)^2,\n\\]\nalong with the fact that $\\operatorname{grad} f(x) = -\\expon^{-1}_x(x_0)$. The result regarding smoothness is, similarly, obtained from the following inequality:\n\\[\n d(y,x_0)^2 \\leq d(x,x_0)^2-2\\innp{\\expon^{-1}_x(x_0), y-x} + \\distorn d(x,y)^2,\n\\]\nwhich can be found in \\cite{zhang2016first} (Lemma $6$). Alternatively, one can derive these inequalities from upper and lower bounds on the Hessian of $f(x) = \\frac{1}{2}d(x, x_0)^2$, cf. 
\\cite{jost2008riemannian}, Theorem 4.6.1, as it was done in \\cite{lezcano2020curvature}.\n\nWe now prove that the regularization moves the minimizer closer to $x_0$, so the assumption of the theorem on $\\hat{F}$ holds for the functions we use. Define $x_{i+1}$ as the minimizer of $F^{(\\mu_i)}$.\n\n\\begin{lemma}\\label{claim:reduction_sc}\nWe have $d(x_{i+1},x_0) \\leq d(x^\\ast, x_0)$.\n\\end{lemma}\n\n\\begin{proof}\n By the fact that $x_{i+1}$ is the minimizer of $F^{(\\mu_i)}$ we have $F^{(\\mu_i)}(x_{i+1})-F^{(\\mu_i)}(x^\\ast) \\leq 0$. Note that by g-strong convexity, equality only holds if $x_{i+1} = x^\\ast$, which only happens if $x_0=x_{i+1}=x^\\ast$. By using the definition of $F^{(\\mu_i)}(x) = F(x) + \\frac{\\mu_i}{2}d(x,x_0)^2$ we have:\n\\begin{align*} \n \\begin{aligned}\n F(x_{i+1}) & + \\frac{\\mu_i}{2} d(x_{i+1},x_0)^2 - F(x^\\ast)-\\frac{\\mu_i}{2}d(x^\\ast,x_0)^2 \\leq 0 \\\\\n \\Rightarrow \\ \\ \\ & d(x_{i+1},x_0) \\leq d(x^\\ast,x_0),\n \\end{aligned}\n\\end{align*}\n where in the last step we used the fact that $F(x_{i+1}) - F(x^\\ast) \\geq 0$, which holds because $x^\\ast$ is the minimizer of $F$.\n\\end{proof}\n\nWe note that previous techniques proved and used the fact that $d(x_{i+1}, x^\\ast) \\leq d(x_0, x^\\ast)$ instead \\cite{allen2016optimal}. But crucially, we need the lemma above in order to prove the bound in our non-Euclidean case. Our variant could also be applied to \\cite{allen2016optimal} to decrease the constants of the Euclidean reduction. Now we are ready to prove the theorem.\n\n\\begin{proof}[Proof of \\cref{thm:reduction_to_g_st_convex}]\n We recall the definitions above. $F^{(\\mu_i)}(x) = F(x)+\\frac{\\mu_i}{2}d(x,x_0)^2$. 
We start with $\\hat{x}_0=x_0$ and compute $\\hat{x}_{i+1}$ using algorithm $\\mathcal{A}$ with starting point $\\hat{x}_i$ and function $F^{(\\mu_i)}$ for time $\\time(L^{(i)}, \\mu^{(i)}, \\M, R)$, where $L^{(i)}$ and $\\mu^{(i)}$ are the smoothness and strong g-convexity parameters of $F^{(\\mu_i)}$. We denote by $x_{i+1}$ the minimizer of $F^{(\\mu_i)}$. We pick $\\mu_i=\\mu_{i-1}\/2$ and we will choose later the value of $\\mu_0$ and the total number of stages. By the assumption of the theorem on $\\mathcal{A}$, we have that \n \\begin{equation} \\label{eq:aux_eq_reduction_to_g_st_convex}\n F^{(\\mu_i)}(\\hat{x}_{i+1})-\\min_{x\\in\\M} F^{(\\mu_i)}(x) = F^{(\\mu_i)}(\\hat{x}_{i+1})- F^{(\\mu_i)}(x_{i+1}) \\leq \\frac{F^{(\\mu_i)}(\\hat{x}_{i})- F^{(\\mu_i)}(x_{i+1})}{4}.\n \\end{equation}\n\n Define $D_{i} \\defi F^{\\left(\\mu_{i}\\right)}\\left(\\hat{x}_{i}\\right)-F^{\\left(\\mu_{i}\\right)}\\left(x_{i+1}\\right)$ to be the initial objective distance to the minimum on function $F^{(\\mu_i)}$ before we call $\\mathcal{A}$ for the $(i+1)$-th time. At the beginning, we have the upper bound $D_{0}=F^{(\\mu_{0})}(\\hat{x}_{0})- \\min_{x} F^{(\\mu_{0})}(x) \\leq F(x_{0})-F(x^{\\ast})$. 
For each stage $i \\geq 1$, we compute that\n\\begin{align*}\n \\begin{aligned}\n \\quad& D_{i} = F^{\\left(\\mu_{i}\\right)}\\left(\\hat{x}_{i}\\right)-F^{\\left(\\mu_{i}\\right)}\\left(x_{i+1}\\right) \\\\\n &\\circled{1}[=] F^{\\left(\\mu_{i-1}\\right)}\\left(\\hat{x}_{i}\\right)-\\frac{\\mu_{i-1}-\\mu_{i}}{2}d(x_{0},\\hat{x}_{i})^{2}-F^{\\left(\\mu_{i-1}\\right)}\\left(x_{i+1}\\right)+\\frac{\\mu_{i-1}-\\mu_{i}}{2}d(x_{0},x_{i+1})^{2} \\\\\n &\\circled{2}[\\leq] F^{\\left(\\mu_{i-1}\\right)}\\left(\\hat{x}_{i}\\right)-F^{\\left(\\mu_{i-1}\\right)}\\left(x_{i}\\right)+\\frac{\\mu_{i-1}-\\mu_{i}}{2}d(x_{0},x_{i+1})^{2} \\\\\n &\\circled{3}[\\leq] \\frac{D_{i-1}}{4} + \\frac{\\mu_{i}}{2}d(x_{0},x_{i+1})^{2} \\\\\n &\\circled{4}[\\leq] \\frac{D_{i-1}}{4}+\\frac{\\mu_{i}}{2}d(x_{0},x^\\ast)^{2}.\n \\end{aligned}\n\\end{align*}\n Above, $\\circled{1}$ follows from the definition of $F^{(\\mu_i)}(\\cdot)$ and $F^{\\left(\\mu_{i-1}\\right)}(\\cdot)$; $\\circled{2}$ follows from the fact that $x_{i}$ is the minimizer of $F^{\\left(\\mu_{i-1}\\right)}(\\cdot)$. We also drop the negative term $-(\\mu_{i-1}-\\mu_i)d(x_0, \\hat{x}_i)^2\/2$. $\\circled{3}$ follows from the definition of $D_{i-1}$, the assumption on $\\mathcal{A}$, and the choice $\\mu_i = \\mu_{i-1}\/2$ for $i\\geq 1$; and $\\circled{4}$ follows from \\cref{claim:reduction_sc}. \nNow applying the above inequality recursively, we have\n \\begin{equation}\\label{eq:aux_eq_bounding_D_T}\nD_T \\leq \\frac{D_0}{4^T} + d(x_0,x^\\ast)^2 \\cdot \\left(\\frac{\\mu_T}{2} + \\frac{\\mu_{T-1}}{8} + \\cdots\\right) \\leq \\frac{F(x_0)-F(x^\\ast)}{4^T} + \\mu_T \\cdot d(x_0,x^\\ast)^2.\n\\end{equation}\nWe have used the choice $\\mu_i = \\mu_{i-1}\/2$ for the second inequality. 
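To make the second inequality of \\eqref{eq:aux_eq_bounding_D_T} fully explicit, one can substitute $\\mu_j = 2^{T-j}\\mu_T$ in each term of the unrolled recursion:

```latex
\[
  \sum_{j=1}^{T} 4^{-(T-j)}\,\frac{\mu_j}{2}
  = \frac{\mu_T}{2}\sum_{j=1}^{T} \frac{2^{T-j}}{4^{T-j}}
  = \frac{\mu_T}{2}\sum_{k=0}^{T-1} 2^{-k}
  \leq \mu_T.
\]
```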
Lastly, we can prove that $\\hat{x}_T$, the last point computed, satisfies \n\\begin{align*}\n \\begin{aligned}\n F(\\hat{x}_{T})-F(x^\\ast) &\\circled{1}[\\leq] F^{(\\mu_{T})}(\\hat{x}_{T})-F^{(\\mu_{T})}(x^{\\ast})+\\frac{\\mu_{T}}{2}d(x_{0},x^{\\ast})^{2} \\\\\n & \\circled{2}[\\leq] F^{(\\mu_{T})}(\\hat{x}_{T})-F^{(\\mu_{T})}(x_{T+1}) +\\frac{\\mu_{T}}{2}d(x_{0},x^{\\ast})^{2} \\\\ \n &\\circled{3}[=] D_{T}+\\frac{\\mu_{T}}{2}d(x_{0},x^{\\ast})^{2} \\\\\n &\\circled{4}[\\leq] \\frac{F(x_{0})-F(x^{\\ast})}{4^{T}}+ \\frac{3\\mu_{T}}{2} d(x_{0},x^{\\ast})^{2}.\n \\end{aligned}\n\\end{align*}\n\n We use the definition of $F^{(\\mu_{T})}$ in $\\circled{1}$ and drop $-\\frac{\\mu_T}{2}d(x_0,\\hat{x}_T)^2$. In $\\circled{2}$ we use the fact that $x_{T+1}$ is the minimizer of $F^{(\\mu_{T})}$. The definition of $D_T$ is used in $\\circled{3}$. We use inequality \\eqref{eq:aux_eq_bounding_D_T} for step $\\circled{4}$. Finally, by choosing $T = \\lceil \\log_2(\\Delta\/\\epsilon)\\rceil+2$ and $\\mu_0 = \\Delta\/R^2$ we obtain that the point $\\hat{x}_T$ satisfies \n \\[\n F(\\hat{x}_T)-F(x^\\ast) \\leq \\frac{F(x_{0})-F(x^{\\ast})}{4\\Delta\/\\epsilon}+ \\frac{3\\mu_0}{8\\Delta\/\\epsilon} d(x_{0},x^{\\ast})^{2} \\leq \\frac{\\epsilon}{4}+\\frac{3\\epsilon}{8}<\\epsilon,\n \\] \n and can be computed in time $\\sum_{t=0}^{T-1} \\time(L+2^{-t}\\mu_0 \\distorn, 2^{-t}\\mu_0 \\distorp, \\M, R)$, since by Fact \\ref{strong_convexity_and_smoothness_of_ell_2} the function $F^{(\\mu_t)}$ is $(L+2^{-t}\\mu_0 \\distorn)$-smooth and $(2^{-t}\\mu_0 \\distorp)$-strongly g-convex.\n\\end{proof}\n\n\\subsection{\\cref{example:application_of_reduction_to_st_convex}}\nWe use the algorithm in \\cref{coroll:acceleration_st_g_convex} as the algorithm $\\mathcal{A}$ of the reduction of \\cref{thm:reduction_to_g_st_convex}. Given $\\M=\\H$ or $\\M=\\S$, the assumption on $\\mathcal{A}$ is satisfied for $\\time(L, \\mu, \\M, R)=O(\\sqrt{L\/\\mu}\\log (L\/\\mu))$. 
Indeed, if $\\Delta$ is a bound on the gap $\\hat{F}(x_0)-\\hat{F}(x^\\ast) = \\hat{F}(x_0)-\\min_{x\\in\\M}\\hat{F}(x) = \\hat{F}(x_0)-\\min_{x\\in\\expon_{x_0}(\\bar{B}(0, R))}\\hat{F}(x)$ for some $\\mu$-strongly g-convex $\\hat{F}$, then we know that $d(x_0, x^\\ast)^2 \\leq \\frac{2\\Delta}{\\mu}$. By calling the algorithm in \\cref{coroll:acceleration_st_g_convex} with $\\epsilon = \\frac{\\Delta}{4}$ we require a time that is \n\\begin{align*}\n \\begin{aligned}\n O( \\sqrt{L\/\\mu}\\log (L\/\\mu)\\log(\\mu \\cdot d(x_0, x^\\ast)^2\/(\\Delta\/4))) &= O(\\sqrt{L\/\\mu}\\log (L\/\\mu)\\log(\\mu \\cdot (2\\Delta\/\\mu)\/(\\Delta\/4))) \\\\\n &= O(\\sqrt{L\/\\mu}\\log (L\/\\mu)).\n \\end{aligned}\n\\end{align*}\n\nLet $T = \\lceil \\log_2(\\Delta\/\\epsilon)\\rceil+2$. The reduction of \\cref{thm:reduction_to_g_st_convex} gives an algorithm with rates\n\\begin{align*}\n \\begin{aligned}\n \\sum_{t=0}^{T-1}& \\time(L+2^{-t}\\mu_0 \\distorn, 2^{-t}\\mu_0 \\distorp, \\M, R) \\\\\n &= O\\left(\\sum_{t=0}^{T-1}\\sqrt{\\frac{L}{2^{-t}\\distorp\\Delta\/R^2}+\\frac{\\distorn}{\\distorp}}\\cdot\\log\\left(\\frac{L}{2^{-t}\\distorp\\Delta\/R^2}+\\frac{\\distorn}{\\distorp}\\right)\\right) \\\\\n &\\circled{1}[=] O\\left(\\left(\\sqrt{\\frac{L}{\\distorp\\epsilon}}+\\sqrt{\\frac{\\distorn}{\\distorp}}\\log(\\Delta\/\\epsilon)\\right)\\log\\left(\\frac{L}{\\distorp\\epsilon}+\\frac{\\distorn}{\\distorp}\\right) \\right) \\\\\n &= \\bigotilde{\\sqrt{L\/\\epsilon}}.\n \\end{aligned}\n\\end{align*}\nIn $\\circled{1}$ we have used the inequality $\\sqrt{a+b} \\leq \\sqrt{a} + \\sqrt{b}$. We used the value $\\mu_0=\\Delta\/R^2$. In order to group the sum of the first summands, we used the fact that $\\sqrt{1\/\\epsilon} + \\sqrt{1\/2\\epsilon} + \\cdots = O(\\sqrt{1\/\\epsilon})$, along with the fact that $2^{-(T-1)}\\mu_0 = \\Omega(\\epsilon\/R^2)$. We added up the group of second summands. 
For the $\\log$ factor, we upper bounded $L\/(2^{-t}\\distorp\\Delta\/R^2) = O(L\/\\distorp\\epsilon)$, for $t< T$. Note that by $L$-smoothness and the diameter being $2R$, we have $\\Delta \\leq 2LR^2$, so $\\sqrt{\\distorn\/\\distorp}\\log(\\Delta\/\\epsilon) = \\bigotilde{1}$.\n\n\\section{Geometric results. Proofs of Lemmas \\ref{lemma:deformations}, \\ref{prop:bounding_hyperplane} and \\ref{lemma:smoothness_of_transformed_function}} \\label{app:geometric_results}\nIn this section we prove the lemmas that take into account the deformations of the geometry and the geodesic map $h$ to obtain relationships between $F$ and $f$, namely \\cref{lemma:deformations}, \\cref{prop:bounding_hyperplane} and \\cref{lemma:smoothness_of_transformed_function}. First, we recall the characterizations of the geodesic map and some consequences. Then, in \\cref{app:sec_distance_deformation}, \\cref{app:sec_angle_deformation} and \\cref{sec:app_gradient_deformation_and_smoothness} we prove the results related to distance, angle, and gradient deformations, respectively; that is, each of the three parts of \\cref{lemma:deformations}. In \\cref{sec:app_gradient_deformation_and_smoothness} we also prove \\cref{lemma:smoothness_of_transformed_function}, which comes naturally after the proof of \\cref{lemma:deformations}.c). Finally, in \\cref{app:sec_proof_of_bounding_hyperplane} we prove \\cref{prop:bounding_hyperplane}. Before this, we note that we can assume without loss of generality that the curvature of our manifolds of interest is $K\\in\\{1, -1\\}$. One can see that the final rates depend on $K$ through $R$, $L$ and $\\mu$.\n\n\\begin{remark}\\label{remark:rescaling_of_K}\n For a function $F:\\M\\to\\R$ where $\\M=\\H$ or $\\M=\\S$ is a manifold of constant sectional curvature $K\\not\\in\\{1, -1, 0\\}$, we can apply a rescaling to the Gnomonic or Beltrami-Klein projection to define a function on a manifold of constant sectional curvature $K\\in\\{1, -1\\}$. 
Namely, we can map $\\M$ to $\\B$ via $h$, then rescale $\\B$ by multiplying each vector in $\\B$ by the factor $\\sqrt{\\abs{K}}$, and then apply the inverse geodesic map for the manifold of curvature $K\\in\\{1, -1\\}$. If $R$ is the original bound on the initial distance to an optimum, and $F$ is $L$-smooth and $\\mu$-strongly g-convex (possibly with $\\mu=0$), then the initial distance bound becomes $\\sqrt{\\abs{K}}R$ and the induced function becomes $L\/\\abs{K}$-smooth and $\\mu\/\\abs{K}$-strongly g-convex. This is a consequence of the transformation rescaling distances by a factor of $\\sqrt{\\abs{K}}$, i.e. if $r:\\M_{K} \\to \\M_{K\/\\abs{K}}$ is the rescaling function, then $d_{K}(x,y)\\sqrt{\\abs{K}} = d_{K\/\\abs{K}}(r(x),r(y))$, where $d_c(\\cdot, \\cdot)$ denotes the distance on the manifold of constant sectional curvature $c$.\n\\end{remark}\n\n\\subsection{Preliminaries}\nWe recall our characterization of the geodesic map. Given two points $\\tilde{x}, \\tilde{y}\\in \\B$, the distance $d(x,y)$ between $x=h^{-1}(\\tilde{x})$ and $y=h^{-1}(\\tilde{y})$ with the metric of $\\M$ satisfies\n\\begin{equation}\\label{eq:appendix_characterization_of_geodesic_map_and_metric}\n C_K(d(x,y)) = \\frac{1+K\\innp{\\tilde{x}, \\tilde{y}}}{\\sqrt{1+K\\norm{\\tilde{x}}^2}\\cdot\\sqrt{1+K\\norm{ \\tilde{y}}^2}}.\n\\end{equation}\nSince the expression is symmetric with respect to rotations, $\\X=h(\\M)$ is a closed ball of radius $\\tilde{R}$, with $C_K(R) = (1+K\\tilde{R}^2)^{-1\/2}$. 
Equivalently, \n\\begin{align} \\label{eq:R_tilde_vs_R}\n \\begin{aligned}\n \\tilde{R} = \\tan(R) &\\ & & \\text{ if } K=1, \\\\\n \\tilde{R} = \\tanh(R) &\\ & & \\text{ if } K=-1.\n \\end{aligned}\n\\end{align}\nSimilarly, we can write the distances as\n\\begin{align} \\label{eq:distances}\n \\begin{aligned}\n d(x, y) = \\arccos\\left(\\frac{ 1 + \\innp{\\tilde{x}, \\tilde{y}}}{\\sqrt{1+\\norm{\\tilde{x}}^2}\\sqrt{1+\\norm{\\tilde{y}}^2}}\\right) &\\ & & \\text{ if } K=1, \\\\\n d(x, y) = \\arccosh\\left(\\frac{ 1 - \\innp{\\tilde{x}, \\tilde{y}}}{\\sqrt{1-\\norm{\\tilde{x}}^2}\\sqrt{1-\\norm{\\tilde{y}}^2}}\\right) &\\ & & \\text{ if } K=-1. \n \\end{aligned}\n\\end{align}\nAlternatively, we have the following expression for the distance $d(x, y)$ when $K=-1$. Let $\\tilde{a}, \\tilde{b}$ be the two points of intersection of the boundary of the ball $\\B=B(0,1)$ with the line joining $\\tilde{x}, \\tilde{y}$, so that the points appear on the line in the order $\\tilde{a}, \\tilde{x}, \\tilde{y}, \\tilde{b}$. Then\n\\begin{equation} \\label{eq:distances_hyper_alt}\n d(x, y) = \\frac{1}{2}\\log\\left(\\frac{\\norm{\\tilde{a}-\\tilde{y}}\\norm{\\tilde{x}-\\tilde{b}}}{\\norm{\\tilde{a}-\\tilde{x}}\\norm{\\tilde{b}-\\tilde{y}}}\\right) \\text{ if } K=-1.\n\\end{equation}\nWe will use this expression when working with the hyperbolic space. A simple elementary proof of the equivalence of the expressions in \\eqref{eq:distances} and \\eqref{eq:distances_hyper_alt} is the following. We can assume without loss of generality that we work with the hyperbolic plane, i.e. $d=2$. By rotational symmetry, we can also assume that $\\tilde{x} = (x_1, x_2)$ and $\\tilde{y} = (y_1, y_2)$, for $x_1=y_1$. In fact, it is enough to prove it in the case $x_2=0$ because we can split a general segment into two, each with one endpoint at $(x_1, 0)$, and then add their lengths up. 
So according to \\eqref{eq:distances} and \\eqref{eq:distances_hyper_alt}, respectively, we have\n\\[\n \\frac{1}{\\cosh^2(d(x, y))} =\\frac{(1-x_1^2)(1-y_1^2-y_2^2)}{(1-x_1^2)^2} = \\frac{(1-x_1^2-y_2^2)}{1-x_1^2},\n\\] \n\\begin{align*} \n \\begin{aligned}\n d(x, y)&=\\frac{1}{2}\\log \\left(\\frac{(\\sqrt{1-y_1^2}+y_2)(\\sqrt{1-x_1^2})}{(\\sqrt{1-x_1^2})(\\sqrt{1-y_1^2}-y_2)}\\right) = \\frac{1}{2}\\log\\left(\\frac{1+y_2\/\\sqrt{1-x_1^2}}{1-y_2\/\\sqrt{1-x_1^2}}\\right) \\\\\n &= \\arctanh\\left(\\frac{y_2}{\\sqrt{1-x_1^2}}\\right),\n \\end{aligned}\n\\end{align*}\nwhere we have used the equality $\\arctanh(t) = \\frac{1}{2}\\log(\\frac{1+t}{1-t})$. Now, using the trigonometric identity $\\frac{1}{\\cosh^2(t)} = 1-\\tanh^2(t)$, for $t=d(x, y)$, we obtain that the two expressions above are equal. See Theorem 7.4 in \\citep{greenberg1993euclidean} (p. 268) for more details about the distance formula under this geodesic map.\n\nThe spherical case is remarkably simple. If we have a $d$-sphere of radius $1$ centered at $0$, we can see the transformation as the projection onto the plane $x_{d+1} =1$. Given two points $\\mathbf{x}=(\\tilde{x}, 1)$, $\\mathbf{y}=(\\tilde{y}, 1)$, the angle between these two vectors is the distance on the sphere between the corresponding projected points, so we have $\\cos(d(x,y))=\\innp{\\mathbf{x}, \\mathbf{y}}\/\\norm{\\mathbf{x}}\\norm{\\mathbf{y}}$, which is equivalent to the corresponding formula in \\eqref{eq:distances}.\n\n\\subsection{Distance deformation} \\label{app:sec_distance_deformation}\n\n\\begin{lemma} \\label{lemma:distances_hyperbolic} \n Let $\\H=\\expon_{x_0}(\\bar{B}(0, R))$ be a subset of the hyperbolic space with constant sectional curvature $K=-1$. Let $x, y\\in\\H$ be two different points. Then, we have\n \\[\n 1 \\leq \\frac{d(x, y)}{\\norm{\\tilde{x}-\\tilde{y}}} \\leq \\cosh^2(R).\n \\]\n\\end{lemma}\n\\begin{proof}\n We can assume without loss of generality that the dimension is $d=2$. 
As in \\eqref{eq:R_tilde_vs_R}, let $\\tilde{R}=\\tanh(R)$, so any point $\\tilde{x}\\in\\X$ satisfies $\\norm{\\tilde{x}} \\leq \\tilde{R}$, or equivalently $d(x, x_0) \\leq R$. Recall $\\tilde{x}_0=h(x_0)=0$. Without loss of generality, we parametrize an arbitrary segment of length $\\ell$ in $\\X$ by two endpoints $\\tilde{x}, \\tilde{y}$ with coordinates $\\tilde{x}=(x_1, x_2)$ and $\\tilde{y}=(x_1-\\ell, x_2)$, for $0\\leq x_2\\leq \\tilde{R}$, $0\\leq x_1 \\leq\\sqrt{\\tilde{R}^2-x_2^2}$ and $0<\\ell \\leq x_1+\\sqrt{\\tilde{R}^2-x_2^2}$. Let $\\mathfrak{d}(x_1, x_2, \\ell) \\defi \\frac{d(x,y)}{\\ell}$ be the quantity we aim to bound. We will prove the upper bound on $\\mathfrak{d}(x_1, x_2, \\ell)$ in three steps. \n \\begin{enumerate}\n \\item If $x_1 = \\ell$, then $\\mathfrak{d}(\\cdot)$ increases with $x_1$. This allows us to prove that it is enough to consider points with the extra constraint $\\ell \\leq x_1$.\n\n \\item The partial derivative of $\\mathfrak{d}(\\cdot)$ with respect to $x_1$, whenever $\\ell \\leq x_1$, is non-negative. So we can just look at the points for which $x_1=\\sqrt{\\tilde{R}^2-x_2^2}$.\n\n \\item With the constraints above, $\\mathfrak{d}(\\cdot)$ is larger the smaller $\\ell$ is. So we have $\\mathfrak{d}(x_1, x_2, \\ell) \\leq \\lim_{\\ell\\to 0} \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2}, x_2, \\ell) = \\sqrt{1-x_2^2}\/(1-\\tilde{R}^2)$. This expression is maximized at $x_2=0$ and evaluates to $1\/(1-\\tanh^2(R)) = \\cosh^2(R)$.\n \\end{enumerate} \n We proceed now to prove the steps above. For the first step, we note\n \\[\n \\mathfrak{d}(x_1, x_2, x_1) = \\frac{1}{2x_1}\\log\\left(\\frac{\\sqrt{1-x_2^2}(\\sqrt{1-x_2^2}+x_1)}{\\sqrt{1-x_2^2}(\\sqrt{1-x_2^2}-x_1)}\\right) = \\frac{1}{2x_1}\\log\\left(1+\\frac{2x_1}{\\sqrt{1-x_2^2}-x_1}\\right).\n \\]\n We prove that the inverse of this expression is non-increasing in $x_1$. 
By taking a partial derivative:\n\\begin{align*} \n \\begin{aligned}\n \\frac{\\partial (1\/\\mathfrak{d}(x_1, x_2,x_1))}{\\partial x_1} = 2\\frac{\\frac{-2x_1\\sqrt{1-x_2^2}}{1-x_2^2-x_1^2}+\\log(1+2x_1\/(\\sqrt{1-x_2^2}-x_1))}{\\log(1+2x_1\/(\\sqrt{1-x_2^2}-x_1))^2} \\stackrel{?}{\\leq} 0 \\\\\n \\iff \\frac{2x_1\\sqrt{1-x_2^2}}{1-x_2^2-x_1^2}-\\log(1+(2x_1\\sqrt{1-x_2^2}+2x_1^2)\/(1-x_2^2-x_1^2)) \\stackrel{?}{\\geq} 0.\n \\end{aligned}\n\\end{align*}\nIn order to see that the last inequality is true, note that the expression on the left hand side is $0$ when $x_1 = x_2 = 0$, and that its partial derivatives with respect to $x_1$ and $x_2$, respectively, are:\n\\[\n \\frac{4\\sqrt{1-x_2^2} x_1^2}{(1-x_2^2-x_1^2)^2} \\text{ and } \\frac{4x_2x_1^3}{\\sqrt{1-x_2^2}(1-x_2^2-x_1^2)^2} .\n\\]\nBoth are greater than $0$ in the interior of the domain $0\\leq x_2\\leq \\tilde{R}$, $0\\leq x_1 \\leq\\sqrt{\\tilde{R}^2-x_2^2}$ and at least $0$ on the boundary.\n Now we use this monotonicity to prove that it is enough to consider $\\ell \\leq x_1$. Suppose $\\ell > x_1$. The segment of length $\\ell$ is divided into two parts by the $e_2$ axis and we can assume without loss of generality that the negative part is no greater than the other, i.e. $x_1 \\geq \\ell - x_1$. Otherwise, we can perform the computations after a symmetry over the $e_2$ axis. Let $\\tilde{r}$ be the point $(0, x_2)$. We want to see that the segment from $\\tilde{x}$ to $\\tilde{r}$ gives a greater value of $\\mathfrak{d}(\\cdot)$:\n \\begin{align*} \n \\begin{aligned}\n \\frac{d(x, r)}{x_1} \\geq \\frac{d(x, y)}{\\ell} & \\iff d(x, r) (x_1 + (\\ell-x_1)) \\geq x_1( d(x, r) + d(r, y) ) \\\\ &\\iff d(x,r)\/x_1 \\geq d(r, y)\/(\\ell-x_1),\n \\end{aligned}\n \\end{align*}\n and the last inequality holds true by the monotonicity we just proved.\n\n In order to prove the second step, we take the partial derivative of $\\mathfrak{d}(x_1, x_2, \\ell)$ with respect to $x_1$. 
We have\n \\[\n \\mathfrak{d}(x_1, x_2, \\ell) = \\frac{1}{2\\ell}\\log\\left(\\frac{1+\\ell\/(\\sqrt{1-x_2^2}-x_1)}{1-\\ell\/(\\sqrt{1-x_2^2}+x_1)}\\right),\n \\]\n \\[\n \\frac{\\partial \\mathfrak{d}(x_1, x_2, \\ell)}{\\partial x_1} = \\frac{\\sqrt{1-x_2^2}(2x_1-\\ell)}{2(1-x_2^2-x_1^2)(1-x_2^2-(x_1-\\ell)^2)}.\n \\]\n And the derivative is positive in the domain we are considering.\n\n We now prove step $3$. We want to show that $\\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2}, x_2, \\ell)$ decreases with $\\ell$, within our constraints $\\ell \\leq x_1 = \\sqrt{\\tilde{R}^2-x_2^2}$, $0\\leq x_2\\leq \\tilde{R}$. If we split the segment joining $\\tilde{x}$ and $\\tilde{y}$ in half with respect to the metric in $\\B$, we see that, due to the monotonicity proved in step $2$, the half that is farther from the origin is longer in $\\mathcal{M}$ than the other one, and so $\\mathfrak{d}(\\cdot)$ is greater for this half of the segment than for the original one. In symbols, let $\\tilde{r}$ be the middle point of the segment joining $\\tilde{x}$ and $\\tilde{y}$. We have by monotonicity that $\\mathfrak{d}(x_1, x_2, \\ell\/2) \\geq \\mathfrak{d}(x_1-\\ell\/2, x_2, \\ell\/2)$. So $\\mathfrak{d}(x_1, x_2, \\ell\/2) = \\frac{d(x,r)}{\\ell\/2} \\geq \\frac{d(x,r) + d(r, y)}{\\ell} = \\mathfrak{d}(x_1, x_2, \\ell)$. 
Thus,\n \\begin{align*} \n \\begin{aligned}\n \\mathfrak{d}(x_1,x_2, \\ell) &\\leq \\lim_{\\ell\\to 0} \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2, \\ell) \\\\\n &= \\lim_{\\ell\\to 0} \\frac{1}{2\\ell}\\log\\left(\\frac{1+\\ell\/\\left(\\sqrt{1-x_2^2}-\\sqrt{\\tilde{R}^2-x_2^2}\\right)}{1-\\ell\/\\left(\\sqrt{1-x_2^2}+\\sqrt{\\tilde{R}^2-x_2^2}\\right)}\\right) \\\\\n & \\circled{1}[=] \\lim_{\\ell\\to 0} \\frac{\\sqrt{1-x_2^2}}{1-\\tilde{R}^2-2\\ell\\sqrt{\\tilde{R}^2-x_2^2}+\\ell^2} \\\\\n & = \\frac{\\sqrt{1-x_2^2}}{1-\\tilde{R}^2}.\n \\end{aligned}\n \\end{align*}\n We used L'H\u00f4pital's rule for $\\circled{1}$. We can maximize the result of the limit by setting $x_2=0$ and obtain that for any two different $\\tilde{x}, \\tilde{y} \\in \\X$\n \\[\n \\frac{d(x, y)}{\\norm{\\tilde{x}-\\tilde{y}}} \\leq \\frac{1}{1-\\tilde{R}^2}= \\frac{1}{1-\\tanh^2(R)} = \\cosh^2(R).\n \\]\n\n The lower bound is similar: assume that $\\ell>x_1$ and define $\\tilde{r}$ as above. We assume again without loss of generality that $x_1\\geq \\ell-x_1$. Then\n \\[\n \\frac{d(x, r)+d(r, y)}{\\ell} \\geq \\frac{d(r, y)}{\\ell-x_1} \\iff \\frac{d(x, r)}{x_1} \\geq \\frac{d(r,y)}{\\ell-x_1}\n \\] \n and the latter is true by the monotonicity proved in step $1$. This means that we can also consider $\\ell \\leq x_1$. But this time, according to step $2$, we want $x_1$ to be the lowest possible, so it is enough to consider $x_1=\\ell$. 
Using step $1$ again, we obtain that the lowest value of $\\mathfrak{d}(\\cdot)$ can be bounded by the limit $\\lim_{\\ell\\to 0} \\mathfrak{d}(\\ell, x_2, \\ell)$, which using L'H\u00f4pital's rule in $\\circled{1}$ is\n \\begin{align*} \n \\begin{aligned}\n \\mathfrak{d}(x_1,x_2, \\ell) &\\geq \\lim_{\\ell\\to 0} \\mathfrak{d}(\\ell,x_2, \\ell) \\\\\n &= \\lim_{\\ell\\to 0} \\frac{1}{2\\ell}\\log\\left(1+\\frac{2\\ell}{\\sqrt{1-x_2^2}-\\ell}\\right) \\\\\n & \\circled{1}[=] \\lim_{\\ell\\to 0} \\frac{\\frac{2(\\sqrt{1-x_2^2}-\\ell)+2\\ell}{(\\sqrt{1-x_2^2}-\\ell)^2}}{2(1+2\\ell\/(\\sqrt{1-x_2^2}-\\ell))} \\\\\n & = \\frac{1}{\\sqrt{1-x_2^2}}.\n \\end{aligned}\n \\end{align*}\n The expression is minimized at $x_2=0$ and evaluates to $1$.\n\\end{proof}\n\nThe proof of the corresponding lemma for the sphere is analogous; we include it for completeness.\n\n\\begin{lemma} \\label{lemma:distances_spherical} \n Let $\\S=\\expon_{x_0}(\\bar{B}(0, R))$ be a subset of the sphere with constant sectional curvature $K=1$ and $R<\\frac{\\pi}{2}$. Let $x, y\\in\\S$ be two different points. Then, we have\n \\[\n \\cos^2(R) \\leq \\frac{d(x, y)}{\\norm{\\tilde{x}-\\tilde{y}}} \\leq 1.\n \\]\n\\end{lemma}\n\n\n\\begin{proof}\n We proceed in a similar way to the hyperbolic case. We can also work with $d=2$ only, since $\\tilde{x}, \\tilde{y}$ and $\\tilde{x}_0$ lie on a plane. We parametrize a general pair of points as $\\tilde{x} = (x_1, x_2) \\in \\X$ and $\\tilde{y}=(x_1-\\ell,x_2) \\in \\X$, so $x_1^2+x_2^2 \\leq \\tilde{R}^2$, for $\\tilde{R} = \\tan(R)$ and by definition $\\ell=\\norm{\\tilde{x}-\\tilde{y}}$. \n \n Let $\\mathfrak{d}(x_1,x_2,\\ell) \\defi d(x,y)\/\\norm{\\tilde{x}-\\tilde{y}}$. We proceed to prove the result in three steps, similarly to the hyperbolic case.\n\\begin{enumerate}\n \\item If $x_1=\\ell$ then $\\mathfrak{d}(x_1,x_2,\\ell)$ decreases whenever $x_1$ increases. 
This allows us to prove that it is enough to consider points in which $\\ell\\leq x_1$.\n \\item $\\frac{\\partial \\mathfrak{d}(\\cdot)}{\\partial x_1} \\leq 0$, whenever $\\ell \\leq x_1$. So we can consider $x_1=\\sqrt{\\tilde{R}^2-x_2^2}$ only.\n \\item With the constraints above, $\\mathfrak{d}(\\cdot)$ increases with $\\ell$, so in order to lower bound $\\mathfrak{d}(\\cdot)$ we can consider $\\lim_{\\ell\\to 0}\\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2, \\ell) = \\sqrt{1+x_2^2}\/(1+\\tilde{R}^2)$. This is minimized at $x_2=0$ and evaluates to $1\/(1+\\tilde{R}^2)$.\n\n\\end{enumerate}\n\nFor the first step, we compute the partial derivative:\n\\begin{equation}\\label{eq:partial_derivative_spherical_deformation}\n \\frac{\\partial \\mathfrak{d}(x_1, x_2, x_1)}{\\partial x_1} = \\frac{x_1 \\sqrt{1 + x_2^2}\/(1 + x_1^2 + x_2^2) - \\arccos\\left(\\sqrt{(1 + x_2^2)\/(1 + x_1^2 + x_2^2)}\\right)}{x_1^2}.\n\\end{equation}\nIn order to see that it is non-positive, we compute the partial derivative of the numerator with respect to $x_2$ and obtain\n\\[\n \\frac{2x_1^3 x_2}{\\sqrt{1+x_2^2}(1+x_1^2+x_2^2)^2} \\geq 0,\n\\]\nso in order to maximize \\eqref{eq:partial_derivative_spherical_deformation} we set $x_2=\\sqrt{\\tilde{R}^2-x_1^2}$. In that case, the numerator is\n\\begin{equation}\\label{eq:aux_eq_sph_def}\n\\frac{x_1 \\sqrt{1 + \\tilde{R}^2 - x_1^2}}{1 + \\tilde{R}^2} - \\arccos\\left(\\sqrt{\\frac{1 + \\tilde{R}^2 - x_1^2}{1 + \\tilde{R}^2}}\\right),\n\\end{equation}\nand its derivative with respect to $x_1$ is\n\\[\n -\\frac{2 x_1^2}{(1 + \\tilde{R}^2) \\sqrt{1 + \\tilde{R}^2 - x_1^2}} \\leq 0,\n\\]\nand given that \\eqref{eq:aux_eq_sph_def} with $x_1=0$ evaluates to $0$ we conclude that \\eqref{eq:partial_derivative_spherical_deformation} is non-positive. Similarly to \\cref{lemma:distances_hyperbolic}, suppose the horizontal segment that joins $\\tilde{x}$ and $\\tilde{y}$ passes through $\\tilde{r} \\defi (0,x_2)$, and suppose without loss of generality that $d(x,r) \\geq d(r,y)$, i.e. $x_1 \\geq \\ell-x_1$. 
Then by the monotonicity we just proved, we have \n\\begin{equation}\\label{eq:aux_mathfrakd_functions}\n \\frac{d(x,r)}{\\norm{\\tilde{x}-\\tilde{r}}} = \\mathfrak{d}(x_1, x_2, x_1) \\leq \\mathfrak{d}(\\ell - x_1, x_2, \\ell-x_1) = \\frac{d(r,y)}{\\norm{\\tilde{r}-\\tilde{y}}}.\n\\end{equation}\nAnd this implies $\\mathfrak{d}(x_1, x_2, x_1) \\leq \\mathfrak{d}(x_1, x_2, \\ell)$. Indeed, that is equivalent to showing\n\\[\n \\frac{d(x,r)}{\\norm{\\tilde{x}-\\tilde{r}}} \\leq \\frac{d(x,y)}{\\norm{\\tilde{x}-\\tilde{y}}} = \\frac{d(x,r)+d(r,y)}{\\norm{\\tilde{x}-\\tilde{r}}+\\norm{\\tilde{r}-\\tilde{y}}},\n\\]\nwhich is true, since after simplifying we arrive at \\eqref{eq:aux_mathfrakd_functions}. So in order to lower bound $\\mathfrak{d}(\\cdot)$, it is enough to consider $\\ell\\leq x_1$.\n\nWe focus on step $2$ now. We have\n\\[\n \\frac{\\partial \\mathfrak{d}(x_1,x_2,\\ell)}{\\partial x_1} = \\frac{\\sqrt{1+x_2^2}(\\ell-2 x_1)}{(1+x_2^2+(\\ell-x_1)^2)(1+x_2^2+x_1^2)},\n\\]\nwhich is non-positive given the restrictions we imposed after step $1$. So in order to lower bound $\\mathfrak{d}(\\cdot)$ we can consider $x_1=\\sqrt{\\tilde{R}^2-x_2^2}$ only.\n\nFinally, in order to complete step $3$ we compute\n\\begin{align*}\n \\begin{aligned}\n \\frac{\\partial \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2,\\ell)}{\\partial \\ell} &= \\frac{\\sqrt{1+x_2^2}}{\\ell(1+\\tilde{R}^2) + \\ell^3-2\\ell^2\\sqrt{\\tilde{R}^2-x_2^2}} \\\\\n &\\quad - \\frac{1}{\\ell^2}\\arccos\\left(\\frac{1+\\tilde{R}^2-\\ell\\sqrt{\\tilde{R}^2-x_2^2}}{\\sqrt{(1+\\tilde{R}^2)(1+\\tilde{R}^2+\\ell^2-2\\ell\\sqrt{\\tilde{R}^2-x_2^2})}}\\right).\n \\end{aligned}\n\\end{align*}\n\nIn order to prove that this is non-negative, we will show that the same expression multiplied by $\\ell^2$ is non-negative. 
We compute the partial derivative of the aforementioned expression with respect to $\\ell$:\n\\begin{align*}\n \\begin{aligned}\n \\frac{\\partial}{\\partial \\ell}\\left(\\frac{\\partial \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2,\\ell)}{\\partial \\ell} \\ell^2\\right) = \\frac{2\\ell\\sqrt{1+x_2^2}(\\sqrt{\\tilde{R}^2-x_2^2}-\\ell)}{(1+\\tilde{R}^2+\\ell^2-2\\ell\\sqrt{\\tilde{R}^2-x_2^2})^2} \\geq 0.\n \\end{aligned}\n\\end{align*}\n\nAnd $\\ell^2(\\partial \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2,\\ell)\/\\partial \\ell )$ evaluated at $0$ is $0$ for all choices of parameters $R$ and $x_2$ in the domain. So we conclude that $\\partial \\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2,\\ell)\/\\partial \\ell \\geq 0$.\n\nThus, we can consider the limit when $\\ell\\to 0$ in order to lower bound $\\mathfrak{d}(\\cdot)$. In the defined domain, we have \n\\begin{align*}\n \\begin{aligned}\n \\lim_{\\ell\\to 0}\\mathfrak{d}(\\sqrt{\\tilde{R}^2-x_2^2},x_2, \\ell) &= \\lim_{\\ell\\to 0}\\frac{1}{\\ell}\\arccos\\left(\\frac{1+\\tilde{R}^2-\\ell\\sqrt{\\tilde{R}^2-x_2^2}}{\\sqrt{1+\\tilde{R}^2}\\sqrt{1+x_2^2+(\\ell-\\sqrt{\\tilde{R}^2-x_2^2})^2}}\\right) \\\\\n &\\circled{1}[=] \\lim_{\\ell\\to 0} \\frac{\\sqrt{1+x_2^2}}{1+\\tilde{R}^2+\\ell^2-2\\ell\\sqrt{\\tilde{R}^2-x_2^2}}\\\\\n &=\\frac{\\sqrt{1+x_2^2}}{1+\\tilde{R}^2}.\n \\end{aligned}\n\\end{align*}\nWe used L'H\u00f4pital's rule for $\\circled{1}$. Now, the right hand side of the previous expression is minimized at $x_2=0$, so we conclude that\n\\[\n \\cos^2(R) = \\frac{1}{1+\\tan^2(R)} = \\frac{1}{1+\\tilde{R}^2} \\leq \\mathfrak{d}(x_1, x_2, \\ell) = \\frac{d(x,y)}{\\norm{\\tilde{x}-\\tilde{y}}}.\n\\]\n\n The upper bound uses again a similar argument. Assume that $\\ell>x_1$ and define $\\tilde{r}$ as above. We assume again without loss of generality that $x_1\\geq \\ell-x_1$. 
Then\n \\[\n \\frac{d(x, r)+d(r, y)}{\\ell} \\leq \\frac{d(r, y)}{\\ell-x_1} \\iff \\frac{d(x, r)}{x_1} \\leq \\frac{d(r,y)}{\\ell-x_1}\n \\] \n and the latter is true by the monotonicity proved in step $1$. Consequently we can just consider the points that satisfy $\\ell \\leq x_1$. By step $2$, $\\mathfrak{d}(\\cdot)$ is maximal whenever $x_1$ is the lowest possible, so it is enough to consider $x_1=\\ell$. Using step $1$ again, we obtain that the greatest value of $\\mathfrak{d}(\\cdot)$ can be bounded by the limit $\\lim_{\\ell\\to 0} \\mathfrak{d}(\\ell, x_2, \\ell)$, which using L'H\u00f4pital's rule in $\\circled{1}$ and simplifying is\n \\begin{align*} \n \\begin{aligned}\n \\mathfrak{d}(x_1,x_2, \\ell) &\\leq \\lim_{\\ell\\to 0} \\mathfrak{d}(\\ell,x_2, \\ell) \\\\\n &= \\lim_{\\ell\\to 0} \\frac{1}{\\ell}\\arccos\\left(\\sqrt{\\frac{1+x_2^2}{1+\\ell^2+x_2^2}}\\right) \\\\\n & \\circled{1}[=] \\frac{1}{\\sqrt{1+x_2^2}}.\n \\end{aligned}\n \\end{align*}\n The expression is maximized at $x_2=0$ and evaluates to $1$.\n\\end{proof}\n\n\\subsection{Angle deformation}\\label{app:sec_angle_deformation}\n\\begin{lemma}\\label{lemma:angle_deformation}\n Let $\\M=\\H$ or $\\M=\\S$ and $K\\in \\{1, -1\\}$. Let $x,y\\in\\M$ be two different points and different from $x_0$. Let $\\tilde{\\alpha}$ be the angle $\\angle \\tilde{x}_0\\tilde{x}\\tilde{y}$, formed by the vectors $\\tilde{x}_0-\\tilde{x}$ and $\\tilde{y}-\\tilde{x}$. Let $\\alpha$ be the corresponding angle between the vectors $\\expon_x^{-1}(x_0)$ and $\\expon_x^{-1}(y)$. 
The following holds:\n\\begin{align*}\n\\begin{aligned}\n\\sin(\\alpha) = \\sin(\\tilde{\\alpha}) \\sqrt{\\frac{1+K\\norm{\\tilde{x}}^2}{1+K\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}, \\quad \\quad \\cos(\\alpha) = \\cos(\\tilde{\\alpha}) \\sqrt{\\frac{1}{1+K\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}.\n\\end{aligned}\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\n Note that we can restrict ourselves to $\\alpha \\in [0, \\pi]$ because we have $\\widetilde{(-w)} = -\\tilde{w}$ (recall our \\hyperlink{sec:notation}{notation} about vectors with tilde). This means that the result for the range $\\alpha\\in[-\\pi, 0]$ can be deduced from the result for $-\\alpha$.\n\n We start with the case $K=-1$. We can assume without loss of generality that the dimension is $d=2$, that the coordinates of $\\tilde{x}$ are $(0,x_2)$ for $x_2\\leq \\tanh(R)$, that $\\tilde{y}=(y_1, y_2)$ for some $y_1\\leq 0$, and that $\\tilde{\\delta} \\defi \\angle \\tilde{y}\\tilde{x}_0\\tilde{x} \\in [0, \\pi\/2]$, since we can make the distance $\\norm{\\tilde{x}-\\tilde{y}}$ as small as we want. Recall $\\tilde{x}_0=\\mathbf{0}$. We recall that $d(x, x_0) = \\arctanh(\\norm{\\tilde{x}})$ and we note that $\\sinh(\\arctanh(t)) = \\frac{t}{\\sqrt{1-t^2}}$, so that $\\sinh(d(x, x_0)) = \\norm{\\tilde{x}}\/\\sqrt{1-\\norm{\\tilde{x}}^2}$, for any $\\tilde{x}\\in\\B$. We will apply the hyperbolic and Euclidean laws of sines (\\cref{thm:law_of_sines}) in order to compute the value of $\\sin(\\alpha)$ with respect to $\\tilde{\\alpha}$. Let $\\tilde{a}$ and $\\tilde{b}$ be points on the boundary of $\\B$ such that the segment joining $\\tilde{a}$ and $\\tilde{b}$ is a chord that contains $\\tilde{x}$ and $\\tilde{y}$ and $\\norm{\\tilde{a}-\\tilde{x}}\\leq\\norm{\\tilde{b}-\\tilde{y}}$. 
So $\\norm{\\tilde{a}-\\tilde{x}}$ and $\\norm{\\tilde{b}-\\tilde{x}}$ are $\\sqrt{1-\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}-\\norm{\\tilde{x}}\\cos(\\tilde{\\alpha})$ and $\\sqrt{1-\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}+\\norm{\\tilde{x}}\\cos(\\tilde{\\alpha})$, respectively. We have\n\\begingroup\n\\allowdisplaybreaks\n \\begin{align*}\n \\begin{aligned}\n \\sin(\\alpha) &\\circled{1}[=] \\frac{\\sinh(d(x_0, y))\\sin(\\tilde{\\delta})}{\\sinh(d(x, y))} \\\\\n &\\circled{2}[=] \\frac{\\norm{\\tilde{x}_0-\\tilde{y}}}{\\sqrt{1-\\norm{\\tilde{x}_0-\\tilde{y}}^2}} \\cdot\\frac{\\norm{\\tilde{x}-\\tilde{y}} \\sin(\\tilde{\\alpha})}{\\norm{\\tilde{x}_0-\\tilde{y}}}\\cdot \\frac{1}{ \\sinh(d(x, y))}\\\\\n &\\circled{3}[=] \\frac{\\sin(\\tilde{\\alpha})}{\\sqrt{1-\\norm{\\tilde{x}}^2+\\norm{\\tilde{x}-\\tilde{y}}(2\\norm{\\tilde{x}}\\cos(\\tilde{\\alpha})-\\norm{\\tilde{x}-\\tilde{y}})}} \\cdot\\frac{\\norm{\\tilde{x}-\\tilde{y}}}{\\sinh(d(x, y))} \\\\\n &\\circled{4}[=] \\frac{\\sin(\\tilde{\\alpha})}{\\sqrt{1-\\norm{\\tilde{x}}^2}} \\lim_{d(x, y)\\to 0} \\norm{\\tilde{x}-\\tilde{y}}\\frac{1}{\\sinh(d(x,y))} \\\\\n &\\circled{5}[=] \\frac{\\sin(\\tilde{\\alpha})}{\\sqrt{1-\\norm{\\tilde{x}}^2}} \\lim_{d(x, y)\\to 0} \\frac{(e^{2d(x, y)}-1)(\\norm{\\tilde{a}-\\tilde{x}}\\cdot \\norm{\\tilde{b}-\\tilde{x}})}{e^{2d(x, y)}\\norm{\\tilde{a}-\\tilde{x}} + \\norm{\\tilde{b}-\\tilde{x}}}\\cdot \\frac{2e^{d(x, y)}}{e^{2d(x, y)}-1}\\\\\n &=\\frac{\\sin(\\tilde{\\alpha})}{\\sqrt{1-\\norm{\\tilde{x}}^2}}\\cdot\\frac{2 \\norm{\\tilde{a}-\\tilde{x}} \\cdot \\norm{\\tilde{b}-\\tilde{x}}}{\\norm{\\tilde{a}-\\tilde{x}} + \\norm{\\tilde{b}-\\tilde{x}}} \\\\\n &\\circled{6}[=]\\frac{\\sin(\\tilde{\\alpha})}{\\sqrt{1-\\norm{\\tilde{x}}^2}}\\cdot \\frac{2(1-\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})-\\norm{\\tilde{x}}^2\\cos^2(\\tilde{\\alpha}))}{2\\sqrt{1-\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}} \\\\\n &= 
\\sin(\\tilde{\\alpha})\\sqrt{\\frac{1-\\norm{\\tilde{x}}^2}{1-\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}.\n \\end{aligned}\n\\end{align*}\n\\endgroup\n In $\\circled{1}$ we used the hyperbolic sine theorem. In $\\circled{2}$ we used the expression above regarding segments that pass through the origin, and the Euclidean sine theorem. In $\\circled{3}$, we simplify and use that the coordinates of $\\tilde{y}$ are $(-\\sin(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}}, \\norm{\\tilde{x}}-\\cos(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}})$. Then, in $\\circled{4}$, since $\\sin(\\alpha)$ does not depend on $\\norm{\\tilde{x}-\\tilde{y}}$, we can take the limit when $d(x, y) \\to 0$, by which we mean we take the limit $\\tilde{y}\\to\\tilde{x}$ while keeping the angle $\\tilde{\\alpha}$ constant. Since a posteriori the limit of each fraction exists, we compute them one at a time. $\\circled{5}$ uses \\eqref{eq:distances_hyper_alt} and the definition of $\\sinh(d(x, y))$. In $\\circled{6}$ we substitute $\\norm{\\tilde{a}-\\tilde{x}}$ and $\\norm{\\tilde{b}-\\tilde{x}}$ by their values.\n\n The spherical case is similar to the hyperbolic case. We also assume without loss of generality that the dimension is $d=2$. Define $\\tilde{y}$ as a point such that $\\angle \\tilde{x}_0\\tilde{x}\\tilde{y} = \\tilde{\\alpha} $. We can assume without loss of generality that the coordinates of $\\tilde{x}$ are $(0,x_2)$, that $\\tilde{y}=(y_1, y_2)$, for $y_1\\leq 0$, and $\\tilde{\\delta} \\defi \\angle \\tilde{y}\\tilde{x}_0\\tilde{x} \\in [0, \\pi\/2]$, since we can make the distance $\\norm{\\tilde{x}-\\tilde{y}}$ as small as we want. We recall that by \\eqref{eq:R_tilde_vs_R} we have $d(x_0,x) = \\arctan(\\norm{\\tilde{x}_0-\\tilde{x}})$ and we note that $\\sin(\\arctan(t)) = \\frac{t}{\\sqrt{1+t^2}}$, so that $\\sin(d(x_0, x)) = \\norm{\\tilde{x}_0-\\tilde{x}}\/\\sqrt{1+\\norm{\\tilde{x}_0-\\tilde{x}}^2}$, for any $\\tilde{x}\\in\\B$. 
We will apply the spherical and Euclidean laws of sines (\\cref{thm:law_of_sines}) in order to compute the value of $\\sin(\\alpha)$ with respect to $\\tilde{\\alpha}$. We have\n\\begin{align*}\n \\begin{aligned}\n \\sin(\\alpha) &\\circled{1}[=] \\frac{\\sin(d(x_0, y))\\sin(\\tilde{\\delta})}{\\sin(d(x, y))} \\\\\n &\\circled{2}[=] \\frac{\\norm{\\tilde{x}_0-\\tilde{y}}}{\\sqrt{1+\\norm{\\tilde{x}_0-\\tilde{y}}^2}} \\cdot\\frac{\\norm{\\tilde{x}-\\tilde{y}} \\sin(\\tilde{\\alpha})}{\\norm{\\tilde{x}_0-\\tilde{y}}} \\frac{1}{ \\sin(d(x, y))}\\\\\n &\\circled{3}[=] \\frac{\\sin(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}}}{\\sqrt{1+\\norm{\\tilde{x}_0-\\tilde{y}}^2}\\sqrt{1-\\frac{(1-\\norm{\\tilde{x}}\\cos(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}}+\\norm{\\tilde{x}}^2)^2}{(1+\\norm{\\tilde{x}}^2)(1+\\norm{\\tilde{x}_0-\\tilde{y}}^2)}}}\\\\\n &\\circled{4}[=] \\frac{\\sin(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}}}{\\sqrt{\\norm{\\tilde{x}-\\tilde{y}}^2(1+\\norm{\\tilde{x}}^2-\\norm{\\tilde{x}}^2\\cos^2(\\tilde{\\alpha}))\/(1+\\norm{\\tilde{x}}^2)}} \\\\\n &\\circled{5}[=] \\sin(\\tilde{\\alpha})\\sqrt{\\frac{1+\\norm{\\tilde{x}}^2}{1+\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}.\n \\end{aligned}\n\\end{align*}\n\nIn $\\circled{1}$ we used the spherical sine theorem. In $\\circled{2}$ we used the expression above regarding segments that pass through the origin, and the Euclidean sine theorem. In $\\circled{3}$, we use the fact that the coordinates of $\\tilde{y}$ are $(-\\sin(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}}, \\norm{\\tilde{x}}-\\cos(\\tilde{\\alpha})\\norm{\\tilde{x}-\\tilde{y}})$, the distance formula \\eqref{eq:distances} and the trigonometric identity $\\sin(\\arccos(x)) = \\sqrt{1-x^2}$. Then, in $\\circled{4}$ and $\\circled{5}$, we multiply and simplify. 
\n\n Finally, in both cases, the cosine formula is derived from the identity $\\sin^2(\\alpha)+\\cos^2(\\alpha)=1$ after noticing that the sign of $\\cos(\\alpha)$ and the sign of $\\cos(\\tilde{\\alpha})$ are the same. The latter fact can be seen to hold true by noticing that $\\alpha$ is monotone with respect to $\\tilde{\\alpha}$ and that $\\tilde{\\alpha}=\\pi\/2$ implies $\\cos(\\alpha)=0$.\n\\end{proof}\n\n\\begin{fact}[Constant Curvature non-Euclidean Law of Sines]\\label{thm:law_of_sines}\n Let $S_K(\\cdot)$ denote the special sine, defined as $S_K(t) = \\sin(\\sqrt{K}t)$ if $K>0$, $S_K(t) = \\sinh(\\sqrt{-K}t)$ if $K<0$ and $S_K(t)=t$ if $K=0$. Let $a, b, c$ be the lengths of the sides of a geodesic triangle defined in a manifold of constant sectional curvature. Let $\\alpha, \\beta, \\gamma$ be the angles of the geodesic triangle that are opposite to the sides $a, b, c$. The following holds:\n \\[\n \\frac{\\sin(\\alpha)}{S_K(a)} = \\frac{\\sin(\\beta)}{S_K(b)} = \\frac{\\sin(\\gamma)}{S_K(c)}.\n \\] \n\\end{fact}\n We refer to \\cite{greenberg1993euclidean} for a proof of this classical theorem.\n\n\\subsection{Gradient deformation and smoothness of $f$} \\label{sec:app_gradient_deformation_and_smoothness}\n\n\\cref{lemma:angle_deformation}, with $\\tilde{\\alpha} = \\pi\/2$, shows that $e_1 \\perp e_j$, for $j\\neq 1$. The rotational symmetry implies $e_i\\perp e_j$ for $i\\neq j$ and $i,j>1$. As in \\cref{lemma:deformations}, let $x\\in\\M$ be a point and assume without loss of generality that $\\tilde{x}\\in\\operatorname{span}\\{\\tilde{e}_1\\}$ and $\\nabla f(\\tilde{x})\\in \\operatorname{span}\\{\\tilde{e}_1, \\tilde{e}_2\\}$. This can be assumed without loss of generality because of the symmetries, so we can assume the dimension is $d=2$. Using \\cref{lemma:deformations} we obtain that $\\tilde{\\alpha}=0$ implies $\\alpha=0$. 
Also $\\tilde{\\alpha} = \\pi\/2$ implies $\\alpha=\\pi\/2$, so the adjoint of the differential of $h^{-1}$ at $x$, $(\\mathrm{d} h^{-1})^\\ast_x$, diagonalizes and has $e_1$ and $e_2$ as eigenvectors. We only need to compute the eigenvalues. The computation of the first one uses that the geodesic passing through $x_0$ and $x$ can be parametrized as $\\lambda\\mapsto h^{-1}(\\tilde{x}_0+\\tan(\\lambda)\\tilde{e}_1)$ if $K=1$ and $\\lambda\\mapsto h^{-1}(\\tilde{x}_0+\\tanh(\\lambda)\\tilde{e}_1)$ if $K=-1$, by \\eqref{eq:appendix_characterization_of_geodesic_map_and_metric}. The derivative of $\\arctan(\\cdot)$ or $\\arctanh(\\cdot)$ reveals that the first eigenvalue, the one corresponding to $e_1$, is $1\/(1+K\\norm{\\tilde{x}}^2)$, i.e. $\\nabla f(\\tilde{x})_1 = \\nabla F(x)_1 \/ (1+K\\norm{\\tilde{x}}^2)$. For the second one, let $\\tilde{x}=(x_1, 0)$ and $\\tilde{y}=(y_1, y_2)$ with $y_1=x_1$. The second eigenvalue results from the following computation, for $K=-1$: \n\\begin{align*} \n \\begin{aligned}\n \\lim_{y_2 \\to 0} \\frac{d(x,y)}{y_2} &=\\lim_{y_2 \\to 0} \\frac{1}{2y_2} \\log\\left(1+\\frac{2y_2}{\\sqrt{1-x_1^2}-y_2}\\right) \\\\ \n &\\circled{1}[=] \\lim_{y_2 \\to 0} \\frac{\\frac{2}{\\sqrt{1-x_1^2}-y_2} + \\frac{2y_2}{(\\sqrt{1-x_1^2}-y_2)^2}}{2+\\frac{4y_2}{\\sqrt{1-x_1^2}-y_2}} \\\\\n &= \\frac{1}{\\sqrt{1-x_1^2}},\n \\end{aligned}\n\\end{align*}\nand for $K=1$: \n\\begin{align*} \n \\begin{aligned}\n \\lim_{y_2 \\to 0} \\frac{d(x,y)}{y_2} &=\\lim_{y_2 \\to 0} \\frac{1}{y_2} \\arccos\\left(\\frac{\\sqrt{1+x_1^2}}{\\sqrt{1 + x_1^2 + y_2^2}}\\right) \\\\ \n &\\circled{2}[=] \\lim_{y_2 \\to 0} \\frac{\\sqrt{1 + x_1^2}}{1 + x_1^2 + y_2^2}\\\\\n &= \\frac{1}{\\sqrt{1+x_1^2}}.\n \\end{aligned}\n\\end{align*}\nSo, since $x_1=\\norm{\\tilde{x}}$, we have $\\nabla f(\\tilde{x})_2 = \\nabla F(x)_2\/\\sqrt{1+K\\norm{\\tilde{x}}^2}$ for $K\\in\\{1, -1\\}$. We used L'H\u00f4pital's rule in $\\circled{1}$ and $\\circled{2}$. 
\n\nAlso note that if $v \\in T_x\\M$ is a vector normal to $\\nabla F(x)$, then $\\tilde{v}$ is normal to $\\nabla f(\\tilde{x})$. It is easy to see this geometrically: indeed, no matter how $h$ changes the geometry, since it is a geodesic map, a geodesic in a direction of first-order constant increase of $F$ is mapped via $h$ to a geodesic in a direction of first-order constant increase of $f$. And the respective gradients must be perpendicular to all these directions. Alternatively, this can be seen algebraically. Suppose first $d=2$, then $v$ is proportional to $(\\nabla F(x)_2, -\\nabla F(x)_1) = (\\sqrt{1+K\\norm{\\tilde{x}}^2} \\nabla f(\\tilde{x})_2, -(1+K\\norm{\\tilde{x}}^2)\\nabla f(\\tilde{x})_1)$. And a vector $\\tilde{v}'$ normal to $\\nabla f(\\tilde{x})$ must be proportional to $(-\\nabla f(\\tilde{x})_2, \\nabla f(\\tilde{x})_1)$. Let $\\alpha$ be the angle formed by $v$ and $-e_1$, $\\tilde{\\alpha}$ the corresponding angle formed between $\\tilde{v}$ and $-\\tilde{e}_1$, and $\\tilde{\\alpha}'$ the angle formed by $\\tilde{v}'$ and $-\\tilde{e}_1$. Then we have, using the expression for the vectors proportional to $v$ and $\\tilde{v}'$:\n\\[\n \\sin(\\alpha) = \\frac{-\\nabla f(\\tilde{x})_2}{\\sqrt{\\nabla f(\\tilde{x})_2^2+(1+K\\norm{\\tilde{x}}^2)\\nabla f(\\tilde{x})_1^2}} \\text{ and } \\sin(\\tilde{\\alpha}') = \\frac{-\\nabla f(\\tilde{x})_2}{\\sqrt{\\nabla f(\\tilde{x})_2^2+\\nabla f(\\tilde{x})_1^2}} \n\\] \nand an easy computation yields $\\sin(\\alpha) = \\sin(\\tilde{\\alpha}')\\sqrt{(1+K\\norm{\\tilde{x}}^2)\/(1+K\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha}'))}$. After applying \\cref{lemma:angle_deformation} we obtain $\\sin(\\tilde{\\alpha}') = \\sin(\\tilde{\\alpha})$, from which we conclude that $\\tilde{\\alpha}' = \\tilde{\\alpha}$ given that the angles are in the same quadrant. So $\\tilde{v} \\perp \\nabla f(\\tilde{x})$. In order to prove this for $d\\geq 3$ one can apply the reduction \\eqref{claim:same_direction_when_projecting} to the case $d=2$ that we obtain in the next section. 
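The two eigenvalues computed above can also be checked numerically in the spherical case $K=1$ by finite differences on the distance formula: moving radially in $\B$ away from $\tilde{x}_0$ changes the distance to $x_0$ at rate $1\/(1+\norm{\tilde{x}}^2)$, while moving orthogonally to $\tilde{x}$ changes distances at rate $1\/\sqrt{1+\norm{\tilde{x}}^2}$. Below is a minimal sketch of this check, assuming NumPy; the helper names are ours, not part of the paper.

```python
import numpy as np

def sph_dist(tx, ty):
    # Distance on the unit sphere (K = 1) between the preimages of the
    # points tx, ty under the gnomonic geodesic map, via cos(d) = C_K(d).
    c = (1 + tx @ ty) / np.sqrt((1 + tx @ tx) * (1 + ty @ ty))
    return np.arccos(np.clip(c, -1.0, 1.0))

def eigenvalue_errors(x1=0.7, h=1e-4):
    # Finite-difference approximations of the two directional derivatives
    # at tx = (x1, 0), compared against 1/(1 + x1^2) and 1/sqrt(1 + x1^2).
    tx, origin = np.array([x1, 0.0]), np.zeros(2)
    radial = (sph_dist(origin, np.array([x1 + h, 0.0]))
              - sph_dist(origin, tx)) / h
    transverse = sph_dist(tx, np.array([x1, h])) / h
    return (abs(radial - 1 / (1 + x1 ** 2)),
            abs(transverse - 1 / np.sqrt(1 + x1 ** 2)))
```

Both discrepancies vanish as the step size $h$ goes to $0$, consistent with the limits computed above.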
\n\nCombining the results obtained so far in \\cref{app:geometric_results}, we can prove \\cref{lemma:deformations}. We continue by proving \\cref{lemma:smoothness_of_transformed_function}, which will generalize the computations we just performed, in order to analyze the Hessian of $f$ and provide smoothness. Then, in the next section, we combine the results in \\cref{lemma:deformations} to prove \\cref{prop:bounding_hyperplane}.\n\n\\begin{proof}[Proof of \\cref{lemma:deformations}]\n The lemma follows from Lemmas \\ref{lemma:distances_hyperbolic}, \\ref{lemma:distances_spherical}, \\ref{lemma:angle_deformation} and the previous reasoning in this Section \\ref{sec:app_gradient_deformation_and_smoothness}.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of \\cref{lemma:smoothness_of_transformed_function}]\n We will compute the Hessian of $f=F\\circ h^{-1}$ and we will bound its spectral norm for any point $\\tilde{x}\\in\\B$. We can assume without loss of generality that $d=2$ and $\\tilde{x} = (\\tilde{\\ell}, 0)$, for $\\tilde{\\ell} > 0$ (the case $\\tilde{\\ell}=0$ is trivial), since there is a rotational symmetry with $e_1$ as axis. This means that by rotating we could align the top eigenvector of the Hessian at a point so that it is in $\\operatorname{span}\\{e_1, e_2\\}$. Let $\\tilde{y} = (y_1, y_2) \\in \\B$ be another point, with $y_1=\\tilde{\\ell}$. We can also assume that $y_2>0$ without loss of generality, because of our symmetry. Our approach will be the following. We know by \\cref{lemma:angle_deformation} and by the beginning of this section \\ref{sec:app_gradient_deformation_and_smoothness} that the adjoint of the differential of $h^{-1}$ at $y$, $(\\mathrm{d} h^{-1})^\\ast_y$ has $\\expon_y^{-1}(x_0)$ and a normal vector to it as eigenvectors. Their corresponding eigenvalues are $1\/(1+K\\norm{\\tilde{y}}^2)$ and $1\/\\sqrt{1+K\\norm{\\tilde{y}}^2}$, respectively. 
Consider the basis $\\{e_1, e_2\\}$ of $T_x\\M$ as defined at the beginning of this section, i.e. where $e_1$ is a unit vector proportional to $-\\expon_x^{-1}(x_0)$ and $e_2$ is the normal vector to $e_1$ that makes the basis orthonormal. Consider this basis being transported to $y$ using parallel transport and denote the result $\\{v_y, v_y^\\top\\}$. Assume we have the gradient $\\nabla F(y)$ written in this basis. Then we can compute the gradient of $f$ at $y$ by applying $(\\mathrm{d}h^{-1})^\\ast_y$. In order to do that, we compose the change of basis from $\\{v_y, v_y^\\top\\}$ to the basis of eigenvectors of $(\\mathrm{d} h^{-1})^\\ast_y$, then we apply a diagonal transformation given by the eigenvalues and finally we change the basis to $\\{\\tilde{e}_1, \\tilde{e}_2\\}$. Once this is done, we can differentiate with respect to $y_2$ in order to compute a column of the Hessian. Let $\\tilde{\\alpha}$ be the angle formed by the vectors $\\tilde{y}$ and $\\tilde{x}$. Note that $\\tilde{\\alpha} = \\arctan(y_2\/y_1)$. Let $\\tilde{\\gamma}$ be the angle formed by the vectors $(\\tilde{y}-\\tilde{x})$ and $-\\tilde{y}$, that is, the angle $\\tilde{\\gamma}=\\pi-\\angle \\tilde{x} \\tilde{y} \\tilde{x}_0$. Since $v_y^\\top$ is the parallel transport of $e_2$, the angle between $v_y^\\top$ and the vector $\\expon_y^{-1}(x_0)$ is $\\gamma$. Note that we use the same convention as before for the angles, i.e. $\\gamma$ is the corresponding angle to $\\tilde{\\gamma}$, meaning that if $\\gamma$ is the angle between two intersecting geodesics in $\\M$, then $\\tilde{\\gamma}$ is the angle between the respective corresponding geodesics in $\\B$. Note that the first change of basis is a rotation and that the angle of rotation is $\\gamma-\\pi\/2$. The last change of basis is a rotation with angle equal to the angle formed by a vector $\\tilde{v}$ normal to $-\\tilde{y}$ ($\\tilde{v}$ is the one such that $-\\tilde{y}\\times \\tilde{v} >0$) and the vector $\\tilde{e}_2$. 
It is easy to see that this angle is equal to $\\tilde{\\alpha}$. So we have\n\\begin{equation}\\label{eq:equality_gradients}\n \\nabla f(y) = \n\\begin{pmatrix}\n\\cos(\\tilde{\\alpha}) & -\\sin(\\tilde{\\alpha}) \\\\ \\sin(\\tilde{\\alpha}) & \\cos(\\tilde{\\alpha})\n\\end{pmatrix}\n\\begin{pmatrix}\n \\frac{1}{1+K(y_1^2+y_2^2)} & 0 \\\\ 0 & \\frac{1}{\\sqrt{1+K(y_1^2+y_2^2)}}\n\\end{pmatrix}\n\\begin{pmatrix}\n \\sin(\\gamma) & -\\cos(\\gamma) \\\\ \\cos(\\gamma) & \\sin(\\gamma)\n\\end{pmatrix}\n\\nabla F(y).\n\\end{equation}\n\nWe want to take the derivative of this expression with respect to $y_2$ and we want to evaluate it at $y_2=0$. Let the matrices above be $A$, $B$ and $C$ so that $\\nabla f(y) = ABC\\nabla F(y)$. Using \\cref{lemma:angle_deformation} we have\n\\begin{align} \\label{eq:aux_comp_angle_gamma}\n \\begin{aligned}\n \\sin(\\gamma) &= \\sin(\\tilde{\\gamma})\\sqrt{\\frac{1+K(y_1^2+y_2^2)}{1+K(y_1^2+y_2^2)\\sin^2(\\tilde{\\gamma})}} \\circled{1}[=] \\cos(\\tilde{\\alpha})\\sqrt{\\frac{1+K(y_1^2+y_2^2)}{1+K(y_1^2+y_2^2)\\cos^2(\\tilde{\\alpha})}}, \\\\\n \\cos(\\gamma) &= -\\sin(\\tilde{\\alpha})\\sqrt{\\frac{1}{1+K(y_1^2+y_2^2)\\cos^2(\\tilde{\\alpha})}},\n \\end{aligned}\n\\end{align}\nwhere $\\circled{1}$ follows from $\\sin(\\tilde{\\gamma}) = \\sin(\\tilde{\\alpha}+\\pi\/2) = \\cos(\\tilde{\\alpha})$. 
Now we can easily compute some quantities:\n\\[\n \\left.A\\right|_{y_2=0} = I, \\left.B\\right|_{y_2=0} = \n\\begin{pmatrix}\n \\frac{1}{1+Ky_1^2} & 0 \\\\ 0 & \\frac{1}{\\sqrt{1+Ky_1^2}}\n\\end{pmatrix},\n\\left.C\\right|_{y_2=0} = I,\n\\] \n\\[ \n \\left.\\frac{\\partial A}{\\partial y_2}\\right|_{y_2=0} = \\left.\\frac{\\partial \\tilde{\\alpha}}{\\partial y_2}\\right|_{y_2=0} \\cdot\n\\begin{pmatrix}\n 0 & -1 \\\\ 1 & 0\n\\end{pmatrix}\n\\circled{1}[=]\n\\begin{pmatrix}\n 0 & \\frac{-1}{y_1} \\\\ \\frac{1}{y_1} & 0\n\\end{pmatrix}\n,\n\\]\n\\[\n \\left.\\frac{\\partial B}{\\partial y_2}\\right|_{y_2=0} =\n\\left.\n\\begin{pmatrix}\n \\frac{-2Ky_2}{(1+K(y_1^2+y_2^2))^2} & 0 \\\\ 0 & \\frac{-Ky_2}{(1+K(y_1^2+y_2^2))^{3\/2}} \n\\end{pmatrix}\n\\right|_{y_2=0} =\n\\begin{pmatrix}\n 0 & 0 \\\\ 0 & 0\n\\end{pmatrix}\n,\n\\]\n\\[\n \\left.\\frac{\\partial C}{\\partial y_2}\\right|_{y_2=0} \\circled{2}[=]\n\\begin{pmatrix}\n 0 & \\frac{1}{y_1\\sqrt{1+Ky_1^2}} \\\\ \\frac{-1}{y_1\\sqrt{1+Ky_1^2}} & 0\n\\end{pmatrix}\n.\n\\]\nEqualities $\\circled{1}$ and $\\circled{2}$ follow after using \\eqref{eq:aux_comp_angle_gamma}, $\\tilde{\\alpha} = \\arctan(\\frac{y_2}{y_1})$ and taking derivatives. Now we differentiate \\eqref{eq:equality_gradients} with respect to $y_2$ and evaluate at $y_2=0$ using the chain rule. 
The result is \n\\begin{align*}\n \\begin{aligned}\n\\begin{pmatrix}\n \\nabla^2 f(\\tilde{x})_{12} \\\\ \\nabla^2 f(\\tilde{x})_{22} \n\\end{pmatrix}\n &= \\left.\\left(\\frac{\\partial A}{\\partial y_2} BC\\nabla F(x) + A\\frac{\\partial B}{\\partial y_2} C\\nabla F(x) +AB\\frac{\\partial C}{\\partial y_2} \\nabla F(x) + ABC\\frac{\\partial \\nabla F(x)}{\\partial y_2} \\right)\\right|_{y_2=0} \\\\\n &=\n\\begin{pmatrix}\n \\frac{-\\nabla F(x)_{2}}{y_1\\sqrt{1+Ky_1^2}} \\\\ \\frac{\\nabla F(x)_{1}}{y_1(1+Ky_1^2)}\n\\end{pmatrix}\n+\n\\begin{pmatrix}\n 0 \\\\ 0\n\\end{pmatrix}\n+\n\\begin{pmatrix}\n \\frac{\\nabla F(x)_{2}}{y_1(1+Ky_1^2)^{3\/2}} \\\\ \\frac{-\\nabla F(x)_{1}}{y_1(1+Ky_1^2)} \n\\end{pmatrix}\n+\n\\begin{pmatrix}\n \\frac{\\nabla^2 F(x)_{12}}{(1+Ky_1^2)^{3\/2}} \\\\ \\frac{\\nabla^2 F(x)_{22}}{1+Ky_1^2} \n\\end{pmatrix}\n.\n \\end{aligned}\n\\end{align*}\nComputing the other column of the Hessian is easier. We can just consider \\eqref{eq:equality_gradients} with $\\tilde{\\alpha} = 0$ and $\\gamma=\\pi\/2$ and vary $y_1$. Taking derivatives with respect to $y_1$ gives\n\\[\n\\begin{pmatrix}\n \\nabla^2 f(\\tilde{x})_{11} \\\\ \\nabla^2 f(\\tilde{x})_{21}\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n \\frac{-2Ky_1\\nabla F(x)_{1}}{(1+Ky_1^2)^{2}} \\\\ \\frac{-Ky_1\\nabla F(x)_{2}}{(1+Ky_1^2)^{3\/2}} \n\\end{pmatrix}\n+\n\\begin{pmatrix}\n \\frac{\\nabla^2 F(x)_{11}}{(1+Ky_1^2)^{2}} \\\\ \\frac{\\nabla^2 F(x)_{21}}{(1+Ky_1^2)^{3\/2}} \n\\end{pmatrix}\n.\n\\]\nNote that in the computations of both columns of the Hessian we have used \n\\[\n \\frac{\\partial{\\nabla F(y)_i}}{\\partial y_1} = \\nabla^2 F(x)_{i1} \\cdot \\frac{1}{1+Ky_1^2} \\quad \\text{ and } \\quad\\left.\\frac{\\partial{\\nabla F(y)_i}}{\\partial y_2}\\right|_{y_2=0} = \\nabla^2 F(x)_{i2} \\cdot \\frac{1}{\\sqrt{1+Ky_1^2}},\n\\]\nfor $i=1,2$. 
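The derivative computations above can be cross-checked by finite differences; the short Python snippet below is such a sanity check of ours (not part of the proof), with arbitrary values for $K$, $y_1$ and the step size:

```python
import math

# Finite-difference check (ours) of dA/dy2, dB/dy2 and dC/dy2 at y2 = 0.
K, y1, eps = -1.0, 0.5, 1e-6

def alpha(y2):               # alpha~ = arctan(y2 / y1)
    return math.atan2(y2, y1)

def cos_gamma(y2):           # cos(gamma) from (eq:aux_comp_angle_gamma)
    r2 = y1**2 + y2**2
    a = alpha(y2)
    return -math.sin(a) / math.sqrt(1 + K * r2 * math.cos(a)**2)

# entry (2,1) of dA/dy2 at 0 should be 1/y1
da = (math.sin(alpha(eps)) - math.sin(alpha(0.0))) / eps
assert abs(da - 1 / y1) < 1e-4

# dB/dy2 vanishes at y2 = 0 (both diagonal entries are even in y2)
db = (1 / (1 + K * (y1**2 + eps**2)) - 1 / (1 + K * y1**2)) / eps
assert abs(db) < 1e-4

# entry (2,1) of dC/dy2 at 0 should be -1/(y1*sqrt(1 + K*y1^2))
dc = (cos_gamma(eps) - cos_gamma(0.0)) / eps
assert abs(dc + 1 / (y1 * math.sqrt(1 + K * y1**2))) < 1e-4
```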
The eigenvalues of the adjoint of the differential of $h^{-1}$ appear as a factor because we are differentiating with respect to the geodesic in $\\B$, which moves at a different speed than the corresponding geodesic in $\\M$. Note as well, as a sanity check, that the cross derivatives are equal, since \n \\[\n -\\frac{1}{y_1\\sqrt{1+Ky_1^2}} + \\frac{1}{y_1(1+Ky_1^2)^{3\/2}} = \\frac{1}{y_1\\sqrt{1+Ky_1^2}}\\left(-1 + \\frac{1}{1+Ky_1^2}\\right) = \\frac{-Ky_1}{(1+Ky_1^2)^{3\/2}}.\n \\] \n Finally, we bound the new smoothness constant $\\tilde{L}$ by bounding the spectral norm of this Hessian. First note that, using $y_1=\\tilde{\\ell}$, we have $\\frac{1}{\\sqrt{1+K\\tilde{\\ell}^2}} = C_K(\\ell)$, where $\\tilde{\\ell} = \\tanh(\\ell)$ for $K=-1$, $\\tilde{\\ell} = \\tan(\\ell)$ for $K=1$, and $\\ell = d(x, x_0) < R$. Since there is a point $x^\\ast \\in \\M$ such that $\\nabla F(x^\\ast) = 0$ and $F$ is $L$-smooth, we have $\\norm{\\nabla F(x)} \\leq L\\, d(x, x^\\ast) \\leq 2LR$. Similarly, by $L$-smoothness $\\abs{\\nabla^2 F(x)_{ij}} \\leq L$. We are now ready to prove smoothness:\n\\begin{align*} \n \\begin{aligned}\n \\tilde{L}^2& \\leq \\sup_{\\tilde{x}\\in\\B}\\norm{\\nabla^2 f(\\tilde{x})}_2^2 \\\\\n & \\leq \\sup_{\\tilde{x}\\in\\B}\\norm{\\nabla^2 f(\\tilde{x})}_F^2 = \\sup_{\\tilde{x}\\in\\B}\\left(\\nabla^2 f(\\tilde{x})_{11}^2+2\\nabla^2 f(\\tilde{x})_{12}^2+\\nabla^2 f(\\tilde{x})_{22}^2\\right) \\\\\n &\\leq L^2([C_K^4(R)+4R S_K(R)C_K^3(R)]^2+2[C_K^3(R) +2RS_K(R)C_K^2(R)]^2+C_K^4(R))\n \\end{aligned}\n\\end{align*}\nand this can be bounded by $ 44 L^2 \\max\\{1,R^2\\}$ if $K=1$ and $44 L^2 \\max\\{1,R^2\\} C_K^8(R)$ if $K=-1$. In any case, it is $O(L^2)$.\n\\end{proof}\n\n\\subsection{Proof of \\cref{prop:bounding_hyperplane}} \\label{app:sec_proof_of_bounding_hyperplane}\n\\begin{proof}[Proof of \\cref{prop:bounding_hyperplane}]\n\nAssume for the moment the dimension is $d=2$. We can assume without loss of generality that $\\tilde{x} = (\\tilde{\\ell}, 0)$. 
We are given the gradients $\\nabla F(x)$ and $\\nabla f(\\tilde{x})$, and a vector $w\\in T_x\\M$. Let $\\tilde{\\delta}$ be the angle between $\\tilde{w}$ and $-\\tilde{x}$. Let $\\delta$ be the corresponding angle, i.e. the angle between $w$ and $u \\defi \\expon_x^{-1}(x_0)$. Let $\\alpha$ be the angle between $\\nabla F(x)$ and $u$. Let $\\tilde{\\beta}$ be the angle between $\\nabla f(\\tilde{x})$ and $-\\tilde{x}$. The angles $\\tilde{\\alpha}$ and $\\beta$ are defined analogously. We claim\n\\begin{equation} \\label{inner_product_comparison}\n \\frac{\\innp{\\frac{\\nabla F(x)}{\\norm{\\nabla F(x)}}, \\frac{w}{\\norm{w}}}}{\\innp{\\frac{\\nabla f(\\tilde{x})}{\\norm{\\nabla f(\\tilde{x})}}, \\frac{\\tilde{w}}{\\norm{\\tilde{w}}}}} = \\sqrt{\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2 \\sin^2(\\tilde{\\delta}))(1+K\\tilde{\\ell}^2 \\cos^2(\\tilde{\\beta}))}}.\n\\end{equation}\n\n Let us see how to arrive at this expression. By \\cref{lemma:deformations}.c) we have\n \\begin{equation}\\label{eq:relationship_tangents_gradients}\n \\tan(\\alpha) = \\frac{\\tan(\\tilde{\\beta})}{\\sqrt{1+K\\tilde{\\ell}^2}}.\n \\end{equation}\n From this relationship we can deduce\n \\begin{equation} \\label{eq:relationship_cosines_gradients}\n \\cos(\\alpha) = \\cos(\\tilde{\\beta}) \\sqrt{\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta})}}.\n \\end{equation}\n This comes from squaring \\eqref{eq:relationship_tangents_gradients}, reorganizing terms and noting that $\\operatorname{sign}(\\cos(\\alpha)) =\\operatorname{sign}(\\cos(\\tilde{\\beta}))$, which is implied by \\cref{lemma:deformations}.c). We are now ready to prove the claim \\eqref{inner_product_comparison} (for $d=2$). 
We have \n\\begin{align*} \n \\begin{aligned}\n \\frac{\\innp{\\frac{\\nabla F(x)}{\\norm{\\nabla F(x)}}, \\frac{w}{\\norm{w}}}}{\\innp{\\frac{\\nabla f(\\tilde{x})}{\\norm{\\nabla f(\\tilde{x})}}, \\frac{\\tilde{w}}{\\norm{\\tilde{w}}}}} &\\circled{1}[=] \\frac{\\cos(\\alpha-\\delta)}{\\cos(\\tilde{\\beta}-\\tilde{\\delta})} \\\\\n &\\circled{2}[=] \\frac{\\cos(\\delta)+\\tan(\\alpha)\\sin(\\delta)}{\\cos(\\tilde{\\beta})\\cos(\\tilde{\\delta})+\\sin(\\tilde{\\beta})\\sin(\\tilde{\\delta})}\\cos(\\alpha) \\\\\n &\\circled{3}[=] \\frac{\\frac{\\cos(\\tilde{\\delta})}{\\sqrt{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}+\\frac{\\tan(\\tilde{\\beta})}{\\sqrt{1+K\\tilde{\\ell}^2}}\\frac{\\sin(\\tilde{\\delta})\\sqrt{1+K\\tilde{\\ell}^2}}{\\sqrt{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}}{\\cos(\\tilde{\\beta})\\cos(\\tilde{\\delta})+\\sin(\\tilde{\\beta})\\sin(\\tilde{\\delta})}\\cos(\\tilde{\\beta}) \\sqrt{\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta})}} \\\\\n &\\circled{4}[=] \\sqrt{\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta}))(1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta}))}}. \\\\\n \\end{aligned}\n\\end{align*}\n Equality $\\circled{1}$ follows by the definition of $\\alpha, \\delta, \\tilde{\\delta}$ and $\\tilde{\\beta}$. In $\\circled{2}$, we used trigonometric identities. In $\\circled{3}$ we used \\cref{lemma:angle_deformation}, \\eqref{eq:relationship_tangents_gradients} and \\eqref{eq:relationship_cosines_gradients}. By reordering the expression, the denominator cancels out with a factor of the numerator in $\\circled{4}$. 
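The chain of equalities can be verified numerically; the Python snippet below is a check of ours (not part of the proof), with arbitrary test values for $\\tilde{\\beta}$, $\\tilde{\\delta}$ and $\\tilde{\\ell}^2$:

```python
import math

# Numerical check (ours) of the claim (inner_product_comparison) for d = 2.
K, l2 = -1.0, 0.36            # curvature and l~^2 (hypothetical values)
beta_t, delta_t = 0.7, 1.1    # test angles beta~, delta~
root = math.sqrt(1 + K * l2)

# lemma:deformations.c): tan(alpha) = tan(beta~)/sqrt(1 + K*l~^2)
alpha = math.atan2(math.sin(beta_t), math.cos(beta_t) * root)
# lemma:angle_deformation: tan(delta) = tan(delta~)*sqrt(1 + K*l~^2)
delta = math.atan2(math.sin(delta_t) * root, math.cos(delta_t))

lhs = math.cos(alpha - delta) / math.cos(beta_t - delta_t)
rhs = math.sqrt((1 + K * l2) / ((1 + K * l2 * math.sin(delta_t)**2)
                               * (1 + K * l2 * math.cos(beta_t)**2)))
assert abs(lhs - rhs) < 1e-12
```

Varying the test angles (within the same quadrant) leaves the two sides equal up to floating-point error.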
\n \n\nIn order to work with arbitrary dimension, we note that it is enough to prove it for $d=3$, since in order to bound\n\\[\n\\frac{\\innp{\\frac{\\nabla F(x)}{\\norm{\\nabla F(x)}}, \\frac{v}{\\norm{v}}}}{\\innp{\\frac{\\nabla f(\\tilde{x})}{\\norm{\\nabla f(\\tilde{x})}}, \\frac{\\tilde{v}}{\\norm{\\tilde{v}}}}},\n\\] \nit is enough to consider the following submanifold\n\\[\n \\M' \\defi \\expon_x(\\operatorname{span}\\{v, \\expon_x^{-1}(x_0), \\nabla F(x)\\}).\n\\]\n for an arbitrary vector $v\\in T_x\\M$ and a point $x$ defined as above. The case $d=3$ can be further reduced to the case $d=2$ in the following way. Suppose $\\M'$ is a three-dimensional manifold (if it is one- or two-dimensional there is nothing to do). Define the following orthonormal basis of $T_x\\M$, $\\{e_1, e_2, e_3\\}$, where $e_1=-\\expon_x^{-1}(x_0)\/\\norm{\\expon_x^{-1}(x_0)}$, $e_2$ is a unit vector normal to $e_1$ such that $e_2\\in\\operatorname{span}\\{e_1, \\nabla F(x)\\}$, and $e_3$ is a vector that completes the orthonormal basis. In this basis, let $v$ be parametrized by $\\norm{v}(\\sin(\\delta), \\cos(\\nu)\\cos(\\delta), \\sin(\\nu)\\cos(\\delta))$, where $\\delta$ can be thought of as the angle between the vector $v$ and its projection onto the plane $\\operatorname{span}\\{e_2, e_3\\}$ and $\\nu$ can be thought of as the angle between this projection and its projection onto $e_2$. Similarly we parametrize $\\tilde{v}$ by $\\norm{\\tilde{v}}(\\sin(\\tilde{\\delta}), \\cos(\\tilde{\\nu})\\cos(\\tilde{\\delta}), \\sin(\\tilde{\\nu})\\cos(\\tilde{\\delta}))$, where the basis used is the analogous one, i.e. the vectors $\\{\\tilde{e}_1, \\tilde{e}_2, \\tilde{e}_3\\}$. Taking into account that $e_2 \\perp e_1$, $e_3\\perp e_1$, $\\tilde{e}_2 \\perp \\tilde{e}_1$, $\\tilde{e}_3\\perp \\tilde{e}_1$, and the fact that $e_1$ is parallel to $-\\expon_x^{-1}(x_0)$, by the radial symmetry of the geodesic map we have that $\\nu = \\tilde{\\nu}$. 
Also, by looking at the submanifold $\\expon_x(\\operatorname{span}\\{e_1,v\\})$ and using \\cref{lemma:angle_deformation} we have \n\\[\n \\sin(\\delta) = \\sin(\\tilde{\\delta})\\sqrt{\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}.\n\\]\n If we want to compare $\\innp{\\nabla F(x), v}$ with $\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}$ we should be able to just zero out the third components of $v$ and $\\tilde{v}$ and work in $d=2$. But in order to completely obtain a reduction to the two-dimensional case we studied a few paragraphs above, we would need to prove that if we call $w\\defi\\norm{v}(\\sin(\\delta), \\cos(\\nu)\\cos(\\delta), 0)$ the vector $v$ with the third component made $0$, then $\\tilde{w}$ points in the same direction as the vector obtained from $\\tilde{v}$ by making its third component $0$. The norm of these two vectors will not be the same, however. Let $\\tilde{w}'=\\norm{\\tilde{v}}(\\sin(\\tilde{\\delta}), \\cos(\\nu) \\cos(\\tilde{\\delta}), 0)$ be the vector $\\tilde{v}$ when the third component is made $0$. Then \n\\begin{equation} \\label{eq:norms_of_projections_from_3d_vectors}\n \\norm{w} = \\norm{v} \\sqrt{\\sin^2(\\delta)+\\cos^2(\\delta)\\cos^2(\\nu)} \\text{ and } \\norm{\\tilde{w}'} = \\norm{\\tilde{v}}\\sqrt{\\sin^2(\\tilde{\\delta}) + \\cos^2(\\tilde{\\delta})\\cos^2(\\nu)}.\n\\end{equation}\nBut indeed, we claim \n \\begin{equation}\\label{claim:same_direction_when_projecting}\n \\tilde{w} \\text{ and } \\tilde{w}' \\text{ have the same direction.}\n\\end{equation}\n This is easy to see geometrically: since we are working with a geodesic map, the submanifolds $\\expon_x(\\operatorname{span}\\{v, e_3\\})$ and $\\expon_x(\\operatorname{span}\\{e_1, e_2\\})$ contain $w$. Similarly, the submanifolds $\\tilde{x}+\\operatorname{span}\\{\\tilde{v}, \\tilde{e}_3\\}$ and $\\tilde{x}+\\operatorname{span}\\{\\tilde{e}_1, \\tilde{e}_2\\}$ contain $\\tilde{w}'$. 
If the intersection of each of these pairs of submanifolds is a geodesic, then the geodesic map must map one intersection to the other one, implying $\\tilde{w}$ is proportional to $\\tilde{w}'$. If the intersections are degenerate, the case is trivial. Alternatively, one can prove this fact algebraically after some computations. If we call $\\delta^\\ast$ and $\\tilde{\\delta}'$ the angles formed by, respectively, the vectors $e_2$ and $w$, and the vectors $\\tilde{e}_2$ and $\\tilde{w}'$, then $\\tilde{w}'$ is proportional to $\\tilde{w}$ iff $\\tilde{\\delta}'=\\tilde{\\delta}^\\ast$, or equivalently $\\delta'=\\delta^\\ast$. Using the definitions of $w$ and $\\tilde{w}'$ we have\n\\begin{align*} \n \\begin{aligned}\n \\sin(\\delta^\\ast) = \\sin\\left(\\arctan\\left(\\frac{\\sin(\\delta)}{\\cos(\\nu)\\cos(\\delta)}\\right)\\right) = \\frac{\\tan(\\delta)\/\\cos(\\nu)}{\\sqrt{(\\tan(\\delta)\/\\cos(\\nu))^2+1}} \\\\\n = \\frac{\\sin(\\delta)}{\\sqrt{\\sin^2(\\delta)+\\cos^2(\\nu)\\cos^2(\\delta)}},\n \\end{aligned}\n\\end{align*}\nand analogously\n\\begin{align} \\label{eq:aux_sine_of_angle_3d}\n \\begin{aligned}\n \\sin(\\tilde{\\delta}') = \\sin\\left(\\arctan\\left(\\frac{\\sin(\\tilde{\\delta})}{\\cos(\\nu)\\cos(\\tilde{\\delta})}\\right)\\right) = \\frac{\\tan(\\tilde{\\delta})\/\\cos(\\nu)}{\\sqrt{(\\tan(\\tilde{\\delta})\/\\cos(\\nu))^2+1}} \\\\\n = \\frac{\\sin(\\tilde{\\delta})}{\\sqrt{\\sin^2(\\tilde{\\delta})+\\cos^2(\\nu)\\cos^2(\\tilde{\\delta})}}.\n \\end{aligned}\n\\end{align}\nUsing \\cref{lemma:angle_deformation} for the pairs $\\delta'$, $\\tilde{\\delta}'$ and $\\delta^\\ast$, $\\tilde{\\delta}^\\ast$, and the equations above we obtain\n\\begin{align*} \n \\begin{aligned}\n \\sin(\\delta^\\ast) = 
\\frac{\\sin(\\tilde{\\delta})\\sqrt{\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}}{\\sqrt{\\sin^2(\\tilde{\\delta})\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}+\\cos^2(\\nu)\\frac{\\cos^2(\\tilde{\\delta})}{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}} = \\frac{\\sin(\\tilde{\\delta})\\sqrt{1+K\\tilde{\\ell}^2}}{\\sqrt{\\sin^2(\\tilde{\\delta})(1+K\\tilde{\\ell}^2)+\\cos^2(\\nu)\\cos^2(\\tilde{\\delta})}},\n \\end{aligned}\n\\end{align*}\nand \n\\begin{align*} \n \\begin{aligned}\n \\sin(\\delta') = \\frac{\\sin(\\tilde{\\delta})}{\\sqrt{\\sin^2(\\tilde{\\delta})+\\cos^2(\\nu)\\cos^2(\\tilde{\\delta})}}\\sqrt{\\frac{1+K\\tilde{\\ell}^2}{1+K\\tilde{\\ell}^2\\left( \\frac{\\sin^2(\\tilde{\\delta})}{\\sin^2(\\tilde{\\delta})+\\cos^2(\\nu)\\cos^2(\\tilde{\\delta})}\\right)}\n}.\n \\end{aligned}\n\\end{align*}\nThe two expressions on the right-hand side are equal. This implies $\\sin(\\delta') = \\sin(\\delta^\\ast)$, and since the angles lie in the same quadrant, we conclude $\\delta' =\\delta^\\ast$.\n\nWe can now come back to the study of $\\frac{\\innp{\\nabla F(x), v}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}}$. By \\eqref{eq:norms_of_projections_from_3d_vectors} we have\n\\begin{align*} \n \\begin{aligned}\n \\frac{\\innp{\\nabla F(x), v}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}} = \\frac{\\norm{\\nabla F(x)}}{\\norm{\\nabla f(\\tilde{x})}}\\frac{\\norm{v}}{\\norm{\\tilde{v}}}\\frac{\\innp{\\frac{\\nabla F(x)}{\\norm{\\nabla F(x)}}, \\frac{w}{\\norm{w}}}}{\\innp{\\frac{\\nabla f(\\tilde{x})}{\\norm{\\nabla f(\\tilde{x})}}, \\frac{\\tilde{w}}{\\norm{\\tilde{w}}}}} \\frac{\\sqrt{\\sin^2(\\delta)+\\cos^2(\\delta)\\cos^2(\\nu)}}{\\sqrt{\\sin^2(\\tilde{\\delta})+\\cos^2(\\tilde{\\delta})\\cos^2(\\nu)}}. \\end{aligned}\n\\end{align*}\nThe last two factors can be simplified. 
Using \\eqref{inner_product_comparison} and \\eqref{eq:norms_of_projections_from_3d_vectors} we get that this product is equal to\n\\begin{align*} \n \\begin{aligned}\n \\sqrt{\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta}^\\ast))(1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta}))}} \\frac{\\sqrt{\\sin^2(\\tilde{\\delta})\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta}))} +\\cos^2(\\nu)\\frac{\\cos^2(\\tilde{\\delta})}{1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta})}}}{\\sqrt{\\sin^2(\\tilde{\\delta})+\\cos^2(\\tilde{\\delta})\\cos^2(\\nu)}}\n \\end{aligned}\n\\end{align*}\nwhich, after using \\eqref{eq:aux_sine_of_angle_3d} (recall $\\tilde{\\delta}^\\ast = \\tilde{\\delta}'$) and simplifying, yields\n\\[\n\\sqrt{\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta}))(1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta}))}}.\n\\] \nSo finally we have\n\\[\n\\frac{\\innp{\\nabla F(x), v}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}} = \\frac{\\norm{\\nabla F(x)}}{\\norm{\\nabla f(\\tilde{x})}}\\frac{\\norm{v}}{\\norm{\\tilde{v}}}\n\\sqrt{\\frac{1+K\\tilde{\\ell}^2}{(1+K\\tilde{\\ell}^2\\sin^2(\\tilde{\\delta}))(1+K\\tilde{\\ell}^2\\cos^2(\\tilde{\\beta}))}}.\n\\] \nWe now use \\cref{lemma:deformations}.a) and \\cref{lemma:deformations}.c), and bound $\\sin^2(\\tilde{\\delta})$ and $\\cos^2(\\tilde{\\beta})$ in order to bound the previous expression. Recall that, by \\eqref{eq:R_tilde_vs_R}, we have $1\/\\sqrt{1+K\\tilde{\\ell}^2} = C_K(\\ell)$, for $\\ell=d(x, x_0) \\leq R$. Let us proceed. 
We obtain, for $K=-1$,\n\\[\n \\cosh^{-3}(R)\\leq \\frac{1}{\\cosh^2(\\ell)} \\cdot 1 \\cdot \\frac{1}{\\cosh(\\ell)} \\leq \\frac{\\innp{\\nabla F(x), v}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}} \\leq \\frac{1}{\\cosh(\\ell)} \\cdot \\cosh^2(\\ell) \\cdot \\cosh(\\ell) \\leq \\cosh^2(R),\n\\] \nand for $K=1$ it is \n\\[\n \\cos^{2}(R)\\leq \\frac{1}{\\cos(\\ell)} \\cdot \\cos^2(\\ell) \\cdot \\cos(\\ell) \\leq \\frac{\\innp{\\nabla F(x), v}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{v}}} \\leq \\frac{1}{\\cos^2(\\ell)} \\cdot 1 \\cdot \\frac{1}{\\cos(\\ell)} \\leq \\cos^{-3}(R).\n\\] \n\nThe first part of \\cref{prop:bounding_hyperplane} follows, for $\\gammap = \\cosh^{-3}(R)$ and $\\gamman = \\cosh^{-2}(R)$ when $K=-1$, and $\\gammap = \\cos^2(R)$ and $\\gamman = \\cos^3(R)$ when $K=1$.\n\nThe second part of \\cref{prop:bounding_hyperplane} follows readily from the first one and g-convexity of $F$, as follows. It holds that\n\\[\n f(\\tilde{x}) + \\frac{1}{\\gamman}\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\circled{1}[\\leq] F(x) + \\innp{\\nabla F(x), y-x} \\circled{2}[\\leq] F(y) = f(\\tilde{y}),\n\\] \nand\n\\[\n f(\\tilde{x}) + \\gammap\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\circled{3}[\\leq] F(x) + \\innp{\\nabla F(x), y-x} \\circled{4}[\\leq] F(y) = f(\\tilde{y}),\n\\] \nwhere $\\circled{1}$ and $\\circled{3}$ hold if $\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\leq 0$ and $\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\geq 0$, respectively. Inequalities $\\circled{2}$ and $\\circled{4}$ hold by g-convexity of $F$.\n\n\\end{proof}\n\n\\section{Introduction}\n\nAcceleration in convex optimization is a phenomenon that has drawn much attention and has yielded many important results, since the renowned Accelerated Gradient Descent (AGD) method of \\citet{nesterov1983method}. 
Having been proved successful for deep learning \\cite{DBLP:conf\/icml\/SutskeverMDH13}, among other fields, there have been recent efforts to better understand this phenomenon \\cite{allen2014linear,diakonikolas2017approximate,su2014differential,wibisono2016variational}. These have yielded numerous new results going beyond convexity or the standard oracle model, in a wide variety of settings \\cite{allen2016katyusha, allen2017natasha, allen2018katyusha, DBLP:conf\/stoc\/ZhuO15,allen2016even, allen2017much, carmon2017convex, cohen2018acceleration, cutkosky2019matrix, diakonikolas2019generalized, diakonikolas2017accelerated, DBLP:conf\/colt\/GasnikovDGVSU0W19,wang2015unified}. This surge of research that applies tools of convex optimization to models going beyond convexity has been fruitful. One of these models is the setting of geodesically convex Riemannian optimization. In this setting, the function to optimize is geodesically convex (g-convex), i.e. convex restricted to any geodesic (cf. \\cref{def:g-convex_smooth}). \n\nRiemannian optimization, g-convex and non-g-convex alike, is an extensive area of research. In recent years there have been numerous efforts towards obtaining Riemannian optimization algorithms that share analogous properties to the more broadly studied Euclidean first-order methods: deterministic \\cite{bento2017iteration,wei2016guarantees,zhang2016first}, stochastic \\cite{hosseini2019alternative,khuzani2017stochastic,tripuraneni2018averaging}, variance-reduced \\cite{sato2017riemannian,kasai2018riemannian,zhang2016fast}, adaptive \\cite{kasai2019riemannian}, saddle-point-escaping \\cite{criscitiello2019efficiently,sun2019escaping,zhang2018r,zhou2019faster, criscitiello2020accelerated}, and projection-free methods \\cite{weber2017frank,weber2019nonconvex}, among others. 
Unsurprisingly, Riemannian optimization has found many applications in machine learning, including low-rank matrix completion \\cite{DBLP:journals\/siamsc\/CambierA16,heidel2018riemannian,mishra2014r3mc,tan2014riemannian,vandereycken2013low}, dictionary learning \\cite{cherian2016riemannian,sun2016complete}, optimization under orthogonality constraints \\cite{edelman1998geometry}, with applications to Recurrent Neural Networks \\cite{DBLP:conf\/nips\/Casado19, DBLP:conf\/icml\/CasadoM19}, robust covariance estimation in Gaussian distributions \\cite{wiesel2012geodesic}, Gaussian mixture models \\cite{hosseini2015matrix}, operator scaling \\cite{allen2018operator}, and sparse principal component analysis \\cite{genicot2015weakly,huang2019riemannian,jolliffe2003modified}.\n\nHowever, the acceleration phenomenon, largely celebrated in the Euclidean space, is still not understood in Riemannian manifolds, although there has been some progress on this topic recently (cf. \\hyperlink{sec:related_work}{Related work}). This poses the following question, which is the central subject of this paper:\n\n\\begin{center}\n\\textit{Can a Riemannian first-order method enjoy the same rates as AGD in the Euclidean space?}\n\\end{center}\n\nIn this work, we provide an answer in the affirmative for functions defined on hyperbolic and spherical spaces, up to constants depending on the curvature and the initial distance to an optimum, and up to log factors. In particular, the main results of this work are the following.\n\n\\paragraph{Main Results:}\n\\begin{itemize}\n \\item \\textit{Full acceleration}. We design algorithms that provably achieve the same rates of convergence as AGD in the Euclidean space, up to constants and log factors. 
More precisely, we obtain the rates $\\bigotilde{L\/\\sqrt{\\epsilon}}$ and $\\bigoast{\\sqrt{L\/\\mu}\\log(\\mu\/\\epsilon)}$ when optimizing $L$-smooth functions that are, respectively, g-convex and $\\mu$-strongly g-convex, defined on the hyperbolic space or a subset of the sphere. The notation $\\bigotilde{\\cdot}$ and $\\bigoast{\\cdot}$ omits $\\log(L\/\\epsilon)$ and $\\log(L\/\\mu)$ factors, respectively, and constants. Previous approaches only showed local results \\cite{zhang2018towards} or obtained results with rates in between the ones obtainable by Riemannian Gradient Descent (RGD) and AGD \\cite{ahn2020nesterov}. Moreover, these previous works only apply to functions that are smooth and strongly g-convex and not to smooth functions that are only g-convex. As a proxy, we design an accelerated algorithm under a condition between convexity and \\textit{quasar-convexity} in the constrained setting, which may be of independent interest.\n\n \\item \\textit{Reductions}. We present two reductions for any Riemannian manifold of bounded sectional curvature. Given an optimization method for smooth and g-convex functions they provide a method for optimizing smooth and strongly g-convex functions, and vice versa.\n\\end{itemize}\n\nIt is often the case that methods and key geometric inequalities that apply to manifolds with bounded sectional curvatures are obtained from the ones existing for the spaces of constant extremal sectional curvature \\cite{grove1997comparison, zhang2016first, zhang2018towards}. Consequently, our contribution is relevant not only because we establish an algorithm achieving global acceleration on functions defined on a manifold other than the Euclidean space, but also because understanding the constant sectional curvature case is an important step towards understanding the more general case of obtaining algorithms that optimize g-convex functions, strongly or not, defined on manifolds of bounded sectional curvature. 
\n\nOur main technique for designing the accelerated method consists of mapping the function domain to a subset $\\mathcal{B}$ of the Euclidean space via a geodesic map: a transformation that maps geodesics to geodesics. Given the gradient at a point $x\\in\\M$, which defines a lower bound on the function that is affine over the tangent space at $x$, we find a lower bound of the function that is affine over the region of $\\mathcal{B}$ where the previous lower bound was at most $f(x)$, despite the map being non-conformal, deforming distances, and breaking convexity. This allows us to aggregate the lower bounds easily. We believe that effective lower bound aggregation is key to achieving Riemannian acceleration and optimality. Using this strategy, we are able to provide an algorithm along the lines of the one in \\cite{diakonikolas2017accelerated} to define a continuous method that we discretize using an approximate implementation of the implicit Euler method, obtaining a method that achieves the same rates as the Euclidean AGD, up to constants and log factors. Our reductions take into account the deformations produced by the geometry to generalize existing optimal Euclidean reductions \\cite{allen2016optimal, allen2014linear}.\n\n\\paragraph{Basic Geometric Definitions.} We recall basic definitions of Riemannian geometry that we use in this work. For a thorough introduction we refer to \\cite{petersen2006riemannian}. A Riemannian manifold $(\\M,\\mathfrak{g})$ is a real smooth manifold $\\M$ equipped with a metric $\\mathfrak{g}$, which is a smoothly varying inner product. For $x \\in \\M$ and any two vectors $v,w \\in T_x\\M$ in the tangent space of $\\M$, the inner product $\\innp{v,w}_x$ is $\\mathfrak{g}(v,w)$. For $v\\in T_x\\M$, the norm is defined as usual $\\norm{v}_x \\defi \\sqrt{\\innp{v,v}_x}$. Typically, $x$ is known given $v$ or $w$, so we will just write $\\innp{v,w}$ or $\\norm{v}$ if $x$ is clear from context. 
A geodesic is a curve $\\gamma : [0,1] \\to \\M$ of constant speed that is locally distance minimizing. A uniquely geodesic space is a space such that for every two points there is one and only one geodesic that joins them. In such a case the exponential map $\\expon_x : T_x\\M\\to \\M$ and inverse exponential map $\\expon_x^{-1}:\\M\\to T_x\\M$ are well defined for every pair of points, and are as follows. Given $x, y\\in\\M$, $v\\in T_x\\M$, and a geodesic $\\gamma$ of length $\\norm{v}$ such that $\\gamma(0) =x$, $\\gamma(1)=y$, $\\gamma'(0)=v$, we have that $\\expon_x(v) = y$ and $\\expon_x^{-1}(y) = v$. Note, however, that $\\expon_x(\\cdot)$ might not be defined for each $v\\in T_x\\M$. We denote by $d(x,y)$ the distance between $x$ and $y$. Its value is the same as $\\norm{\\expon_x^{-1}(y)}$. Given a $2$-dimensional subspace $V \\subset T_x\\M$, the sectional curvature at $x$ with respect to $V$ is defined as the Gauss curvature of the manifold $\\expon_x(V)$ at $x$. \n\n\\paragraph{Notation.}\\hypertarget{sec:notation}{} Let $\\M$ be a manifold and let $\\B\\subset\\R^d$. We denote by $h:\\M\\to\\B$ a geodesic map \\cite{kreyszig1991differential}, which is a diffeomorphism such that the image and the inverse image of a geodesic is a geodesic. Usually, given an initial point $x_0$ of our algorithm, we will have $h(x_0)=0$. Given a point $x\\in\\M$ we use the notation $\\tilde{x} = h(x)$ and, vice versa, any point in $\\B$ will use a tilde. Given two points $x, y\\in \\M$ and a vector $v\\in T_{x}\\M$ in the tangent space of $x$, we use the formal notation $\\innp{v, y-x} \\defi -\\innp{v, x-y} \\defi \\innp{v, \\expon^{-1}_{x}(y)}$. 
Given a vector $v\\in T_x\\M$, we call $\\tilde{v} \\in \\R^d$ the vector of the same norm such that $\\{\\tilde{x}+\\tilde{\\lambda} \\tilde{v}|\\tilde{\\lambda}\\in\\R^+, \\tilde{x}+\\tilde{\\lambda} \\tilde{v} \\in \\B\\} = \\{h(\\expon_x(\\lambda v))|\\lambda\\in I\\subset\\R^+\\}$, for some interval $I$. Conversely, given $x$ and a vector $\\tilde{v}\\in\\R^d$, we define $v\\in T_x\\M$ analogously. Let $x^\\ast$ be any minimizer of $F:\\M\\to\\R$. We denote by $R \\geq d(x_0,x^\\ast)$ a bound on the distance between $x^\\ast$ and the initial point $x_0$. Note that this implies that $x^\\ast \\in \\expon_{x_0}(\\bar{B}(0,R))$, for the closed ball $\\bar{B}(0,R) \\subset T_{x_0}\\M$. Consequently, we will work with the manifold that is a subset of a $d$-dimensional complete and simply connected manifold of constant sectional curvature $K$, namely a subset of the hyperbolic space or sphere \\cite{petersen2006riemannian}, defined as $\\expon_{x_0}(\\bar{B}(0,R))$, with the inherited metric.\nDenote by $\\H$ this manifold in the former case and by $\\S$ in the latter, and note that we are not making explicit the dependence on $d$, $R$ and $K$. We want to work with the standard choice of uniquely geodesic manifolds \\cite{ahn2020nesterov, liu2017accelerated, zhang2016first, zhang2018towards}. Therefore, in the case that the manifold is $\\S$, we restrict ourselves to $R < \\pi\/(2\\sqrt{K})$, so $\\S$ is contained in an open hemisphere. The big $O$ notations $\\bigotilde{\\cdot}$ and $\\bigoast{\\cdot}$ omit $\\log(L\/\\epsilon)$ and $\\log(L\/\\mu)$ factors, respectively, and constant factors depending on $R$ and $K$.\n\nWe define now the main properties that will be assumed on the function $F$ to be minimized.\n\\begin{definition}[Geodesic Convexity and Smoothness] \\label{def:g-convex_smooth}\n Let $F:\\M \\to \\R$ be a differentiable function defined on a Riemannian manifold $(\\M,\\mathfrak{g})$. 
Given $L\\geq \\mu > 0$, we say that $F$ is $L$-smooth, and respectively $\\mu$-strongly g-convex, if for any two points $x, y \\in \\M$, $F$ satisfies\n \\[\n F(y) \\leq F(x) + \\innp{\\nabla F(x), y-x} + \\frac{L}{2}d(x,y)^2, \\text{ resp. } F(y) \\geq F(x) + \\innp{\\nabla F(x), y-x} + \\frac{\\mu}{2}d(x,y)^2.\n \\]\n We say $F$ is g-convex if the second inequality above, i.e. $\\mu$-strong g-convexity, is satisfied with $\\mu=0$. Note that we have used the formal notation above for the subtraction of points in the inner product. \n\\end{definition}\n\n\\hypertarget{sec:related_work}{}\\paragraph{Comparison with Related Work.} There are a number of works that study the problem of first-order acceleration in Riemannian manifolds of bounded sectional curvature. The first such study is \\cite{liu2017accelerated}. In this work, the authors develop an accelerated method with the same rates as AGD for both g-convex and strongly g-convex functions, provided that at each step a given nonlinear equation can be solved. No algorithm for solving this equation has been found and, in principle, it could be intractable or infeasible. In \\cite{alimisis2019continuous} a continuous method analogous to the continuous approach to accelerated methods is presented, but it is not known whether there exists an accelerated discretization of it. In \\cite{alimisis2020practical}, an algorithm is presented that is claimed to enjoy an accelerated rate of convergence, but it fails to provide convergence guarantees once the function value gets below a potentially large constant that depends on the manifold and the smoothness constant. In \\cite{huang2019extending} an accelerated algorithm is presented, but it relies on strong geometric inequalities that are not proved to hold. \\citet{zhang2018towards} obtain a \\textit{local} algorithm that optimizes $L$-smooth and $\\mu$-strongly g-convex functions achieving the same rates as AGD in the Euclidean space, up to constants. 
That is, the initial point needs to start close to the optimum, $O((\\mu\/L)^{3\/4})$ close, to be precise. Their approach consists of adapting Nesterov's estimate sequence technique by keeping a quadratic on $T_{x_t}\\M$ that induces on $\\M$ a regularized lower bound on $F(x^\\ast)$ via $\\expon_{x_t}(\\cdot)$. They aggregate the information yielded by each new gradient into this quadratic, and use a geometric lemma to find a quadratic in $T_{x_{t+1}}\\M$ whose induced function lower bounds the other one. \\citet{ahn2020nesterov} generalize the previous algorithm and, by using similar ideas for the lower bound, they adapt it to work globally, obtaining strictly better rates than RGD and recovering the local acceleration of the previous paper, but not achieving global rates comparable to the ones of AGD. In fact, they prove that their algorithm eventually decreases the function value at a rate close to that of AGD, but this can take as many iterations as the ones needed by RGD to reach the neighborhood of the previous local algorithm. In our work, we take a step back and focus on the constant sectional curvature case to provide a global algorithm that achieves the same rates as AGD, up to constants and log factors. It is common to characterize the properties of spaces of bounded sectional curvature by using the ones of the spaces of constant extremal sectional curvature \\cite{grove1997comparison, zhang2016first, zhang2018towards}, which makes the study of the constant sectional curvature case critical to the development of fully accelerated algorithms in the general bounded sectional curvature case. Additionally, our work studies g-convexity besides strong g-convexity.\n\nAnother related work is the \\textit{approximate duality gap technique} \\cite{diakonikolas2017approximate}, which presents a unified view of the analysis of first-order methods for the optimization of convex functions defined in the Euclidean space. 
It defines a continuous duality gap and, by enforcing a natural invariant, it obtains accelerated continuous dynamics and their discretizations for most classical first-order methods. A derived work \\cite{diakonikolas2017accelerated} obtains acceleration in a fundamentally different way from previous acceleration approaches, namely using an approximate implicit Euler method for the discretization of the acceleration dynamics. The convergence analysis of \\cref{thm:acceleration_quasiquasarconvexity} is inspired by these two works. We will see in the sequel that, for our manifolds of interest, g-convexity is related to a model known in the literature as quasar-convexity or weak-quasi-convexity \\cite{guminov2017accelerated, hinder2019near,nesterov2018primal}.\n\n\n\\section{Algorithm}\\label{sec:algorithm}\nWe study the minimization problem $\\min_{x\\in\\M}{F(x)}$ with a gradient oracle, for a smooth function $F:\\M\\to\\R$ that is g-convex or strongly g-convex. In this section, $\\M$ refers to a manifold that can be $\\H$ or $\\S$, i.e. the subset of the hyperbolic space or sphere $\\expon_{x_0}(\\bar{B}(0, R))$, for an initial point $x_0$. For simplicity, we do not use subdifferentials, so we assume $F:\\M\\to\\R$ is a differentiable function that is defined over the manifold of constant sectional curvature $\\M'\\defi \\expon_{x_0}(B(0,R'))$, for an $R'>R$, and we avoid writing $F:\\M'\\to\\R$. We defer the proofs of the lemmas and theorems in this and the following sections to the appendix. We assume without loss of generality that the sectional curvature of $\\M$ is $K \\in \\{1,-1\\}$, since for any other value of $K$ and any function $F:\\M\\to\\R$ defined on such a manifold, we can reparametrize $F$ by a rescaling, so that it is defined over a manifold of constant sectional curvature $K \\in \\{1,-1\\}$. The parameters $L$, $\\mu$ and $R$ are rescaled accordingly as a function of $K$, cf. \\cref{remark:rescaling_of_K}. 
We denote the special cosine by $C_K(\\cdot)$, which is $\\cos(\\cdot)$ if $K=1$ and $\\cosh(\\cdot)$ if $K=-1$. We define $\\X = h(\\M)\\subset \\B \\subset \\R^d$. We use classical geodesic maps for the manifolds that we consider: the Gnomonic projection for $\\S$ and the Beltrami-Klein projection for $\\H$ \\cite{greenberg1993euclidean}. They map an open hemisphere and the hyperbolic space of curvature $K\\in\\{1,-1\\}$ to $\\B=\\R^d$ and $\\B =B(0,1)\\subset\\R^d$, respectively. We will derive our results from the following characterization \\cite{greenberg1993euclidean}. Let $\\tilde{x}, \\tilde{y} \\in \\B$ be two points. Recall that we denote $x = h^{-1}(\\tilde{x}), y=h^{-1}(\\tilde{y}) \\in \\M$. Then we have that $d(x,y)$, the distance between $x$ and $y$ with the metric of $\\M$, satisfies\n\\begin{equation}\\label{eq:characterization_of_geodesic_map_and_metric}\n C_K(d(x,y)) = \\frac{1+K\\innp{\\tilde{x}, \\tilde{y}}}{\\sqrt{1+K\\norm{\\tilde{x}}^2}\\cdot\\sqrt{1+K\\norm{ \\tilde{y}}^2}}.\n\\end{equation}\nObserve that the expression is symmetric with respect to rotations. In particular, the symmetry implies that $\\X$ is a closed ball of radius $\\tilde{R}$, where $C_K(R) = (1+K\\tilde{R}^2)^{-1\/2}$. \n\nConsider a point $x\\in\\M$ and the lower bound provided by the g-convexity assumption when computing $\\nabla F(x)$. Dropping the $\\mu$ term in case of strong g-convexity, this bound is affine over $T_x \\M$. We would like our algorithm to effectively aggregate the lower bounds it computes during the course of the optimization. The deformations of the geometry make this a difficult task, despite the fact that we have a simple description of each individual lower bound. We deal with this problem in the following way: our approach is to obtain a lower bound that is looser by a constant depending on $R$, and that is affine over $\\B$. In this way the aggregation becomes easier. 
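As a quick numerical sanity check of the characterization \\eqref{eq:characterization_of_geodesic_map_and_metric} (this sketch is ours, not part of the paper's development), one can lift points of $\\B$ back to the model surfaces: the inverse Gnomonic projection onto the unit sphere for $K=1$, and the inverse Beltrami-Klein projection onto the hyperboloid for $K=-1$; the helper name `check` is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def check(K, d=3, trials=100):
    # Lift points of B to the model surface and compare the true distance
    # d(x, y) against the right-hand side of the characterization
    # C_K(d(x, y)) = (1 + K <x~, y~>) / sqrt((1 + K|x~|^2)(1 + K|y~|^2)).
    for _ in range(trials):
        xt, yt = rng.normal(size=d), rng.normal(size=d)
        if K == -1:  # Beltrami-Klein maps H into the open unit ball
            xt = 0.4 * xt / np.linalg.norm(xt)
            yt = 0.4 * yt / np.linalg.norm(yt)
        rhs = (1 + K * xt @ yt) / np.sqrt((1 + K * xt @ xt) * (1 + K * yt @ yt))
        # h^{-1}: append a last coordinate 1 and normalize for the model
        x = np.append(xt, 1.0) / np.sqrt(1 + K * xt @ xt)
        y = np.append(yt, 1.0) / np.sqrt(1 + K * yt @ yt)
        if K == 1:
            dist = np.arccos(np.clip(x @ y, -1.0, 1.0))    # spherical distance
            assert abs(np.cos(dist) - rhs) < 1e-9
        else:
            mink = x[:-1] @ y[:-1] - x[-1] * y[-1]         # Minkowski product
            dist = np.arccosh(max(-mink, 1.0))             # hyperbolic distance
            assert abs(np.cosh(dist) - rhs) < 1e-9

check(1)
check(-1)
print("characterization holds for K = 1 and K = -1")
```

The radius 0.4 for the hyperbolic samples is an arbitrary choice that keeps the lifted points inside the Beltrami-Klein ball.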
Then, we are able to combine this lower bound with decreasing upper bounds in the fashion in which some other accelerated methods work in the Euclidean space \\cite{allen2014linear, diakonikolas2017accelerated, diakonikolas2017approximate, nesterov1983method}. Alternatively, we can see the approach in this work as solving the constrained non-convex optimization problem of minimizing the function $f:\\X\\to\\R$, $\\tilde{x} \\mapsto F(h^{-1}(\\tilde{x}))$:\n\\[\n \\text{minimize}\\ \\ f(\\tilde{x}),\\quad \\text{ for } \\tilde{x}\\in\\X.\n\\]\nIn the rest of the section, we will focus on the g-convex case. For simplicity, instead of solving the strongly g-convex case directly in an analogous way by finding a lower bound that is quadratic over $\\B$, we rely on the reductions of \\cref{sec:reductions} to obtain the accelerated algorithm in this case.\n\nThe following two lemmas show that finding the aforementioned affine lower bound is possible; the bound is defined as a function of $\\nabla f(\\tilde{x})$. We first gauge the deformations caused by the geodesic map $h$. Distances are deformed, the map $h$ is not conformal and, in spite of it being a geodesic map, the image of the geodesic $\\expon_x(\\lambda \\nabla F(x))$ is not mapped into the image of the geodesic $\\tilde{x}+\\tilde{\\lambda}\\nabla f(\\tilde{x})$, i.e. the direction of the gradient changes. We are able to find the affine lower bound after bounding these deformations.\n\\begin{lemma}\\label{lemma:deformations}\n Let $x,y\\in\\M$ be two distinct points, both also different from $x_0$ in part $b)$. Let $\\tilde{\\alpha}$ be the angle $\\angle \\tilde{x}_0\\tilde{x}\\tilde{y}$, formed by the vectors $\\tilde{x}_0-\\tilde{x}$ and $\\tilde{y}-\\tilde{x}$. Let $\\alpha$ be the corresponding angle between the vectors $\\expon_x^{-1}(x_0)$ and $\\expon_x^{-1}(y)$. 
Assume without loss of generality that $\\tilde{x} \\in \\operatorname{span}\\{\\tilde{e}_1\\}$ and $\\nabla f(\\tilde{x}) \\in \\operatorname{span}\\{\\tilde{e}_1, \\tilde{e}_2\\}$ for the canonical orthonormal basis $\\{\\tilde{e}_i\\}_{i=1}^d$. Let $e_i\\in T_x\\M$ be the unit vector such that $h$ maps the image of the geodesic $\\expon_x(\\lambda e_i)$ to the image of the geodesic $\\tilde{x}+\\tilde{\\lambda}e_i$, for $i=1,\\dots, d$, and $\\lambda, \\tilde{\\lambda} \\geq 0$. Then, the following holds.\n \\begin{enumerate}[label=\\alph*), leftmargin=*,]\n \\item Distance deformation:\n \\begin{align*}\n\\begin{aligned}\n K C_K^2(R) \\leq K \\frac{d(x,y)}{\\norm{\\tilde{x}-\\tilde{y}}} \\leq K.\n\\end{aligned}\n\\end{align*} \n \\item Angle deformation:\n\\begin{align*}\n\\begin{aligned}\n\\sin(\\alpha) = \\sin(\\tilde{\\alpha}) \\sqrt{\\frac{1+K\\norm{\\tilde{x}}^2}{1+K\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}, \\quad \\quad \\cos(\\alpha) = \\cos(\\tilde{\\alpha}) \\sqrt{\\frac{1}{1+K\\norm{\\tilde{x}}^2\\sin^2(\\tilde{\\alpha})}}.\n\\end{aligned}\n\\end{align*}\n \\item Gradient deformation:\n\\begin{align*}\n\\begin{aligned}\n \\nabla F(x) = (1+K\\norm{\\tilde{x}}^2)\\nabla f(\\tilde{x})_1 e_1 + \\sqrt{1+K\\norm{\\tilde{x}}^2}\\nabla f(\\tilde{x})_2 e_2 \\quad \\text{ and } \\quad e_i \\perp e_j \\ \\text{ for } i\\neq j.\n\\end{aligned}\n\\end{align*}\n Moreover, if $v \\in T_x\\M$ is a vector normal to $\\nabla F(x)$, then $\\tilde{v}$ is normal to $\\nabla f(\\tilde{x})$. \n \\end{enumerate}\n\\end{lemma}\n\nThe following lemma uses the deformations described in the previous one to obtain the affine lower bound on the function, given the gradient at a point $\\tilde{x}$. Note that \\cref{lemma:deformations}.c implies that we have $\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}= 0$ if and only if $\\innp{\\nabla F(x), y-x}= 0$. In the proof we lower bound general affine functions defined on $T_x\\M$ by affine functions in the Euclidean space $\\B$. 
This generality allows us to obtain a result with constants that only depend on $R$.\n\n\\begin{lemma}\\label{prop:bounding_hyperplane}\n Let $F:\\M\\to\\R$ be a differentiable function and let $f=F\\circ h^{-1}$.\n Then, there are constants $\\gamman, \\gammap \\in (0, 1]$ depending on $R$ such that for all $x, y\\in\\M$ satisfying $\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}\\neq 0$ we have: \n \\begin{align}\\label{eq:quotient_of_inner_products_is_bounded} \n \\begin{aligned}\n \\gammap \\leq\\frac{\\innp{\\nabla F(x), y-x}}{\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}} \\leq \\frac{1}{\\gamman}.\n \\end{aligned}\n\\end{align}\nIn particular, if $F$ is g-convex we have:\n\\begin{align} \\label{eq:quasiquasarconvexity}\n \\begin{aligned}\n f(\\tilde{x}) + \\frac{1}{\\gamman}\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}\\leq f(\\tilde{y}) &\\ & & {\\text{ if } \\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}}\\leq 0, \\\\\n f(\\tilde{x})+\\gammap\\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}} \\leq f(\\tilde{y}) &\\ & & {\\text{ if } \\innp{\\nabla f(\\tilde{x}), \\tilde{y}-\\tilde{x}}} \\geq 0.\n \\end{aligned}\n\\end{align}\n\\end{lemma}\n\nThe two inequalities in \\eqref{eq:quasiquasarconvexity} show the affine lower bound. Only the first one is needed to bound $f(\\tilde{x}^\\ast)=F(x^\\ast)$. The first inequality applied to $\\tilde{y}=\\tilde{x}^\\ast$ defines a model known in the literature as quasar-convexity or weak-quasi-convexity \\cite{guminov2017accelerated, hinder2019near,nesterov2018primal}, for which accelerated algorithms exist in the \\textit{unconstrained case}, provided smoothness is also satisfied. However, to the best of our knowledge, there is no known algorithm for solving the constrained case in an accelerated way. The condition in \\eqref{eq:quasiquasarconvexity} is, trivially, a relaxation of convexity that is stronger than quasar-convexity. 
We will make use of \\eqref{eq:quasiquasarconvexity} in order to obtain acceleration in the constrained setting, which is of independent interest. Recall that we need the constraint to guarantee bounded deformation due to the geometry. We also require smoothness of $f$. The following lemma shows that $f$ is as smooth as $F$, up to a constant depending on $R$. \n\n\\begin{lemma}\\label{lemma:smoothness_of_transformed_function}\n Let $F:\\M\\to\\R$ be an $L$-smooth function and $f=F\\circ h^{-1}$. Assume there is a point $x^\\ast \\in \\M$ such that $\\nabla F(x^\\ast)=0$. Then $f$ is $O(L)$-smooth.\n\\end{lemma}\n\n\nUsing the \\textit{approximate duality gap technique} \\cite{diakonikolas2017approximate} we obtain accelerated continuous dynamics for the optimization of the function $f$. Then we adapt AXGD to obtain an accelerated discretization. AXGD \\cite{diakonikolas2017accelerated} is a method that is based on an implicit Euler discretization of continuous accelerated dynamics and is fundamentally different from AGD and from techniques such as Linear Coupling \\cite{allen2014linear} or Nesterov's estimate sequence \\cite{nesterov1983method}. The latter techniques use a balancing gradient step at each iteration, and our use of a looser lower bound makes it difficult to guarantee that such a gradient step stays within the constraints. We state the acceleration theorem and provide a sketch of the proof in \\cref{subsec:sketch_of_my_axgd_proof}.\n\n\\begin{theorem} \\label{thm:acceleration_quasiquasarconvexity}\n Let $Q\\subset \\R^d$ be a convex set of diameter $2R$. Let $f:Q\\to\\R$ be an $\\tilde{L}$-smooth function satisfying \\eqref{eq:quasiquasarconvexity} with constants $\\gamman,\\gammap \\in (0,1]$. Assume there is a point $\\tilde{x}^\\ast\\in Q$ such that $\\nabla f(\\tilde{x}^\\ast)=0$. 
Then, we can obtain an $\\epsilon$-minimizer of $f$ using $\\bigotilde{ \\sqrt{\\tilde{L}\/(\\gamman^2\\gammap\\epsilon)} }$ queries to the gradient oracle of $f$.\n\\end{theorem}\n\nFinally, we obtain Riemannian acceleration as a direct consequence of \\cref{thm:acceleration_quasiquasarconvexity}, \\cref{prop:bounding_hyperplane} and \\cref{lemma:smoothness_of_transformed_function}.\n\\begin{theorem}[g-Convex Acceleration] \\label{thm:riemannian_acceleration}\n Let $F:\\M\\to\\R$ be an $L$-smooth and g-convex function and assume there is a point $x^\\ast\\in\\M$ satisfying $\\nabla F(x^\\ast)=0$. \\cref{alg:accelerated_gconvex} computes a point $x_t\\in\\M$ satisfying $F(x_t)-F(x^\\ast) \\leq \\epsilon$ using $\\bigotilde{\\sqrt{L\/\\epsilon}}$ queries to the gradient oracle.\n\\end{theorem}\n\nWe observe that if there is a geodesic map mapping a manifold into a convex subset of the Euclidean space, then the manifold must necessarily have constant sectional curvature, cf. Beltrami's Theorem \\cite{busemann1984general, kreyszig1991differential}.\nThis precludes a straightforward generalization of our method to the case of non-constant bounded sectional curvature.\n\n\\begin{algorithm}\n \\caption{Accelerated g-Convex Minimization}\n \\label{alg:accelerated_gconvex}\n\n\\begin{algorithmic}[1] \n \\REQUIRE Smooth and g-convex function $F:\\M\\to\\R$, for $\\M=\\H$ or $\\M=\\S$.\n \\Statex Initial point $x_0$; Constants $\\tilde{L}$, $\\gammap$, $\\gamman$. Geodesic map $h$ satisfying \\eqref{eq:characterization_of_geodesic_map_and_metric} and $h(x_0)=0$.\n \\Statex Bound on the distance to a minimum $R \\geq d(x_0,x^\\ast)$. 
Accuracy $\\epsilon$ and number of iterations $t$.\n \\hrule\n \\State $\\X \\defi h(\\expon_{x_0}(B(0, R)))\\subset \\B$; $\\quad f \\defi F \\circ h^{-1}\\quad$ and $\\quad\\psi(\\tilde{x}) \\defi \\frac{1}{2}\\norm{\\tilde{x}}^2$ \n \\State $\\tilde{z}_0 \\gets \\nabla \\psi(\\tilde{x}_0)$; $\\quad A_0 \\gets 0$\n \\FOR {$i \\text{ \\textbf{from} } 0 \\text{ to } t-1$}\n \\State $a_{i+1} \\gets (i+1)\\gamman^2\\gammap\/(2\\tilde{L})$\n \\State $A_{i+1} \\gets A_{i} + a_{i+1}$\n \\State $\\lambda \\gets \\text{BinaryLineSearch}(\\tilde{x}_i, \\tilde{z}_i, f, \\X, a_{i+1}, A_{i}, \\epsilon, \\tilde{L}, \\gamman, \\gammap)$ \\Comment{(cf. \\cref{alg:bin_search} in \\cref{app:acceleration})}\n \\State $\\tilde{\\chi}_{i} \\gets (1-\\lambda)\\tilde{x}_i + \\lambda \\nabla \\psi^\\ast(\\tilde{z}_i)$ \n \\State $\\tilde{\\zeta}_{i} \\gets \\tilde{z}_i-(a_{i+1}\/\\gamman)\\nabla f(\\tilde{\\chi}_i)$\n \\State $\\tilde{x}_{i+1} \\gets (1-\\lambda)\\tilde{x}_i + \\lambda \\nabla \\psi^\\ast(\\tilde{\\zeta}_i)$ \\Comment{$\\big[\\nabla \\psi^\\ast(\\tilde{p})= \\argmin_{\\tilde{z} \\in \\X} \\{\\norm{\\tilde{z}-\\tilde{p}}\\} = \\Pi_\\X(\\tilde{p})\\big]$}\n \\State $\\tilde{z}_{i+1} \\gets \\tilde{z}_i-(a_{i+1}\/\\gamman)\\nabla f(\\tilde{x}_{i+1})$\n \\ENDFOR\n \\State return $x_t = h^{-1}(\\tilde{x}_t)$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Sketch of the proof of \\cref{thm:acceleration_quasiquasarconvexity}.} \\label{subsec:sketch_of_my_axgd_proof}\n\nInspired by the \\textit{approximate duality gap technique} \\cite{diakonikolas2017approximate}, let $\\alpha_t$ be an increasing function of time $t$, and denote $A_t=\\int_{t_0}^t d\\alpha_\\tau= \\int_{t_0}^t \\dott{\\alpha}_\\tau d\\tau$. We define a continuous method that keeps a solution $\\tilde{x}_t$, along with a differentiable upper bound $U_t$ on $f(\\tilde{x}_t)$ and a lower bound $L_t$ on $f(\\tilde{x}^\\ast)$. In our case $f$ is differentiable, so we can just take $U_t=f(\\tilde{x}_t)$. 
The lower bound comes from \n\\begin{equation}\\label{eq:raw_lower_bound}\n f(\\tilde{x}^\\ast) \\geq \\frac{\\int_{t_{0}}^{t} f(\\tilde{x}_{\\tau}) d \\alpha_{\\tau}}{A_{t}}+\\frac{\\int_{t_{0}}^{t}\\frac{1}{\\gamman}\\innp{\\nabla f(\\tilde{x}_{\\tau}), \\tilde{x}^\\ast-\\tilde{x}_{\\tau}} d \\alpha_{\\tau}}{A_{t}},\n\\end{equation}\nafter applying some desirable modifications, like regularization with a $1$-strongly convex function $\\psi$ and removing the unknown $\\tilde{x}^\\ast$ by taking a minimum over $\\X$. Note \\eqref{eq:raw_lower_bound} comes from averaging \\eqref{eq:quasiquasarconvexity} for $\\tilde{y} = \\tilde{x}^\\ast$. Then, if we define the gap $G_t=U_t-L_t$ and design a method that forces $\\alpha_tG_t$ to be non-increasing, we can deduce $f(x_t)-f(x^\\ast) \\leq G_t \\leq \\alpha_{t_0}G_{t_0}\/\\alpha_t$. By forcing $\\frac{d}{dt}(\\alpha_tG_t) =0$, we naturally obtain the following continuous dynamics, where $z_t$ is a mirror point and $\\psi^\\ast$ is the Fenchel dual of $\\psi$, cf. \\cref{def:fenchel_dual}.\n\\begin{align}\\label{modified_accelerated_continuous_dynamics}\n\\begin{aligned}\n \\dott{\\tilde{z}}_{t}&=-\\frac{1}{\\gamman}\\dott{\\alpha}_{t} \\nabla f(\\tilde{x}_{t}); \\ \\ \\\n \\dott{\\tilde{x}}_{t}&=\\frac{1}{\\gamman}\\dott{\\alpha}_{t} \\frac{\\nabla \\psi^{\\ast}({\\tilde{z}}_{t})-\\tilde{x}_{t}}{\\alpha_{t}}; \\ \\ \\\n \\tilde{z}_{t_{0}}&=\\nabla \\psi(\\tilde{x}_{t_{0}}), \\tilde{x}_{t_{0}} \\in \\X\n\\end{aligned}\n\\end{align}\nWe note that except for the constant $\\gamman$, these dynamics match the accelerated dynamics used in the optimization of convex functions \\cite{diakonikolas2017approximate, diakonikolas2017accelerated, DBLP:conf\/nips\/KricheneBB15}. The AXGD algorithm \\cite{diakonikolas2017accelerated}, designed for the accelerated optimization of convex functions, discretizes the latter dynamics following an approximate implementation of implicit Euler discretization. 
This has the advantage of not needing a gradient step per iteration to compensate for some positive discretization error. Note that in our case we must use \\eqref{eq:quasiquasarconvexity} instead of convexity for a discretization. We are able to obtain the following discretization coming from an approximate implicit Euler discretization: \n\\begin{equation}\\label{general_rule_modified_quasar_axgd}\n\\left\\{\\begin{array}{l}\n \\tilde{\\chi}_{i}=\\frac{\\hat{\\gamma}_iA_i}{A_i \\hat{\\gamma}_i+a_{i+1}\/\\gamman} \\tilde{x}_{i}+\\frac{a_{i+1}\/\\gamman}{A_i\\hat{\\gamma}_i + a_{i+1}\/\\gamman} \\nabla \\psi^{\\ast}(\\tilde{z}_{i}); \\ \\ \\ \\ \\ \\ \n \\tilde{\\zeta}_i = \\tilde{z}_{i}-\\frac{a_{i+1}}{\\gamman} \\nabla f(\\tilde{\\chi}_{i}) \\\\\n \\tilde{x}_{i+1}=\\frac{\\hat{\\gamma}_iA_i}{A_i \\hat{\\gamma}_i+a_{i+1}\/\\gamman} \\tilde{x}_{i}+\\frac{a_{i+1}\/\\gamman}{A_i\\hat{\\gamma}_i + a_{i+1}\/\\gamman} \\nabla \\psi^{\\ast}(\\tilde{\\zeta}_i); \\ \\\n \\tilde{z}_{i+1} = \\tilde{z}_{i}-\\frac{a_{i+1}}{\\gamman} \\nabla f(\\tilde{x}_{i+1}) \n\\end{array}\\right.\n\\end{equation}\nwhere $\\hat{\\gamma}_i\\in[\\gammap, 1\/\\gamman]$ is a parameter, $\\tilde{x}_0\\in\\X$ is an arbitrary point, $\\tilde{z}_0=\\nabla \\psi(\\tilde{x}_0)$ and now $\\alpha_t$ is a discrete measure and $\\dott{\\alpha}_t$ is a weighted sum of Dirac delta functions $\\dott{\\alpha}_t=\\sum_{i=1}^\\infty a_i \\delta(t-(t_0+i-1))$. Compare \\eqref{general_rule_modified_quasar_axgd} with the discretization in AXGD \\cite{diakonikolas2017accelerated} that is equal to our discretization but with no $\\gamman$ or $\\hat{\\gamma}_i$. Or equivalently with $\\hat{\\gamma}_i = 1\/\\gamman$ and with no $\\gamman$ for the mirror descent updates of $\\tilde{\\zeta}_i$ and $\\tilde{z}_{i+1}$. 
However, not having convexity, in order to have per-iteration discretization error less than $\\hat{\\epsilon}\/A_T$, we require $\\hat{\\gamma}_i$ to be such that $\\tilde{x}_{i+1}$ satisfies\n\\begin{equation}\\label{eq:approximate_binary_search_main_paper}\n f(\\tilde{x}_{i+1})-f(\\tilde{x}_i) \\leq \\hat{\\gamma}_i \\innp{\\nabla f(\\tilde{x}_{i+1}), \\tilde{x}_{i+1}-\\tilde{x}_i} + \\hat{\\epsilon}, \n\\end{equation}\nwhere $\\hat{\\epsilon}$ is chosen so that the accumulated discretization error is $<\\epsilon\/2$ after having performed the steps necessary to obtain an $\\epsilon\/2$ minimizer. We would like to use \\eqref{eq:quasiquasarconvexity} to find such a $\\hat{\\gamma}_i$, but we need to take into account that we only know $\\tilde{x}_{i+1}$ a posteriori. Indeed, using \\eqref{eq:quasiquasarconvexity} we conclude that if we set $\\hat{\\gamma}_i$ to $1\/\\gamman$ or to $\\gammap$, then we either satisfy \\eqref{eq:approximate_binary_search_main_paper} or there is a value $\\hat{\\gamma}_i \\in (\\gammap, 1\/\\gamman)$ for which $\\innp{\\nabla f(\\tilde{x}_{i+1}), \\tilde{x}_{i+1}-\\tilde{x}_i}=0$, which satisfies the equation for $\\hat{\\epsilon}=0$. Then, using smoothness of $f$, existence of $\\tilde{x}^\\ast$ (which satisfies $\\nabla f(\\tilde{x}^\\ast)=0$), and boundedness of $\\X$, we can guarantee that a binary search finds a point satisfying \\eqref{eq:approximate_binary_search_main_paper} in $O(\\log (\\tilde{L}i\/(\\gamman\\hat{\\epsilon})))$ iterations. Each iteration of the binary search requires running \\eqref{general_rule_modified_quasar_axgd}, that is, one step of the discretization. Computing the final discretization error, we obtain acceleration after choosing appropriate learning rates $a_{i}$. \\cref{alg:accelerated_gconvex} contains the pseudocode of this algorithm along with the reduction of the problem from minimizing $F$ to minimizing $f$. 
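To make the update rule \\eqref{general_rule_modified_quasar_axgd} concrete, the following is a minimal Euclidean instantiation, a sketch that is not the paper's Riemannian algorithm: with $K=0$ there is no deformation, so $\\gamman=\\gammap=\\hat{\\gamma}_i=1$, no binary search is needed, and the coupling weight is $\\lambda = a_{i+1}\/A_{i+1}$. The quadratic objective and all names below are illustrative.

```python
import numpy as np

# Euclidean AXGD-style sketch: psi(x) = ||x||^2 / 2 restricted to X = B(0, R),
# hence grad psi*(z) = Pi_X(z), the Euclidean projection onto the ball.
L, R, T = 4.0, 1.0, 200
Q = np.diag([4.0, 1.0])                  # f(x) = x^T Q x / 2 is L-smooth, L = 4
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
proj = lambda z: z if np.linalg.norm(z) <= R else R * z / np.linalg.norm(z)

x = np.array([0.9, -0.3])                # x_0 in X
z = x.copy()                             # z_0 = grad psi(x_0)
A_sum = 0.0                              # A_0 = 0
for i in range(T):
    a = (i + 1) / (2 * L)                # a_{i+1} = (i + 1) / (2 L)
    lam = a / (A_sum + a)                # lambda = a_{i+1} / A_{i+1}
    chi = (1 - lam) * x + lam * proj(z)  # prox point chi_i
    zeta = z - a * grad(chi)             # implicit-Euler half step zeta_i
    x = (1 - lam) * x + lam * proj(zeta) # x_{i+1}
    z = z - a * grad(x)                  # mirror update z_{i+1}
    A_sum += a
print(f(x))                              # gap decreases as O(L R^2 / T^2)
```

The learning rates $a_{i+1}=(i+1)\/(2L)$ mirror the choice in \\cref{alg:accelerated_gconvex} with all the deformation constants set to one.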
We chose $\\psi(\\tilde{x}) \\defi \\frac{1}{2}\\norm{\\tilde{x}}^2$ as our strongly convex regularizer.\n\n\n\n\\section{Reductions} \\label{sec:reductions}\nThe construction of reductions proves very useful for facilitating the design of algorithms in different settings. Moreover, reductions are a helpful tool to infer new lower bounds without extra ad hoc analysis. We present two reductions. We will see in \\cref{coroll:acceleration_st_g_convex} and \\cref{example:application_of_reduction_to_st_convex} that one can obtain fully accelerated methods to minimize smooth and strongly g-convex functions from our accelerated methods for smooth and g-convex functions, and vice versa. These are generalizations of some reductions designed to work in the Euclidean space \\cite{allen2016optimal, allen2014linear}. The reduction to strongly g-convex functions takes into account the effect of the deformation of the space on the strong convexity of the function $F_y(x) = d(x,y)^2\/2$, for $x, y \\in \\M$. The reduction to g-convexity requires the rate of the algorithm that applies to g-convex functions to be proportional to the distance between the initial point and the optimum $d(x_0,x^\\ast)$. The proofs of the statements in this section can be found in the appendix. We will use $\\timens(\\cdot)$ and $\\time(\\cdot)$ to denote the time that the algorithms $\\mathcal{A}_{\\operatorname{ns}}$ and $\\mathcal{A}$ require, respectively, to perform the tasks defined below.\n\n\\begin{theorem}\\label{thm:reduction_to_g_convex}\n Let $\\M$ be a Riemannian manifold, let $F:\\M\\to\\R$ be an $L$-smooth and $\\mu$-strongly g-convex function, and let $x^\\ast$ be its minimizer. Let $x_0$ be a starting point such that $d(x_0,x^\\ast) \\leq R$. Suppose we have an algorithm $\\mathcal{A_{\\operatorname{ns}}}$ to minimize $F$, such that in time $T =\\timens(L, \\mu, R)$ it produces a point $\\hat{x}_T$ satisfying $F(\\hat{x}_T)-F(x^\\ast) \\leq \\mu \\cdot d(x_0,x^\\ast)^2\/4$. 
Then we can compute an $\\epsilon$-minimizer of $F$ in time $O(\\timens(L, \\mu, R)\\log(R^2\\mu\/\\epsilon))$.\n\\end{theorem}\n\n\\cref{thm:reduction_to_g_convex} implies that if we forget about the strong g-convexity of a function and treat it as if it were just g-convex, we can run in stages an algorithm designed for optimizing g-convex functions. The fact that the function is strongly g-convex is only used between stages, as the following corollary shows by making use of \\cref{alg:accelerated_gconvex}.\n\n\\begin{corollary}\\label{coroll:acceleration_st_g_convex}\n We can compute an $\\epsilon$-minimizer of an $L$-smooth and $\\mu$-strongly g-convex function $F:\\M\\to\\R$ in $\\bigoast{\\sqrt{L\/\\mu}\\log(\\mu\/\\epsilon)}$ queries to the gradient oracle, where $\\M=\\S$ or $\\M=\\H$.\n\\end{corollary}\n\nWe note that in the strongly g-convex case, by guaranteeing a decrease of the function value by a given factor, we also decrease the distance to $x^\\ast$ by another factor, so we can periodically recenter the geodesic map to reduce the constants produced by the deformations of the geometry; see the proof of \\cref{coroll:acceleration_st_g_convex}. Finally, we show the reverse reduction.\n\n\\begin{theorem}\\label{thm:reduction_to_g_st_convex}\n Let $\\M$ be a Riemannian manifold of bounded sectional curvature, let $F:\\M\\to\\R$ be an $L$-smooth and g-convex function, and assume there is a point $x^\\ast\\in\\M$ such that $\\nabla F(x^\\ast)=0$. Let $x_0$ be a starting point such that $d(x_0,x^\\ast) \\leq R$ and let $\\Delta$ satisfy $F(x_0)-F(x^\\ast) \\leq \\Delta $. 
Assume we have an algorithm $\\mathcal{A}$ that, given an $L$-smooth and $\\mu$-strongly g-convex function $\\hat{F}:\\M\\to\\R$, with minimizer in $\\expon_{x_0}(\\bar{B}(0,R))$, and any initial point $\\hat{x}_0\\in\\M$, produces a point $\\hat{x}\\in\\expon_{x_0}(\\bar{B}(0, R))$ in time $\\hat{T}=\\time(L,\\mu,\\M, R)$ satisfying $\\hat{F}(\\hat{x})-\\min_{x\\in\\M}\\hat{F}(x) \\leq (\\hat{F}(\\hat{x}_0) - \\min_{x\\in\\M}\\hat{F}(x))\/4$. Let $T=\\lceil\\log_2(\\Delta\/\\epsilon)\/2\\rceil+1$. Then, we can compute an $\\epsilon$-minimizer in time $\\sum_{t=0}^{T-1}\\time(L+2^{-t}\\Delta\\distorn\/R^2, 2^{-t}\\Delta\\distorp\/R^2, \\M, R)$, where $\\distorp$ and $\\distorn$ are constants that depend on $R$ and the bounds on the sectional curvature of $\\M$.\n\\end{theorem}\n\n\\begin{example}\\label{example:application_of_reduction_to_st_convex}\n Applying the reduction of \\cref{thm:reduction_to_g_st_convex} to the algorithm in \\cref{coroll:acceleration_st_g_convex}, we can optimize $L$-smooth and g-convex functions defined on $\\H$ or $\\S$ with a gradient oracle complexity of $\\bigotilde{L\/\\sqrt{\\epsilon}}$. \n\\end{example}\n\nNote that this reduction cannot be applied to the locally accelerated algorithm in \\citep{zhang2018towards}, which we discussed in the related work section. The reduction runs in stages by adding decreasing $\\mu_i$-strongly convex regularizers until we reach $\\mu_i =O(\\epsilon)$, and the local assumption required by the algorithm in \\citep{zhang2018towards} on the closeness to the minimum cannot be guaranteed. In \\citep{ahn2020nesterov}, the authors give an unconstrained global algorithm whose rates are strictly better than those of RGD. The reduction could be applied to a constrained version of this algorithm to obtain a method for smooth and g-convex functions defined on manifolds of bounded sectional curvature whose rates are strictly better than those of RGD. 
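A Euclidean toy version of the restart scheme behind \\cref{thm:reduction_to_g_convex} can illustrate how strong convexity is used only between stages. Plain gradient descent stands in for the g-convex solver $\\mathcal{A}_{\\operatorname{ns}}$, and all constants are illustrative, not the paper's:

```python
import numpy as np

# Restart-reduction sketch: inside each stage the mu-strongly convex F is
# treated as merely convex; strong convexity is invoked only between stages
# to shrink the distance bound R.
L, mu, eps = 10.0, 0.5, 1e-6
Q = np.diag([L, mu])
F = lambda x: 0.5 * x @ Q @ x           # minimizer x* = 0, F* = 0
grad = lambda x: Q @ x

def inner(x0, budget=200):
    # Stand-in for A_ns: gradient descent with step 1/L, run long enough that
    # the stage goal F(x) - F* <= mu * d(x0, x*)^2 / 4 is comfortably met.
    x = x0.copy()
    for _ in range(budget):
        x = x - grad(x) / L
    return x

x, R = np.array([3.0, -2.0]), 4.0       # d(x0, x*) <= R
stages = int(np.ceil(np.log2(mu * R**2 / eps)))
for _ in range(stages):
    x = inner(x)
    # Between stages, strong convexity gives
    # mu * d(x, x*)^2 / 2 <= F(x) - F* <= mu * R^2 / 4,
    # so d(x, x*)^2 <= R^2 / 2 and the distance bound can shrink.
    R /= np.sqrt(2)
print(F(x))                             # below eps after the last stage
```

The number of stages matches the $O(\\log(R^2\\mu\/\\epsilon))$ factor in the theorem; in the Riemannian setting the shrinking bound also allows recentering the geodesic map, as noted after \\cref{coroll:acceleration_st_g_convex}.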
\n\n\\section{Conclusion}\n\nIn this work we proposed a first-order method with the same rates as AGD, up to constants and log factors, for the optimization of smooth and g-convex or strongly g-convex functions defined on a manifold other than the Euclidean space. We focused on the hyperbolic and spherical spaces, which have constant sectional curvature. The study of geometric properties for the constant sectional curvature case can usually be employed to conclude that a space of bounded sectional curvature satisfies a property that is in between the ones for the cases of constant extremal sectional curvature. Several previous algorithms have been developed for optimization on Riemannian manifolds of bounded sectional curvature by utilizing this philosophy, for instance \\cite{ahn2020nesterov,ferreira2019gradient,DBLP:journals\/jota\/WangLY15,zhang2016first,zhang2018towards}. In future work, we will attempt to use the techniques and insights developed in this work to give an algorithm with the same rates as AGD for manifolds of bounded sectional curvature.\n\nThe key technique of our algorithm is the effective lower bound aggregation. Indeed, lower bound aggregation is the main hurdle to obtaining accelerated first-order methods on Riemannian manifolds. Whereas the process of obtaining effective decreasing upper bounds on the function works similarly to the Euclidean case---the same approach of locally minimizing the upper bound given by the smoothness assumption is used---obtaining adequate lower bounds proves to be a difficult task. We usually want a simple lower bound such that it, or a regularized version of it, can be easily optimized globally. We also want the lower bound to combine the knowledge that g-convexity or strong g-convexity provides for all the queried points, commonly via an average. 
These Riemannian convexity assumptions provide simple lower bounds, namely linear or quadratic, but each only with respect to the tangent space of its queried point. The deformations of the space complicate the aggregation of the lower bounds. Our work deals with this problem by finding appropriate lower bounds via the use of a geodesic map, taking into account the deformations incurred, to derive a fully accelerated algorithm. We also needed to deal with other technical problems. Firstly, we needed a lower bound on the whole function and not only on $F(x^\\ast)$, for which we had to construct two different affine lower bounds, obtaining a relaxation of convexity. Secondly, we had to use an implicit discretization of an accelerated continuous dynamics, since at least the vanilla application of usual approaches like Linear Coupling \\cite{allen2014linear} or Nesterov's estimate sequence \\cite{nesterov1983method}, which can be seen as a forward Euler discretization of the accelerated dynamics combined with a balancing gradient step \\cite{diakonikolas2017approximate}, did not work in our constrained case. We interpret that the difficulty arises from trying to keep the gradient step inside the constraints while being able to compensate for a lower bound that is looser by a constant factor.\n\nWe also provided two reductions, which are generally useful for designing new algorithms and proving new lower bounds. Improving the reduction to smooth and strongly g-convex functions with respect to the curvature constants is another interesting direction for future work.\n\n\n\\ackShort{We thank Mario Lezcano-Casado for helpful discussions on this work. We thank Varun Kanade and Patrick Rebeschini for proofreading this work. 
This work was supported by EP\/N509711\/1 from the EPSRC MPLS division, grant No 2053152.}\n\n\n\\nolinks\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nReinforcement learning (RL) has achieved impressive performance on many continuous control tasks~\\cite{schulman2015high,lillicrap2015continuous}, and\npolicy optimization is one of the main workhorses for such applications \\cite{duan2016benchmarking,sutton2000policy,schulman2015trust,schulman2017proximal}. \nRecently, there have been extensive research efforts studying the global convergence properties of policy optimization methods on benchmark control problems including linear quadratic regulator (LQR)~\\cite{pmlr-v80-fazel18a,bu2019lqr,malik2019derivative,yang2019provably,mohammadi2021convergence,furieri2020learning,hambly2021policy}, stabilization \\cite{perdomo2021stabilizing,ozaslan2022computing}, linear robust\/risk-sensitive control~\\cite{zhang2021policy,zhang2020stability,gravell2020learning,zhang2021derivative,zhao2021primal,cui2022mixed}, Markov jump linear quadratic control~\\cite{jansch2020convergence,jansch2020policyMDP,jansch2020policy,rathod2021global}, Lur'e system control~\\cite{qu2021exploiting}, output feedback control~\\cite{fatkhullin2020optimizing,zheng2021analysis,li2021distributed,duan2021optimization,duan2022optimization,mohammadi2021lack,zheng2022escaping}, and dynamic filtering \\cite{umenberger2022globally}.\nFor all these benchmark problems, the objective function in the policy optimization formulation is always differentiable over the entire feasible set, and the existing convergence theory heavily relies on this fact.\nConsequently, an important open question remains whether direct policy search can enjoy similar global convergence properties when applied to the famous $\\mathcal{H}_\\infty$ control problem whose objective function can be non-differentiable over certain points in the policy space 
\\cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs,noll2005spectral}. \nDifferent from LQR which considers stochastic disturbance sequences, $\\mathcal{H}_\\infty$ control directly addresses the worst-case disturbance, and provides arguably the most fundamental robust control paradigm \\cite{zhou96,Dullerud99,skogestad2007multivariable,basar95,doyle1988state,Gahinet1994}. \nRegarding the connection with RL, it has also been shown that $\\mathcal{H}_\\infty$ control can be applied to stabilize the training of adversarial RL schemes in the linear quadratic setup \\cite[Section 5]{zhang2020stability}.\nGiven the fundamental importance of $\\mathcal{H}_\\infty$ control, we view it as an important benchmark for understanding the theoretical properties of direct policy search in the context of robust control and adversarial RL. In this work, we study and prove the global convergence properties of direct policy search on the $\\mathcal{H}_\\infty$ state-feedback synthesis problem. \n\n\n\n\n\n\n\n\n\nThe objective of the $\\mathcal{H}_\\infty$ state-feedback synthesis is to design a linear state-feedback policy that stabilizes the closed-loop system and minimizes the $\\mathcal{H}_\\infty$ norm from the disturbance to a performance signal at the same time.\nThe design goal is also equivalent to synthesizing a state-feedback policy that minimizes a quadratic cost subject to the worst-case disturbance. We will present the problem formulation for the $\\mathcal{H}_\\infty$ state-feedback synthesis and discuss such connections in Section~\\ref{sec:PF}. 
Essentially, $\\mathcal{H}_\\infty$ state-feedback synthesis can be formulated as a constrained policy optimization\nproblem $\\min_{K\\in\\mathcal{K}} J(K)$, where the decision variable $K$ is a matrix parameterizing the linear state-feedback policy, the objective function $J(K)$ is the closed-loop $\\mathcal{H}_\\infty$-norm for given $K$, and the feasible set $\\mathcal{K}$ consists of all the linear state-feedback policies stabilizing the closed-loop dynamics. Notice that the feasible set for the $\\mathcal{H}_\\infty$ state-feedback control problem is the same as the nonconvex feasible set for the LQR policy search problem~\\cite{pmlr-v80-fazel18a,bu2019lqr}. However, the objective function $J(K)$ for the $\\mathcal{H}_\\infty$ control problem\ncan be non-differentiable over certain feasible points, introducing new difficulties to direct policy search. There has been a large family of nonsmooth $\\mathcal{H}_\\infty$ policy search algorithms developed based on the concept of Clarke subdifferential \\cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}. \nHowever, a satisfying global convergence theory is still missing from the literature. Our paper bridges this gap by making the following two contributions.\n\\begin{enumerate}\n\\item We show that all Clarke stationary points for the $\\mathcal{H}_\\infty$ state-feedback policy search problem are also global minima. 
\n\\item We identify the coerciveness of the $\\mathcal{H}_\\infty$ cost function and use this property to show that Goldstein's subgradient method \\cite{goldstein1977optimization} and its implementable variants \\cite{pmlr-v119-zhang20p, davis2021gradient,burke2020gradient,burke2005robust,kiwiel2007convergence,kiwiel2010nonderivative} can be guaranteed to stay in the nonconvex feasible set of stabilizing policies during the optimization process and eventually find the global optimal solution of the $\\mathcal{H}_\\infty$ state-feedback control problem. Finite-time complexity bounds for finding $(\\delta,\\epsilon)$-stationary points are also provided.\n\\end{enumerate}\nOur work sheds new light on the theoretical properties of policy optimization methods on $\\mathcal{H}_\\infty$ control problems, and serves as a meaningful initial step towards a general global convergence theory of direct policy search on nonsmooth robust control synthesis.\n\nFinally, it is worth clarifying the differences between $\\mathcal{H}_\\infty$ control and mixed $\\mathcal{H}_2\/\\mathcal{H}_\\infty$ design. For mixed $\\mathcal{H}_2\/\\mathcal{H}_\\infty$ control, the objective is to design a stabilizing policy that minimizes an $\\mathcal{H}_2$ performance bound and satisfies an $\\mathcal{H}_\\infty$ constraint at the same time \\cite{glover1988state,khargonekar1991mixed,kaminer1993mixed,mustafa1991lqg}. In other words, mixed $\\mathcal{H}_2\/\\mathcal{H}_\\infty$ control aims at improving the average $\\mathcal{H}_2$ performance while ``maintaining\" a certain level of robustness by keeping the closed-loop $\\mathcal{H}_\\infty$ norm to be smaller than a pre-specified number. In contrast, $\\mathcal{H}_\\infty$ control aims at ``improving\" the system robustness and the worst-case performance via achieving the smallest closed-loop $\\mathcal{H}_\\infty$ norm. 
In \\cite{zhang2021policy}, it has been shown that the natural policy gradient method initialized from a policy satisfying the $\\mathcal{H}_\\infty$ constraint can be guaranteed to maintain the $\\mathcal{H}_\\infty$ requirement during the optimization process and eventually converge to the optimal solution of the mixed design problem. However, notice that the objective function for the mixed $\\mathcal{H}_2\/\\mathcal{H}_\\infty$ control problem is still differentiable over all the feasible points, and hence the analysis technique in \\cite{zhang2021policy} cannot be applied to our $\\mathcal{H}_\\infty$ control setting. \nMore discussions on the connections and differences between these two problems will be given in the supplementary material. \n\n\n\n\n\n\n\n\\section{Problem Formulation and Preliminaries}\n\\label{sec:PF}\n\\subsection{Notation}\nThe set of $p$-dimensional real vectors is denoted as $\\field{R}^p$.\nFor a matrix $A$, we use the notation \\(A^\\mathsf{T}\\), \\( \\|A\\| \\), \\(\\tr{A}\\), \\(\\sigma_{\\min}(A)\\), \\(\\norm{A}_2\\), and \\(\\rho(A) \\) to denote its transpose, largest singular value, trace, smallest singular value, Frobenius norm, and spectral radius, respectively. \nWhen a\nmatrix $P$ is negative semidefinite (definite), we will use the notation $P \\preceq (\\prec) 0$. When $P$ is positive\nsemidefinite (definite), we use the notation $P \\succeq (\\succ) 0$.\nConsider a (real) sequence $\\mathbf{u}:=\\{u_0,u_1,\\cdots\\}$ where\n$u_t \\in \\field{R}^{n_u}$ for all $t$. This sequence is said to be in $\\ell_2^{n_u}$\nif $ \\sum_{t=0}^\\infty \\| u_t\\|^2<\\infty$ where $\\|u_t\\|$ denotes\nthe standard (vector) 2-norm of $u_t$. In addition, the $2$-norm for\n$\\mathbf{u} \\in \\ell_2^{n_u}$ is defined as\n$\\|\\mathbf{u}\\|^2:=\\sum_{t=0}^\\infty \\| u_t\\|^2$. 
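A quick numerical illustration of the sequence norms just defined (our own toy example, not from the paper): for the scalar sequence $u_t = 2^{-t}$ the squared $2$-norm is the geometric series $\sum_{t=0}^\infty 4^{-t} = 4\/3$, which truncated partial sums approach rapidly.

```python
# ||u||^2 = sum_t ||u_t||^2 for u_t = 2^{-t}; the series sums to 4/3,
# so u is in l_2^1. Sixty terms already agree to machine precision.
norm_sq = sum((2.0 ** -t) ** 2 for t in range(60))
```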
\n\n\n\n\n\n\n\n\\subsection{Problem statement: $\\mathcal{H}_\\infty$ state-feedback synthesis and a policy optimization formulation}\n\n\nWe consider the following linear time-invariant (LTI) system\n\\begin{align}\\label{eq:lti1}\nx_{t+1}=Ax_t+Bu_t+w_t, \\,\\,x_0=0\n\\end{align}\nwhere $x_t\\in\\field{R}^{n_x}$ is the state, $u_t\\in\\field{R}^{n_u}$ is the control action, and $w_t\\in\\field{R}^{n_w}$ is the disturbance. We have $A\\in \\field{R}^{n_x\\times n_x}$, $B\\in \\field{R}^{n_x\\times n_u}$, and $n_w=n_x$. \nWe denote $\\mathbf{x}:=\\{x_0,x_1,\\cdots\\}$, $\\mathbf{u}:=\\{u_0,u_1,\\cdots\\}$, and $\\mathbf{w}:=\\{w_0, w_1, \\cdots\\}$.\nThe initial condition is fixed as $x_0=0$.\nThe objective of $\\mathcal{H}_\\infty$ control is to choose $\\{u_t\\}$ to minimize the quadratic cost $\\sum_{t=0}^\\infty (x_t^\\mathsf{T} Q x_t+u_t^\\mathsf{T} R u_t)$ in the presence of the worst-case $\\ell_2$ disturbance satisfying $\\norm{\\mathbf{w}}\\le 1$. In this paper, the following assumption is adopted.\n\n\n\n\\begin{assumption}\\label{assump1}\nThe matrices $Q$ and $R$ are positive definite. The matrix pair $(A,B)$ is stabilizable.\n\\end{assumption}\n\n\n\n\n\n In $\\mathcal{H}_\\infty$ control, $\\{w_t\\}$ is considered to be the worst-case disturbance satisfying the $\\ell_2$ norm bound $\\norm{\\mathbf{w}}\\le 1$, and can be chosen in an adversarial manner. \nThis is different from LQR which makes stochastic assumptions on $\\{w_t\\}$. \nWithout loss of generality, we have chosen the $\\ell_2$ upper bound on $\\mathbf{w}$ to be $1$. In principle, we can formulate the $\\mathcal{H}_\\infty$ control problem with any arbitrary $\\ell_2$ upper bound on $\\mathbf{w}$, and there is no technical difference. 
We will provide more explanations on this fact in the supplementary material.\nTherefore, $\\mathcal{H}_\\infty$ control can be formulated as the following minimax problem\n\\begin{align}\\label{eq:minmax}\n\\min_{\\mathbf{u}}\\max_{\\mathbf{w}:\\norm{\\mathbf{w}}\\le 1} \\sum_{t=0}^\\infty (x_t^\\mathsf{T} Q x_t+ u_t^\\mathsf{T} R u_t)\n\\end{align}\nUnder Assumption \\ref{assump1}, it is well known that \nthe optimal solution for \\eqref{eq:minmax} can be achieved using a linear state-feedback policy $u_t=-Kx_t$ (see \\cite{basar95}).\nGiven any $K$, \nthe LTI system \\eqref{eq:lti1} can be rewritten as\n\\begin{align}\\label{eq:lti2}\nx_{t+1}=(A-BK)x_t+w_t, \\,x_0=0.\n\\end{align}\nNow we define $z_t=(Q+K^\\mathsf{T} R K)^{\\frac{1}{2}}x_t$. We have $\\norm{z_t}^2=x_t^\\mathsf{T} (Q+K^\\mathsf{T} R K) x_t=x_t^\\mathsf{T} Q x_t+u_t^\\mathsf{T} R u_t$. \nWe denote $\\mathbf{z}:=\\{z_0,z_1,\\cdots\\}$. If $\\mathbf{x}\\in \\ell_2^{n_x}$, then we have $\\norm{\\mathbf{z}}^2=\\sum_{t=0}^\\infty (x_t^\\mathsf{T} Q x_t+u_t^\\mathsf{T} R u_t)<+\\infty$. \nTherefore, the closed-loop LTI system \\eqref{eq:lti2} can be viewed as a linear operator mapping any disturbance sequence $\\{w_t\\}$ to another sequence $\\{z_t\\}$. We denote this operator as $G_K$, where the subscript highlights the dependence of this operator on $K$.\nIf $K$ is stabilizing, i.e. $\\rho(A-BK)<1$, then $G_K$ is bounded in the sense that it maps any $\\ell_2$ sequence $\\mathbf{w}$ to \nanother sequence $\\mathbf{z}$ in $\\ell_2^{n_x}$. 
For any stabilizing $K$, the $\\ell_2\\rightarrow \\ell_2$ induced norm of $G_K$ can be defined as:\n\\begin{align}\n\\norm{G_K}_{2\\rightarrow 2}:=\\sup_{0\\neq \\norm{\\mathbf{w}}\\le 1}\\frac{\\norm{\\mathbf{z}}}{\\norm{\\mathbf{w}}}\n\\end{align}\nSince $G_K$ is a linear operator, it is straightforward to show\n\\begin{align*}\n\\norm{G_K}_{2\\rightarrow 2}^2:=\\max_{\\mathbf{w}:\\norm{\\mathbf{w}}\\le 1} \\sum_{t=0}^\\infty x_t^\\mathsf{T} (Q+K^\\mathsf{T} R K) x_t=\\max_{\\mathbf{w}:\\norm{\\mathbf{w}}\\le 1} \\sum_{t=0}^\\infty (x_t^\\mathsf{T} Q x_t+u_t^\\mathsf{T} R u_t).\n\\end{align*}\n\nTherefore, the minimax optimization problem \\eqref{eq:minmax} can be rewritten as the policy optimization problem:\n$\\min_{K\\in\\mathcal{K}}\\norm{G_K}_{2\\rightarrow 2}^2$, where $\\mathcal{K}$ is the set of all linear state-feedback stabilizing policies, i.e. $\\mathcal{K}=\\{K\\in\\field{R}^{n_x\\times n_u}: \\,\\rho(A-BK)<1\\}$. In the robust control literature \\cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}, it is standard to drop the square in the cost function and just reformulate \\eqref{eq:minmax} as $\\min_{K\\in\\mathcal{K}} \\norm{G_K}_{2\\rightarrow 2}$. This is exactly the policy optimization formulation for $\\mathcal{H}_\\infty$ state-feedback control.\nThe main reason why\nthis problem is termed as $\\mathcal{H}_\\infty$ state-feedback control is that in the frequency domain, $G_K$ can be viewed as a transfer function which lives in the Hardy $\\mathcal{H}_\\infty$ space and has an $\\mathcal{H}_\\infty$ norm being exactly equal to $\\norm{G_K}_{2\\rightarrow 2}$. 
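Both views of $\norm{G_K}_{2\rightarrow 2}$, the time-domain induced norm and the peak frequency-domain gain, can be checked numerically on a toy scalar example. The numbers below are our own illustration, not from the paper: $A=0.5$, $B=1$, $K=0$, $Q=R=1$, so the closed loop is $x_{t+1}=0.5x_t+w_t$ and $z_t=x_t$.

```python
import math

a_cl, T = 0.5, 400   # closed-loop pole A - BK and simulation horizon

# Time domain: a near-worst-case input is the constant (frequency-zero)
# signal w_t = 1; the truncated ratio ||z|| / ||w|| approaches the
# induced norm 1 / (1 - 0.5) = 2 from below as T grows.
x, zz, ww = 0.0, 0.0, 0.0
for _ in range(T):
    zz += x * x          # z_t = (Q + K' R K)^{1/2} x_t = x_t here
    ww += 1.0
    x = a_cl * x + 1.0
ratio = math.sqrt(zz / ww)

# Frequency domain: the transfer function 1 / (e^{jw} - 0.5) peaks at
# w = 0 with magnitude 2, matching the induced norm.
grid = [2.0 * math.pi * i / 1000 for i in range(1000)]
peak = max(1.0 / abs(complex(math.cos(w), math.sin(w)) - a_cl) for w in grid)
```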
\nApplying the frequency-domain formula for the $\\mathcal{H}_\\infty$ norm, we can calculate $\\norm{G_K}_{2\\rightarrow 2}$ as\n\\begin{align}\\label{eq:hinfcost}\n\\norm{G_K}_{2\\rightarrow 2}=\\sup_{\\omega\\in[0, 2\\pi]}\\lambda_{\\max}^{1\/2}\\big((e^{-j\\omega}I-A+BK)^{-\\mathsf{T}}(Q+K^{\\mathsf{T}}RK)(e^{j\\omega}I-A+BK)^{-1}\\big),\n\\end{align}\nwhere $I$ is the identity matrix, and $\\lambda_{\\max}$ denotes the largest eigenvalue of a given symmetric matrix.\nTherefore, eventually the $\\mathcal{H}_\\infty$ state-feedback control problem can be formulated as \n\\begin{align}\\label{eq:hinfopt}\n\\min_{K\\in\\mathcal{K}} J(K),\n\\end{align}\nwhere $J(K)$ is equal to the $\\mathcal{H}_\\infty$ norm specified by \\eqref{eq:hinfcost}. Classical $\\mathcal{H}_\\infty$ control theory typically solves \\eqref{eq:hinfopt} via introducing extra Lyapunov variables and\nreparameterizing the problem into a higher-dimensional convex domain\nover which convex optimization algorithms can be applied~\\cite{zhou96,Dullerud99,befb94}.\nIn this paper, we revisit \\eqref{eq:hinfopt} as a benchmark for direct policy search, and discuss how to search the optimal solution of \\eqref{eq:hinfopt} in the policy space directly. Applying direct policy search to address \\eqref{eq:hinfopt} leads to a nonconvex nonsmooth optimization problem.\nA main technical challenge is that the objective function \\eqref{eq:hinfcost} can be non-differentiable over some important feasible points \\cite{apkarian2006controller,apkarian2006nonsmooth,arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}.\n\n\n\\subsection{Direct policy search: A nonsmooth optimization perspective}\nNow we briefly review several key facts known for the $\\mathcal{H}_\\infty$ policy optimization problem \\eqref{eq:hinfopt}. \n\\begin{prop} \\label{prop:1}\nThe set $\\mathcal{K}=\\{K: \\rho(A-BK)<1\\}$ is open. In general, it can be unbounded and nonconvex. 
The cost function \\eqref{eq:hinfcost} is continuous and nonconvex in $K$.\n\\end{prop}\nSee \\cite{pmlr-v80-fazel18a,bu2019topological} for some related proofs. We have also included more explanations in the supplementary material. An immediate consequence is that \\eqref{eq:hinfopt} becomes a nonconvex optimization problem.\nAnother important fact is that the objective function \\eqref{eq:hinfcost} is also nonsmooth. Specifically, \\eqref{eq:hinfcost} is subject to two sources of nonsmoothness. Based on \\eqref{eq:hinfcost}, we can see that the largest eigenvalue for a fixed frequency $\\omega$ is nonsmooth, and the optimization step over $\\omega\\in [0, 2\\pi]$ is also nonsmooth. In particular, the $\\mathcal{H}_\\infty$ objective function \\eqref{eq:hinfcost} can be non-differentiable over important feasible points, e.g. optimal points. \nFortunately, it is well known\\footnote{We cannot find a formal statement of Proposition \\ref{prop:regular} in the literature. However, based on our discussion with other researchers who have worked on nonsmooth $\\mathcal{H}_\\infty$ synthesis for a long time, this fact is well known and hence we do not claim any credit in deriving this result. Indeed, although not explicitly stated, the proof of Proposition \\ref{prop:regular} is hinted at in the last paragraph of \\cite[Section III]{apkarian2006nonsmooth} given the facts that the $\\mathcal{H}_\\infty$ norm is a convex function over the Hardy $\\mathcal{H}_\\infty$ space (which is a Banach space) and the mapping from $K\\in\\mathcal{K}$ to the (infinite-dimensional) Hardy $\\mathcal{H}_\\infty$ space is strictly differentiable.\nFor completeness, a simple proof of Proposition~\\ref{prop:regular} based on Clarke's chain rule \\cite{clarke1990optimization} is included in the supplementary material.} that the $\\mathcal{H}_\\infty$ objective function \\eqref{eq:hinfcost} has the following desirable property, and hence it is Clarke subdifferentiable. 
\n\\begin{prop}\\label{prop:regular}\nThe $\\mathcal{H}_\\infty$ objective function \\eqref{eq:hinfcost} is locally Lipschitz and subdifferentially regular over the stabilizing feasible set $\\mathcal{K}$.\n\\end{prop}\n\nRecall that $J:\\mathcal{K}\\rightarrow \\field{R}$ is locally Lipschitz if for\nany bounded $S\\subset \\mathcal{K}$, there exists a constant $L > 0$ such that $|J(K)-J(K')|\\le L \\norm{K-K'}_2$ for all $K,K'\\in S$. \nBased on Rademacher's theorem, a locally Lipschitz function is differentiable almost everywhere, and the Clarke subdifferential is well defined for all feasible points. Formally, the Clarke subdifferential is defined as \n\\begin{align}\n\\partial_C J(K):=\\conv\\{\\lim_{i\\rightarrow \\infty}\\nabla J(K_i):K_i\\rightarrow K,\\,K_i\\in\\dom(\\nabla J)\\subset \\mathcal{K}\\}\n\\end{align}\nwhere $\\conv$ denotes the convex hull. Then we know that the Clarke subdifferential for the $\\mathcal{H}_\\infty$ objective function \\eqref{eq:hinfcost} is well defined for all $K\\in \\mathcal{K}$. We say that $K$ is a Clarke stationary point if $0\\in \\partial_C J(K)$. The following fact is also well known.\n\\begin{prop} \\label{pro3}\nIf $K$ is a local min of $J$, then $0\\in\\partial_C J(K)$ and $K$ is a Clarke stationary point.\n\\end{prop}\nUnder Assumption \\ref{assump1}, it is well known that there exists $K^*\\in \\mathcal{K}$ achieving the minimum of~\\eqref{eq:hinfopt}. Since $\\mathcal{K}$ is an open set, $K^*$ has to be an interior point of $\\mathcal{K}$ and hence $K^*$ has to be a Clarke stationary point. 
In Section \\ref{sec:land}, we will prove that any Clarke stationary point for \\eqref{eq:hinfopt} is actually a global minimum.\n\nNow we briefly elaborate on the subdifferentially regular property stated in Proposition \\ref{prop:regular}.\nFor any given direction $d$ (which has the same dimension as $K$), the generalized Clarke directional derivative of $J$ is defined as\n\\begin{align}\nJ^{\\circ}(K,d):=\\limsup_{K'\\rightarrow K,\\, t\\searrow 0} \\frac{J(K'+td)-J(K')}{t}.\n\\end{align}\nIn contrast, the (ordinary) directional derivative is defined as follows (when it exists)\n\\begin{align}\nJ'(K,d):=\\lim_{t\\searrow 0} \\frac{J(K+td)-J(K)}{t}.\n\\end{align}\n\nIn general, the Clarke directional derivative can be different from the (ordinary) directional derivative. Sometimes\n the ordinary directional derivative may not even exist.\nThe objective function $J(K)$ is subdifferentially regular if for every $K\\in \\mathcal{K}$, the ordinary directional\nderivative always exists and coincides with the generalized one for every direction, i.e. $J'(K,d)=J^{\\circ}(K,d)$.\nThe most important consequence of the subdifferentially regular property is given as follows.\n\\begin{cor} \\label{cor1}\nSuppose $K^\\dag\\in\\mathcal{K}$ is a Clarke stationary point for $J$. If $J$ is subdifferentially regular, then the directional derivatives $J'(K^\\dag, d)$ are non-negative for all $d$.\n\\end{cor}\nSee \\cite[Theorem 10.1]{rockafellar2009variational} for related proofs and more discussions. Notice that having non-negative directional derivatives does not mean that the point $K^\\dag$ is a local minimum. Nevertheless, the above fact will be used in our main theoretical developments.\nNow we briefly summarize two key difficulties in establishing a global convergence theory for direct policy search on the $\\mathcal{H}_\\infty$ state-feedback control problem \\eqref{eq:hinfopt}. 
First, it is unclear whether the direct policy search method will get stuck at some local minimum. Second, it is challenging to guarantee the direct policy search method to stay in the nonconvex feasible set $\\mathcal{K}$ during the optimization process. Since $\\mathcal{K}$ is nonconvex, we cannot use a projection step to maintain feasibility. Our main results will address these two issues. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Goldstein subdifferential}\n\nGenerating a good descent direction for nonsmooth optimization is not trivial. Many nonsmooth optimization algorithms are based on the concept of Goldstein subdifferential \\cite{goldstein1977optimization}. Before proceeding to our main result, we briefly review this concept here.\n\n\\begin{defn}[Goldstein subdifferential]\nSuppose $J$ is locally Lipschitz. Given a point $K\\in\\mathcal{K}$ and a parameter $\\delta>0$, the Goldstein subdifferential of $J$ at $K$ is defined to be the following set \n\\begin{align} \\label{Gold_sub}\n\\partial_\\delta J(K):=\\conv \\left\\{\\cup_{K'\\in\\mathbb{B}_\\delta(K)} \\partial_C J(K')\\right\\},\n\\end{align}\nwhere $\\mathbb{B}_\\delta(K)$ denotes the $\\delta$-ball around $K$. The above definition implicitly requires $\\mathbb{B}_\\delta(K)\\subset\\mathcal{K}$.\n\\end{defn}\nBased on the above definition, one can further define the notion of $(\\delta,\\epsilon)$-stationarity.\nA point $K$ is said to be $(\\delta,\\epsilon)$-stationary if $\\dist(0, \\partial_\\delta J(K))\\le \\epsilon$.\n It is well-known that the minimal norm\nelement of the Goldstein subdifferential generates a good descent direction. This fact is stated as follows.\n\\begin{prop}[\\cite{goldstein1977optimization}]\nLet $F$ be the minimal norm element in $\\partial_\\delta J(K)$. Suppose $K-\\alpha F\/\\norm{F}_2\\in \\mathcal{K}$ for any $0\\le \\alpha \\le \\delta$. 
\nThen we have \n\\begin{align}\\label{eq:descent}\nJ(K-\\delta F\/\\norm{F}_2)\\le J(K)-\\delta \\norm{F}_2.\n\\end{align}\n\\end{prop}\nThe idea of Goldstein subdifferential has been used in designing algorithms for nonsmooth $\\mathcal{H}_\\infty$ control \\cite{arzelier2011h2,gumussoy2009multiobjective,burke2020gradient,curtis2017bfgs}. We will show that such policy search algorithms can be guaranteed to find the global minimum of \\eqref{eq:hinfopt}. It is worth mentioning that there are other notions of enlarged subdifferential~\\cite{apkarian2006nonsmooth} which can lead to good descent directions for nonsmooth $\\mathcal{H}_\\infty$ synthesis. In this paper, we focus on the notion of Goldstein subdifferential and related policy search algorithms. \n\n\n\n\n\\section{Optimization Landscape for $\\mathcal{H}_\\infty$ State-Feedback Control}\n\\label{sec:land}\n\nIn this section, we investigate the optimization landscape of the $\\mathcal{H}_\\infty$ state-feedback policy search problem, and show that any Clarke stationary points of \\eqref{eq:hinfopt} are also global minimum. We start by showing the coerciveness of the $\\mathcal{H}_\\infty$ objective function \\eqref{eq:hinfcost}.\n\\begin{lem}\\label{lem1}\nThe $\\mathcal{H}_\\infty$ objective function $J(K)$ defined by \\eqref{eq:hinfcost} is coercive over the set $\\mathcal{K}$ in the sense that for any sequence $\\{K^l\\}_{l=1}^\\infty\\subset \\mathcal{K}$ we have\n $J(K^l) \\rightarrow +\\infty$, \nif either $\\|K^l\\|_2 \\rightarrow +\\infty$, or $K^l$ converges to an element in the boundary $\\partial \\mathcal{K}$.\n\\end{lem}\n\\begin{proof}\nWe will only provide a proof sketch here. A detailed proof is presented in the supplementary material. Suppose we have a sequence $\\{K^l\\}$ satisfying $\\norm{K^l}_2\\rightarrow +\\infty$. 
We can choose $\\mathbf{w}=\\{w_0,0,0,\\cdots\\}$ with $\\norm{w_0}=1$ and show \n$J(K^l)\\ge w_0^\\mathsf{T} (Q+(K^l)^\\mathsf{T} R K^l) w_0 \\ge \\lambda_{\\min}(R) \\norm{K^l w_0}^2$.\nClearly, we have used the positive definiteness of $R$ in the above derivation. Then by carefully choosing $w_0$, we can ensure $J(K^l)\\rightarrow +\\infty$ as $\\norm{K^l}_2\\rightarrow +\\infty$. Next, we assume $K^l\\rightarrow K\\in \\partial \\mathcal{K}$.\nWe have $\\rho(A-BK)=1$, and hence there exists some $\\omega_0$ such that $(e^{j\\omega_0}I-A+BK)$ becomes singular.\nThen we can use the positive definiteness of $Q$ to show\n$J(K^l)\\ge \\lambda^{1\/2}_{\\min}(Q) (\\| (e^{j\\omega_0}I-A+BK^l)^{-1} \\|\\cdot \\| (e^{-j\\omega_0}I-A+BK^l)^{-1} \\|)^{\\frac{1}{2}}$.\nNotice $\\sigma_{\\min} (e^{\\pm j\\omega_0}I-A+BK^l) \\to 0$ as $l \\to \\infty$, which implies $ \\| (e^{\\pm j\\omega_0}I-A+BK^l)^{-1} \\| \\to +\\infty$ as $l \\to \\infty$. Therefore, we have $J(K^l) \\to +\\infty$ as $K^l\\to K\\in \\partial \\mathcal{K}$.\nMore details for the proof can be found in the supplementary material.\n\\end{proof}\nWe want to emphasize that the positive definiteness of $(Q,R)$ are crucial for proving the coerciveness of the cost function \\eqref{eq:hinfcost}. Built upon Lemma \\ref{lem1}, we can obtain the following nice properties of the sublevel sets of \\eqref{eq:hinfopt}. \n\n\n\\begin{lem}\\label{lem2}\nConsider the $\\mathcal{H}_\\infty$ state-feedback policy search problem \\eqref{eq:hinfopt} with the objective function $J(K)$ defined in \\eqref{eq:hinfcost}. 
Under Assumption \\ref{assump1}, the sublevel set defined as $\\mathcal{K}_\\gamma:=\\{K\\in \\mathcal{K}: J(K)\\le \\gamma\\}$ is compact and path-connected for every $\\gamma\\ge J(K^*)$ where $K^*$ is the global minimum of \\eqref{eq:hinfopt}.\n\\end{lem}\n\\begin{proof}\nThe compactness of $\\mathcal{K}_\\gamma$ directly follows from the continuity and coerciveness of $J(K)$, and is actually a consequence of \\cite[Proposition 11.12]{bauschke2011convex}. The path-connectedness of the strict sublevel sets for the continuous-time $\\mathcal{H}_\\infty$ control problem has been proved in \\cite{hu2022connectivity}. We can slightly modify the proof in \\cite{hu2022connectivity} to show that the strict sublevel set $\\{K\\in \\mathcal{K}: J(K)<\\gamma\\}$ is path-connected. Based on the fact that every non-strict sublevel set is compact, we can now apply \\cite[Theorem 5.2]{martin1982connected} to show $\\mathcal{K}_\\gamma$ is also path-connected.\nAn independent proof based on the non-strict version of the bounded real lemma is also provided in the supplementary material.\n\\end{proof}\nThe path-connectedness of $\\mathcal{K}_\\gamma$ for every $\\gamma$ actually implies the uniqueness of the minimizing set in a certain strong sense \\cite[Sections 2\\&3]{martin1982connected}. Due to space limitations, we will defer the discussion on the uniqueness of the minimizing set to the supplementary material. Here, we present a stronger result which is one of the main contributions of our paper.\n\\begin{thm}\\label{thm1}\nConsider the $\\mathcal{H}_\\infty$ state-feedback policy search problem \\eqref{eq:hinfopt}. Under Assumption \\ref{assump1}, any Clarke stationary point of $J(K)$ is a global minimum.\n\\end{thm}\n\nA detailed proof is presented in the supplementary material. Here we provide a proof sketch. 
Since $Q$ and $R$ are positive definite, the non-strict version of the bounded real lemma\\footnote{The difference between the strict and non-strict versions of the bounded real lemma is quite subtle \\cite[Section 2.7.3]{befb94}. For completeness, we will provide more explanations for the non-strict version of the bounded real lemma in the supplementary material.} states that $J(K)\\le \\gamma$ if and only if there exists a positive definite matrix $P$ such that the following matrix inequality holds:\n\\begin{align}\\label{eq:lmi1}\n\\bmat{(A-BK)^\\mathsf{T} P (A-BK) - P & (A-BK)^\\mathsf{T} P \\\\ P(A-BK) & P}+\\bmat{Q+K^\\mathsf{T} R K & 0 \\\\ 0 & -\\gamma^2 I}\\preceq 0.\n\\end{align}\nThe above matrix inequality is linear in $P$ but not linear in $K$. A standard trick from control theory can be combined with the Schur complement lemma to convert the above matrix inequality condition to another condition which is linear in all the decision variables \\cite{befb94}. Specifically, there exists a matrix function $\\lmi(Y,L,\\gamma)$ which is linear in $(Y,L,\\gamma)$ such that $\\lmi(Y,L,\\gamma)\\preceq 0$ and $Y\\succ 0$ if and only if \\eqref{eq:lmi1} is feasible with $K=L Y^{-1}$ and $P=\\gamma Y^{-1}\\succ 0$. The matrix function $\\lmi(Y,L,\\gamma)$ involves a larger matrix. Hence we present the analytical formula of $\\lmi(Y,L,\\gamma)$ in the supplementary material and skip it here. Since $\\lmi(Y,L,\\gamma)$ is linear in $(Y,L,\\gamma)$, we know $\\lmi(Y,L,\\gamma)\\preceq 0$ is just a convex semidefinite programming condition. Based on this convex necessary and sufficient condition for $J(K)\\le \\gamma$, we can prove the following important lemma. 
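Before proceeding, we note that the non-strict condition \eqref{eq:lmi1} is easy to check numerically for a given candidate triple $(K, P, \gamma)$. The sketch below is our own illustration: the scalar system numbers are arbitrary, and the certificate $P$ was found by hand for this example.

```python
import numpy as np

def brl_lhs(A, B, K, P, Q, R, gamma):
    """Left-hand side of the non-strict bounded real lemma inequality:
    [[Acl' P Acl - P + Q + K' R K, Acl' P], [P Acl, P - gamma^2 I]]."""
    Acl = A - B @ K
    n = A.shape[0]
    return np.block([
        [Acl.T @ P @ Acl - P + Q + K.T @ R @ K, Acl.T @ P],
        [P @ Acl, P - gamma ** 2 * np.eye(n)],
    ])

def certifies(A, B, K, P, Q, R, gamma, tol=1e-9):
    """True iff this particular P witnesses J(K) <= gamma."""
    eigs = np.linalg.eigvalsh(brl_lhs(A, B, K, P, Q, R, gamma))
    return bool(np.all(eigs <= tol))

# Scalar example (ours): A = 0.5, B = 1, K = 0, Q = R = 1 gives J(K) = 2,
# and P = 2 turns out to certify gamma = 2 (the LMI holds with equality
# in the determinant), while no P can certify any gamma < 2.
A, B = np.array([[0.5]]), np.array([[1.0]])
K, Q, R = np.zeros((1, 1)), np.eye(1), np.eye(1)
P = np.array([[2.0]])
ok = certifies(A, B, K, P, Q, R, gamma=2.0)
bad = certifies(A, B, K, P, Q, R, gamma=1.9)
```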
\n\\begin{lem}\\label{lem3}\nFor any $K\\in\\mathcal{K}$ satisfying $J(K)>J^*$, there exists a matrix direction $d\\neq 0$ such that $J'(K,d)\\le J^*-J(K)<0$, where $J^*=J(K^*)$ and $K^*$ is the global minimum of \\eqref{eq:hinfopt}.\n\\end{lem}\n\\begin{proof}\nSuppose we have $K=LY^{-1}$ where $(Y,L, J(K))$ is a feasible point for the convex regime $\\lmi(Y,L,J(K))\\preceq 0$.\nIn addition, we have $K^*=L^* (Y^*)^{-1}$ where $(Y^*,L^*, J(K^*))$ is a point satisfying $\\lmi(Y^*,L^*,J(K^*))\\preceq 0$.\n Since the LMI condition is convex, the line segment between $(Y,L,J(K))$ and $(Y^*,L^*,J(K^*))$ is also in this convex set. For any $t\\in(0,1]$, we know $(Y+t \\Delta Y,L+t \\Delta L, J(K)+t(J(K^*)-J(K)))$ also satisfies \n$\\lmi(Y+t\\Delta Y, L+t\\Delta L, J(K)+t(J(K^*)-J(K)))\\preceq 0$, \nwhere $\\Delta L=L^*-L$, and $\\Delta Y=Y^*-Y$. \nTherefore, based on the bounded real lemma, we know $J((L+t\\Delta L) (Y+t \\Delta Y)^{-1})\\le J(K)+t(J(K^*)-J(K))$.\nLet's choose $d=\\Delta L Y^{-1} - LY^{-1}\\Delta Y Y^{-1}$. Then we have\n\\begin{align*}\nJ'(K,d)\\le \\lim_{t\\searrow 0} \\left( \\frac{J((L+t\\Delta L) (Y+t \\Delta Y)^{-1})-J(K)}{t}+o(t)\\right)\\le J^*-J(K)<0.\n\\end{align*}\nA detailed verification of the above inequality is provided in the supplementary material. Notice that $d\\neq 0$: if we had $\\Delta L Y^{-1} -LY^{-1}\\Delta Y Y^{-1}=0$, the above argument would still work and we would reach the conclusion $J'(K,0)<0$. But this is impossible since we always have $J'(K,0)=0$. Hence we have $d\\neq 0$. This completes the proof for this lemma.\n\\end{proof}\n\n\n\nNow we are ready to provide the proof for Theorem \\ref{thm1}. Based on Lemma \\ref{lem3} and the fact that $J(\\cdot)$ is subdifferentially regular, the proof can be done by contradiction. Suppose $K^*$ is the global minimum, and $K^\\dag$ is a Clarke stationary point. Suppose, for the sake of contradiction, that $K^\\dag$ is not a global minimum. 
Then by Lemma \\ref{lem3}, there exists $d \\neq 0$ such that $J'(K^\\dag,d) < 0$, which contradicts the fact that $J'(K^\\dag,d) \\ge 0$ for all $d$ by Corollary \\ref{cor1}. Therefore, $K^\\dag$ has to be the global minimum of \\eqref{eq:hinfopt}.\n\n\n\n\n\n\nThe above proof relies on Lemma \\ref{lem3} and the fact that $J$ is subdifferentially regular.\nWithout the subdifferential regularity of $J$, Lemma \\ref{lem3} itself is not sufficient for proving Theorem \\ref{thm1}.\nIt is also worth mentioning that Lemma \\ref{lem3} can be viewed as a modification of the convex parameterization\/lifting results in \\cite{sun2021learning,umenberger2022globally} for non-differentiable points.\n\n\n\n\n\n\n\\section{Global Convergence of Direct Policy Search on $\\mathcal{H}_\\infty$ State-Feedback Control}\n\\label{sec:main1}\n\nIn this section, we first show that Goldstein's subgradient method \\cite{goldstein1977optimization} can be guaranteed to stay in the nonconvex feasible regime $\\mathcal{K}$ during the optimization process and eventually converge to the global minimum of \\eqref{eq:hinfopt}. \nThe complexity of finding $(\\delta,\\epsilon)$-stationary points of \\eqref{eq:hinfopt} is also presented.\nThen we further discuss the convergence guarantees for various implementable forms of Goldstein's subgradient method.\n\n\n\n\n\n\n\n\\subsection{Global convergence and complexity of Goldstein's subgradient method}\nWe will investigate the global convergence of Goldstein's subgradient method for direct policy search of the optimal $\\mathcal{H}_\\infty$ state-feedback policy. Goldstein's subgradient method iterates as follows:\n\\begin{align}\\label{eq:gold1}\nK^{n+1}=K^n-\\delta^n F^n\/\\norm{F^n}_2,\n\\end{align}\nwhere $F^n$ is the minimum norm element of the Goldstein subdifferential $\\partial_{\\delta^n} J(K^n)$.\nWe assume that an initial stabilizing policy is available, i.e. $K^0\\in\\mathcal{K}$. 
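To make the normalized update rule above concrete, here is a minimal sketch on the toy nonsmooth function f(x) = |x|, where the Goldstein subdifferential at x is exactly the interval spanned by the gradients at the endpoints x - delta and x + delta; the initial point and radius are illustrative.

```python
import numpy as np

def df(x):
    """Gradient of the toy objective f(x) = |x| wherever it is differentiable."""
    return np.sign(x)

def min_norm_goldstein_1d(x, delta):
    """Minimum-norm element of the Goldstein subdifferential of |x|:
    in one dimension it is the interval spanned by the endpoint gradients."""
    g_lo, g_hi = sorted((df(x - delta), df(x + delta)))
    if g_lo <= 0.0 <= g_hi:
        return 0.0
    return g_lo if g_lo > 0 else g_hi

x, delta = 1.05, 0.1      # illustrative initial point and Goldstein radius
for _ in range(100):
    F = min_norm_goldstein_1d(x, delta)
    if F == 0.0:          # minimum-norm element is 0: (delta, 0)-stationary
        break
    x -= delta * F / abs(F)   # normalized step, mirroring K^{n+1} = K^n - delta^n F^n / ||F^n||
print(x)  # ends within delta of the minimizer 0
```

Once the delta-ball around the iterate straddles the kink, the minimum-norm element vanishes and the method stops at a (delta, 0)-stationary point, within delta of the true minimizer.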
The same initial policy assumption has also been made in the global convergence theory for direct policy search on LQR \\cite{pmlr-v80-fazel18a}. More recently, some provable guarantees have been obtained for finding such stabilizing policies via direct policy search~\\cite{perdomo2021stabilizing,ozaslan2022computing}. Hence such an assumption on the initial policy $K^0$ is reasonable. \nOur global convergence result relies on the fact that there is a strict separation between any sublevel set of \\eqref{eq:hinfopt} and the boundary of $\\mathcal{K}$. This fact is formalized as follows.\n\\begin{lem}\\label{lem4}\nConsider the $\\mathcal{H}_\\infty$ state-feedback policy search problem \\eqref{eq:hinfopt} with the cost function $J(K)$ defined in \\eqref{eq:hinfcost}. Denote the complement of the feasible set $\\mathcal{K}$ as $\\mathcal{K}^c$. \nSuppose Assumption \\ref{assump1} holds and $\\gamma\\ge J^*$. Then there is a strict separation between the sublevel set $\\mathcal{K}_\\gamma$ and $\\mathcal{K}^c$. In other words, we have $\\dist(\\mathcal{K}_\\gamma, \\mathcal{K}^c)>0$.\n\\end{lem}\n\\begin{proof}\nObviously, the set $\\mathcal{K}_\\gamma \\cap \\mathcal{K}^c$ is empty (since we know $\\mathcal{K}_\\gamma\\subset \\mathcal{K}$). Based on Lemma \\ref{lem2}, we know $\\mathcal{K}_\\gamma$ is compact. Since $\\mathcal{K}$ is open, we know $\\mathcal{K}^c$ is closed. Therefore, there is a strict separation between the compact set $\\mathcal{K}_\\gamma$ and the disjoint closed set $\\mathcal{K}^c$, and we have $\\dist(\\mathcal{K}_\\gamma, \\mathcal{K}^c)>0$.\n\\end{proof}\n\nNow we are ready to present our main convergence result. \n\\begin{thm} \\label{thm2}\nConsider the $\\mathcal{H}_\\infty$ state-feedback policy search problem \\eqref{eq:hinfopt} with the cost function $J(K)$ defined in \\eqref{eq:hinfcost}. Suppose Assumption \\ref{assump1} holds, and an initial stabilizing policy is given, i.e. $K^0\\in\\mathcal{K}$. Denote $\\Delta_0:=\\dist(\\mathcal{K}_{J(K^0)}, \\mathcal{K}^c)>0$. 
Choose $\\delta^n=\\frac{c\\Delta_0}{n+1}$ for all $n$ with $c$ being a fixed number in $(0,1)$.\nThen Goldstein's subgradient method \\eqref{eq:gold1} is guaranteed to stay in $\\mathcal{K}$ for all $n$. In addition, we have $J(K^n)\\rightarrow J^*$ as $n\\rightarrow \\infty$.\n\\end{thm}\n\\begin{proof}\nWe have $\\delta^n\\le c\\Delta_0< \\Delta_0$ for all $n$. Now we use an induction proof to show $K^n\\in \\mathcal{K}_{J(K^0)}$ for all $n$. For $n= 0$, we know $K^0-c\\Delta_0 F^0\/\\norm{F^0}_2$ has to be within the $\\Delta_0$ ball around $K^0$ since we know the norm of $F^0\/\\norm{F^0}_2$ is exactly equal to $1$. Since $\\Delta_0:=\\dist(\\mathcal{K}_{J(K^0)}, \\mathcal{K}^c)>0$, we know $K^0-\\delta^0 F^0\/\\norm{F^0}_2\\in \\mathcal{K}$. As a matter of fact, we know $\\mathbb{B}_{\\delta^0}(K^0)$ has to be a subset of $\\mathcal{K}$. Hence we can apply \\eqref{eq:descent} to show that $K^1$ exists and is also in $\\mathcal{K}_{J(K^0)}$. Similarly, we can repeat this argument to show $K^n\\in \\mathcal{K}_{J(K^0)}$ for all $n$. \nNext, we can apply \\eqref{eq:descent} to every step and then sum the inequalities over all $n$. Then the following inequality holds for all $N$:\n\\begin{align}\\label{eq:iterative}\n\\sum_{n=0}^N \\delta^n \\norm{F^n}_2 \\le J(K^0)-J^*.\n\\end{align}\nSince we have $\\sum_{n=0}^\\infty \\delta^n=+\\infty$, we know $\\liminf_{n\\rightarrow\\infty} \\norm{F^n}_2= 0$. Hence there exists a subsequence $\\{i_n\\}$ such that $\\norm{F^{i_n}}_2\\rightarrow 0$. For this subsequence, the resultant policy sequence $\\{K^{i_n}\\}$ is also bounded (notice that the policy parameter sequence stays in the compact set $\\mathcal{K}_{J(K^0)}$ for all $n$) and has a convergent subsequence. We can show that the limit of this subsequence is a Clarke stationary point.\nHence the function value associated with this subsequence converges to $J^*$. Notice that $J(K^n)$ is monotonically decreasing for the entire sequence $\\{n\\}$. 
Hence we have $J(K^n)\\rightarrow J^*$. \n\\end{proof}\n\n\n\nWe have tried to be brief in giving the above proof. We will present a more detailed proof in the supplementary material. We believe that \nthis is the first result showing that direct policy search can be guaranteed to converge to the global optimal solution of the $\\mathcal{H}_\\infty$ state-feedback control problem. The above result only provides an asymptotic convergence guarantee to ensure $J(K^n)\\rightarrow J^*$. One can use a similar argument to establish a finite-time complexity bound for finding the $(\\delta,\\epsilon)$-stationary points of \\eqref{eq:hinfopt}. Such a result is given as follows.\n\n\\begin{thm} \\label{thm3}\nConsider the $\\mathcal{H}_\\infty$ problem \\eqref{eq:hinfopt} with the cost function \\eqref{eq:hinfcost}. Suppose Assumption \\ref{assump1} holds, and $K^0\\in\\mathcal{K}$. Denote $\\Delta_0:=\\dist(\\mathcal{K}_{J(K^0)}, \\mathcal{K}^c)>0$. For any $\\delta<\\Delta_0$, we can \nchoose $\\delta^n=\\delta$ for all $n$ to ensure that\nGoldstein's subgradient method \\eqref{eq:gold1} stays in $\\mathcal{K}$ and satisfies the following finite-time complexity bound:\n\\begin{align}\n \\min_{0\\le n\\le N} \\norm{F^n}_2\\le \\frac{J(K^0)-J^*}{(N+1)\\delta}.\n\\end{align}\nIn other words, we have $\\min_{0\\le n\\le N} \\norm{F^n}_2\\le \\epsilon$ after $N=\\mathcal{O}\\left(\\frac{\\Delta}{\\delta\\epsilon}\\right)$ iterations, where $\\Delta:=J(K^0)-J^*$. For any $\\delta<\\Delta_0$ and $\\epsilon>0$, the complexity of finding a $(\\delta,\\epsilon)$-stationary point is $\\mathcal{O}\\left(\\frac{\\Delta}{\\delta\\epsilon}\\right)$.\n\\end{thm}\n\\begin{proof}\nThe above result can be proved using a similar argument from Theorem \\ref{thm2}. \nWe can use the same induction argument to show $K^n\\in\\mathcal{K}_{J(K^0)}$ for all $n$, and \\eqref{eq:iterative} holds with $\\delta^n=\\delta$. 
Then the desired conclusion directly follows.\n\\end{proof}\nThe complexity for nonsmooth optimization of Lipschitz functions is quite subtle. While the above result gives a reasonable characterization of the finite-time performance of Goldstein's subgradient method on the $\\mathcal{H}_\\infty$ state-feedback control problem, it does not quantify how fast $J(K^n)$ converges to $J^*$. Recall that $(\\delta,\\epsilon)$-stationarity means $\\dist(0, \\partial_\\delta J(K))\\le \\epsilon$, while $\\epsilon$-stationarity means $\\dist(0, \\partial_C J(K))\\le \\epsilon$.\nAs commented in \\cite{shamir2020can,pmlr-v119-zhang20p,davis2021gradient}, $(\\delta,\\epsilon)$-stationarity does not imply being $\\delta$-close to an\n$\\epsilon$-stationary point of $J$. Importantly, the function value of a $(\\delta,\\epsilon)$-stationary point can be far from $J^*$ even for small $\\delta$ and $\\epsilon$. Theorem 5 in \\cite{pmlr-v119-zhang20p} shows that\nthere is no finite-time algorithm that can provably find $\\epsilon$-stationary points for all Lipschitz functions. It is still possible that one can develop some finite-time bounds for $(J(K^n)-J^*)$ by exploiting other advanced properties of the $\\mathcal{H}_\\infty$ cost function \\eqref{eq:hinfcost}. This is an important future task.\n\n\n\n\n\n\n\n\n\n\\subsection{Implementable variants and related convergence results}\n\\label{sec:imple}\n\nIn practice, it can be difficult to evaluate the minimum norm element of the Goldstein subdifferential.\n Now we discuss implementable variants of Goldstein's subgradient method and related guarantees. 
\n\n\\textbf{Gradient sampling \\cite{burke2020gradient,burke2005robust,kiwiel2007convergence}.} \nThe gradient sampling (GS) method \nis the main optimization algorithm used in the robust control package HIFOO \\cite{arzelier2011h2,gumussoy2009multiobjective}.\nSuppose we can access a first-order oracle which can evaluate $\\nabla J$ at any differentiable point in the feasible set\\footnote{When $(A,B)$ is known, one can calculate the $\\mathcal{H}_\\infty$ gradient at differentiable points using the chain rule in \\cite{apkarian2006nonsmooth}. More explanations can be found in the supplementary material.}. Based on Rademacher's theorem, a locally Lipschitz function is differentiable almost everywhere. Therefore, for any $K^n\\in\\mathcal{K}$, we can randomly sample policy parameters over $\\mathbb{B}_{\\delta^n}(K^n)$ and obtain differentiable points with probability one. For all these sampled differentiable points, the Clarke subdifferential at each point is just the gradient. Then the convex hull of these sampled gradients can be used as an approximation for the Goldstein subdifferential $\\partial_{\\delta^n} J(K^n)$. The minimum norm element from the convex hull of the sampled gradients can be solved via a simple convex quadratic program, and is sufficient for generating a reasonably good descent direction for updating $K^{n+1}$ as long as we sample at least $(n_x n_u+1)$ differentiable points for each $n$ \\cite{burke2020gradient}. In the unconstrained setup, the cluster points of the GS algorithm can be guaranteed to be Clarke stationary \\cite{kiwiel2007convergence,burke2020gradient}. Such a result can be combined with Theorem \\ref{thm1} and Lemma \\ref{lem4} to show the global convergence of the GS method on the $\\mathcal{H}_\\infty$ state-feedback synthesis problem. 
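As a small illustration of the min-norm step inside GS, consider just two sampled gradients, for which the convex quadratic program has a closed form (real GS samples at least n_x * n_u + 1 points and solves a general QP). The toy function f(x) = max(x1, x2) near its kink is used purely for illustration.

```python
import numpy as np

def min_norm_two(g1, g2):
    """Minimum-norm point of the segment {lam*g1 + (1-lam)*g2 : lam in [0, 1]},
    the two-gradient special case of the QP used in gradient sampling."""
    d = g1 - g2
    denom = float(d @ d)
    lam = 0.0 if denom == 0.0 else float(np.clip(-(g2 @ d) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def grad_max(x):
    """Gradient of f(x) = max(x1, x2) away from the kink x1 = x2."""
    return np.array([1.0, 0.0]) if x[0] > x[1] else np.array([0.0, 1.0])

# Two sampled differentiable points in a small ball around the kink at (0, 0).
g1 = grad_max(np.array([0.05, -0.05]))
g2 = grad_max(np.array([-0.05, 0.05]))
F = min_norm_two(g1, g2)
print(F)  # [0.5 0.5]: the descent step moves along -(1, 1), reducing both coordinates
```

Neither sampled gradient alone is a descent direction for both active pieces, but their min-norm convex combination is, which is exactly why the Goldstein/GS construction works at kinks.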
The following theorem will be treated formally in the supplementary material.\n\\begin{thm}[Informal statement]\\label{thm4}\nConsider the policy optimization problem \\eqref{eq:hinfopt} with the $\\mathcal{H}_\\infty$ cost function defined in \\eqref{eq:hinfcost}. Suppose Assumption \\ref{assump1} holds, and $K^0\\in \\mathcal{K}$. The iterations generated from the trust-region version of the GS method (described in \\cite[Section 4.2]{kiwiel2007convergence} and restated in the supplementary material) can be guaranteed to stay in $\\mathcal{K}$ for all iterations and achieve $J(K^n)\\rightarrow J^*$ with probability~one. \n\\end{thm}\n\n\n\\textbf{Non-derivative sampling (NS) \\cite{kiwiel2010nonderivative}.} The NS method can be viewed as the derivative-free version of the GS algorithm. Suppose we only have the zeroth-order oracle which can evaluate the function value $J(K)$ for $K\\in \\mathcal{K}$. The main difference between NS and GS is that the NS algorithm relies on estimating the gradient from function values via Gupal's estimation method. In the unconstrained setting, the cluster points of the NS method can be guaranteed to be Clarke stationary with probability one \\cite[Theorem 3.8]{kiwiel2010nonderivative}. We can combine \\cite[Theorem 3.8]{kiwiel2010nonderivative} with our results (Theorem \\ref{thm1} and Lemma~\\ref{lem4}) to prove the global convergence of NS in our setting. A detailed discussion is given in the supplementary material. \n\n\n\\textbf{Model-free implementation of NS.} When the system model is unknown, there are various methods available for estimating the $\\mathcal{H}_\\infty$-norm from data \\cite{muller2019gain,muller2017stochastic,rojas2012analyzing,rallo2017data,wahlberg2010non,oomen2014iterative,tu2019minimax,tu2018approximation}. 
\nBased on our own experiences\/tests, the multi-input multi-output (MIMO) power iteration method~\\cite{oomen2013iteratively} works quite well as a stochastic zeroth-order oracle for the purpose of implementing NS in the model-free setting. While the sample complexity for model-free NS is unknown, we will provide some numerical justifications to show that such a model-free implementation closely tracks the convergence behaviors of its model-based counterpart. \n\n\n\\textbf{Interpolated normalized gradient descent (INGD) with finite-time complexity.} No finite-time guarantees for finding $(\\delta,\\epsilon)$-stationary points have been reported for the GS\/NS methods. In \\cite{pmlr-v119-zhang20p,davis2021gradient}, the INGD method has been developed as another implementable variant of Goldstein's subgradient method, and is proved to satisfy high-probability finite-time complexity bounds for finding $(\\delta,\\epsilon)$-stationary points of Lipschitz functions. INGD uses an iterative sampling strategy to generate a descent direction which serves a role similar to the minimal norm element of the Goldstein subdifferential. A first-order oracle for differentiable points is needed for implementing the version of INGD in \\cite{davis2021gradient}. \nIt has been shown \\cite{pmlr-v119-zhang20p,davis2021gradient} that for unconstrained nonsmooth optimization of $L$-Lipschitz functions\\footnote{We slightly abuse our notation by denoting the Lipschitz constant as $L$. Previously, we have used $L$ to denote a particular matrix used in the LMI formulation for $\\mathcal{H}_\\infty$ state-feedback synthesis.}, the INGD algorithm can be guaranteed to find the $(\\delta,\\epsilon)$-stationary point with the high-probability iteration complexity $\\mathcal{O}\\left(\\frac{\\Delta L^2}{\\epsilon^3 \\delta}\\log(\\frac{\\Delta}{p\\delta\\epsilon})\\right)$, where $\\Delta:=J(K^0)-J^*$ is the initial function value gap, and $p$ is the failure probability (i.e. 
the optimization succeeds with the probability $(1-p)$).\nWe can combine the proofs for\n\\cite[Theorem 2.6]{davis2021gradient} and Theorem \\ref{thm3} to obtain the following complexity result for our $\\mathcal{H}_\\infty$ setting. A formal treatment is given in the supplementary material.\n\n\\begin{thm}[Informal statement]\\label{thm5}\nConsider the policy optimization problem \\eqref{eq:hinfopt} with the $\\mathcal{H}_\\infty$ cost function defined in \\eqref{eq:hinfcost}. Suppose Assumption \\ref{assump1} holds, and the initial policy is stabilizing, i.e. $K^0\\in \\mathcal{K}$. Denote $\\Delta_0:=\\dist(\\mathcal{K}_{J(K^0)}, \\mathcal{K}^c)>0$, and let $L_0$ be the Lipschitz constant of $J(K)$ over the set $\\mathcal{K}_{J(K^0)}$. For any $\\delta<\\Delta_0$, we can \nchoose $\\delta^n=\\delta$ for all $n$ to ensure that the iterations of\nthe INGD algorithm stay in $\\mathcal{K}$ almost surely, and find a $(\\delta,\\epsilon)$-stationary point with the high-probability iteration complexity $\\mathcal{O}\\left(\\frac{\\Delta L_0^2}{\\epsilon^3 \\delta}\\log(\\frac{\\Delta}{p\\delta\\epsilon})\\right)$, where $p$ is the failure probability.\n\\end{thm}\n\n\\section{Numerical Simulations}\nTo support our theory, we provide some numerical simulations in this section. The left plot in Figure~\\ref{Simulationrelts} shows that GS, NS, INGD, and model-free NS work well for the following example:\n\\begin{equation} \\label{set_matric}\n A = \\bmat{1 &0 &-5 \\\\ -1 &1 &0\\\\ 0 &0 &1},\\,\\, B= \\bmat{1 \\\\ 0 \\\\ -1},\\,\\, Q = \\bmat{2 &-1 &0 \\\\ -1 &2 &-1 \\\\ 0 &-1 &2}, \\,\\, R = 1.\n\\end{equation}\nFor this example, we have $J^* = 7.3475$. We initialize from $K^0 = \\bmat{0.4931 &-0.1368 &-2.2654}$, which satisfies $\\rho(A-BK^0) = 0.5756 < 1$. The hyperparameter choices are detailed in the supplementary material. 
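The stabilizing-initialization claim for this example is easy to check numerically; the quick sketch below only verifies the spectral radius of the closed loop (Schur stability requires it to be below one).

```python
import numpy as np

A = np.array([[1.0, 0.0, -5.0],
              [-1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.array([[1.0], [0.0], [-1.0]])
K0 = np.array([[0.4931, -0.1368, -2.2654]])

# Spectral radius of the closed-loop matrix A - B*K0.
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K0)))
print(rho)  # approximately 0.5756 < 1, matching the value reported above
```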
We can see that model-free NS closely tracks the trajectory of NS and works well.\nIn the middle plot of Figure \\ref{Simulationrelts}, we test the NS method on randomly generated cases. We set $A\\in \\mathbb{R}^{3\\times 3}$ to be $I + \\xi$, where each element of $\\xi \\in \\mathbb{R}^{3 \\times 3}$ is sampled uniformly from $[0,1]$. For $B\\in \\mathbb{R}^{3\\times 1}$, each element is uniformly sampled from $[0,1]$. We have $Q = I + \\zeta I \\in \\mathbb{R}^{3\\times 3}$ with $\\zeta$ uniformly sampled from $[0,0.1]$, and $R \\in \\mathbb{R}$ uniformly sampled from $[1,1.5]$. For each experiment, the initial condition $K^0 \\in \\mathbb{R}^{1\\times 3}$ is also randomly sampled such that $\\rho(A-BK^0) < 1$. The NS method converges globally for all the cases. In the right plot, we focus on the model-free setting for \\eqref{set_matric}. We decrease the number of samples used in the $\\mathcal{H}_\\infty$ estimation and show how this increases the noise in the zeroth-order $\\mathcal{H}_\\infty$ oracle and worsens the convergence behaviors of the model-free NS method. \nNevertheless, the model-free NS method tracks its model-based counterpart with enough samples. More numerical results can be found in the supplementary material. \n\n\\begin{figure}\n\\minipage{0.33\\textwidth}\n \\includegraphics[width=\\linewidth]{plot\/all_sime.pdf}\n \\label{fig:awesome_image2}\n\\endminipage\\hfill\n\\minipage{0.33\\textwidth}%\n \\includegraphics[width=\\linewidth]{plot\/random_exp2.pdf}\n \\label{fig:awesome_image3}\n\\endminipage\\hfill\n\\minipage{0.33\\textwidth}%\n \\includegraphics[width=\\linewidth]{plot\/vary_N.pdf}\n \\label{fig:awesome_image4}\n\\endminipage\n\\caption{Simulation results. Left: The trajectory of relative error of GS, NS, INGD, and Model-free NS methods on \\eqref{set_matric}. Middle: The trajectory of relative optimality gap of 8 randomly generated cases for NS method. 
Right: The trajectory of Model-free NS method with more noisy oracle on \\eqref{set_matric}.} \\label{Simulationrelts}\n\\end{figure}\n\n\n\\section{Conclusions and Future Work}\n\nIn this paper, we developed the global convergence theory for direct policy search on the $\\mathcal{H}_\\infty$ state-feedback synthesis problem. Although the resultant policy optimization formulation is nonconvex and nonsmooth, we managed to show that any Clarke stationary point for this problem is actually a global minimum, and\nthe concept of Goldstein subdifferential can be used to build direct policy search algorithms which are guaranteed to converge to the global optimal solutions. The finite-time guarantees in this paper are developed only for finding $(\\delta,\\epsilon)$-stationary points. \nAn important future task is to investigate the finite-time bounds for the optimality gap (i.e. $J(K^n)-J^*$) as well as\nthe sample complexity of direct policy search on model-free $\\mathcal{H}_\\infty$ control. It is also of great interest to investigate the convergence properties of direct policy search in nonlinear\/output-feedback settings\\footnote{Some discussions on possible extensions along this direction have been given in the supplementary material.}.\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{ack}\nThis work is generously supported by the NSF award\nCAREER-2048168 and the 2020 Amazon research award. The authors would like to thank Michael L. 
Overton, Maryam Fazel, Yang Zheng, Peter Seiler, Geir\nDullerud, Aaron Havens, Darioush Kevian, Kaiqing Zhang, Na Li, Mehran Mesbahi, Tamer Ba\\c{s}ar, Mihailo Jovanovic, and Javad Lavaei for the valuable discussions, as well as the helpful suggestions from the anonymous reviewers of NeurIPS.\n\\end{ack}\n\n\n\n\n\\bibliographystyle{abbrv}\n{\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt's well known that the discontinuity of a Feynman amplitude comes from the singularity of the Feynman propagator \\cite{c0,c1}. In an 1-Particle-Irreducible (1PI) Feynman diagram the discontinuity will contribute imaginary part to the Feynman amplitude. One of the way to calculate the discontinuity is the causal perturbative theory \\cite{c1}, another is called as cutting rules proposed by Cutkosky \\cite{c0}. Here we mainly discuss the cutting rules. We note that although the cutting rules is well-defined, it hasn't given an integrative algorithm to practically calculate the discontinuity. So we want to investigate it further.\n\nOn the other hand, the optical theorem has given a strong constraint on the imaginary part of physical amplitude. Whether they agree with each other hasn't been clearly investigated. So we will discuss their relationship and investigate the origin of the optical theorem. The arrangement of this paper is: firstly we discuss how to ameliorate the cutting rules and investigate its physical meaning; then we calculate the imaginary parts of some Feynman diagrams to see if the ameliorated cutting rules coincides with the conventional integral algorithm; in section IV we investigate the breaking of the optical theorem; in Sect.V we calculate a physical amplitude to see if the physical result keeps gauge independent under the ameliorated cutting rules; lastly we give the conclusion.\n\n\\section{Ameliorate the conventional cutting rules}\n\nWe usually encounter branch cut in Feynman amplitude calculation. 
Such branch cut will contribute imaginary part to Feynman amplitude in an 1PI Feynman diagram. Consider the Feynman propagator\n\\begin{equation}\n D_F(x-y)\\,=\\,\\int\\frac{d^4 p}{(2\\pi)^4}\\frac{i}{p^2-m^2+i\\varepsilon}\n e^{-i p\\cdot(x-y)}\\,,\n\\end{equation}\nthere exist two singularities $p_0=\\pm(({\\bf p}^2+m^2)^{1\/2}-i\\varepsilon)$ in the denominator. They separately represent the processes of the on-shell particle propagating from position $y$ to position $x$ and from position $x$ to position $y$. The cutting rules is just the algorithm to calculate the contribution of the processes to Feynman amplitude. But it isn't suitable for actual calculation since it assumes that only one of the two singularities of Eq.(1) has contribution to Feynman amplitude, however, it doesn't tell which singularity is the case \\cite{c0}. In the follows we investigate this problem from the imaginary part of the unstable particle's self energy. We firstly calculate the imaginary part of the unstable particle's self energy when keeping all of the contributions of the singularities of the Feynman propagators to Feynman amplitude. At one-loop level the result is\n\\begin{eqnarray}\n Im(-i)\\int\\frac{d^4 k}{(2\\pi)^4}\\frac{1}{k^2-m_1^2+i\\varepsilon}\n \\frac{1}{(k-p)^2-m_2^2+i\\varepsilon}&\\rightarrow&\n \\frac{(m^4+m_1^4+m_2^4-2 m^2 m_1^2-2 m^2 m_2^2-2 m_1^2 m_2^2)^{1\/2}}{16\\pi m^2}\n \\nonumber \\\\\n &\\times&\\bigl{(} \\theta[m_1-m-m_2]+\\theta[m-m_1-m_2]+\\theta[m_2-m-m_1] \\bigr{)}\\,,\n\\end{eqnarray}\nwhere the external-line momentum $p$ is on shell $p_0=({\\bf p}^2+m^2)^{1\/2}$ and $\\theta$ is the Heaviside function. According to the Breit-Wigner formula the imaginary part of the unstable particle's one-loop self energy is proportional to its decay width: $Im {\\cal M}(p\\rightarrow p)=m\\,\\Gamma$ \\cite{c2}. So one can easily see that the result of Eq.(2) is wrong since only the second term of the right-hand side of Eq.(2) satisfies the Breit-Wigner formula. 
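For reference, the kinematic factor appearing in Eq.(2) is the square root of the Kallen function lambda(m^2, m1^2, m2^2) divided by 16*pi*m^2. The small sketch below evaluates only the physically acceptable second theta term (the decay-threshold term); the mass values are made up for illustration.

```python
import math

def im_self_energy(m, m1, m2):
    """Physical-cut term of Eq.(2): sqrt of the Kallen function over 16*pi*m^2,
    nonzero only above the decay threshold m > m1 + m2."""
    if m <= m1 + m2:
        return 0.0
    lam = (m**4 + m1**4 + m2**4
           - 2*m**2*m1**2 - 2*m**2*m2**2 - 2*m1**2*m2**2)
    return math.sqrt(lam) / (16.0 * math.pi * m**2)

print(im_self_energy(2.0, 0.5, 0.5))  # open decay channel: positive, ~0.0172
print(im_self_energy(1.0, 0.6, 0.6))  # below threshold: 0.0
```

Via the Breit-Wigner relation Im M = m * Gamma, this positive value above threshold is proportional to the decay width, while the other two theta terms in Eq.(2) would incorrectly produce a nonzero width below threshold.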
We can draw a sketch to illustrate the origin of the three terms of the right-hand side of Eq.(2),\n\\vspace{2mm}\n\\begin{center} \\begin{picture}(278,25)\n \\SetScale{1.1} \\SetWidth{0.45}\n \\ArrowLine(0,0)(25,0)\n \\ArrowArcn(38,0)(13,180,0)\n \\ArrowArcn(38,0)(13,0,180)\n \\ArrowLine(50,0)(75,0)\n \\Text(10,8)[]{$m$}\n \\Text(42,22)[]{$m_1$}\n \\Text(42,-23)[]{$m_2$}\n \\ArrowLine(85,0)(110,0)\n \\ArrowArc(123,0)(13,180,0)\n \\ArrowArcn(123,0)(13,180,0)\n \\ArrowLine(135,0)(160,0)\n \\Text(104,8)[]{$m$}\n \\Text(134,22)[]{$m_1$}\n \\Text(134,-23)[]{$m_2$}\n \\ArrowLine(170,0)(195,0)\n \\ArrowArc(208,0)(13,0,180)\n \\ArrowArc(208,0)(13,180,0)\n \\ArrowLine(220,0)(245,0)\n \\Text(198,8)[]{$m$}\n \\Text(230,22)[]{$m_1$}\n \\Text(230,-23)[]{$m_2$}\n\\end{picture} \\end{center} \\vspace{11mm}\nwhere the arrow denotes the propagator with it is cut and the\nmomentum $q$ along it satisfies the {\\em positive} on-shell\ncondition $q_0=({\\bf q}^2+M^2)^{1\/2}$ ($M$ is the mass of the\npropagator). We call such momentum propagating direction denoted\nby the arrow as {\\em positive-on-shell} momentum propagating\ndirection. The three cut self-energy diagrams separately in turn\nrepresent the origins of the three terms of the right-hand side of\nEq.(2). Obviously only the second cut is acceptable by the\nBreit-Wigner formula, the others must be eliminated. Comparing the\nthree cuts we find that the only difference between them is that\nthe two {\\em positive-on-shell} momentum propagating directions of\nthe cut propagators of the second cut are inverse in the momentum\nloop, but that of the others are equi-directional in the momentum\nloop. We can use this point to constrain which cut is acceptable,\ni.e. 
the {\\em positive-on-shell} momentum propagating directions of the cut propagators must be inverse in every cut momentum loop.\nIn a 1PI Feynman diagram such a constraint makes all of the {\\em positive-on-shell} momentum propagating directions of the cut propagators lie along the same direction, i.e. the momentum-energy propagating direction of the external-line particles (see Fig.1-5 for a rough picture). We note that such a picture can be used to describe the on-shell virtual particles' propagating processes happening in the quantum field vacuum, if such processes exist. Obviously in this picture the on-shell virtual particles satisfy the energy conservation law. Contrarily, if the above constraint doesn't exist, these on-shell virtual particles will not keep the energy conserved, because they will provide energy to each other in the cut momentum loops and thus their total energy can be greater than that of the incoming particles.\n\nObviously there are at most two cuts in each momentum loop based on this constraint. On the other hand, according to the fact that cutting once in a momentum loop is equivalent to performing the conventional loop momentum integral, each cut momentum loop can only be cut twice. For the uncut propagator, since its singularity has no contribution to the Feynman amplitude, it should be replaced by its Cauchy principal value in the loop momentum integral. 
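This delta-function/principal-value split is essentially the Sokhotski-Plemelj decomposition 1/(x + i*eps) = P(1/x) - i*pi*delta(x). Its imaginary part can be checked numerically against a smooth test function; the Gaussian, grid, and eps values below are arbitrary illustration choices.

```python
import numpy as np

eps = 1e-3
x = np.linspace(-10.0, 10.0, 400_001)   # grid spacing 5e-5, well below eps
dx = x[1] - x[0]
f = np.exp(-x**2)                        # smooth test function with f(0) = 1

# Im[1/(x + i*eps)] = -eps/(x^2 + eps^2), which tends to -pi*delta(x) as eps -> 0
val = np.sum(f * (-eps / (x**2 + eps**2))) * dx
print(val)  # close to -pi * f(0) = -3.14159...
```

Shrinking eps drives the integral to -pi*f(0), which is the delta-function piece that the cut propagators pick up, while the real part reproduces the principal-value integral assigned to the uncut propagators.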
Summing up all the discussions we obtain the following ameliorated cutting rules:\n\\begin{enumerate}\n\\item Cut through the Feynman diagram in all possible ways such that the cut propagators can simultaneously be put on mass shell and the cut propagators' {\\em positive-on-shell} momentum propagating directions are reverse in every cut momentum loop; keep only two cuts in each cut momentum loop.\n\\item For each cut propagator with definite {\\em positive-on-shell} momentum propagating direction, replace $1\/(p^2-m^2+i\\varepsilon)\\rightarrow -2\\pi\\,i\\,\\theta[p_0]\\,\\delta(p^2-m^2)$ (where the momentum $p$ is along the {\\em positive-on-shell} momentum propagating direction); for each uncut propagator, apply the Cauchy principal value to it; then perform the loop integrals.\n\\item Sum the contributions of all possible cuts.\n\\end{enumerate}\n\nOne can easily find that the above ameliorated cutting rules is different from the conventional one in some aspects. In contrast to the conventional cutting rules, the ameliorated one constrains that only two cuts can exist in one momentum loop and implies that both of the two singularities of Eq.(1) can have contributions to the Feynman amplitude. Besides, it definitely points out that the Cauchy principal value should be applied to the uncut propagator in the loop momentum integrals. We note that, to the best of our knowledge, there exist no similar discussions before. But there exist some calculations of the imaginary parts of the Green functions by the causal perturbative theory which coincide with our results \\cite{c3} (see section III).\n\nThe above ameliorated cutting rules is very abstract. So we give some illustrations in Fig.1-5. The arrows in Fig.1-5 represent the {\\em positive-on-shell} momentum propagating directions of the cut propagators. 
In order to explain the cutting conditions we give two examples in Fig.2.\n\\vspace{4mm}\n\\begin{center} \\begin{picture}(218,25)\n \\SetScale{1.1} \\SetWidth{0.45}\n \\ArrowLine(0,0)(25,0)\n \\ArrowLine(25,0)(46,13)\n \\ArrowLine(46,13)(60,20)\n \\ArrowLine(25,0)(46,-13)\n \\ArrowLine(46,-13)(60,-20)\n \\Line(46,13)(46,-13)\n \\ArrowLine(70,0)(95,0)\n \\ArrowLine(95,0)(116,13)\n \\ArrowLine(116,13)(130,20)\n \\Line(95,0)(116,-13)\n \\ArrowLine(116,-13)(130,-20)\n \\ArrowLine(116,-13)(116,13)\n \\ArrowLine(140,0)(165,0)\n \\Line(165,0)(186,13)\n \\ArrowLine(186,13)(200,20)\n \\ArrowLine(165,0)(186,-13)\n \\ArrowLine(186,-13)(200,-20)\n \\ArrowLine(186,13)(186,-13)\n\\end{picture} \\vspace{10mm} \\\\\n{\\small FIG. 1: Cuts of the one-loop three-point irreducible Feynman diagram.}\n\\end{center}\n\\vspace{2mm}\n\\begin{center} \\begin{picture}(335,90)\n \\ArrowLine(0,0)(25,0)\n \\Line(25,0)(50,0)\n \\ArrowLine(50,0)(75,0)\n \\ArrowLine(25,25)(25,0)\n \\Line(50,0)(50,25)\n \\ArrowLine(0,25)(25,25)\n \\ArrowLine(25,25)(50,25)\n \\ArrowLine(50,25)(75,25)\n \\ArrowLine(85,0)(110,0)\n \\ArrowLine(110,0)(135,0)\n \\ArrowLine(135,0)(160,0)\n \\ArrowLine(110,0)(110,25)\n \\Line(135,0)(135,25)\n \\ArrowLine(85,25)(110,25)\n \\Line(110,25)(135,25)\n \\ArrowLine(135,25)(160,25)\n \\ArrowLine(170,0)(195,0)\n \\Line(195,0)(220,0)\n \\ArrowLine(220,0)(245,0)\n \\Line(195,25)(195,0)\n \\ArrowLine(220,0)(220,25)\n \\ArrowLine(170,25)(195,25)\n \\ArrowLine(195,25)(220,25)\n \\ArrowLine(220,25)(245,25)\n \\ArrowLine(255,0)(280,0)\n \\ArrowLine(280,0)(305,0)\n \\ArrowLine(305,0)(330,0)\n \\Line(280,25)(280,0)\n \\ArrowLine(305,25)(305,0)\n \\ArrowLine(255,25)(280,25)\n \\Line(280,25)(305,25)\n \\ArrowLine(305,25)(330,25)\n \\ArrowLine(25,55)(50,55)\n \\ArrowLine(50,55)(75,55)\n \\ArrowLine(75,55)(100,55)\n \\Line(50,80)(50,55)\n \\Line(75,55)(75,80)\n \\ArrowLine(25,80)(50,80)\n \\ArrowLine(50,80)(75,80)\n \\ArrowLine(75,80)(100,80)\n \\ArrowLine(125,55)(150,55)\n 
\\Line(150,55)(175,55)\n \\ArrowLine(175,55)(200,55)\n \\ArrowLine(150,80)(150,55)\n \\ArrowLine(175,80)(175,55)\n \\ArrowLine(125,80)(150,80)\n \\Line(150,80)(175,80)\n \\ArrowLine(175,80)(200,80)\n \\Text(133,88)[]{$p_1$}\n \\Text(192,88)[]{$p_2$}\n \\Text(162,45)[]{$(p_{10}>p_{20})$}\n \\ArrowLine(225,55)(250,55)\n \\Line(250,55)(275,55)\n \\ArrowLine(275,55)(300,55)\n \\ArrowLine(250,55)(250,80)\n \\ArrowLine(275,55)(275,80)\n \\ArrowLine(225,80)(250,80)\n \\Line(250,80)(275,80)\n \\ArrowLine(275,80)(300,80)\n \\Text(233,88)[]{$p_1$}\n \\Text(292,88)[]{$p_2$}\n \\Text(262,45)[]{$(p_{10}$ and $|{\\bf k}_1{\\bf k}_2\\hspace{-1mm}>$. To evaluate the right-hand side, insert a complete set of intermediate states,\n\\begin{equation}\n <\\hspace{-1mm}{\\bf p}_1{\\bf p}_2|T^{\\dagger}T|{\\bf k}_1{\\bf k}_2\\hspace{-1mm}>\\,=\\,\n \\sum_f\\int d\\prod_f <\\hspace{-1mm}{\\bf p}_1{\\bf p}_2|T^{\\dagger}|f\\hspace{-1mm}>\n <\\hspace{-1mm}f|T|{\\bf k}_1{\\bf k}_2\\hspace{-1mm}>\\,,\n\\end{equation}\nwhere the sum runs over all possible sets $f$ of intermediate states and $d\\prod_f$ represents the phase space integral of $f$. Now express the T-matrix elements as invariant matrix elements $\\cal M$ times 4-momentum-conserving delta functions. Eq.(14) then becomes\n\\begin{equation}\n {\\cal M}(k_1 k_2\\rightarrow p_1 p_2)-{\\cal M}^{\\ast}(p_1 p_2\\rightarrow k_1 k_2)\\,=\\,\n i\\sum_f \\int d\\prod_f{\\cal M}^{\\ast}(p_1 p_2\\rightarrow f)\n {\\cal M}(k_1 k_2\\rightarrow f)\\,.\n\\end{equation}\nSetting $k_i=p_i$ and applying the kinematic factors required by Eq.(16) to build a cross section, one obtains the standard form of the optical theorem,\n\\begin{equation}\n Im{\\cal M}(k_1 k_2\\rightarrow k_1 k_2)\\,=\\,2 E_{cm} p_{cm}\n \\sigma_{tot}(k_1 k_2\\rightarrow anything)\\,,\n\\end{equation}\nwhere $E_{cm}$ is the total center-of-mass energy and $p_{cm}$ is the momentum of either particle in the center-of-mass frame. 
Considering the $1\rightarrow 1$ process one obtains the well-known relationship,\n\\begin{equation}\n Im{\cal M}(a\rightarrow a)\,=\,m_a\Gamma_a\,,\n\\end{equation}\nwhere $m_a$ and $\Gamma_a$ are the mass and decay width of particle $a$.\n\nIt seems that the optical theorem is a straightforward consequence of the unitarity of the S-matrix. But we find that there are some severe contradictions in the optical theorem. We first discuss a concrete physical amplitude. It is known that the imaginary part of a physical amplitude comes from two sources: either the coupling constant has an imaginary part, or the Feynman amplitude has a branch cut. At one-loop level a physical amplitude can be expressed as\n\\begin{equation}\n {\cal M}(i\rightarrow f)\,=\,\sum_k g_k (a_k+i\,b_k)\,,\n\\end{equation} where the sum runs over different interactions, $a_k$ is a\nreal number, $b_k$ is the imaginary part coming from the branch\ncut of the Feynman amplitude, and all of the imaginary parts of the\ncoupling constants are included in the coefficient $g_k$. For\nconvenience we define two concepts: the {\em quasi-imaginary part}\nand the {\em quasi-real part} of the Feynman amplitude, which\nrespectively denote the parts of the Feynman amplitude coming from\nthe branch cut and from the normal integral. For example, in Eq.(19) the\n{\em quasi-imaginary part} of the physical amplitude is $\sum_k\ng_k b_k$ and the {\em quasi-real part} of the physical amplitude\nis $\sum_k g_k a_k$. Under the charge conjugation transformation\nonly the imaginary part of the coupling constant changes sign,\nwhile all of the other quantities in Eq.(19) remain\nunchanged.
So we have from the CPT conservation law \\begin{equation}\n {\cal M}^{\ast}(f\rightarrow i)\,=\,{\cal M}^{\ast}(\bar{i}\rightarrow \bar{f})\n \,=\,\sum_k g_k(a_k-i\,b_k)\,.\n\\end{equation}\nFrom Eqs.(19) and (20) we have\n\\begin{equation}\n {\cal M}(i\rightarrow f)-{\cal M}^{\ast}(f\rightarrow i)\,=\,2\,i\,\sum_k g_k b_k\,\equiv\,\n 2\,i\,\tilde{Im}{\cal M}(i\rightarrow f)\,,\n\\end{equation}\nwhere $\tilde{Im}$ takes the {\em quasi-imaginary part} of the physical amplitude. This result clearly demonstrates the relationship between the optical theorem and the cutting rules. If the optical theorem is right, one has for the physical amplitude of the top quark decaying into a charm quark and the gauge boson $Z$\n\\begin{equation}\n \tilde{Im}{\cal M}(t\rightarrow c\,Z)\,=\,\frac{1}{2}\sum_f \int d\prod_f {\cal M}^{\ast}(c\,Z\rightarrow f)\n {\cal M}(t\rightarrow f)\,.\n\\end{equation}\nOn the other hand, one can calculate the {\em quasi-imaginary part} of ${\cal M}(t\rightarrow c\,Z)$ from the branch cuts of the Feynman amplitude. The one-loop Feynman diagrams of $t\rightarrow c\,Z$ are shown in Fig.8.\n\\begin{figure}[htbp]\n\\begin{center}\n \\epsfig{file=tcZ_all.ps,width=12cm} \\\\\n \\caption{One-loop diagrams of the physical amplitude of $t\rightarrow c\,Z$.}\n\\end{center}\n\\end{figure}\nWe note that there is no need to introduce a counterterm into the one-loop physical amplitude of $t\rightarrow c\,Z$ after including the contributions of the diagrams of Fig.8.
From the branch cuts of Fig.8 we find a contradiction in the optical theorem: Eq.(22) forbids the cuts representing the contribution of $\sum_f\int d\prod_f {\cal M}^{\ast}(c\rightarrow f){\cal M}(t\rightarrow Z\,f)$, which include the cuts of the $t\rightarrow c$ two-point subdiagrams in the seventh and eighth diagrams of Fig.8, yet these same cuts are permitted by Eq.(18). Obviously the contribution of the cuts of the $t\rightarrow c$ two-point subdiagrams in the seventh and eighth diagrams of Fig.8 to the physical amplitude is not equal to zero.\n\nIn fact there exists a more severe contradiction. From the\nunitarity of the S-matrix we also have $T^{\dagger}T=T T^{\dagger}$.\nThus Eq.(15) can be changed into \\begin{equation}\n <\hspace{-1mm}{\bf p}_1{\bf p}_2|T^{\dagger}T|{\bf k}_1{\bf k}_2\hspace{-1mm}>\,=\,\n \sum_f\int d\prod_f <\hspace{-1mm}{\bf p}_1{\bf p}_2|T|f\hspace{-1mm}>\n <\hspace{-1mm}f|T^{\dagger}|{\bf k}_1{\bf k}_2\hspace{-1mm}>\,.\n\\end{equation}\nSince each intermediate state $f$ has its own unique parameters (e.g. mass), Eq.(23) being equal to Eq.(15) means that all of the terms for each $f$ are separately equal, i.e.
for arbitrary $f$\n\\begin{equation}\n <\hspace{-1mm}{\bf p}_1{\bf p}_2|T^{\dagger}|f\hspace{-1mm}>\,=\,\n <\hspace{-1mm}{\bf p}_1{\bf p}_2|T|f\hspace{-1mm}>\,, \hspace{10mm}\n <\hspace{-1mm}f|T|{\bf k}_1{\bf k}_2\hspace{-1mm}>\,=\,\n <\hspace{-1mm}f|T^{\dagger}|{\bf k}_1{\bf k}_2\hspace{-1mm}>\,.\n\\end{equation}\nSince $f$ is arbitrary, we ultimately obtain\n\\begin{equation}\n {\cal M}(k_1 k_2\rightarrow p_1 p_2)-{\cal M}^{\ast}(p_1 p_2\rightarrow k_1 k_2)\,=\,\n 0\,.\n\\end{equation}\nObviously Eq.(25) contradicts Eq.(16).\n\nAnother problem with the optical theorem is the following: if charge\nconjugation is conserved, then by the CPT conservation law the\nleft-hand side of Eq.(16) is purely imaginary, but the right-hand\nside of Eq.(16) may contain a real part which comes from the branch\ncuts of the physical amplitudes. Why are there such contradictions\nand problems? In fact the S-matrix is only a symbolic operator;\nit does not possess the algebraic operations that a true operator\nshould have, e.g. multiplication and addition. This is because\nthe S-matrix represents the summation of the interactions of the\nphysical world, so it is not possible to express it as a concrete\noperator with well-defined algebraic operations. Therefore the\nderivation of Eqs.(14-16) is not right.\n\n\\section{Gauge independence of the physical result under the ameliorated cutting rules}\n\nIn order to investigate whether the ameliorated cutting rules are reasonable, we calculate the physical amplitude $t\rightarrow c\,Z$ to see if the ameliorated cutting rules keep the physical result gauge independent.\n\nIn Fig.8 we have shown the one-loop diagrams of the physical process $t\rightarrow c\,Z$. Since there is no tree-level diagram, there is no need to introduce a counterterm at one-loop level. We note that in the following calculations we have used the program packages {\em FeynArts, FeynCalc, FormCalc and LoopTools} \cite{c6}.
Our calculations have shown that the {\em quasi-real part} of ${\cal M}(t\rightarrow c\,Z)$ is gauge-parameter independent. So we only need to consider the {\em quasi-imaginary part} of ${\cal M}(t\rightarrow c\,Z)$. Using the ameliorated cutting rules we obtain (see also Fig.1)\n\\begin{equation}\n \tilde{Im}{\cal M}(t\rightarrow c\,Z)|_{\xi}\,=\,0\,,\n\\end{equation}\nwhere the subscript $\xi$ denotes the gauge-dependent part of the quantity. From Eq.(26) one can easily see that all of the unphysical cuts (e.g. $t\rightarrow G^{+}d_i$) in Fig.8 have been cancelled out in the physical amplitude by the ameliorated cutting rules. This coincides with the conventional knowledge \cite{c7}. Obviously such a result makes the decay width of $t\rightarrow c\,Z$ gauge independent to two-loop level.\n\nBy the way, from the above discussions we know that the optical theorem gives a contradictory result for the {\em quasi-imaginary part} of ${\cal M}(t\rightarrow c\,Z)$. If we keep the cuts of the $t\rightarrow c$ two-point subdiagrams in the seventh and eighth diagrams of Fig.8 and, for the first six diagrams of Fig.8, keep only the first kind of cut of Fig.1, as required by Eq.(22), we obtain\n\\begin{eqnarray}\n \tilde{Im}{\cal M}(t\rightarrow c\,Z)|_{\xi}&=&\bar{c}\,\n {\xslash \epsilon^{\ast}}\gamma_L\,t\,\sum_i\frac{V_{2i}V^{\ast}_{3i}\,e^3\n (3-4 s_W^2)(x_c-\xi_W-x_{d,i})}{384\pi\,c_W\,s_W^3 x_c} \nonumber \\\\\n &\times&\left( x_c^2-2(\xi_W+x_{d,i})x_c+(\xi_W-x_{d,i})^2 \right)^{1\/2}\,\n \theta[m_c-m_{d,i}-M_W\sqrt{\xi_W}]\,,\n\\end{eqnarray}\nwhere $\gamma_L$ is the left-handed helicity operator, $V_{2i}$ and $V_{3i}$ are the CKM matrix elements \cite{c8}, $e$ is the electron charge, $s_W$ and $c_W$ are the sine and cosine of the weak mixing angle, $M_W$ and $\xi_W$ are the mass and gauge parameter of the gauge boson $W$, $m_c$ and $m_{d,i}$ are the masses of the charm quark and the down-type quark $i$, $x_c=m_c^2\/M_W^2$ and
$x_{d,i}=m_{d,i}^2\/M_W^2$. This result is gauge dependent and renders the decay width of $t\rightarrow c\,Z$ gauge dependent. One can easily see that the gauge-dependent terms of Eq.(27) come from the unphysical cuts of the $t\rightarrow c$ two-point subdiagrams in the seventh and eighth diagrams of Fig.8.\n\n\\section{Conclusion}\n\nIn order to calculate the contribution of the singularity of the Feynman propagator to the Feynman amplitude we investigate the cutting rules and the optical theorem. We ameliorate the cutting rules in order to make them suitable for actual calculations and to give the right result for the imaginary part of the physical amplitude. The calculations of the imaginary parts of several Feynman diagrams show that the ameliorated cutting rules agree with the conventional integral algorithm very well (see Figs.6,7 and Eqs.(6,8,9,10)). On the other hand, through careful investigation we find that the optical theorem suffers from severe contradictions and problems and thus is not right. Besides, the calculation of the physical amplitude $t\rightarrow c\,Z$ shows that the ameliorated cutting rules keep the decay width of the physical process gauge independent to two-loop level (see Eq.(26)).\n\nFrom the viewpoint of the conventional cutting rules the branch cut of a Feynman diagram contributes the imaginary part of the physical amplitude. Our investigation further finds that although the Feynman diagram does not represent a real physical process, it does provide a physical-like picture to describe the propagation of on-shell virtual particles in the quantum field vacuum, if such processes exist. The ameliorated cutting rules not only give a feasible algorithm to calculate the contributions of the virtual processes to the physical amplitude, but may also help us to discover the deep-seated physical meaning of quantum field theory.\n\n\\vspace{5mm} {\bf \\Large Acknowledgments} \\vspace{2mm}\n\nThe author thanks Prof.
Cai-dian Lu for fruitful discussions and for corrections to the text. The author also thanks Yue-long Shen, Dr. Jian-feng Cheng, Xian-qiao Yu, Ge-liang Song and Ying Li for fruitful discussions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{intro}\n\\subsection{Historical motivation}\n\\label{history}\nEinstein made a number of very pertinent remarks about the foundations of quantum mechanics, often summarised in the famous\n phrase ``God\ndoes not play dice\". For example, in 1935 he wrote \cite{Einstein1935}:\n\\begin{quote}\n``\nIn any case one does not have the right today to maintain that the foundation must consist in a field theory in the sense of Maxwell.\nThe other possibility, however, leads in my opinion to a renunciation of the time-space continuum and a purely algebraic physics.\nLogically this is quite possible [...] Such a theory doesn't have to be based on the probability concept.\"\n\\end{quote}\nIn 1954, he went further, and opined \cite{Einstein1954}:\n\\begin{quote}\n``I consider it quite possible that physics cannot be based on the field concept, i.e., continuous structure.\nIn that case, nothing remains of my entire castle in the air, gravitation theory included.\"\n\\end{quote}\nNevertheless, modern physics has stuck with the field concept, and his `castle in the air' remains in general use.\n\nIn 1935, there simply was not enough experimental evidence to provide much of a clue as to the possible structure\nof a `purely algebraic physics', beyond the obvious fact that it must contain a copy of the quaternion group. But today, there is\nan enormous amount of experimental evidence that essentially shows there is a unique possibility for a purely\nalgebraic physics \cite{perspective}. Whether it actually works or not is a separate question.
But we should at least try it.\nI begin by considering how much of physics, including continuous approximations interpreted as\nspacetime and fields \nin spacetime, \ncan be obtained from the quaternion group alone.\n\n\n\n\\subsection{The spin group and the quaternion group}\n\\label{quaternion}\nGroup theory \cite{Zee}\nfirst entered into quantum mechanics with the discovery of the spin of an electron, which has a direction relative\nto the ambient space, and therefore requires a group $SU(2)$ for its description. On a macroscopic scale, the sum of the spins of a large number of electrons, protons and neutrons gives rise to\nthe phenomenon of magnetism, when there is a sufficiently large bias in the directions of spin. \n\nBut it is experimentally impossible to measure the\ndirection of spin of an individual elementary particle. \nAll one can do is choose a direction, and measure whether the spin is `up' or `down' in that direction. \nIndeed, even for larger particles, such as silver atoms, the Stern--Gerlach experiment \cite{SternGerlach}\ndemonstrates that the direction of spin is\nquantised.\nTherefore we must assume that the elementary particles have,\nintrinsically, only finite symmetries. The smallest finite group that is available to describe spin symmetries is the\nquaternion group $Q_8$, with $8$ elements $\pm 1$, $\pm i$, $\pm j$ and $\pm k$ satisfying the rules \n\\begin{eqnarray}\n&ij=k, jk=i, ki=j, \cr\n&i^2=j^2=k^2=-1.\n\\end{eqnarray}\n\nNow if we take a large number of copies of $Q_8$, and add them together, we obtain \cite{grouprings} \nthe integral group ring $\mathbb Z Q_8$, in which all the copies of $Q_8$ are identical, meaning that all electrons are identical, and so on. In macroscopic physics, the individual quanta disappear from view, and it\nbecomes reasonable to approximate the scaled copy of the integers $\varepsilon \mathbb Z$ by the real numbers $\mathbb R$, and study the\ngroup algebra $\mathbb RQ_8$ instead.
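As an illustrative sketch (my addition, not part of the paper), the multiplication rules above can be verified mechanically by modelling the eight elements of $Q_8$ as quadruples of integer coefficients over the basis $(1,i,j,k)$ with the Hamilton product:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) coefficient tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def neg(q):
    return tuple(-c for c in q)

one = (1, 0, 0, 0)
i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

Q8 = [one, neg(one), i, neg(i), j, neg(j), k, neg(k)]

# the defining rules of Q_8 quoted in the text
assert qmul(i, j) == k and qmul(j, k) == i and qmul(k, i) == j
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg(one)

# closure: the eight elements form a group under the Hamilton product
assert all(qmul(a, b) in Q8 for a in Q8 for b in Q8)
print("Q8 multiplication rules and closure verified")
```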
There is, up to equivalence, only one faithful representation of $Q_8$, in the quaternion algebra $\\mathbb H$,\nand there are four\n$1$-dimensional real representations, of which one is trivial and three are representations of the three quotients $Q_8\/Z_4\\cong Z_2$.\nTherefore the group algebra has a canonical (Wedderburn) decomposition as \n\\begin{eqnarray}\n\\mathbb R Q_8 &\\cong & 4\\mathbb R + \\mathbb H.\n\\end{eqnarray}\nTaking out the five real scale factors from this algebra, we are left with the group $SU(2)$, which is exactly what we require in order to describe\nmagnetic spin in ordinary macroscopic space.\n\n\\subsection{Analysis of the group algebra}\n\\label{analysis}\nIn the representation theory of finite groups, the usual convention is for the finite group to act on the algebra by right multiplication, and the continuous groups to act by left-multiplication. This allows us to treat both the discrete and continuous symmetries simultaneously. \nThe macroscopic behaviour, on the other hand, requires the continuous group to act on both sides,\nby conjugation, so that $SU(2)$ acts on $\\mathbb H$ as $SO(3)$, that is, fixing the real part, and rotating the imaginary part as a Euclidean\n$3$-space. \nIt is standard to interpret this $3$-space as real physical space, in the case when $SU(2)$ represents magnetic spin.\nIt would be reasonable, then, to interpret the real part of $\\mathbb H$ as a (non-relativistic) time. This allows the `spin' to be modelled \nas something\nthat takes place \nin space and time, rather than in isolation.\n\nBut the finite group has given us something in addition, namely four copies of the real numbers. These \npresumably represent four physical \nparticles, at least three of which we can\ndetect magnetically. \nThese\nfour particles \nare acted on by the finite group on the right, in three cases to change the sign, so that these three \nappear in a pair of spin states. 
They are invariant under conjugation by $SU(2)$, so under symmetries of\nmacroscopic space. But they are not invariant under the left action of the finite group. Since we started out modelling spin of electrons\nand protons, these had better be two of the particles, and the most plausible\ninterpretation of the other two is as the neutron\nand (electron) neutrino. The first three appear in two spin states, the last in only one, as we know also from the Wu experiment \\cite{Wu}.\nThus this action describes the weak interaction, in the form of beta decay:\n\\begin{eqnarray}\ne+p &\\leftrightarrow& n+\\nu.\n\\end{eqnarray}\n\nNote that this is an action of the finite group, not of the spin group.\nIn the standard model \\cite{Griffiths}, the weak interaction is described instead by a second copy of $SU(2)$, commuting with the spin group.\nIt is of course perfectly possible to define an action of $SU(2)$ on this $4$-dimensional real space, and have this action\ncommute with the spin $SU(2)$, and try to build a model of physics on top.\nBut this action is not compatible with \nthe group algebra.\nThe standard model therefore puts the weak $SU(2)$ elsewhere, and doubles up the Weyl spinor into a Dirac spinor\nso that there will be no conflict between the finite action and the continuous action. By doing so, however, it breaks the connection\nbetween the weak interaction and the spin group, which then has to be put back in by hand, in the form of a mixing\nbetween the weak force and electromagnetism. In the toy model discussed here, the mixing between magnetism and\nthe weak force happens automatically.\n\nIt is worth analysing the modelling of these particles in a little more detail. 
In the group algebra, the four $1$-dimensional representations \ncontain the following elements:\n\\begin{eqnarray}\n1a:&& 1+(-1)+i+(-i)+j+(-j)+k+(-k);\\cr\n1b: && 1+(-1)+i+(-i)-j-(-j)-k-(-k);\\cr\n1c:&& 1+(-1)-i-(-i)+j+(-j)-k-(-k);\\cr\n1d:&& 1+(-1)-i-(-i)-j-(-j)+k+(-k).\n\\end{eqnarray}\nWe then see that the elements of the group can be interpreted, not as particles, but as quantum numbers of the particles. Two quantum numbers are sufficient to distinguish these four particles, so that for example, if $\\lambda_i$ and $\\lambda_j$ denote the coefficients of $i$ and $j$ respectively, then we can take the charge to be $(\\lambda_j-\\lambda_i)\/2$, and the weak isospin to be $\\lambda_j\/2$. We do not need to specify the coefficient of $k$, that is $\\lambda_k=\\lambda_i\\lambda_j$.\n\nIn a similar way, the other half of the group algebra has a basis consisting of\n\\begin{eqnarray}\nt&:=& 1-(-1),\\cr\nx&:=& i-(-i),\\cr\ny&:=& j-(-j),\\cr\nz&:=& k-(-k).\n\\end{eqnarray}\nThe names are chosen with the intention of suggesting a potential quantisation of spacetime, with $z$ representing the direction of spin, and the $x,y$ plane relating to the charge and isospin. \nThe (speculative) idea, which I shall explore in more detail below, is that the combination of spin, charge and weak isospin for a (large) collection of interacting elementary particles \nmay be sufficient to define the ambient physical spacetime.\n\nThis \nexample illustrates the general principle of how a macroscopic force, in this case magnetism, emerges from individual quanta. In particular,\nit illustrates how a macroscopic continuous symmetry arises from a microscopic discrete symmetry. It also illustrates how a breaking of the symmetry\nat the microscopic level arises from an interaction between elementary particles, in this case the weak interaction,\nand gives rise to a breaking of symmetry also at the\nmacroscopic level. 
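The quantum-number assignments above can be checked to be consistent; in the following small sketch (my addition), the particle labels follow the neutrino, electron, proton and neutron interpretation suggested earlier, which is an assumption of this illustration:

```python
from fractions import Fraction

# signs (lambda_i, lambda_j) of i and j in the four 1-dimensional
# representations 1a-1d listed above
reps = {"1a": (1, 1), "1b": (1, -1), "1c": (-1, 1), "1d": (-1, -1)}

# charge (lambda_j - lambda_i)/2 and weak isospin lambda_j/2, as in the text
charge = {n: Fraction(lj - li, 2) for n, (li, lj) in reps.items()}
isospin = {n: Fraction(lj, 2) for n, (li, lj) in reps.items()}

# one consistent particle assignment (an assumption of this sketch):
# 1a = neutrino, 1b = electron, 1c = proton, 1d = neutron
assert charge["1a"] == 0 and charge["1b"] == -1
assert charge["1c"] == 1 and charge["1d"] == 0

# charge is conserved in beta decay  e + p <-> n + nu
assert charge["1b"] + charge["1c"] == charge["1d"] + charge["1a"]

print({n: (charge[n], isospin[n]) for n in reps})
```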
The quaternion group $Q_8$ on its own is therefore sufficient to model a united magneto-weak force. It is not sufficient for\na complete electro-magneto-weak unification, however, for which we need a larger group.\nThis larger group must be capable of dealing with the fact that the electron comes in three different mass states, that is in three different\ngenerations. \n\n\\subsection{The automorphism group}\n\\label{auto}\nThe symmetries of this finite model of spin are described by the automorphism group of $Q_8$, that is isomorphic to\n$Sym(4)$, the symmetric group on $4$ letters.\nThe automorphisms are of five types, where I denote the letters $W,X,Y,Z$: \n\\begin{itemize}\n\\item the identity element, of order $1$.\n\\item $3$ even permutations of order $2$, such as $(W,X)(Y,Z)$; these are inner automorphisms, that is conjugation by $i$, $j$ and $k$. \n\\item $8$ even permutations of order $3$, such as $(X,Y,Z)$; these can be represented as conjugation by unit quaternions $(-1\\pm i\\pm j\\pm k)\/2$.\n\\item $6$ odd permutations of order $2$, such as $(W,X)$; represented as conjugation by $i\\pm j$, $j\\pm k$ and $k\\pm i$.\n\\item $6$ odd permutations of order $4$, such as $(W,X,Y,Z)$; represented as conjugation by $1\\pm i$, $1\\pm j$ and $1\\pm k$.\n\\end{itemize} \n\nThese symmetries have many different interpretations, depending on what the original copy of $Q_8$ represents. The kind of symmetry that\nit might be useful to look for is an extension of the neutrino\/electron\/proton\/neutron symmetry discussed in the previous section, to\nthree generations of electron, plus the baryon octet \\cite{GellMann}. \nIn this way the baryons would exhibit a $3$-fold symmetry represented by the $3$-cycles,\nthat could perhaps be interpreted as a colour symmetry of the three constituent quarks \\cite{QCD}. Similarly,\nthe electrons would divide into two transpositions, possibly representing the left-handed and right-handed spins. 
\n\nAlternatively, or in addition,\none might want to consider the odd permutations as representing leptons and quarks. The transpositions could then represent either\nthe left-handed and right-handed electrons, \nor the left-handed parts of both neutrinos and electrons. Similarly,\nthe $4$-cycles could represent quarks, either in up\/down pairs \nor in left\/right pairs.\nAdditional interpretations might arise from consideration of particle interactions, so that the identity element represents electromagnetic\ninteractions, \n the $3$ elements of order $2$ represent the weak interaction,\nand the $8$ elements of order $3$ the strong interaction. The strong interaction manifests itself both in massless form (as gluons) and\nin massive form (as the meson octet, including pions and kaons).\n\nIt may be worth mentioning in passing that if one extends the $S_4$ action by conjugation of the quaternions listed above,\nto include both left and right multiplications, then one obtains the Weyl group of type $F_4$. This may be the fundamental reason\nwhy the Lie group of type $F_4$, and various related types such as $D_4$ and $E_6$, are so attractive as potential ways\nto extend the standard model \\cite{E6,E8,DG,Furey,DMW}. \n\n\\section{The binary tetrahedral group}\n\\label{tetrahedron}\n\\subsection{Overview}\n\\label{overview}\nThere is only one non-trivial way to adjoin a triplet symmetry to $Q_8$,\nand that is to extend \nto the binary tetrahedral group, $G$ say, of order $24$, by adjoining the quaternion\n\\begin{eqnarray}\nw&:=& (-1+i+j+k)\/2,\n\\end{eqnarray}\nand therefore also its quaternion conjugate\n\\begin{eqnarray}\nv&:=& (-1-i-j-k)\/2.\n\\end{eqnarray}\nThis group and its representation theory are described in some detail in \\cite{perspective}, but we shall require \nyet more detail here. \nI begin with a summary. 
\n\nThe most important thing to note is the structure of the real group algebra:\n\\begin{eqnarray} \n\\mathbb R G &\\cong & \\mathbb R + \\mathbb C + \\mathbb H + M_2(\\mathbb C) + M_3(\\mathbb R).\n\\end{eqnarray}\nTaking out four real scalars and one complex scalar leaves us with the group\n\\begin{eqnarray}\nU(1) \\times SU(2) \\times SL(2,\\mathbb C)\\times SL(3,\\mathbb R),\n\\end{eqnarray}\nwhich contains all the groups we need for the standard model of particle physics, apart from the fact that we have the\nsplit real form $SL(3,\\mathbb R)$ rather than the compact real form $SU(3)$ of the Lie group of type $A_2$.\n\nThis difference may reflect the fact that we are looking at finite (generation) \nsymmetries that can be observed in\nexperiments, rather than continuous\n(colour) symmetries that are not observable. It may be possible to resolve this\nissue by extending to the complex group algebra, but this process obscures a number of important features of\nthe real group algebra, so I will only do this if absolutely necessary. 
\nIn any case, we must bear in mind that this difference must be resolved at some point,\nsince it may be the difference between a viable model and an unviable model.\n\n\\subsection{Irreducible representations}\n\\label{irreps}\nRepresentation theory is usually presented first over the complex numbers, since this is the simplest theory, but we shall\nrequire the extra subtlety of the representation theory over real numbers.\nFirst note that the conjugation action of the group on itself divides the group into $7$\nconjugacy classes, of sizes $1,1,6,4,4,4,4$, as listed in this table: \n\\begin{eqnarray}\n&\\begin{array}{cccc}\n\\mbox{Size} & \\mbox{Elements} & \\mbox{Order}\\cr\\hline\n1 & 1 = e& 1\\cr\n1 & -1 = i^2& 2\\cr\n6 & \\pm i, \\pm j, \\pm k & 4\\cr\n4 & w, wi, wj, wk & 3\\cr\n4 & v, -vi,-vj, -vk & 3\\cr\n4 &- w, -wi, -wj, -wk & 6\\cr\n4 & -v, vi,vj, vk & 6\\cr\\hline\n\\end{array}&\n\\end{eqnarray}\n\nThere are therefore $7$ complex irreducible representations, whose characters are listed as rows in the following table. The top row gives a representative of the conjugacy class, and the entries are the traces of the representing matrices, where $\\omega$ and $\\bar\\omega$ are the complex cube roots of unity.\n\\begin{eqnarray}\n\\begin{array}{ccccccc}\n1&-1&i&w&-w&v&-v\\cr\\hline\n1&1&1&1&1&1&1\\cr\n1&1&1&\\omega&\\omega&\\bar\\omega&\\bar\\omega\\cr\n1&1&1&\\bar\\omega&\\bar\\omega&\\omega&\\omega\\cr\n3&3&-1&0&0&0&0\\cr\n2&-2&0&-1&1&-1&1\\cr\n2&-2&0&-\\omega&\\omega&-\\bar\\omega&\\bar\\omega\\cr\n2&-2&0&-\\bar\\omega&\\bar\\omega&-\\omega&\\omega\\cr\\hline\n\\end{array}\n\\end{eqnarray}\nThe three $2$-dimensional representations will eventually play the role of Weyl spinors, but there are three of them, \nnot just the familiar left-handed and right-handed Weyl spinors. 
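As a consistency sketch (my own check, not part of the paper), the complex character table above satisfies the standard row orthogonality relations $\sum_c |c|\,\chi_a(c)\,\overline{\chi_b(c)}=|G|\,\delta_{ab}$, using the class sizes $1,1,6,4,4,4,4$ and $|G|=24$:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # primitive complex cube root of unity
omegab = omega.conjugate()

# class sizes for the classes ordered (1, -1, i, w, -w, v, -v)
sizes = [1, 1, 6, 4, 4, 4, 4]

# rows of the complex character table as given above
chars = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, omega, omega, omegab, omegab],
    [1, 1, 1, omegab, omegab, omega, omega],
    [3, 3, -1, 0, 0, 0, 0],
    [2, -2, 0, -1, 1, -1, 1],
    [2, -2, 0, -omega, omega, -omegab, omegab],
    [2, -2, 0, -omegab, omegab, -omega, omega],
]

order = sum(sizes)
assert order == 24                     # |G| = 24

for a, ca in enumerate(chars):
    for b, cb in enumerate(chars):
        ip = sum(n * x * complex(y).conjugate()
                 for n, x, y in zip(sizes, ca, cb))
        assert abs(ip - (order if a == b else 0)) < 1e-9
print("row orthogonality of the character table verified")
```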
\n Of course, the standard model for electro-weak interactions really has three Weyl spinors, two left-handed and one right-handed, so the question is whether it is possible, and if so, how, to match up the three spinors in the two models.\n\nThe structure of the complex group algebra \ncan be read off from the dimensions of the representations, and is\n\\begin{eqnarray}\n3\\mathbb C + M_3(\\mathbb C) + 3M_2(\\mathbb C).\n\\end{eqnarray}\nIn addition to the problem of mixing and matching the three copies of $SU(2)$ or $SL(2,\\mathbb C)$ inside the three copies of $M_2(\\mathbb C)$,\n there is the problem of mixing and matching the three copies of $U(1)$ inside the three copies of $\\mathbb C$. There is too much choice at this stage,\n and the problem becomes rather more tractable if we restrict to the real group algebra. Of course, this restriction means that we lose the group $SU(3)$\nfrom $M_3(\\mathbb C)$, but we have $SL_3(\\mathbb R)$ instead. This different real form may or may not be a satisfactory replacement for the \ngauge group of the strong force. \n\nThe irreducible real representations can be described by taking the sum of a complex representation with its complex conjugate, and at the same time\ntaking the union of each conjugacy class with its inverse class. This reduces the table to five rows and five columns, as follows:\n\\begin{eqnarray}\n\\begin{array}{ccccc}\n1&-1&i&w&-w\\cr\\hline\n1&1&1&1&1\\cr\n2&2&2&-1&-1\\cr\n3&3&-1&0&0\\cr\n4&-4&0&-2&2\\cr\n4&-4&0&1&-1\\cr\\hline\n\\end{array}\n\\end{eqnarray} \nThe first of the two $4$-dimensional representations is quaternionic, so has a continuous $3$-parameter family of possible complexifications.\nThe second of the two $4$-dimensional representations, on the other hand, has a discrete set of exactly two possible complexifications. 
This interplay between the discrete and the continuous \nmay possibly play an important role in the structure of the Dirac spinor and the Dirac algebra \cite{Dirac}.\n\nIn order to distinguish these two representations, let us write the first one $4_H$, to denote a Hamiltonian quaternionic representation, \nand the second one $4_C$, to denote a classical complex representation. Then the group algebra, \nas a representation of the finite group $G$, has the structure\n\\begin{eqnarray}\n1+2+3+3+3+4_H+4_C+4_C.\n\\end{eqnarray}\n\n\\subsection{Tensor products}\n\\label{tensors}\nSo far, I have associated the various Lie groups (gauge groups and spin groups) with a single representation of the finite group.\nThe finite group links them together in various ways, described in part by the decompositions of tensor products of representations.\nThese decompositions can be calculated easily from the character table, and are as follows, with $+$ signs omitted to save space:\n\\begin{eqnarray}\n\\begin{array}{c|cccc}\n\\otimes&2 & 3 &4_H & 4_C\cr\\hline\n2 & 112 & 33 & 4_C4_C & 4_H4_C\cr\n3 & 33\n& 1233 & 4_H4_C4_C & 4_H4_C4_C\cr\n4_H & 4_C4_C&4_H4_C4_C & 11113333 & 223333\cr\n4_C & 4_H4_C&4_H4_C4_C & 223333\n& 1123333 \n\\end{array}\n\\end{eqnarray}\n\nThere is clearly plenty of structure in here that might relate to the structure of the standard model. For example the fact that $(1+2)\otimes 4_H = 4_H+4_C+4_C$ means that the whole of the fermionic part of the algebra can be obtained from a real plus a complex `version' of the spin representation $4_H$.\n\n If one distinguishes the two copies of the real numbers by labelling one of them with $\gamma_5$, then there is \n some prospect of being able to match up the finite model with the standard model. Similarly, all of the $4\otimes 4$ representations look like different real slices of the complex Clifford algebra in the standard model, which for a finite (therefore compact) group must split as $1+(1+3)+(3+3)+(1+3)+1$.
Again we see a mixing of the top and bottom degrees of the Clifford algebra ($1$ and $i\gamma_5$) into a real $2$-space, and a similar mixing in the odd part of the Clifford algebra. All this suggests that a careful distinction between $4_H$ and $4_C$ may be able to throw some light on the mixing of quantum electrodynamics and the weak interaction in the standard model. \n\nA similar picture emerges from the bosonic part of the algebra, in which one obtains the whole algebra from the tensor product $(1+3)\otimes 3 = 1+2+3+3+3$. For the purpose of matching with the standard model, one might wish to go further and observe the isomorphism\n\begin{eqnarray}\n(1+3)\otimes (1+3)&\cong&1+1+2+3+3+3+3\cr\n&\cong& 4_C\otimes 4_C. \n\end{eqnarray} \nHowever one tries to interpret this equivalence, it seems to imply a mixing between the strong force, with a gauge group acting on $1+3$, and the\nelectroweak forces, with a gauge group acting on $4_C$.\n\nIndeed, there are a number of other suggestive isomorphisms between different tensor product representations. For example, for any\nrepresentation $R$ whose character vanishes on the elements of order $4$, such as any sum of copies of the spinor representations $4_H$ and $4_C$, we have\n\begin{eqnarray}\n(1+2)\otimes R &\cong & 3\otimes R,\n\end{eqnarray}\nwhich gives ample scope for mixing a broken $1+2$ symmetry with an unbroken $3$ symmetry.\nAnother example is that\n\begin{eqnarray}\n3\otimes 4_H &\cong& \n3\otimes 4_C,\n\end{eqnarray}\nwhich gives plenty of scope for mixing a spinor of type $4_H$ with a spinor of type $4_C$.\n\n\subsection{Explicit matrices}\n\label{matrices}\nFor the purposes of explicit calculation, it will be useful to have explicit matrix copies of all the irreducible representations. It is sufficient to specify matrices representing the generators $i$ and $w$. In the $1$-dimensional representation, they both act trivially. 
In the $2$-dimensional representation, $w$ is a rotation of order $3$, and $i$ acts trivially, so we may take the matrices\n\\begin{eqnarray}\ni\\mapsto \\begin{pmatrix}1&0\\cr 0&1\\end{pmatrix}, && w\\mapsto \\frac12\\begin{pmatrix}-1& \\sqrt 3\\cr -\\sqrt 3 & -1\\end{pmatrix}\n\\end{eqnarray}\nThe $3$-dimensional representation is the representation as symmetries of a regular tetrahedron, which can be embedded as alternate vertices of a cube, so that the matrices can be taken as\n\\begin{eqnarray}\ni\\mapsto\\begin{pmatrix}1&0&0\\cr 0&-1&0\\cr 0&0&-1\\end{pmatrix},&&\nw\\mapsto\\begin{pmatrix}0&1&0\\cr 0&0&1\\cr 1&0&0\\end{pmatrix}.\n\\end{eqnarray}\nThe representation $4_H$ is the representation by right-multiplication on the quaternions, so that the matrices can be taken as\n\\begin{eqnarray}\ni\\mapsto \\begin{pmatrix}0&1&0&0\\cr -1&0&0&0\\cr 0&0&0&-1\\cr 0&0&1&0\\end{pmatrix},&&\nw\\mapsto\\frac12\\begin{pmatrix}-1&1&1&1\\cr -1&-1&-1&1\\cr -1&1&-1&-1\\cr -1&-1&1&-1\\end{pmatrix}\n\\end{eqnarray}\nIn the standard model the quaternionic symmetry is broken, and a particular complex basis is chosen. 
The following matrices give an example,\nwhich may or may not be similar to what is done in the standard model:\n\\begin{eqnarray}\ni\\mapsto \\begin{pmatrix} i & 0\\cr 0 & -i\\end{pmatrix},&& w\\mapsto \\frac12\\begin{pmatrix}-1+i&1+i\\cr -1+i & -1-i\\end{pmatrix}\n\\end{eqnarray}\nFinally, the representation $4_C$ can be written as \n\\begin{eqnarray}\ni\\mapsto \\begin{pmatrix}0&1&0&0\\cr -1&0&0&0\\cr 0&0&0&-1\\cr 0&0&1&0\\end{pmatrix},&&\nw\\mapsto\\frac14\\begin{pmatrix}\n\\alpha&-\\beta&-\\beta&-\\alpha\\cr \\beta&\\alpha&\\alpha&-\\beta\\cr\n \\alpha&-\\beta&\\beta&\\alpha\\cr \\beta&\\alpha&-\\alpha&\\beta\n \\end{pmatrix}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\alpha&=&1+\\sqrt 3\\cr\n\\beta&=& 1-\\sqrt 3.\n\\end{eqnarray}\nIt is unfortunately not possible to get rid of the annoying square roots of $3$ in this real representation.\nOn the positive side, they are what permits the \nextension of the standard model from one generation of fermions to three. \n\n\\subsection{Projections}\n\\label{projections}\nThe theory of finite group representations contains a canonical set of\nprojections from the group algebra onto the\nvarious matrix subalgebras. \n\nFirst there is the basic division into bosons and fermions, obtained via\nprojection with the idempotents $(e\\pm i^2)\/2$. \nWithin the fermions, there are two projections obtained from the\nsum of all the elements of order $3$, that is\n\\begin{eqnarray}\ns&:=& w+ iw+ jw+ kw + v-iv-jv- kv.\n\\end{eqnarray}\nThese projections are defined by the elements $(2e-s)\/6$ and $(4e+s)\/6$ of the group algebra. 
Notice that the coefficients\nof the identity element here are $1\/3$ and $2\/3$, which suggests some connection with the charges on the quarks.\n\nWithin the bosons, \nthere are three projections, so that the full list is as follows:\n\\begin{eqnarray}\n&\\begin{array}{cc}\n\\mbox{Subalgebra} & \\mbox{Idempotent}\\cr\\hline\n\\mathbb R & (e+i^2)(e+i)(e+j)(e+w+v)\/24\\cr\n\\mathbb C & (e+i^2)(e+i)(e+j)(2e-w-v)\/24\\cr\nM_3(\\mathbb R) & (e+i^2)(3e-i-j-k)\/8\\cr\\hline\n\\mathbb H & (e-i^2)(2e-s)\/12\\cr\nM_2(\\mathbb C) & (e-i^2)(4e+s)\/12 \\cr\\hline\n\\end{array}&\n\\end{eqnarray}\nIf we omit the subalgebra $M_3(\\mathbb R)$ from this description \nwe obtain the $15$-dimensional subalgebra\n\\begin{eqnarray}\n&\\mathbb R+\\mathbb C+\\mathbb H + M_2(\\mathbb C)&\n\\end{eqnarray}\n in which there is a close parallel between the splitting \n $\\mathbb R+\\mathbb C$ using\n $(e+w+v)\/3$ and $(2e-v-w)\/3$, and the splitting $\\mathbb H + M_2(\\mathbb C)$ using $(2e-s)\/6$ and $(4e+s)\/6$.\n \nThese two \npairs of projections appear to be the finite analogues of the \npair of projections $(1\\pm \\gamma_5)\/2$\nthat split the Dirac spinor into left-handed and right-handed Weyl spinors in the standard model,\nand that distinguish the weak force from electromagnetism. \nHowever, the fact that in the finite model the projections on the scalars are different from the projections on the spinors\nallows the finite model to be more subtle than the standard model, and perhaps incorporate a generation structure\ninto this distinction. 
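All five idempotents can be verified by direct computation in the rational group algebra. The following Python sketch (illustrative only) represents the group elements as unit quaternions and checks that the five elements listed above are idempotent, pairwise orthogonal, and sum to the identity, with traces $1,2,9,4,8$ matching the dimensions of the five subalgebras. It assumes, as the formula for $s$ suggests, that $v=w^2=w^{-1}$, so that the terms of $s$ run (up to central signs) over the eight elements of order $3$.

```python
from fractions import Fraction

def qmul(x, y):
    # product of two quaternions stored in doubled coordinates, so that
    # (a, b, c, d) stands for (a + b*i + c*j + d*k)/2
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return ((a1*a2 - b1*b2 - c1*c2 - d1*d2) // 2,
            (a1*b2 + b1*a2 + c1*d2 - d1*c2) // 2,
            (a1*c2 + c1*a2 + d1*b2 - b1*d2) // 2,
            (a1*d2 + d1*a2 + b1*c2 - c1*b2) // 2)

e, i, j, k = (2, 0, 0, 0), (0, 2, 0, 0), (0, 0, 2, 0), (0, 0, 0, 2)
z = qmul(i, i)       # the central element i^2 = -1
w = (-1, 1, 1, 1)    # w = (-1+i+j+k)/2, of order 3
v = qmul(w, w)       # v = w^2 = w^{-1} (an assumption of this sketch)

# close up the binary tetrahedral group of order 24
G = {e}
while True:
    new = {qmul(g, t) for g in G for t in (i, w)} - G
    if not new:
        break
    G |= new
assert len(G) == 24

# group algebra elements as dictionaries {group element: coefficient}
def E(g):
    return {g: Fraction(1)}

def add(*xs):
    out = {}
    for x in xs:
        for g, c in x.items():
            out[g] = out.get(g, Fraction(0)) + c
    return {g: c for g, c in out.items() if c}

def scale(c, x):
    return {g: Fraction(c) * cg for g, cg in x.items()}

def mul(*xs):
    out = xs[0]
    for y in xs[1:]:
        nxt = {}
        for g, cg in out.items():
            for h, ch in y.items():
                gh = qmul(g, h)
                nxt[gh] = nxt.get(gh, Fraction(0)) + cg * ch
        out = {g: c for g, c in nxt.items() if c}
    return out

# s = w + iw + jw + kw + v - iv - jv - kv, exactly as in the text
s = add(E(w), E(qmul(i, w)), E(qmul(j, w)), E(qmul(k, w)), E(v),
        scale(-1, E(qmul(i, v))), scale(-1, E(qmul(j, v))), scale(-1, E(qmul(k, v))))

P_R  = scale(Fraction(1, 24), mul(add(E(e), E(z)), add(E(e), E(i)), add(E(e), E(j)),
                                  add(E(e), E(w), E(v))))
P_C  = scale(Fraction(1, 24), mul(add(E(e), E(z)), add(E(e), E(i)), add(E(e), E(j)),
                                  add(scale(2, E(e)), scale(-1, E(w)), scale(-1, E(v)))))
P_M3 = scale(Fraction(1, 8),  mul(add(E(e), E(z)),
                                  add(scale(3, E(e)), scale(-1, E(i)), scale(-1, E(j)), scale(-1, E(k)))))
P_H  = scale(Fraction(1, 12), mul(add(E(e), scale(-1, E(z))), add(scale(2, E(e)), scale(-1, s))))
P_M2 = scale(Fraction(1, 12), mul(add(E(e), scale(-1, E(z))), add(scale(4, E(e)), s)))

ps = [P_R, P_C, P_M3, P_H, P_M2]
# idempotent, pairwise orthogonal, summing to the identity
for a, p in enumerate(ps):
    for b, q in enumerate(ps):
        assert mul(p, q) == (p if a == b else {})
assert add(*ps) == E(e)
# the trace of left multiplication is 24 times the coefficient of the identity,
# giving the dimensions of R, C, M3(R), H and M2(C)
assert [24 * p[e] for p in ps] == [1, 2, 9, 4, 8]
```

The traces $1+2+9+4+8=24$ confirm the direct sum decomposition of the real group algebra into its five matrix components.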
It may be possible, for example, to add in the scalars from $M_3(\mathbb R)$, to obtain a $16$-dimensional\nalgebra that fulfils the function of the Dirac algebra, but with a more subtle structure derived from the action of the finite group.\n\nIt is worth remarking here that because these projections involve dividing by $2$ and $3$, they can be implemented in the real group algebra\n$\mathbb RG$ or the rational group algebra $\mathbb QG$, but not in the integral group ring $\mathbb ZG$. Indeed, the structure of\n$\mathbb ZG$ is much more subtle than that of $\mathbb RG$. Ultimately, to implement the finite model in full we will need to grapple\nwith this structure in detail. For the purposes of the present paper, however, it is enough to consider only the real group algebra. \n\nJust to give a flavour of the implications of using the integral group ring, I'll describe a toy model of the proton using the integral group\nring of the group $Z_3$ of order $3$. If we take $e,v,w$ as the elements of $Z_3$, then the idempotents in $\mathbb RZ_3$ are\n$(e+v+w)\/3$ and $(2e-v-w)\/3$ as above. The former projects onto a real $1$-space containing a down quark, and the latter projects onto a real\n$2$-space containing two up quarks. These are perfectly good real representations of $Z_3$, and give a perfectly good description of the\ninternal structure of a proton.\n\nBut they are not integral representations of $Z_3$. The integral \ngroup algebra $\mathbb ZZ_3$ does not support any projections, and is\nindecomposable. This I interpret as saying that the proton itself cannot be decomposed into smaller particles, so that, in the real\nuniverse, protons never decay. Thus the quarks are `real' particles, but they are not `whole' (integer) particles.\n\nIt is also possible to use the action of $Z_3$ by conjugation to describe the corresponding bosons. 
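Before turning to the bosons, the integral claims in this toy model are easy to check by brute force. The sketch below verifies that $(e+v+w)/3$ and $(2e-v-w)/3$ are orthogonal idempotents in $\mathbb RZ_3$, and that the integral group ring contains no idempotents other than $0$ and $e$; the search over small coefficients is exhaustive, because any idempotent of $\mathbb ZZ_3$ is in particular one of the four idempotents $0$, $e$, $(e+v+w)/3$, $(2e-v-w)/3$ of $\mathbb RZ_3$.

```python
from itertools import product

# group algebra of Z3 = {e, v, w} with v = w^2: the triple (a, b, c)
# stands for a*e + b*v + c*w, and multiplication is convolution
def mul(x, y):
    a1, b1, c1 = x
    a2, b2, c2 = y
    return (a1*a2 + b1*c2 + c1*b2,   # coefficient of e  (e*e, v*w, w*v)
            a1*b2 + b1*a2 + c1*c2,   # coefficient of v  (e*v, v*e, w*w)
            a1*c2 + c1*a2 + b1*b2)   # coefficient of w  (e*w, w*e, v*v)

f1 = (1/3, 1/3, 1/3)      # (e+v+w)/3, projecting onto a real 1-space
f2 = (2/3, -1/3, -1/3)    # (2e-v-w)/3, projecting onto a real 2-space

# both are idempotent in the real group algebra, and they are orthogonal
assert all(abs(p - q) < 1e-12 for p, q in zip(mul(f1, f1), f1))
assert all(abs(p - q) < 1e-12 for p, q in zip(mul(f2, f2), f2))
assert all(abs(p) < 1e-12 for p in mul(f1, f2))

# but the integral group ring ZZ3 contains only the trivial idempotents 0 and e
integral = [x for x in product(range(-3, 4), repeat=3) if mul(x, x) == x]
assert sorted(integral) == [(0, 0, 0), (1, 0, 0)]
```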
These bosons consist of one `left-handed'\nand one `right-handed' quark, that correspond to `quark' and `anti-quark' in the standard model. So I get three pions this way, consisting of two \ncharged pions, $\\pi^+=u\\bar d$ and $\\pi^-=d\\bar u$, and a neutral pion $\\pi^0=u\\bar u$. While this description of the neutral pion is not the same\nas in the standard model, it is the only possibility in a discrete model, in which quantum superposition cannot be implemented as an intrinsic property\nof an elementary particle, but only as a property of the experiment or the environment.\n\n\n\\section{Implementing the standard model}\n\\subsection{The Dirac equation}\nI have made various tentative suggestions for parts of the group algebra that could be used for various parts of the standard model.\nThe time has come to make these suggestions more definite, and to begin to implement the standard model in this new mathematical framework.\nThe first and most fundamental requirement is to implement the Dirac equation. The most suitable place to do this is surely the matrix\nalgebra $M_2(\\mathbb C)$. Thus we must use these $2\\times 2$ matrices to implement the Dirac spinor, divided into the left-hand\ncolumn and the right-hand column, that should be related in some way to the left-handed and right-handed Weyl spinors in the standard model.\n\nActions on the Dirac spinors can then be obtained from both left-multiplications and right-multiplications, which together generate a $16$-dimensional\ncomplex algebra that we must identify with the complex Clifford algebra in the standard model. For this purpose, we need a choice of\nDirac gamma matrices. There are many possible choices, but since each of the four Dirac matrices swaps the left-handed and right-handed parts of the spinor, the following looks to be a good choice. 
Each Dirac matrix is written as a pair\nof a left-multiplication and a right-multiplication by Pauli matrices.\n\begin{eqnarray}\n\gamma_0 =(1,\sigma_1)&=& \left( \begin{pmatrix}1&0\cr 0&1\end{pmatrix}_L, \begin{pmatrix}0&1\cr1&0\end{pmatrix}_R\right),\cr\n\gamma_1 =(\sigma_1,i\sigma_2)&=& \left( \begin{pmatrix}0&1\cr 1&0\end{pmatrix}_L, \begin{pmatrix}0&1\cr-1&0\end{pmatrix}_R\right),\cr\n\gamma_2 =(\sigma_2,i\sigma_2)&=& \left( \begin{pmatrix}0&-i\cr i&0\end{pmatrix}_L, \begin{pmatrix}0&1\cr-1&0\end{pmatrix}_R\right),\cr\n\gamma_3=(\sigma_3,i\sigma_2)&=& \left( \begin{pmatrix}1&0\cr 0&-1\end{pmatrix}_L, \begin{pmatrix}0&1\cr-1&0\end{pmatrix}_R\right).\n\end{eqnarray}\n\nWith these definitions, we can write down the Dirac equation in the usual way. The group algebra model, however, contains more structure than\nthe standard model, and in particular contains an important distinction between the Lie group acting on the left, and the finite group acting on the right.\nSeparating out the left-multiplications and the right-multiplications in the above, we have the following right-multiplications:\n\begin{eqnarray}\ni\gamma_0&=&(1,i\sigma_1),\cr\ni\gamma_1\gamma_2\gamma_3&=&(1,i\sigma_2),\n\end{eqnarray}\nwhich together generate the quaternion group $Q_8$.\nHence this Dirac equation can be used to study four particles, as in the toy model discussed earlier.\nThe same equation applies to a variety of different particles, but only to four at a time.\nIt is possible to use\nthis equation to study one generation of elementary fermions, but not to study all three simultaneously.\nIn particular, the standard model has to implement the generation structure outside the Dirac algebra.\n\nOn the left, we have \n\begin{eqnarray}\ni&=& (i,1)\cr\n\gamma_1\gamma_2&=&(-i\sigma_3,1)\cr\n\gamma_2\gamma_3&=&(-i\sigma_1,1),\n\end{eqnarray}\nwhich on exponentiation generate the Lie group $U(2)$, that is, a group isomorphic to
the electro-weak gauge group in the standard model. \nAlternatively, we can obtain $SL(2,\\mathbb C)$ from $i\\gamma_1\\gamma_2$ and $i\\gamma_2\\gamma_3$.\nNote, however, that this copy of $SL(2,\\mathbb C)$ is distinct from Dirac's relativistic spin group generated by\n$\\gamma_0\\gamma_1$, $\\gamma_1\\gamma_2$ and $\\gamma_2\\gamma_3$. In particular, these two copies of $SL(2,\\mathbb C)$\nhave different physical meanings. At some point it will be worth thinking carefully about the desired interpretations of these groups.\n\n\\subsection{Electro-weak mixing}\nIn the standard model, electro-weak mixing \\cite{Weinberg} is described as a mixing of the gauge groups $U(1)$ and $SU(2)$, but it is fairly clear that\nit is fundamentally a discrete phenomenon rather than a continuous one. In practice, there appears to be a small experimental variation\nof the Weinberg angle from a theoretical maximum of $30^\\circ$.\n\n If we assume for the moment that the underlying discrete property is described\nby an angle of exactly $30^\\circ$, then this angle appears naturally as the angle between the complex numbers $i$ and $\\omega$.\nSince the group algebra model contains both $i$ and $\\omega$, where the standard model only contains $i$, there is every prospect\nthat the group algebra model can explain electro-weak mixing, at least to first order. The small deviation of the Weinberg angle\nfrom $30^\\circ$ will of course require more detailed investigation.\n\nMore specifically, the finite versions of $U(1)$ and $SU(2)$ in the standard model are the scalar group of order $4$ generated by $i$, and the\nquaternion group $Q_8$ generated by $i\\sigma_1$ and $i\\sigma_2$. But the scalars that appear in the finite model at this point are the scalars\n$\\omega$ and $\\bar\\omega$ that are used to convert from the representation $4_H$ to $4_C$, and from one complex version of $4_C$ to the other, by multiplying with the elements of order $3$ in the group $G$. 
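As a numerical check on the Dirac algebra of the previous subsection, the sketch below confirms that the pairs of left and right multiplications defined there satisfy the Clifford relations for signature $(+,-,-,-)$, and confirms the separation into right-multiplications generating $Q_8$ and left-multiplications exponentiating to $U(2)$. The column-major vectorisation used to turn a pair $(A,B)$ into a $4\times 4$ matrix is a convention of this sketch, not of the paper.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def pair(A, B):
    # the operator X -> A X B on 2x2 matrices; with column-major
    # vectorisation, vec(A X B) = (B^T kron A) vec(X)
    return np.kron(B.T, A)

g = [pair(I2, s1),        # gamma_0 = (1, sigma_1)
     pair(s1, 1j * s2),   # gamma_1 = (sigma_1, i sigma_2)
     pair(s2, 1j * s2),   # gamma_2 = (sigma_2, i sigma_2)
     pair(s3, 1j * s2)]   # gamma_3 = (sigma_3, i sigma_2)

# Clifford relations for signature (+, -, -, -)
eta = np.diag([1, -1, -1, -1])
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# the right-multiplications (1, i sigma_1) and (1, i sigma_2),
# which generate a copy of the quaternion group Q_8
assert np.allclose(1j * g[0], pair(I2, 1j * s1))
assert np.allclose(1j * g[1] @ g[2] @ g[3], pair(I2, 1j * s2))

# the left-multiplications, which exponentiate to U(2)
assert np.allclose(g[1] @ g[2], pair(-1j * s3, I2))
assert np.allclose(g[2] @ g[3], pair(-1j * s1, I2))
```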
Thus the pair of Weyl spinors defined by $i$ and $-i$ in the standard model becomes a\ntriplet of Weyl spinors defined by $1$, $\\omega$ and $\\bar\\omega$.\n\n\\subsection{The three generations}\nI have shown how the fermionic part of the algebra, $\\mathbb H+M_2(\\mathbb C)$, contains all the structure of the Dirac spinor, the Dirac equation,\nthe Dirac algebra and the basic principles of electro-weak mixing. These parts of the standard model use essentially all of the structure of this\npart of the group algebra. The rest of the standard model must therefore lie in the bosonic part of the algebra $\\mathbb R + \\mathbb C + M_3(\\mathbb R)$.\nI have suggested ways of constructing this part of the algebra via a finite analogue of the construction of the Dirac algebra.\n\nAs far as the Lie group $SL(2,\\mathbb C)$ is concerned, the Dirac algebra is the complex tensor product of two copies of the Dirac spinor representation.\nBut if this structure arises from the structure of the finite group, we must also be able to construct the Dirac algebra from the (complex) tensor product of\ntwo copies of the $4_C$ representation of $G$. Moreover, the complex structure adds nothing of physical significance to the algebra, so that we might\nas well work with the real tensor product, which has structure 1+1+2+3+3+3+3. This contains the whole of the bosonic part of the algebra, together\nwith a spare copy of $1+3$.\n\nThere are two ways of looking at this tensor product. One way is to look at the finite group acting on the spinors on both sides. This gives a\ndescription of the internal symmetries of the elementary particles, without any Lie groups and therefore without any measurements or observations.\nThe other way is to look at the finite group acting on the vectors on the right. This gives a description of things that can be measured\nby experiments that act on the vectors on the left. 
The representation theory of the finite group links the two viewpoints.\n\nIn particular, the spare copy of $1+3$ must represent things that cannot be measured, for example colours,\nor direction of spin. The corresponding part of the Dirac algebra\nconsists of two $4$-vectors, one of which is usually interpreted as $4$-momentum, so that we might interpret the other as spacetime position,\nand allocate the non-measurability to the Heisenberg uncertainty principle. That leaves us with $1+2+3+3+3$ things that can be\nmeasured, with a macroscopic group $U(1)$ acting on the $2$, and $SL_3(\\mathbb R)$ acting on $3+3+3$.\n\nNow in removing the fourth copy of $3$ we may have removed either momentum or position, but not both. Since position is irrelevant in the\nstandard model, we have presumably kept the momentum. Moreover, we have a real scalar in the remaining copy of $1$, which must similarly\nbe the energy. We then have a macroscopic group $GL(3,\\mathbb R)$ acting on the momentum. This is a little disconcerting, since our everyday\nexperience is that momentum has only an $SO(3)$ symmetry. But we know from the theory of special relativity \\cite{SR} that momentum mixes\nwith mass, and we know from the weak interaction that mass can be converted into momentum, so we should not be too surprised. \n\nIn any case, $3+3+3$ contains a discrete set of $9$ `things' to measure, and a continuous group of dimension $9$ to do the measuring with.\nHence we can measure $9$ masses, for example of $3$ generations of electrons and up and down quarks. What we observe is that\nthree of these masses, the electron masses, are masses of particles whose momentum we can also measure, and they are therefore well-defined\nand precisely measured masses. The other six, the quark masses, are masses of particles whose momentum we cannot measure, so that the\nmasses are quite ill-defined, and vary from one experiment to another. 
Most of all, they vary from one type of experiment to another, for\nexample between baryon experiments and meson experiments.\n\nThese $8$ mass ratios are closely related in the group algebra to $8$ parameters in the subalgebra $M_2(\mathbb C)$, but the standard model\ndoes not include a finite group relationship between these two parts of the algebra, so has an entirely different set of $8$ parameters here.\nThese are conventionally written in the symmetric square representation as $3\times 3$ complex matrices, and are the $8$ mixing angles in\nthe Cabibbo--Kobayashi--Maskawa (CKM) matrix \cite{Cabibbo,KM} and the Pontecorvo--Maki--Nakagawa--Sakata (PMNS) matrix\n\cite{Pontecorvo,MNS}.\n\nIn this section I have given only a very brief sketch of how the differing masses of the three generations of fermions can be distinguished,\nbut without offering any physical principles by which the mass is generated. For this, I rely for now on the standard model, and defer the\nquestion of how the experiments give the particular mass values that they do. Answering this question\nrequires a much deeper analysis of\nthe quantum structure of the experimental apparatus itself, and \nis beyond the scope of this paper.\n\n\section{Field theories}\n\label{physics}\n\subsection{Principles}\n\label{principles}\nThere are three (or four) ways of looking at the groups acting on the group algebra. One can look at the finite groups acting on both left and right, or the Lie groups acting on both left and right, or one of each. The general principle must surely \nbe that if the Lie groups act on both sides, then we see no quantisation,\nand we must recover a reasonable approximation to a description of classical physics. \n\nSimilarly, if the finite group acts on both sides, then all we see are\nquantum numbers, and no macroscopic variables like position, momentum, energy, mass and so on. 
In between, we see a right-multiplication by a finite group, which flips the quantum numbers, and a left-multiplication by some Lie groups, that permits us to measure the properties of mass, momentum, energy, and so on, that are associated to particular sets of quantum numbers in particular types of experiment \nthat we might contemplate performing.\nThis, at least, must be the general set-up in any hypothetical discrete theory of the type envisaged by Einstein. \n\nThe standard model\ninterprets things differently, and regards the Lie groups as being intrinsic to the elementary particles, despite being unable to provide any\nplausible physical process by which this could happen. (Although in practice, \nthe de Broglie--Bohm pilot wave scenario is rather close to the approach I am taking.) \n\nWhat is clear, of course, is that the calculations that are done in the standard model are essentially correct. It is only the interpretation\nthat must be somewhat different in a discrete model.\nThe discrete model must \nin fact be capable of explaining not only the standard model, but also classical physics. This is a major challenge for\nany model, and success is hardly to be expected. Nevertheless, let us see how the proposed finite model deals with this challenge.\nThe first and most basic issue is to determine how macroscopic spacetime emerges from the discrete properties of elementary particles\nand their interactions. 
This must be some kind of generalisation of the toy example already given in Section~\\ref{analysis}.\n\n\\subsection{Classical forces and relativity}\n\\label{classical}\nLooking at the real group algebra first from a macroscopic perspective, with the \nmatrix groups acting on themselves by conjugation,\nwe see three real and two complex scalars, that act trivially, plus three non-trivial symmetry groups: \n\\begin{eqnarray} \nSO(3) \\times SO(3,1) \\times SL(3,\\mathbb R).\n\\end{eqnarray}\nTherefore, in addition to the magneto-weak `spacetime' $\\mathbb H$ with symmetry group $SO(3)$, discussed in Section~\\ref{analysis},\nwe see an electromagnetic (special\nrelativistic) `spacetime' with symmetry group $SO(3,1)$, and a further space (with or without time) with symmetry group\n$SL(3,\\mathbb R)$. \nThe issue then is to decide how the macroscopic spacetime that underlies classical physics relates to these three different\nversions of \nspacetime \nthat seem to emerge in some way from different parts of particle physics. \n\nThere is nothing in classical physics directly corresponding to the group $SL(3,\\mathbb R)$, which allows\nstretching and shearing of space. But \ncombining it with the Lorentz group $SO(3,1)$ acting on the same spacetime gives the group $SL(4,\\mathbb R)$\nof all unimodular coordinate changes, that describes the general covariance of general relativity \\cite{GR1,GR2,GR3}\nas a\ntheory of gravity. So $SL(3,\\mathbb R)$ must in some sense describe a `fluid' gravitational `space'. \n\nClassically, of course, there is only one spacetime, with symmetry group $SO(3)$ \ndefined by the observer. The groups $SO(3,1)$ and $SL(3,\\mathbb R)$ are then interpreted, not as different types of spacetime, but as forces. 
\nClearly $SO(3,1)$ is the symmetry group of electromagnetism, as most elegantly expressed by Einstein's interpretation of Maxwell's equations\nin the theory of Special Relativity.\nTo be more precise, the electromagnetic field is an association of a (trace $0$) element of $M_2(\\mathbb C)$ with each point in spacetime $\\mathbb H$.\nIt is therefore expressed mathematically as a function \n\\begin{eqnarray}\n\\label{field}\nf&:& \\mathbb H \\rightarrow M_2(\\mathbb C).\n\\end{eqnarray}\n\nMaxwell's equations \nthen describe how the values of the electromagnetic field, in the adjoint representation of $SO(3,1)$, \ntogether with the inertial mass and charge, \nin the scalar part of $M_2(\\mathbb C)$, relate to the ambient physical space, in the adjoint representation of $SO(3)$,\nand to time, in the scalar part of $\\mathbb H$. \nThe theory of special relativity explains how the equations are invariant under coordinate changes on spacetime described by the Lorentz group $SO(3,1)$. That is, any change in spacetime coordinates defined by an element of $SO(3,1)$ can be compensated for by the\ncorresponding change in coordinates for the values of the electromagnetic field, by the corresponding element of $SL(2,\\mathbb C)$ acting\nby conjugation on the trace $0$ matrices.\n\nThe Lorentz group can in turn be interpreted as a change of spacetime coordinates between different observers. But the model under discussion is a model appropriate to a single observer, with a fixed notion of space and time. Therefore the appropriate interpretation of the Lorentz group in this context is as a gauge group, that can be used to choose coordinates for the (electromagnetic) gauge field. The content of the theory of special relativity is then that the theory of electromagnetism does not depend on the choice of coordinates. \n\nAnalogously, we must seek a gravitational field in the adjoint representation of $SL(3,\\mathbb R)$. 
From the point of view of an individual observer,\nwith $SO(3)$ symmetry, this representation splits up as the sum of a spin $1$ field and a spin $2$ field. Newton's universal theory of gravitation \nalready includes both: the spin $1$ field is the gravitational field, and the spin $2$ field describes the tidal forces that arise from rotations of matter within a gravitational field (or, equivalently, rotations of the gravitational field around the matter). Newton's theory is very good indeed in most circumstances, but has two main drawbacks that have become apparent in the ensuing centuries. One is that it does not distinguish between the gravitational force due to a rotating body, and that due to a stationary body. The other is that it has a static gravitational field, rather than a dynamic field that propagates at a finite speed (presumably the speed of light).\n\nThe former drawback is (at least partly) addressed by Einstein's theory of general relativity, in the sense that\nrotations of the observer (or test particle) are taken into account. But it is not clear that rotations of the gravitating body are fully\naccounted for, \nnor \nthat the finite speed of propagation is correctly incorporated.\nIt is therefore not required for a new model to reproduce general relativity exactly, but merely to reproduce general relativity in the limit where the effects of the finite speed of light can be ignored. There are indeed \ntwo circumstances in which such effects might already have been observed in practice. One is in the effects on fast-moving satellites as they travel around the Earth. The other is at the edges of galaxies, in which the gravitational attraction of the galactic centre can take hundreds of thousands of years to reach the outer edges.\n \n The former is known as the flyby anomaly \cite{flyby}, and it is claimed by Hafele \cite{Hafele} that this anomaly can be entirely explained\n by the finite speed of propagation of the gravitational field of a massive rotating object. 
The latter goes by many names,\n and many proposals\n have been made for theoretical explanations for the observed effect, most notably the hypotheses of (a) dark matter, or (b) modified Newtonian \n dynamics, or MOND \cite{Milgrom1,Milgrom2,Milgrom3,MOND,MOND2}. As far as I am aware, however, there is no proposed explanation that takes into account either the finite speed of propagation\n of the gravitational field, or the very fast rotation of the super-massive black hole (or other objects) at the centre of the galaxy. \n\nThe proposed finite model permits a separation of the $SL(3,\mathbb R)$ symmetries of gravitational space from the $SO(3,1)$ symmetries of\nelectromagnetic spacetime, and therefore permits a macroscopic theory of gravity that is independent of time, and therefore independent\nof the finite speed of light. But to do that it requires the symmetry group of the gravitating body to be extended from $SO(3)$ to $SL(3,\mathbb R)$.\nIn other words, the centre of the galaxy must be regarded as a rapidly rotating fluid rather than a solid object. Given that a typical galaxy has many billions of stars,\nthis is surely a reasonable hypothesis to make. \n\nThe general principle of relativity says that it makes no difference\nto the theory whether the group $SL(3,\mathbb R)$ is attached to the galactic centre or to the periphery, so that we can interpret the theory\neither way. \nThe latter interpretation seems to be closely related to the dark matter hypothesis: that is, the galactic centre is regarded as a Newtonian\npoint mass, and all the\notherwise unexplained behaviour of the outer stars must then be due to some `dark matter halo' surrounding the galaxy. 
\nBut perhaps the assumption of a Newtonian point mass at the centre of the galaxy is unrealistic?\n\nThe former interpretation is more closely related to the MOND hypothesis: that there is some new unexplained gravitational force,\nthat \ncan only be observed when the Newtonian gravitational field is extremely weak. \nThe proposed finite model\nfavours the MOND interpretation, and suggests (very speculatively) that the new force might be explained in terms of the very fast rotation of the extremely massive\ngalactic centre.\nMost importantly, the group $SL(3,\\mathbb R)$ is now interpreted as the gauge group of the theory of gravity, and is completely separate\nfrom the gauge group $SO(3,1)$ of classical electromagnetism. In the theory of general relativity, these two gauge groups are combined into\na group $SL(4,\\mathbb R)$, which has no meaning in the proposed model. This might explain why attempts to quantise general relativity with\na gauge group $SL(4,\\mathbb R)$ or $GL(4,\\mathbb R)$ have failed \\cite{GL4R1,GL4R2}.\n\n\\subsection{Quantum field theory}\nThe above discussion suggests interpreting the $24$ dimensions of the group algebra as classical fields, divided into $7$ dimensions of scalar fields,\n$3$ dimensions of space, $6$ dimensions of electromagnetic field, and $8$ dimensions of gravitational field. These fields are mathematical\nconstructs, rather than physically real objects. For example, we do not want to interpret the $3$ dimensions of the `space' field as an `aether'\nin the 19th century sense. What is physically real, however, is the propagation of the fields through space, or to be more precise, the propagation\nin spacetime.\n\nI have not precisely identified all of the scalar fields, though four of them appear to be time, inertial mass, charge and gravitational mass.\nEnergy would seem to be separate from both types of mass, and there appears to be a second type of (neutral) charge. 
This leaves one more\nscalar, which might be identified with the Higgs field in the standard model. Whatever identification is eventually decided on, these scalar\nfields propagate by being attached to matter particles. In other words, it is the movement of matter that defines the propagation of the\nscalar fields. \n\nJust as in the classical case, the fields must be modelled as functions from spacetime to the appropriate subspace of the group algebra.\nIn the quantum case, this function decomposes as a sum of `infinitesimal' quanta, that individually are linear functions. These linear functions\nthemselves lie in representations of the finite group $G$, so that we can use the representation theory to analyse the structure of the\nquantum fields.\nSince the spacetime representation $\\mathbb H$ is self-dual, the quantum fields can be regarded as lying in the tensor product of\n$\\mathbb H$ with the appropriate representation. \n\nThese tensor products then describe the particles that mediate the corresponding forces. Since the spacetime \n representation is fermionic in this model, the bosonic fields are carried by fermionic\nparticles, and the fermionic fields are carried by bosonic particles. In particular, four of the scalars are bosonic, so carried by fermions, and\nthree of the scalars are fermionic, so carried by bosons.\n\nThe electromagnetic field values are fermionic in this model, and the field \nis therefore propagated by bosons. 
This agrees with the standard model, in which the propagation is effected by photons, which are parametrised by $3$ dimensions of momentum in $2$ distinct polarisations.\nThe three fermionic scalars, which might be interpreted as inertial mass (dual to Euclidean time, as opposed to energy,\nwhich is dual to Lorentzian time), electric and neutral charge, are \ncarried by the weak bosons, that is the $Z$ and $W$ bosons.\nConversely, in this model the gravitational field values are bosonic, and the field is therefore propagated by fermions. \n\nNo such process exists in the standard model,\nin which all fields are assumed to be carried by bosons.\nBut it would appear to be consistent with experiment \nto suppose that these propagators are neutrinos,\nparametrised by $3$ dimensions of momentum in $3$ distinct generations. This parametrisation would result in a $9$-dimensional\ngravitational field, however, and the model (as well as observation) supports only an $8$-dimensional field. \nIn other words, the $3$ generations of neutrinos cannot be linearly independent. This means that macroscopic rotations act not only on\nthe momentum coordinates, but also on the generation coordinates. The model therefore predicts that the generation of a neutrino\nis not an invariant. Indeed,\nthe non-invariance of neutrino generation is well-attested experimentally, and goes by the name of neutrino oscillation\n\\cite{oscillation,neutrinos,SNO}. \n\n\nAt this point, the neutrinos appear to have taken over the group $SL(3,\\mathbb R)$, that was originally supposed to be allocated to the strong force,\nand in particular to the $8$ gluons. Indeed, since the group $SL(3,\\mathbb R)$ in the proposed model acts on a bosonic field, the corresponding\nmediators must be fermions. We can reconcile the two viewpoints by interpreting the gluons as representing the \\emph{values} of the quantum field,\nrather than the field itself, which is a \\emph{function}. 
Then the mediators can be interpreted as virtual neutrinos, and the gluons as pairs of virtual neutrinos. However, the model suggests that it may be better not to interpret the gluons as particles at all, but only as symmetries.\n\nAt the same time, we need to address the distinction between the group $SL(3,\\mathbb R)$ used here and the group $SU(3)$ used in the\nstandard model. The latter is a compact group, and fixes a complex inner product, so describes rigid symmetries of a complex $3$-space.\nThe phenomenon of asymptotic freedom \\cite{freedom1,freedom2} suggests that such rigidity does not in fact characterise the strong force. The use of\n$SL(3,\\mathbb R)$, on the other hand, suggests complete freedom to change scale in one direction, providing this is compensated for in\nanother direction. In other words, replacing the (confined) gluons by the (free) neutrinos seems to require a split group, just as the (free)\nphotons are described by the split group\n$SL(2,\\mathbb C)$.\n\n\\subsection\n{Elementary particles and the standard model}\nLet us now turn our attention to the mixed case, with the finite group acting on the right and the Lie groups acting on the left. This is the domain\nof the standard model, where measurements of macroscopic variables such as mass, momentum, energy, angular momentum,\nmagnetic moments and so on are made on individual quanta or `elementary particles'. We then have 24 discrete objects on the right\nthat we can measure, and 24 degrees of freedom for the operators on the left that define what we can measure. The standard model has a kind of duality\nbetween things that can be measured and things that can't, which is sometimes an absolute distinction, and sometimes an uncertainty principle.\n\n\nFor example, the Heisenberg uncertainty principle says you can measure position or momentum, but not both at the same time. 
Quantum chromodynamics, on the other hand, assigns (unobservable) colours to quarks, and puts the corresponding observable property into the notion of particle `flavour' or generation. In some cases, if the associated representation is a real or complex scalar, the appropriate concept is its own dual, and\nthere is no duality or uncertainty involved. This appears to be the case, for example, with the electric charge.\nThe details are not so important. What is important is how many distinct measurements we can make.\n\nThe finite model says that we can measure exactly $24$ independent things. \nIf we make a specific choice of the $24$ independent things we want to measure, we obtain $24$ (dimensionless) real numbers that describe everything\nthere is to know about how the elementary particles behave. \nThe standard model has made a particular choice, and has measured precisely $24$ independent things. These $24$ things are usually described as\n$12$ fermion masses ($3$ generations each of neutrino, electron, up and down quark) and $3$ boson masses (the $Z$, $W$ and Higgs bosons), hence $14$ mass ratios, $2$ coupling constants (the fine-structure constant and the strong coupling constant) and $8$ mixing angles ($4$ each in the\nCKM and PMNS matrices)\n\\cite{Cabibbo,KM,Pontecorvo,MNS}, and are regarded as the fundamental parameters.\n\nThe standard model therefore has exactly the right number of parameters to contain a complete description of quantum reality. It produces the right answers, because it contains enough variables and enough equations to calculate everything that can be calculated. There is nothing wrong\nwith the standard model. It is a mathematically correct, and complete, model of everything that can possibly happen. So \nwhy are people\nstill looking for a `theory of everything', if the standard model already \\emph{is} a theory of everything?\n\nThe main reason is that we don't understand where the $24$ parameters come from. 
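The parameter count above can be verified with elementary arithmetic; the following is an illustrative tally only (the grouping follows the text, with the $15$ masses reduced to $14$ dimensionless ratios):

```python
# Tally of the standard model's dimensionless parameters as grouped in the
# text: 15 masses give 14 independent ratios; add couplings and mixing angles.
fermion_masses = 3 * 4      # 3 generations of (neutrino, electron, up, down)
boson_masses = 3            # Z, W, Higgs
mass_ratios = fermion_masses + boson_masses - 1   # 15 masses -> 14 ratios
couplings = 2               # fine-structure and strong coupling constants
mixing_angles = 4 + 4       # 4 in the CKM matrix, 4 in the PMNS matrix
total = mass_ratios + couplings + mixing_angles
print(total)  # -> 24
```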
We understand perfectly well that the Lorentz group\ncan be interpreted either as a change of coordinates on spacetime between two observers, or as a gauge group for electromagnetism, and that\nthese two interpretations are equivalent, \nso that the theory of electromagnetism is the same for all observers. We understand perhaps a little less well that the same applies to $SL(3,\\mathbb R)$ and the theory of gravity. \n\nSo why don't we understand that the same applies to the\ngauge groups of the standard model? The groups describe the relationship between the observer and the observed, and we have a choice\nbetween regarding the groups as acting on the observed (as in the standard model) or the observer (as in relativity). But we have to make a choice:\nwe cannot both have our cake and eat it. The incompatibility of the standard model with general relativity may be nothing more than the fact that the two\ndisciplines have made incompatible choices of interpretation.\n\nSo to resolve the issue we \nneed to make a consistent choice of interpretation. For practical purposes, it makes no difference which choice we make.\nEither way we do the same calculations, get the same answers, and reconcile them with experiment. But philosophically, there is no contest. The general principle of relativity is such a powerful and obvious philosophical principle that we should not under any circumstances contemplate abandoning it. \nThe consequence of this philosophical viewpoint is that the gauge groups of the standard model have to be transferred to act on the observer, so that\n(most of) the $24$ parameters become parameters associated with the experiment, the environment and the observer, rather than with the\nelementary particles themselves. 
\n\nI reiterate that both points of view are mathematically valid, but only one of them is philosophically valid.\nMoreover, since many of the $24$ parameters are known to vary with the experiment, and in particular to `run' with the energy scale, \nit is also the case that only one of the two viewpoints is\nphysically valid. It is not physically reasonable to treat a parameter as a universal constant,\nif experiment shows that it is not.\nIn the model I am proposing, $7$ of the $24$ parameters are scalars, and can therefore be taken as universal constants.\nThe other $17$ must be regarded as properties of the experiment. \n\nIf we look carefully at the experimental evidence, $6$ or $7$ of the $14$ mass\nratios \nshow evidence of not being constant, namely the $6$ quark masses and the $Z\/W$ mass ratio. All $8$ of the mixing angles are similarly suspect.\nFinally, there is no positive, model-independent, evidence of different masses for the three generations of neutrinos, or indeed any mass different from zero. Moreover, the phenomenon of\nneutrino oscillation suggests that there is no intrinsic difference between the three generations of neutrino anyway, which adds another $2$ or $3$\nparameters to make up a total of between $16$ and $18$. 
\nIn other words, the proposed finite model is consistent with experiment on this point.\n\n\\subsection{The wave-function}\nThe one thing that one might really hope for from a finite model of elementary particles is an insight into the measurement problem.\nThis was really the focus of Einstein's objections to quantum mechanics throughout his life, and although his attacks, most notably\nthe EPR paradox \\cite{EPR}, were never enough to sink the ship, the fundamental problem has not gone away.\nTo simplify the problem almost to the point of caricature, we may ask, what \\emph{is} the wave-function, and how does it `collapse'?\n\nIn the group algebra model, the wave-function is implemented at the middle level, with the finite group acting on the right\nand the Lie groups on the left. \nThe typical example is equation (\\ref{field}), which describes a\nfunction from spacetime to the Dirac spinors. The `collapse' is some kind of operation that moves down to the discrete level,\nwith the finite group on both sides. In practical terms, the wave-function describes the quantum field in the experiment, and the collapse describes\nthe result of the experiment.\n\nFrom this point of view, the measurement problem arises from the assumption that the wave-function is intrinsic to the elementary\nparticle under investigation. The finite group model, however, does not allow any continuous variables to be associated with an elementary\nparticle. The continuous variables are always associated with the macroscopic measuring apparatus. The model, in other words, does not\n\\emph{solve} the measurement problem. 
It \\emph{does not find} the measurement problem.\nPhilosophically, this is always the best way to solve problems, namely, to realise that, if looked at in the right way, the problem does not exist.\n\n\\subsection{The real universe}\nThe structure of the model indicates that of the $24$ unexplained dimensionless parameters of the standard model, exactly $7$\nare universal constants, and the rest are dependent in some way on the experiment, the environment and\/or the observer. There is nothing\nlike enough detail in the model as so far developed to indicate which parameters should be regarded as universal constants, nor how the\nother parameters vary. Indeed, there may be \na certain amount of choice as to which parameters can be defined as constant, so that the other\nparameters can be calibrated against them. \n\nWhat is clear, however, is that we must be prepared for the possibility that\ncertain parameters that we are certain\nmust be universal constants, may not be so.\nAt this stage, we can do little more than speculate on these matters, and apply some educated guesswork. There is no real need to use\nthe same fundamental parameters as the standard model, but other mass ratios such as those between electron, proton and neutron might\nalso be worth looking at. The model suggests that approximately $8$ of the fundamental parameters are dependent in some way on the\ngravitational field, including tidal forces. \n\nAt first glance, the very idea is preposterous. But on closer inspection, one realises that while the\nstandard model is in theory based in an inertial frame, the experiments that measure the fundamental parameters are not done in an\ninertial frame, but in a frame which moves with the Earth. 
It is therefore very hard to rule out, on experimental grounds, the possibility\nthat some of the measurements depend crucially on some dimensionless property of the tides that is constant over all experiments done\non the Earth.\n\nThere are four basic dimensionless parameters of the tides on the Earth, of which two are obvious spatial angles: the angle of tilt of the Earth's axis,\nand the inclination of the Moon's orbit to the ecliptic. The other two are the ratios of the periods of rotation\/revolution, so are angles in Euclidean spacetime. Of course, these\nparameters are not precisely constant, and there are other effects that might be expected to contaminate the results, such as the\ngravitational pull of Jupiter. It will therefore be somewhat tricky to distinguish a real correlation from an unwanted coincidence.\n\nSome $8$ such correlations\/coincidences were presented in \\cite{INI13}, without any very solid justification, but with the suggestion\nthat, while some of them may indeed be pure coincidences, it is very unlikely that they are all coincidences. The parameters discussed\nin that paper include the electron\/proton\/neutron mass ratios, the pion mass ratio, the kaon mass ratio, the kaon\/eta mass ratio,\nthe Cabibbo angle, the Weinberg angle and the CP-violating phase in the CKM matrix. In addition, the paper discusses a number of other\nequations that seem to suggest that some of the fundamental parameters may not be independent of each other. This may be a\nsomewhat less unpalatable suggestion than the suggestion, made originally by Einstein \\cite{gravimass} more than a century ago, \nthat they might depend on the gravitational field! 
On the other hand, the paper \\cite{INI13} also shows that the observed CP-violating\nbehaviour of neutral kaons \\cite{CP} is quantitatively consistent with the hypothesis that the effect is caused by the small difference in the\ndirection of the gravitational field between the two ends of the experiment.\n\n\\section{Conclusion}\nIn this paper, I have examined what I believe to be the unique possible mathematical model of a discrete algebraic universe,\nin the hope that it will have something useful to say about the seemingly intractable problems in the foundations of physics.\nI have shown how all the essential ingredients of the standard theories of both classical and quantum physics arise from this\nfinite model, and discussed at a general level the relationships between them. \nI have traced the conflict between quantum mechanics and\nrelativity to a conflict in interpretations, that does not affect the mathematics of either theory, and sketched a possible way to\nresolve this conflict.\n\nI have not done all the necessary detailed calculations to show that the proposed model reproduces the standard models exactly,\nso there is still room for doubt as to whether the model I propose is viable. Nevertheless,\n I have shown that the proposed model has enough complexity to incorporate all of the\n subtleties of the standard model of particle physics, including the $24$ dimensionless parameters. \nThe proposed algebraic model seems to be consistent with the experimental fact that the standard model is essentially a complete and correct\ntheory of everything. 
As for physics `beyond the standard model', the model \nexplains what experiment has \ndemonstrated, namely that, essentially, there is none.\n\n\nSimilarly, it appears to be consistent with general relativity as a theory of gravity, but suggests that the simplifying assumptions\nmade in practical calculations break down in extreme circumstances.\nThe model suggests that incorporating a contribution to gravity from the rotation of\na (not spherically symmetric) gravitating body, \nand taking into account the finite speed of propagation of gravitational waves, may be sufficient to account for the\nanomalous rotation of stars in the outer regions of galaxies, that was the original reason for the hypothesis of\ndark matter. If this is not enough, then the model has room to distinguish gravitational mass from inertial mass,\nand to incorporate a scale factor between the two that is dependent on \nproperties of the motion of the observer.\n\nSo what remains of Einstein's `castle in the air'? The finite model has, if anything, given it slightly firmer foundations. \nSpecial relativity has always been built on solid rock, but the foundations of general relativity are more fluid. \nI see general relativity, therefore, not as a castle in the air, but as an aeroplane,\nthat stays up despite having no visible means of support. My proposed model provides some support,\nin the form of a neutrino wind, to keep it flying. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzghmj b/data_all_eng_slimpj/shuffled/split2/finalzzghmj new file mode 100644 index 0000000000000000000000000000000000000000..d9dc1913725739b06fe79bca6ccafba33ed2de1e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzghmj @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nOne of interesting effects produced by flare energy release in the solar atmosphere is excitation of helioseismic waves, so-called sunquakes \\citep{Kosovichev1998}. 
Such waves usually propagate as expanding ripples from local impact sources occupying several pixels in photospheric Dopplergrams. The cause of these events is a subject of intensive debate \\citep[e.g.][]{Donea2011,Kosovichev2014}. Generally, the necessary condition for producing sunquakes is a sudden momentum enhancement in the lower solar atmosphere. One of the possible agents of such a disturbance can be chromospheric heating due to injection of accelerated charged particles postulated by the standard model of solar flares \\citep[for a recent review see][]{Fletcher2011}. Models of the gas dynamics processes induced by nonthermal electron beams \\citep[e.g.][]{Kostiuk1975, Fisher1985, Kosovichev1986} predict formation of a shock wave or a chromospheric condensation moving towards the solar photosphere and, thus, transferring momentum to the dense plasma. \\cite{Kosovichev1995} discussed such a beam-driven mechanism of sunquakes. However, the plasma momentum transfer is also possible by other mechanisms, such as a sharp enhancement of the pressure gradient due to flux-rope eruption \\citep[e.g.][]{Zharkov2013} or an impulse of the Lorentz force which can be stimulated by electric currents in the lower solar atmosphere \\citep{Fisher2012}. Also, it is possible that different sunquake events are caused by different mechanisms. Usually, sunquakes are associated with M- and X-class flares. However, many X-class flares did not produce sunquakes \\citep[e.g.][]{Donea2011}, whereas these events have been observed during relatively weak M-class flares \\citep{Martinez-Oliveros2008, Kosovichev2014}.\n\nIn this Letter, we discuss observations of the C7.0 flare of February 17, 2013, which produced a rather strong sunquake initiated during the HXR burst. 
We use data from four space instruments: EUV observations from SDO\/AIA \\citep{Lemen2012}, vector magnetic field measurements from SDO\/HMI \\citep{Scherrer2012}, integrated soft X-ray emission from GOES, and X-ray spectroscopic imaging data from RHESSI \\citep{Lin2002}. We investigate potential mechanisms of the sunquake initiation, and find that despite the precise temporal coincidence between the HXR impulse and the photospheric impact, this event is not consistent with the standard flare model, because the HXR source and the sunquake impact were at different spatial locations, at two different footpoints of the flare loop. Our analysis leads to the suggestion that a significant role in the sunquake initiation may be played by electric currents in the low atmosphere.\n\n\\section{General description of the event and sunquake}\n\nThe flare event of February 17, 2013, was observed in active region NOAA 11675. It consists of two subflares clearly separated in time and space: the first subflare has the C7.0 GOES X-ray class, and the second subflare reached M1.9 peak intensity. The duration of the whole double flare is about 8 min, starting at 15:46:00 UT and ending at 15:54:00 UT (Fig. 1). The highest energy of the hard X-ray (HXR) emission (maximum at 15:47:20 UT), detected by RHESSI during the first subflare, is $\\sim 1$ MeV. The second subflare is characterized by weaker intensity and energy ($<$300 keV) of HXR emission, which reached its maximum at 15:50:30 UT.\n\nTop panels of Fig. 2 present the AIA images in the 94 $\\rm\\AA$ channel. The temporal and spatial resolutions are 12 seconds and $1.2^{\\prime\\prime}$ (with the angular pixel size of $0.6^{\\prime\\prime}$). The preflare state reveals a compact loop-like structure where the flare process occurs.\n\nThe sunquake is observed as an expanding circular wave in the HMI Dopplergrams filtered in the high-frequency range, 5-6 mHz, to isolate the sunquake signal from the convective noise. 
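The band-pass filtering described above can be sketched with a discrete Fourier transform; the following is a minimal illustration on a synthetic one-pixel time series (the 45 s cadence matches HMI Dopplergrams, but the signal itself is invented):

```python
import numpy as np

# Synthetic Doppler time series: a weak 5.5 mHz "sunquake" component on top
# of a stronger low-frequency (p-mode/convective) background.
cadence = 45.0                                  # s, HMI Dopplergram cadence
t = np.arange(512) * cadence
signal = (np.sin(2 * np.pi * 5.5e-3 * t)
          + 5.0 * np.sin(2 * np.pi * 3.3e-3 * t))

# Zero all Fourier components outside the 5-6 mHz band and invert.
spec = np.fft.rfft(signal)
freq = np.fft.rfftfreq(t.size, d=cadence)       # Hz
spec[(freq < 5e-3) | (freq > 6e-3)] = 0.0
filtered = np.fft.irfft(spec, n=t.size)
```

For a real Dopplergram cube the same frequency mask would be applied along the time axis of each pixel's series.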
The propagation of the sunquake wave is shown on the time-distance diagram \\citep{Kosovichev1998,Zharkova2007} presented in Fig.2 (middle right panel), comparing the observed signal with the theoretical ray-path theory prediction (dotted yellow line). The time-distance analysis shows that the sunquake is initiated by the first subflare with the initiation point at the flare impulse signal. The inclined wave pattern above the theoretical curve is associated with the frequency dispersion of the helioseismic wave packet.\n\nThe initiation of the sunquake is observed as a strong localized impulse in the HMI Dopplergrams and line-of-sight magnetogram at $\\approx$15:47:54 UT. The location of the impulse on the HMI magnetogram is illustrated in Fig. 2 (middle left panel). Because of the rapid variations during the flare impulse the Doppler velocity and magnetic field measurements in the impact pixels can be inaccurate. Therefore, we use the original level-1 HMI filtergram data from the two HMI cameras to locate the exact time and place of the flare impact. The bottom panels of Fig. 2 show the time difference between the HMI filtergram images (from HMI Camera-1). We see that, compared to the preflare state, during the flare there is an enhancement of emission in the pixels associated with the AIA brightenings and the place of the sunquake initiation. The timing of the photospheric impact is illustrated in Fig. 1 (bottom), which shows the signals from both HMI cameras as a function of time. The periodic variations of these curves are due to the line scanning. The plot shows that the photospheric impact coincides with the HXR impulse within 3 sec (the HMI camera resolution).\n\n\\section{Spatial structure of the flare region}\n\nHere we present a description of the spatial structure of the flare region according to the RHESSI and AIA\/SDO observations. RHESSI uses a Fourier technique to reconstruct X-ray emission sources \\citep{Hurford2002}. 
We apply the CLEAN algorithm to synthesize the X-ray images using detectors 1,3-6,8 (integration times are shown in Fig.3). In Fig. 3, the RHESSI HXR and SXR contour images are compared with the corresponding AIA 94 \\AA ~images for the time interval covering the HXR peaks of both flares. To compare positions of the EUV and X-ray sources with the structure of the magnetic field we plot the polarity inversion line from the HMI magnetogram. The structure of the EUV emission sources is rather complicated. There are ribbon-like structures\nlocated on both sides of the magnetic field inversion line. During the HXR burst we observe a loop-like structure with one footpoint associated with very strong HXR emission (25-200 keV), and the other footpoint located in the place of the photospheric impact (sunquake initiation), which also coincides with a weak HXR emission source. The total emission intensity of the weaker X-ray source is approximately five times less than the emission intensity of the stronger HXR source. If the sunquake were initiated by an impact of high-energy electrons, then their impact would be in the place of the intensive energy loss of the accelerated particles, and coincide with the strongest HXR emission source. However, we observe the opposite situation when the sunquake impact correlates with a weaker HXR emission source. This indicates that the sunquake is unlikely to be generated by the impact of high-energy electrons.\n\nThe second subflare has an SXR source (6-12 keV) coinciding with the HXR source (25-50 keV) and saturated UV emission above the magnetic field inversion line. However, this subflare is located $\\sim$3 Mm away from the place of energy release in the first subflare and, according to our analysis, is not associated with the sunquake.\n\n\\section{Analysis of RHESSI spectra: accelerated particles and heating}\n\nTo determine properties of the accelerated particles, the plasma and their energetics we use the RHESSI data in the range 5-250 keV. 
We investigate two spectra taken during the HXR peaks of the two subflares. The power-law approximation $f(E)=AE^{-\\gamma}$ ($A$ is the normalization coefficient) is considered for the hard X-ray (HXR) nonthermal emission $\\gtrsim 20$ keV. To simulate the presence of the low-energy cutoff we use the broken power law \\citep{Holman2003} with fixed photon spectral index $\\gamma_0=1.5$ below the break energy ($E_{low}$). For the first subflare, an additional break energy ($E_{br}$) is considered, and, thus, for this case we have two spectral indices $\\gamma_1$ ($E_{low}<E<E_{br}$) and $\\gamma_2$ ($E>E_{br}$). For the second subflare we consider only one spectral index $\\gamma_1$ ($E>E_{low}$) and also make a pileup correction as the count rate is sufficient to observe such an effect.\n\nThe thermal soft X-ray (SXR) spectrum $\\lesssim 20$ keV is approximated by one-temperature thermal bremsstrahlung emission with two parameters: temperature ($T$) and emission measure ($EM$). The RHESSI spectra are fitted by means of the least squares technique implemented in the OSPEX package with 7 free parameters ($EM$, $T$, $A$, $E_{low}$, $\\gamma_1$, $E_{br}$ and $\\gamma_2$) for the first subflare and 5 parameters ($EM$, $T$, $A$, $E_{low}$ and $\\gamma_1$) for the second subflare. Fig. 4 displays results of the fitting.\n\nFrom the thermodynamics point of view the second subflare is hotter than the first one, but the emission measure is smaller. The volume $V$ of the UV loop estimated in the previous section is $10^{26}$ cm$^3$, so that the plasma density $n_1 = \\sqrt{(EM_1\/V)}\\approx 6\\times 10^{10}$ cm$^{-3}$ for the first subflare. The plasma density for the second subflare, assuming the same flare region volume, is $n_2\\approx 2\\times 10^{11}$ cm$^{-3}$. Due to the compactness of the flare region the plasma within the magnetic loops is rather dense.\n\nThe HXR photon spectrum is harder for the first HXR burst than for the second one. 
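The piecewise spectral model described above can be sketched as follows; this is an illustrative stand-in for the OSPEX broken power law, not the actual fitting code (the parameter values in the usage example are invented), with continuity imposed at both break energies:

```python
import numpy as np

def broken_power_law(E, A, E_low, g1, E_br, g2, g0=1.5):
    """Photon flux with index g0 below E_low, g1 between E_low and E_br,
    and g2 above E_br; normalizations keep the function continuous."""
    E = np.asarray(E, dtype=float)
    f = np.empty_like(E)
    A0 = A * E_low ** (g0 - g1)   # matches A*E**-g1 at E = E_low
    A2 = A * E_br ** (g2 - g1)    # matches A*E**-g1 at E = E_br
    below = E < E_low
    mid = (E >= E_low) & (E < E_br)
    high = E >= E_br
    f[below] = A0 * E[below] ** -g0
    f[mid] = A * E[mid] ** -g1
    f[high] = A2 * E[high] ** -g2
    return f

# Example with invented parameters: break energies at 20 and 60 keV.
flux = broken_power_law(np.array([10.0, 30.0, 80.0]), 1.0, 20.0, 3.0, 60.0, 4.0)
```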
Normalization coefficient $A$ of the HXR spectrum is also one order of magnitude larger in the case of the first subflare. This means that the acceleration process is more efficient during the first subflare. The total flux $Fl(E>E_{low})$ [electrons s$^{-1}$] of accelerated electrons can be estimated from the fitted spectral parameters following the work of \\cite{Syrovatskii1972}. The mean electric field in the flare region can be estimated as $\\langle E\\rangle=[dF_z\/dt]\/(cL)$, where $F_z$ is the total magnetic flux inside a contour with length $L$, which covers the flare region. The evolution of $\\langle E\\rangle$ presented in Fig.1 (gray histograms in top panels) shows that both subflares correlate with the peaks of $\\langle E\\rangle$.\n\nTo calculate the vertical currents we use the disambiguated HMI vector magnetic field data \\citep{Centeno2014} with a time cadence of 720 seconds, and the same spatial resolution as that of the line-of-sight magnetograms. The vertical electric current density is calculated from the HMI vector magnetograms using Ampere's law \\citep[e.g.][]{Guo2013}:\n\n$$\nj_z=\\frac{c}{4\\pi}(\\nabla\\times\\vec{B})_z = \\frac{c}{4\\pi}\\left(\\frac{\\partial B_y}{\\partial x} - \\frac{\\partial B_x}{\\partial y}\\right)\n$$\n\nThe resulting $j_z$ map during the flare, effectively averaged over 12 min due to the HMI temporal resolution, is presented in Fig.~3. Figure 5 displays the evolution of $\\langle j_z\\rangle$ averaged over the flare region with area $\\approx 1.5\\times 10^{18}$ cm$^2$, and reveals a maximum corresponding to the flare. We estimated errors for $\\langle j_z\\rangle$ as the standard deviation of the $j_z$ distribution in the quiet Sun regions.\n\nIn Fig.~3 we see that the place of sunquake generation correlates with the strong electric currents, and that there is no significant HXR emission in this place. The HXR is mostly emitted from the source located on the other side of the magnetic field polarity inversion line at the opposite footpoint of the flare loop. Such an observation, and the time evolution of $\\langle E\\rangle$ and $\\langle j_z\\rangle$, can be evidence of a non-beam-driven origin of the sunquake. 
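A discrete version of this curl estimate can be sketched as follows; the field arrays here are synthetic stand-ins for HMI vector magnetograms, and the ~0.5 arcsec pixel scale (about 3.6e7 cm) is an assumed input:

```python
import numpy as np

# Finite-difference estimate of j_z = (c/4pi)*(dBy/dx - dBx/dy), the
# z-component of curl B in CGS units. Bx, By and the pixel scale are
# synthetic stand-ins for real vector-magnetogram inputs.
c = 3.0e10                      # speed of light, cm/s
dx = 3.6e7                      # assumed ~0.5 arcsec pixel, in cm

y, x = np.mgrid[0:64, 0:64]
Bx = 100.0 * np.sin(2 * np.pi * y / 64)   # gauss; varies with y
By = 100.0 * np.cos(2 * np.pi * x / 64)   # gauss; varies with x

dBy_dx = np.gradient(By, dx, axis=1)
dBx_dy = np.gradient(Bx, dx, axis=0)
jz = c / (4 * np.pi) * (dBy_dx - dBx_dy)  # statA/cm^2
```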
The correlation with the location of the strongest electric currents suggests that the sunquake event could be initiated due to a local heating or impulsive Lorentz force in the flare region.\n\n\n\\section{Discussion}\n\nIn this section we will discuss the contributions of the electric currents and nonthermal electrons to flare energy release and the generation of the sunquake.\n\nFor the estimated fluxes of accelerated electrons for the first subflare, the total kinetic power is $P_{nonth}\\approx 1.5\\times 10^{27}$ erg s$^{-1}$ at the HXR peak. To estimate the Joule heating in the sunquake generation region we need to estimate the effective electric conductivity $\\sigma_{eff}$. In the regime of electric current dissipation the magnetic Reynolds number $Re_m=4\\pi\\sigma_{eff}L^2\/(c^2\\tau)\\sim 1$, where $\\tau$ is a characteristic time of electric current dissipation ($\\sim 100$ s, the duration of the HXR burst), and $L$ is a characteristic length scale ($\\sim 1^{\\prime\\prime}$, the size of the impulsive region on the Dopplergrams). For these characteristic values we get $\\sigma_{eff}\\sim 10^6$ CGS units. This value is substantially lower than the theoretical Spitzer conductivity \\citep{Kopecky1966}. However, recent studies of the partially ionized plasma of the solar chromosphere show that the electric conductivity can be substantially reduced due to Pedersen resistivity \\citep[e.g.][]{Leake2012} or due to small-scale MHD turbulence \\citep{Vishniac1999}. The volumetric energy release is $Q_j = j^2\/\\sigma_{eff}\\approx 8\\times 10^3$ erg s$^{-1}$ cm$^{-3}$ for $j\\approx 0.3$ A\/m$^2$. The total energy release due to dissipation of electric currents in the sunquake region is $Q_j^{tot}\\approx 3\\times10^{27}$ erg s$^{-1}$, estimating the volume for a box with length scale $L\\sim 1^{\\prime\\prime}$. 
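The order-of-magnitude estimates above can be reproduced numerically; in this sketch the only inputs beyond the text's numbers are the assumed length L ~ 7.3e7 cm for 1 arcsec at the Sun and the SI-to-CGS current conversion:

```python
import math

c = 3.0e10                 # speed of light, cm/s
tau = 100.0                # s, duration of the HXR burst
L = 7.3e7                  # cm, ~1 arcsec at the Sun (assumed value)

# Effective conductivity from Re_m = 4*pi*sigma*L^2/(c^2*tau) ~ 1
sigma_eff = c**2 * tau / (4 * math.pi * L**2)    # ~1e6 CGS units

# j = 0.3 A/m^2 in CGS: 1 A = 3e9 statA, 1 m^2 = 1e4 cm^2
j = 0.3 * 3.0e9 / 1.0e4                          # statA/cm^2

Q_j = j**2 / sigma_eff      # volumetric heating, ~8e3 erg s^-1 cm^-3
Q_tot = Q_j * L**3          # total release, ~3e27 erg s^-1
```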
So, we see that $P_{nonth}\\sim Q_j^{tot}$, and both types of energy release have the energy budget sufficient to explain heating in the flare according to the GOES data: the change of the plasma internal energy, $d(3nk_BTV)\/dt\\sim 10^{27}$ erg s$^{-1}$, and the radiation losses, $L_{rad}\\sim 5\\times 10^{26}$ erg s$^{-1}$.\n\nTo produce the sunquake we need a strong impulsive force in the lower solar atmosphere. The sunquake momentum can be estimated from the initial impact as $p_{sq}\\sim \\rho L^3v\\sim 10^{22}$ g$\\cdot$cm s$^{-1}$ for $\\rho\\sim 10^{-8}$ g$\\cdot$cm$^{-3}$ (photospheric value) and $v\\sim c_s\\sim 10$ km\/s, where $c_s$ is the photospheric sound speed. In principle, the force generating sunquakes can be produced directly by energetic electron beams. The total momentum of injected nonthermal electrons is\n\n$$\np_e=\\tau\\sqrt{2m_e}\\int_{E_{low}}^\\infty f(E)\\sqrt{E}dE\n$$\n\nwhere $m_e$ is the electron mass, $f(E)$ is the distribution function of nonthermal electrons, and $\\tau$ is the characteristic time of the injection. For the first HXR burst, $p_e\\sim 10^{20}$ g$\\cdot$cm s$^{-1}$. As the emission intensity of the weaker HXR source associated with the sunquake impact is five times less than that of the strong HXR source, the momentum of nonthermal electrons in the footpoint associated with the sunquake impact is $\\sim 0.2\\times 10^{20}$ g$\\cdot$cm s$^{-1}$.\n\nThe momentum of the accelerated protons can be much larger than in the case of electrons and lead to stronger disturbances in the solar atmosphere. Assuming that the protons (not accounting for collisions) have roughly the energy $E_p\\lesssim E_e$, the momentum contained in the proton beam is $p_p\\lesssim p_e\\sqrt{(m_p\/m_e)}\\sim 45p_e\\sim 0.5\\times 10^{22}$ g$\\cdot$cm s$^{-1}$. 
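The momentum comparison can be checked in the same way (assumed inputs as before: photospheric density 1e-8 g/cm^3, L ~ 7.3e7 cm for 1 arcsec, v ~ 10 km/s, and the text's electron-beam momentum):

```python
import math

rho = 1.0e-8               # g/cm^3, photospheric density
L = 7.3e7                  # cm, ~1 arcsec (assumed value)
v = 1.0e6                  # cm/s, ~photospheric sound speed

p_sq = rho * L**3 * v      # sunquake momentum, ~1e22 g cm/s

p_e = 1.0e20                     # electron-beam momentum from the text
p_p = p_e * math.sqrt(1836.15)   # proton beam at the same energy, ~43 p_e
```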
We see that the momentum of accelerated protons represents a more probable agent of the sunquake initiation than the momentum of electrons.\n\nOur observations show that while the sunquake impact and the HXR impulse are simultaneous in time they are clearly separated in space, and located at the different footpoints of the flare magnetic loop. In addition, we find that the impact location correlates with the strongest electric currents. This suggests that, perhaps, energetic particles are accelerated by the electric field in the place of the sunquake initiation, and then the particles travel along the flare magnetic loop to the other footpoint and cause the HXR emission.\n\nThe impulsive plasma motion in the lower solar atmosphere may be caused by fast heating due to Joule dissipation or by a sharp increase of the Lorentz force. In the first case we can estimate the plasma momentum as $p_J\\sim \\tau V\\nabla P\\sim P\\tau L^2$, where $\\nabla P$ is the pressure gradient on the length scale $L$. The pressure can be estimated from the energy equation\n\n$$\n\\frac{dP}{dt}=\\frac{j^2}{\\sigma_{eff}}-L_{rad}\n$$\n\nwhere $L_{rad}$ is the radiation heat loss, which is the main source of cooling in the lower solar atmosphere. From this equation $P\\lesssim j^2\\tau\/\\sigma_{eff}$ and, hence, $p_J \\lesssim (j\\tau L)^2\/\\sigma_{eff}\\sim 10^{23}$ g$\\cdot$cm s$^{-1}$.\n\nThe plasma momentum, associated with the Lorentz force, is $p_L\\sim jB\\tau L^3\/c\\sim 10^{22}$ g$\\cdot$cm s$^{-1}$, where $B\\sim 100$ G is the magnetic field in the sunquake source, and $c$ is the speed of light.\n\nFrom the estimated values of $Q_j$, $p_J$ and $p_L$ one can conclude that the appearance of strong electric currents in the lower solar atmosphere is sufficient to explain the flare energy release and generation of the sunquake. 
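The two impulse estimates can likewise be evaluated numerically (a sketch using the text's values: sigma_eff ~ 1e6 CGS, j ~ 0.3 A/m^2, B ~ 100 G; the length L ~ 7.3e7 cm for 1 arcsec is an assumed input):

```python
# Impulse estimates in CGS units.
c = 3.0e10                 # speed of light, cm/s
tau = 100.0                # s, duration of the HXR burst
L = 7.3e7                  # cm, ~1 arcsec (assumed value)
B = 100.0                  # G, field in the sunquake source
sigma_eff = 1.3e6          # CGS, effective conductivity estimated in the text
j = 0.3 * 3.0e9 / 1.0e4    # 0.3 A/m^2 converted to statA/cm^2

p_J = (j * tau * L)**2 / sigma_eff   # pressure-gradient impulse, ~1e23 g cm/s
p_L = j * B * tau * L**3 / c         # Lorentz-force impulse, ~1e22 g cm/s
```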
Moreover, these estimations show that the electric current driven disturbances are sufficiently strong, and also that the electric currents are concentrated in the place of the sunquake initiation while the strongest HXR impulse is $\\sim 3$ Mm away. Therefore, it is likely that not only do high-energy particles play a significant role in flares, as assumed by the standard flare model, but electric currents in the lower solar atmosphere can also be a significant part of the flare energy release. In our recent paper, we discuss the relationship between electric currents and the fine structure of flare ribbons \\citep{Sharykin2014}.\n\n\\section{Summary and conclusion}\n\nThe main results of the work are the following:\n\n\\begin{enumerate}\n\\item We observed a strong sunquake event in a weak C-class flare.\n\\item The sunquake is initiated, within the 3 s observational accuracy, exactly during the burst of the HXR emission.\n\\item The place of the photospheric impact associated with the sunquake generation corresponds to the weaker HXR emission source, while there is no significant photospheric impact in the stronger HXR emission source, which is located at the opposite footpoint of a flare loop observed in the EUV AIA images.\n\\item The place of the photospheric impact associated with sunquake initiation corresponds to the most intense electric currents.\n\\item The total (C7.0-M1.9) flare event temporally correlates with the maxima of vertical and transverse electric currents estimated in the energy release site.\n\\end{enumerate}\n\nThe main conclusion of the presented observational results is that the helioseismic response (sunquake) and flare energy release in the lower solar atmosphere may have a strong connection to photospheric electric currents. The sunquake impact may be initiated by a pressure gradient caused by rapid current dissipation or by an impulsive Lorentz force.
The discovery of the strong photospheric impact produced by a weak C7 flare, which initiated the helioseismic response, opens new perspectives for studying the flare energy release and transport, because such flares usually have a relatively simple magnetic topology and do not saturate detectors of space and ground-based telescopes. However, our results show that high spatial and temporal resolutions are needed for these studies.\n\nThe work was partially supported by RFBR grant 13-02-91165, President's grant MK-3931.2013.2, NASA grant NNX14AB70G, and an NJIT grant.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent cosmological observations point to strong evidence for a spatially flat and accelerating universe\n\\cite{Riess2011, Komatsu2011, Reid2010}. Despite the great agreement of observations with the concordance model\n\\cite{Li2011} \\footnote{The Cosmological Constant Model.}, it is a fact that the quintom model, whose Equation of State\n(EoS) can cross the cosmological constant barrier $w=-1$, is not excluded by observations \\cite{Alam2004a,Feng2005,Huterer2005,Nesseris2007,Jassal2010,Novosyadlyj2012,Novosyadlyj2012a,Hinshaw2012}. A popular way\nto realize a viable quintom model and, at the same time, avoid the restrictions imposed by the \\textit{No-Go Theorem} \\cite{Vikman2005,Hu2005, Caldwell2005, Xia2008, Cai2010}\nis the introduction of extra degrees of freedom\\footnote{The only way to realize the crossing\nwithout any ghosts and gradient instabilities in standard gravity and with one single scalar degree of freedom was obtained in \\cite{Deffayet2010}. \n}. Following\nthis recipe, the simple quintom paradigm requires a canonical quintessence scalar\nfield $\\sigma$ and simultaneously a phantom scalar field $\\phi$, where the effective potential can be of arbitrary form,\nwhile the two components can be either coupled \\cite{Zhang2006} or decoupled \\cite{Feng2005, Guo2005}.
\n\nThe properties of the quintom models have been studied from different points of view. Among them, the phase space studies,\nusing the dynamical systems tools, are very useful in order to analyze the asymptotic behavior of the model. In quintom models\n this program has been carried out in \cite{Guo2005, Zhang2006, Lazkoz2006, Lazkoz2007, Setare2009, Setare2008a, Setare2009c, Cai2010}.\n In \cite{Guo2005} the decoupled case between the canonical and phantom fields with an exponential potential is studied, showing that\n the phantom-dominated scaling solution is the unique late-time attractor. In \cite{Zhang2006} the potential considers the interaction\n between the fields, and it is shown that in the absence of interactions the solution dominated by the phantom field should be the attractor\n of the system, and that the interaction does not affect its attractor behavior. This result is correct only in the case in which the\n existence of the phantom phase excludes the existence of scaling attractors \cite{Lazkoz2006}. Some of these\nresults were extended in \cite{Lazkoz2007} to arbitrary potentials. In \cite{Setare2009c} the authors showed that all quintom models\n with nearly flat potentials converge to a single expression for the EoS of dark energy; in addition, the necessary conditions for the\n determination of the direction of the $w=-1$ crossing were found.\n\n\nThe aim of this paper is to extend the study of Refs. \cite{Guo2005,\nLazkoz2006, Lazkoz2007, Setare2009, Cai2010} -investigation of the\ndynamics of quintom cosmology- to include a wide variety of\npotentials beyond the exponential potential, without interaction\nbetween the fields; all of them can be constructed using the Bohm\nformalism \cite{Guzman2007, Socorro2010, Bohm1952} of quantum\nmechanics under the integrable systems premise, which is known as\nthe quantum potential approach.
This approach makes it possible to\nidentify trajectories associated with the wave function of the\nuniverse \cite{Guzman2007} when we choose the superpotential\nfunction as the momentum associated with the coordinate field $q$. This\ninvestigation was undertaken within the framework of the\nminisuperspace approximation to quantum theory, in which we investigate\nthe dynamics of only a finite number of models. Here we make use of\nthe dynamical systems tools to obtain useful information about the\nasymptotic properties of the model. In order to be able to analyze\nself-interaction potentials beyond the exponential one, we rely on\nthe method introduced in Ref. \cite{Fang2009} in the context of\nquintessence models, which has been generalized to several cosmological contexts\nsuch as: Randall-Sundrum II and DGP branes \cite{Leyva2009, Escobar2012a,\nEscobar2012e}, Scalar Field Dark Matter models \cite{Matos2009},\ntachyon and phantom fields \cite{Quiros2010, Fang2010a,\nFarajollahi2011} and loop quantum gravity \cite{Xiao2011j}.\n\nThe plan of the paper is as follows: in section \ref{ss1} we introduce the quintom model for arbitrary potentials and in section \ref{ss2}\n we build the corresponding autonomous system. The results of the study of the corresponding critical points,\n their stability properties and the physical discussion are shown in section \ref{s2}. Section \ref{ss4} is devoted to conclusions. Finally, we include in the two appendices \ref{apen1} and \ref{apen2} the center manifold calculation of the solutions dominated by either the phantom or quintessence potential. \n\n\section{The model}\label{ss1}\nThe starting action of our model, containing the canonical field $\sigma$ and the phantom field $\phi$, is \cite{Feng2005, Guo2005, Zhang2006}:\n\n\begin{eqnarray}\label{action}\nS=\int d^{4}x\sqrt{-g}\left(\frac{1}{2}R-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\sigma\partial_{\nu}\sigma+V_{\sigma}(\sigma) + \right.
\\nonumber\\\\\n \\left. + \\frac{1}{2}g^{\\mu\\nu}\\partial_{\\mu}\\phi\\partial_{\\nu}\\phi+V_{\\phi}(\\phi)\\right),\n\\end{eqnarray}\n\nwhere we used natural units ($8 \\pi G=1$) and $V_{\\sigma}(\\sigma)$ and $V_{\\phi}(\\phi)$ are, respectively, the self-interaction\npotentials of the quintessence and phantom fields. \n\nFrom this action the Friedmann equations for a flat geometry read \\cite{Guo2005, Zhang2006}:\n\\begin{equation}\\label{F1}\n H^{2}=\\frac{1}{3}\\left( \\frac{\\dot{\\sigma}^{2}}{2}+V_{\\sigma}(\\sigma)-\\frac{\\dot{\\phi}^{2}}{2}+V_{\\phi}(\\phi)\\right)\n\\end{equation}\n\\begin{equation}\\label{F2}\n \\dot{H}=-\\frac{1}{2}\\left(\\dot{\\sigma}^{2}-\\dot{\\phi}^{2} \\right)\n\\end{equation}\nwhere $H=\\frac{\\dot{a}}{a}$ is the Hubble parameter and the dot denotes a derivative with respect to time. \n\nThe evolution equations of the quintessence and phantom fields are:\n\\begin{equation}\\label{KG1}\n \\ddot{\\sigma}+3H\\dot{\\sigma}+V_{\\sigma}'(\\sigma)=0\n\\end{equation}\n\\begin{equation}\\label{KG2}\n \\ddot{\\phi}+3H\\dot{\\phi}- V_{\\phi}'(\\phi)=0,\n\\end{equation} where the prime denotes the derivative of a function with respect to its argument.
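As a consistency check of Eqs. (\ref{F1})-(\ref{KG2}), one can integrate (\ref{F2})-(\ref{KG2}) numerically and verify that the Friedmann constraint (\ref{F1}) is preserved along the solution. A minimal sketch, assuming illustrative exponential potentials $V_{\sigma}=e^{-\sigma}$, $V_{\phi}=e^{-\phi}$ (our choice for the demonstration, not one made in the paper):

```python
import math

# Integrate (F2), (KG1), (KG2) with a hand-rolled RK4 step and check that
# the Friedmann constraint (F1) stays satisfied (units 8*pi*G = 1).
def V_s(s): return math.exp(-s)
def dV_s(s): return -math.exp(-s)
def V_p(p): return math.exp(-p)
def dV_p(p): return -math.exp(-p)

def rhs(y):
    s, ds, p, dp, H = y
    return [ds,
            -3*H*ds - dV_s(s),       # (KG1)
            dp,
            -3*H*dp + dV_p(p),       # (KG2), phantom sign
            -0.5*(ds*ds - dp*dp)]    # (F2)

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs([yi + 0.5*h*ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + 0.5*h*ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h*(a + 2*b + 2*c + d)/6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def constraint(y):
    s, ds, p, dp, H = y
    return H*H - (0.5*ds*ds + V_s(s) - 0.5*dp*dp + V_p(p))/3.0

# Initial data chosen to satisfy (F1) exactly:
s, ds, p, dp = 0.0, 0.1, 0.0, 0.1
H0 = math.sqrt((0.5*ds**2 + V_s(s) - 0.5*dp**2 + V_p(p))/3.0)
y = [s, ds, p, dp, H0]
for _ in range(1000):
    y = rk4_step(y, 1e-3)
print("constraint residual:", abs(constraint(y)))  # stays ~0
```

That the residual stays at round-off level reflects the fact that the time derivative of (\ref{F1}) is an identity once (\ref{F2})-(\ref{KG2}) hold.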
\n\nAdditionally we can introduce the total energy density and pressure as:\n\begin{equation}\n \rho_{DE}=\rho_{\sigma}+\rho_{\phi},\;\;\;p_{DE}=p_{\sigma}+p_{\phi}\n\end{equation}\nwhere\n\begin{equation}\n \rho_{\sigma}=\frac{\dot{\sigma}^2}{2}+V_{\sigma}(\sigma),\;\;\;\rho_{\phi}=-\frac{\dot{\phi}^2}{2}+V_{\phi}(\phi)\n\end{equation}\n\begin{equation}\n p_{\sigma}=\frac{\dot{\sigma}^2}{2}-V_{\sigma}(\sigma),\;\;\;p_{\phi}=-\frac{\dot{\phi}^2}{2}-V_{\phi}(\phi)\n\end{equation}\nand its equation of state parameter is given by\n\begin{equation}\label{weff}\n w_{eff}=\frac{p_{\sigma}+p_{\phi}}{\rho_{\sigma}+\rho_{\phi}}=\frac{\dot{\sigma}^{2}-\dot{\phi}^{2}-2V_{\sigma}(\sigma)-2V_{\phi}(\phi)}{\dot{\sigma}^{2}-\dot{\phi}^{2}+2V_{\sigma}(\sigma)+2V_{\phi}(\phi)}\n\end{equation}\nand \n\begin{equation}\label{omegaaa}\n \Omega_{\sigma}=\frac{\rho_{\sigma}}{\rho_{DE}},\;\;\Omega_{\phi}=\frac{\rho_{\phi}}{\rho_{DE}}\n\end{equation}\n\begin{equation}\n \Omega_{\sigma}+ \Omega_{\phi}=1\n\end{equation}\nare the individual and total dimensionless density parameters.
\n\n\\section{The autonomous system}\\label{ss2}\nIn order to study the dynamical properties of the system\n(\\ref{F1}-\\ref{KG2}) we introduce the following dimensionless phase\nspace variables to build an autonomous system \\cite{Copeland1998, Chen2009}:\n\\begin{equation}\\label{nv1}\nx_{\\sigma}=\\frac{\\dot{\\sigma}}{\\sqrt{6}H}, \\\\\nx_{\\phi}=\\frac{\\dot{\\phi}}{\\sqrt{6}H}, \\\\\ny_{\\sigma}=\\frac{\\sqrt{V_{\\sigma}(\\sigma)}}{\\sqrt{3}H}, \\\\\n\\end{equation}\n\\begin{equation}\n\\lambda_{\\sigma}=-\\frac{V_{\\sigma}'(\\sigma)}{V_{\\sigma}(\\sigma)}, \\\\\n\\lambda_{\\phi}=-\\frac{V_{\\phi}'(\\phi)}{V_{\\phi}(\\phi)}, \\\\\n\\end{equation}\n\nNotice that the phase space variables $\\lambda_{\\sigma}$ and\n$\\lambda_{\\phi}$ are sensitive to the kind of self-interaction\npotential chosen for the quintessence and phantom components,\nrespectively, and are introduced in order to be able to study\narbitrary potentials. Applying the above dimensionless variables to\nthe system (\\ref{F1}-\\ref{KG2}) we obtain the following autonomous\nsystem:\n\n\\begin{eqnarray}\n \\frac{d x_{\\sigma}}{dN}&=& -3 x_{\\sigma} \\left(1+x_\\phi^2-x_\\sigma^2 \\right)+\\sqrt{\\frac{3}{2}} y_{\\sigma}^2 \\lambda_\\sigma \\label{auto1}\\\\\n \\frac{d x_{\\phi}}{dN}&=& -3 x_{\\phi} \\left(1+x_\\phi^2-x_\\sigma^2 \\right)\\nonumber\\\\&&-\\sqrt{\\frac{3}{2}}\\left(1+x_\\phi^2-x_\\sigma^2-y_\\sigma^2\\right) \\lambda_\\phi \\label{auto2}\\\\\n\\frac{d y_{\\sigma}}{dN} &=&\\frac{1}{2}y_\\sigma \\left(6x_\\sigma^2-\\sqrt{6} x_\\sigma\n \\lambda_\\sigma -6 x_\\phi^2\\right) \\label{auto3}\\\\\n \\frac{d \\lambda_{\\sigma}}{dN}&=& -\\sqrt{6} x_{\\sigma} f(\\lambda_\\sigma) \\label{auto4}\\\\\n\\frac{d \\lambda_{\\phi}}{dN} &=& -\\sqrt{6}x_\\phi g(\\lambda_\\phi)\\label{auto5}\n\\end{eqnarray}\nwhere $N=\\ln a$ is the number of e-foldings and $f(\\lambda_\\sigma)= \\lambda_\\sigma^2(\\Gamma_{\\sigma}-1)$ and $ g(\\lambda_\\phi)= \\lambda_\\phi^2(\\Gamma_{\\phi}-1) $
where:\n\begin{equation}\n\Gamma_{\sigma}=\frac{V_{\sigma}(\sigma)V_{\sigma}''(\sigma)}{(V_{\sigma}'(\sigma))^{2}},\n\;\;\; \Gamma_{\phi}=\frac{V_{\phi}(\phi)V_{\phi}''(\phi)}{(V_{\phi}'(\phi))^{2}}\n\end{equation}\nIn order to get from the autonomous equations (\ref{auto1}-\ref{auto5}) a closed system of ordinary differential equations\nwe have assumed that the functions $\Gamma_{\sigma}$ and $\Gamma_{\phi}$ can be written as functions of the\nvariables $\lambda_{\sigma}\in\mathbb{R}$ and $\lambda_{\phi}\in\mathbb{R}$ respectively \cite{Fang2009}.\n\nThe phase space for the autonomous dynamical system driven by the evolution equations (\ref{auto1}-\ref{auto5}) can be defined as follows:\n\begin{eqnarray}\n \Psi=\{(x_{\sigma},x_{\phi},y_{\sigma}):y_{\sigma}\geq 0, x_\sigma^2-x_\phi^2+y_\sigma^2\leq 1 \}\times\nonumber\\\n \times\{(\lambda_{\sigma},\lambda_{\phi})\in\mathbb{R}^{2} \}\n\end{eqnarray}\n\nWith the aim of explaining the physical significance of the critical points of the autonomous system (\ref{auto1}-\ref{auto5}) we need to obtain the relevant cosmological parameters in terms of the dimensionless phase space variables (\ref{nv1}). Following this, the cosmological parameters (\ref{weff}) and (\ref{omegaaa}) can be expressed as\n\begin{equation}\n w_{eff}=-1+2x_{\sigma}^2-2x_{\phi}^2\n\end{equation}\n\begin{equation}\n \Omega_{\sigma}=x_{\sigma}^2+y_{\sigma}^2,\;\;\ \Omega_{\phi}=1-x_{\sigma}^2-y_{\sigma}^2,\n\end{equation}\nwhile the deceleration parameter becomes\n\begin{equation}\n q=-\left[1+\frac{\dot{H}}{H^2}\right]=-1+3x_{\sigma}^2-3x_{\phi}^2.\n\end{equation}\n\n\n\n\section{Critical points and stability}\label{s2}\nThe critical points of the system (\ref{auto1}-\ref{auto5}) are\nsummarized in Table \ref{tab1}. The eigenvalues of the corresponding\nJacobian matrices are shown in Table \ref{tab2}.
In both cases\n$\lambda_{\sigma}^{\ast}$ and $\lambda_{\phi}^{\ast}$ are the values\nwhich make the functions\n$f(\lambda_{\sigma})=\lambda_\sigma^2\left(\Gamma_{\sigma}-1\right)$\nand $g(\lambda_{\phi})=\lambda_\phi^2\left(\Gamma_{\phi}-1\right)$\nvanish, respectively.\n\n\n\begin{table*}\caption[crit]{Properties of the critical points for the\nautonomous system (\ref{auto1}-\ref{auto5})}\n\begin{tabular}\n{l c c c c c c c c c c}\n\hline\hline\\[-0.3cm]\n$Label$&$x_{\sigma}$&$y_{\sigma}$&$x_{\phi}$&$\lambda_{\sigma}\n$&$\lambda_{\phi}$&Existence&$\Omega_{\sigma}$&$\Omega_{\phi}\n$&$q$&$w_{eff}$\\\n\hline\\[-0.2cm]\n\n$P_1^\pm$ &$0$ &$0$ &$\pm i$ &$\lambda_{\sigma}$ &\n$\lambda_{\phi}^{\ast}$ & Non real &$0$ &$1$& $2$ & $1$\\[0.2cm]\n\n$P_2^\pm$ &$\pm1$ & $0$ &$0$ &$\lambda_{\sigma}^{\ast}$\n&$\lambda_{\phi}$& Always &$1$ &$0$ &$2$ &$1$ \\[0.2cm]\n\n$P_3^\pm$ &$\pm\sqrt{1+x_\phi^2}$ &$0$ & $x_\phi$ &$\lambda_{\sigma}^{\ast}$ &\n$\lambda_{\phi}^{\ast}$& \textquotedblright &$1+x_{\phi}^{2}$ &$-x_{\phi}^{2}$\n&$2$ &$1$ \\[0.2cm]\n\n$P_{4}$ &$\frac{\lambda_{\sigma}^{\ast}}{\sqrt{6}}$\n&$\sqrt{1-\frac{(\lambda_{\sigma}^{\ast})^{2}}{6}}$&$0$\n&$\lambda_{\sigma}^{\ast}$ &$\lambda_{\phi}$& $-\sqrt{6}\leq \lambda_{\sigma}^{\ast} \leq \sqrt{6}$ &$1$ &$0$\n&$-1+\frac{(\lambda_{\sigma}^{\ast})^{2}}{2}$ &\n$-1+\frac{(\lambda_{\sigma}^{\ast})^{2}}{3}$ \\[0.2cm]\n\n$P_5$ & $0$ & $0$ & $0$ & $\lambda_{\sigma}$ &\n$0$& Always & $0$ & $1$ &$-1$ & $-1$ \\[0.2cm]\n\n$P_6$ &$0$ &$1$ &$0$ &$0$ &$\lambda_{\phi}$ & \textquotedblright & $1$ &\n$0$ & $-1$ &$-1$ \\[0.2cm]\n\n$P_{7}$ &$0$ &$y_{\sigma}$ &$0$ & $0$\n&$0$ & $0< y_{\sigma}< 1$ &$y_{\sigma}^{2}$ &$1-y_{\sigma}^{2}$ &\n$-1$ &$-1$\\ [0.2cm]\n\n$P_{8}$&$0$ &$0$ &$-\frac{\lambda_{\phi}^{\ast}}{\sqrt{6}}$\n&$\lambda_{\sigma}$ & $\lambda_{\phi}^{\ast}$ & $\lambda_{\sigma}\n\in \mathbb{R}$ &$0$
&$1$\n&$-1-\frac{(\lambda_{\phi}^{\ast})^{2}}{2}$\n&$-1-\frac{(\lambda_{\phi}^{\ast})^{2}}{3}$ \\ [0.2cm] \hline \hline\n\end{tabular}\label{tab1}\n\\ [0.2cm]\n\end{table*}\n\n\begin{table*}\caption[crit2]{Eigenvalues of the linear perturbation matrix associated with each of the critical points displayed in Table \ref{tab1}}\n\begin{tabular}\n{l c c c c c c c c c c}\n\hline\hline\\[-0.3cm]\n$Label$&$m_1$&$m_2$&$m_3$&$m_4$&$m_5$\\\n\hline\\[-0.2cm]\n\n$P_1^\pm$ & $3$ & $0$ & $0$ & $\mp {i}\sqrt{6}g'(\lambda_{\phi}^{\ast})$ & $6\mp{ i}\sqrt{6}\lambda_{\phi}^{\ast}$\\[0.2cm]\n\n$P_2^\pm$ & $6$ & $0$ & $0$ & $\mp\sqrt{6}f'(\lambda_{\sigma}^{\ast})$ & $3\mp\sqrt{\frac{3}{2}}\lambda_\sigma^{\ast}$\\[0.2cm]\n\n$P_3^\pm$ & $0$ & $-\sqrt{6} g'(\lambda_{\phi}^{\ast}) {x_\phi}$&$\mp\sqrt{6} {f'(\lambda_{\sigma}^{\ast})}\n \sqrt{x_\phi^2+1}$&$3\mp\sqrt{\frac{3}{2}} \sqrt{x_\phi^2+1}\n \lambda_\sigma^{\ast}$& $6-\sqrt{6} x_\phi \lambda_\phi^{\ast}$ \\[0.2cm]\n\n$P_4$ & $0$ &$-{f'(\lambda_{\sigma}^{\ast})} \lambda_\sigma^{\ast}$ & $\left(\lambda_\sigma^{\ast}\right)^2$ &$\frac{1}{2} \left(\left(\lambda_\sigma^{\ast}\right)^2-6\right)$ &$\frac{1}{2} \left(\left(\lambda_\sigma^{\ast}\right)^2-6\right)$\\ [0.2cm]\n\n$P_5$ & $-3$ & $0$ & $0$ & $-\frac{3}{2}\left(1+\sqrt{1+\frac{4}{3}g(0)}\right)$ & $-\frac{3}{2}\left(1-\sqrt{1+\frac{4}{3}g(0)}\right)$\\[0.2cm]\n\n$P_6$ & $-3$ & $0$ & $0$ & $-\frac{3}{2}\left(1+\sqrt{1-\frac{4}{3}f(0)}\right)$ & $-\frac{3}{2}\left(1-\sqrt{1-\frac{4}{3}f(0)}\right)$\\[0.2cm]\n\n$P_7$ & $0$ & $\frac{1}{2} \left(-\sqrt{9-12 f(0) y_\sigma^2}-3\right)$ & $\frac{1}{2} \left(\sqrt{9-12 f(0) y_\sigma^2}-3\right)$ & $\frac{1}{2} \left(-\sqrt{9-12 g(0) \left(y_\sigma^2-1\right)}-3\right)$ & $\frac{1}{2} \left(\sqrt{9-12 g(0)\n \left(y_\sigma^2-1\right)}-3\right)$\\[0.2cm]\n\n$P_8$ & $0$
&${g'(\lambda_{\phi}^{\ast})} \lambda_\phi^{\ast}$& $-\frac{1}{2} \left(\lambda_\phi^{\ast}\right)^2$&$-\frac{1}{2} \left(\left(\lambda_\phi^{\ast}\right)^2+6\right)$&$-\frac{1}{2} \left(\left(\lambda_\phi^{\ast}\right)^2+6\right)$\\ [0.2cm]\n\n\n\n\hline \hline\\[-0.3cm]\n\end{tabular}\label{tab2}\n\end{table*}\nAs we see from Table \ref{tab1}, the points $P_1^\pm$ do not exist in the strict sense ($x_\phi$ is purely imaginary at the fixed points).\nPoint $P_5$ is associated with a combination of a phantom potential whose first $\phi$-derivative vanishes at some\/several point\/points, i.e.,\n$\lambda_\phi=0$ (this case includes the exponential potential whose $\phi$-derivative at any order vanishes everywhere) and an arbitrary self\ninteraction potential for the quintessence component (arbitrary value of $\lambda_{\sigma}$). Point $P_6$ is associated with a combination\n of a quintessence potential whose first $\sigma$-derivative vanishes at some\/several point\/points, i.e., $\lambda_\sigma=0$\n (this case includes the exponential potential whose $\sigma$-derivative at any order vanishes everywhere) and an arbitrary self\n interaction potential for the phantom component (arbitrary value of $\lambda_{\phi}$). Point $P_7$ is associated with a\n combination of a phantom potential whose first $\phi$-derivative vanishes at some\/several point\/points, i.e., $\lambda_\phi=0$\n (this case includes the exponential potential whose $\phi$-derivative at any order vanishes everywhere) and a self interaction\n potential for the quintessence component whose first $\sigma$-derivative also vanishes at some\/several point\/points, i.e., $\lambda_\sigma=0.$ \nIt is worth noticing that the existence of points $P_2^\pm$, $P_3^\pm$, $P_4$ and $P_8$ depends on the concrete form\nof the potential.
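The entries of Table \ref{tab1} can be spot-checked numerically. The sketch below encodes the right-hand sides of (\ref{auto1})-(\ref{auto5}) and verifies that the phantom-dominated point $P_8$ and the quintessence scaling point $P_4$ are fixed points with the tabulated $w_{eff}$; the quadratic choices of $f$ and $g$ are illustrative assumptions with $f(\lambda^{\ast})=g(\lambda^{\ast})=0$, not the paper's potentials:

```python
import math

S = math.sqrt(1.5)  # sqrt(3/2)

def rhs(xs, xp, ys, ls, lp, f, g):
    """Right-hand sides of the autonomous system (auto1)-(auto5)."""
    k = 1 + xp*xp - xs*xs
    return (-3*xs*k + S*ys*ys*ls,                             # (auto1)
            -3*xp*k - S*(k - ys*ys)*lp,                       # (auto2)
            0.5*ys*(6*xs*xs - math.sqrt(6)*xs*ls - 6*xp*xp),  # (auto3)
            -math.sqrt(6)*xs*f(ls),                           # (auto4)
            -math.sqrt(6)*xp*g(lp))                           # (auto5)

def w_eff(xs, xp): return -1 + 2*xs*xs - 2*xp*xp
def q(xs, xp): return -1 + 3*xs*xs - 3*xp*xp

f = lambda l: -l*l/2 + 1.5   # vanishes at lambda_sigma* = +-sqrt(3)
g = lambda l: -l*l/2 + 2.0   # vanishes at lambda_phi* = +-2

# P_8: (0, -lp*/sqrt(6), 0); phantom dominated, w_eff = -1 - (lp*)^2/3
lp = 2.0
p8 = (0.0, -lp/math.sqrt(6), 0.0, 0.7, lp)   # lambda_sigma arbitrary
assert all(abs(v) < 1e-9 for v in rhs(*p8, f, g))
print("w_eff(P8) =", w_eff(p8[0], p8[1]))    # -1 - 4/3

# P_4: (ls*/sqrt(6), 0, sqrt(1 - (ls*)^2/6)); ls* = sqrt(3) mimics dark matter
ls = math.sqrt(3)
p4 = (ls/math.sqrt(6), 0.0, math.sqrt(1 - ls*ls/6), ls, 0.5)
assert all(abs(v) < 1e-9 for v in rhs(*p4, f, g))
print("w_eff(P4) =", w_eff(p4[0], p4[1]))    # 0, dark-matter-like
```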
From the table of the eigenvalues notice, besides, that all the points belong to nonhyperbolic sets of\ncritical points with at least one null eigenvalue.\n\n\subsection{Stability of the critical points}\n\nAlthough all these critical points are shown in Table \ref{tab1}, here we summarize their basic properties:\n\begin{itemize}\n \item $P_{1}^\pm$, $P_{2}^\pm$ and $P_{3}^\pm$ correspond to solutions dominated by the kinetic energy of the scalar fields\n (stiff fluid solutions: $q=2$ and $\omega=1$). The exact dynamical behavior differs for each point. $P_1^\pm$ correspond to\n a phantom kinetic energy dominated solution ($\Omega_{\sigma}=0$ and $\Omega_{\phi}=1$). However, these points have a purely imaginary\n value of $x_\phi,$ thus, they do not exist in the strict sense. They have a three$-$dimensional center subspace and a two-dimensional\n unstable manifold ($m_1=3>0,\; \Re(m_5)=6>0$). Thus they cannot be late-time attractors. $P_3^\pm$ represent a scaling regime\n between the kinetic energies of the quintessence and phantom fields ($\Omega_{\sigma}=1+x_{\phi}^{2}$ and $\Omega_{\phi}=-x_{\phi}^{2}$).\n These points depend on the form of the potentials and under certain conditions they have a four-dimensional unstable subspace which could\n correspond to the past attractor. However, these points are unphysical since $\Omega_{\phi}<0$. $P_{2}^\pm$ are dominated by the\n quintessence kinetic term ($\Omega_{\sigma}=1$ and $\Omega_{\phi}=0$). Since they are non-hyperbolic due to the existence of\n two null eigenvalues, we are not able to extract information about their stability by using the standard tools of linear\n dynamical analysis. However, since these points seem to be particular cases of $P_3^\pm,$ they should share the same dynamical\n behavior. Because all of these points are nonhyperbolic, as we noticed before, we cannot rely on the standard linear\n dynamical systems analysis for deducing their stability.
Thus, we need to base our analysis on numerical inspection of\n the phase portrait for specific potentials or use more sophisticated techniques like Center Manifold theory.\n \item $P_4$ is a scaling solution between the kinetic and the potential energy of the quintessence component of dark energy. This solution is sensitive to the explicit form of the potential. This is always a saddle equilibrium point in the phase space since $m_3=(\lambda_{\sigma}^{\ast})^{2}$ and $m_4=\frac{1}{2}((\lambda_{\sigma}^{\ast})^{2}-6)$ are of opposite sign in the existence region of this point. It represents an accelerated solution for a potential $V_\sigma(\sigma)$ whose function $f(\lambda_{\sigma})$ vanishes at $\lambda_{\sigma}=\lambda_{\sigma}^{\ast}$ in the interval $-\sqrt{2}<\lambda_{\sigma}^{\ast}<\sqrt{2}$, leading to $-1\leq w_{eff}<-1\/3$. When $\lambda_{\sigma}^{\ast}=0$ the critical point $P_4$ becomes $P_6$. In the regions $-\sqrt{6}\leq\lambda_{\sigma}^{\ast}\leq-\sqrt{2}$ or $\sqrt{2}\leq\lambda_{\sigma}^{\ast}\leq\sqrt{6}$, the critical point $P_4$ represents a non-accelerated phase. A very interesting feature of this critical point appears when, for a specific form of\nthe quintessence potential, $\lambda_{\sigma}^{\ast}=\pm\sqrt{3}$, leading to $w_{eff}=0$. This means that the quintessence field is able to mimic the dark matter behavior.\n \item $P_5$, $P_6$ and $P_7$ represent solutions dominated by the potential energies of the fields (all of them are de Sitter solutions: $q=-1$ and $w_{eff}=-1$). Once again the exact dynamical nature differs from one point to the other: $P_5$ is dominated by the potential energy of the phantom component ($\Omega_{\sigma}=0$ and $\Omega_{\phi}=1$). Because of the existence of two null eigenvalues it is not possible to conclude about its dynamics.
However it has a three-dimensional stable manifold for $g(0)<0$ (in the interval $g(0)<-\frac{3}{4}$ it has two complex conjugate eigenvalues with negative real parts). In this case it is worthwhile to analyze its stability using the center manifold theory. $P_6$ is a critical point dominated by the quintessence potential energy term ($\Omega_{\sigma}=1$ and $\Omega_{\phi}=0$); despite its nonhyperbolicity, it has a three-dimensional stable manifold for $f(0)>0$ (in the case $f(0)>\frac{3}{4}$ it has two complex conjugate eigenvalues with\nnegative real parts), thus, it is worthwhile to analyze its stability using the center manifold theory. $P_7$ denotes a segment (curve) of non-isolated fixed points, representing a scaling regime between the quintessence and phantom potentials ($\Omega_{\sigma}=y_{\sigma}^{2}$ and $\Omega_{\phi}=1- y_{\sigma}^{2}$). The existence of one null eigenvalue is due to the fact that it is a curve of fixed points. As an invariant set of non-isolated singular points it is normally-hyperbolic, since the eigenvector associated with the zero eigenvalue, $(0,0,1,0,0)^T,$ is tangent to the curve. Thus its stability is determined by the sign of the remaining non-null eigenvalues. Hence, it is stable for $0<y_{\sigma}<1$ provided $f(0)>0, \; g(0)<0$, or a saddle otherwise.\n\item $P_8$ is a line of fixed points parameterized by $\lambda_\sigma\in\mathbb{R}$. The existence of one null eigenvalue is due to the fact that it is a curve of fixed points. As an invariant set of non-isolated singular points it is normally-hyperbolic, since the eigenvector associated with the zero eigenvalue, $(0,0,0,1,0)^T,$ is tangent to the curve. Thus its stability is determined by the sign of the remaining non-null eigenvalues. From Table \ref{tab2} it follows that $P_8$ admits a four-dimensional stable subspace provided $g'(\lambda_{\phi}^{\ast}) \lambda_{\phi}^{\ast}<0$, thus, the invariant curve is stable.
It represents accelerated solutions dominated by the phantom potential, providing a crossing through the phantom divide ($\Omega_{\sigma}=0$ and $\Omega_{\phi}=1$). For every value of $\lambda_{\phi}^{\ast}$ this point provides the typical super-accelerated expansion of the quintom paradigm ($w=-1-\frac{(\lambda_{\phi}^{\ast})^{2}}{3}$); the only exception occurs when $\lambda_{\phi}^{\ast}=0$,\nrecovering the behavior of the de Sitter solution $P_5$ ($w=-1$). This line of critical points corresponds to the stable point $P$ in \cite{Guo2005} and B in \cite{Cai2010} (phantom dominated solution). Summarizing, the line $P_8$ is the late-time stable attractor provided $g'(\lambda_{\phi}^{\ast}) \lambda_{\phi}^{\ast}<0$, otherwise, it is a saddle point.\n\end{itemize}\n\n \subsection{Cosmological consequences}\n\nAs was shown in the previous subsection the autonomous system admits only\nseven classes of critical points (some of them are actually curves) \footnote{$P_{1}^\pm$ and $P_{3}^\pm$ are ruled out: the former because they lead to imaginary values of the dimensionless variable $x_\phi$, and the latter because it lies outside the physical phase space, representing a critical point with a negative energy density $\Omega_{\phi}<0$.}. The curves $P_{2}^\pm$ correspond to decelerated solutions, with $q=2$, where the Friedmann constraint (\ref{F1}) is dominated by the kinetic energy of the quintessence field with an equation of state of stiff type, $w_{eff}=1$. These solutions are only relevant at early times and should be unstable \cite{Copeland1998}. Unfortunately these critical points are nonhyperbolic (they have two zero eigenvalues), meaning that it is not possible to obtain conclusions about their stability with the previous linear analysis.
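The attractor character of the line $P_8$ when $g'(\lambda_{\phi}^{\ast})\lambda_{\phi}^{\ast}<0$ can also be illustrated by direct integration. In the sketch below the choices $f(\lambda)=-\lambda^2/2+3/2$ and $g(\lambda)=-\lambda^2/2+2$ are our own illustrative assumptions (so $\lambda_{\phi}^{\ast}=2$ and $g'(\lambda_{\phi}^{\ast})\lambda_{\phi}^{\ast}=-4<0$); an orbit started slightly off $P_8$ settles back onto it:

```python
import math

S6 = math.sqrt(6)

def f(l): return -l*l/2 + 1.5   # illustrative, vanishes at +-sqrt(3)
def g(l): return -l*l/2 + 2.0   # illustrative, vanishes at +-2

def rhs(y):
    """Right-hand sides of the autonomous system (auto1)-(auto5)."""
    xs, xp, ys, ls, lp = y
    k = 1 + xp*xp - xs*xs
    return [-3*xs*k + math.sqrt(1.5)*ys*ys*ls,
            -3*xp*k - math.sqrt(1.5)*(k - ys*ys)*lp,
            0.5*ys*(6*xs*xs - S6*xs*ls - 6*xp*xp),
            -S6*xs*f(ls),
            -S6*xp*g(lp)]

def rk4(y, h):
    k1 = rhs(y)
    k2 = rhs([a + 0.5*h*b for a, b in zip(y, k1)])
    k3 = rhs([a + 0.5*h*b for a, b in zip(y, k2)])
    k4 = rhs([a + h*b for a, b in zip(y, k3)])
    return [a + h*(p + 2*q + 2*r + s)/6
            for a, p, q, r, s in zip(y, k1, k2, k3, k4)]

y = [0.05, -2/S6 + 0.03, 0.05, 0.7, 2.05]  # perturbed off P_8
for _ in range(2000):                       # integrate to N = 20 e-folds
    y = rk4(y, 0.01)
xs, xp, ys, ls, lp = y
print(xs, ys, xp + lp/S6)  # all close to 0: the orbit has settled on P_8
```

Note that $\lambda_{\sigma}$ simply freezes at some value (the zero eigenvalue along the line), consistent with $P_8$ being a normally-hyperbolic curve rather than an isolated point.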
However the numerical analysis performed in the next subsection with particular potentials confirms the previous results in the\nliterature.\n\nAn important result comes from the stability of the critical point $P_4$. This point exists if $-\sqrt{6}\leq\lambda_{\sigma}^{\ast}\leq\sqrt{6}$ and always behaves as a saddle fixed point. The latter means that under certain initial conditions the orbits in the phase space will approach this point, spending some time in its vicinity before being repelled toward the attractor solution of the system. In the case of this point, as we mentioned before, if the quintessence potential fulfills the condition:\n\begin{equation}\label{condiDEDM}\n \lambda_{\sigma}^{\ast}=\pm\sqrt{3}\n\end{equation}\nthen the effective equation of state of this dark energy component would mimic a pressureless fluid ($w_{eff}=0$); in other words, it will dynamically behave exactly as cold dark matter. The possibility of this dynamical characteristic imposes a fine tuning on the shape of quintessence potentials, and a priori there is no guarantee that all possible quintessence potentials satisfy the above condition (\ref{condiDEDM}). Let us note that in order to obtain the lowest possible dimensionality of the phase space and to study in a relatively simple way the effects of including arbitrary quintom potentials, we have neglected the contribution of the usual matter fields: radiation and baryonic matter in our model \footnote{See Eqs. (\ref{action}-\ref{F1}).}. As a result, a full study of important aspects derived from the realization of condition (\ref{condiDEDM}), such as the transition redshift between the decelerated and accelerated expansion phases and the clustering properties of this \textit{effective dark matter}, \nis beyond the present study and will be left for a future paper.\n\nAnother important characteristic of the model is the presence of three accelerated solutions, described by critical points $P_5$, $P_6$ and $P_7$.
All of them are de Sitter solutions ($w_{eff}=-1$) dominated by the potentials of the scalar fields. As in the case of $P_4$, they behave as saddle points and, depending on the initial conditions, the orbits can evolve from the unstable fixed point ($P_{2}^\pm$ in our case) towards one or the other of the saddle points. A favorable scenario would be one in which the initial conditions lead to an evolution from $P_{2}^\pm$ to the saddle point $P_4$ \footnote{We are assuming that if (\ref{condiDEDM}) is fulfilled, then the quintessence field behaves as the dark matter.} and then the orbits tend to one of the de Sitter solutions $P_5$, $P_6$ or $P_7$ or to the late-time phantom attractor ($P_8$). In terms of the cosmological evolution of the Universe, the above favorable scenario implies that the Universe started at early times from a stage dominated by the kinetic \nterm of \nthe quintessence, then evolved into an epoch dominated by the \textit{effective dark matter} and finally entered the final phase of accelerated expansion. This accelerated phase can be one of the de Sitter solutions or a phantom dominated solution ($w_{eff}<-1$) \footnote{In fact, these models admit the possibility of having two stable solutions: a de Sitter solution ($P_7$) and a phantom solution ($P_8$), each one within its basin of attraction, as was shown in the previous subsection.}. This final stage of evolution towards the critical point $P_8$ is consistent with the recent joint results from \textit{WMAP}+\textit{eCMB}+\textit{BAO}+$H_0$+\textit{SNe} \cite{Hinshaw2012}, which suggest a mild preference for a dark energy equation-of-state parameter in the phantom region ($w_{eff}<-1$). \n\nFinally, in order to examine the stability of the nonhyperbolic points that cannot consistently be studied via the present linear analysis, we present a concrete example.
We provide a numerical elaboration of the phase space orbits of the corresponding quintom model.\n\n\subsection{$V(\sigma,\phi)=V_{0}\sinh^{2}(\alpha\sigma)+V_{1}\cosh^{2}(\beta\phi)$}\label{cosh}\nThis potential is derived, in a Friedmann-Robertson-Walker\ncosmological model, from canonical quantum cosmology under certain\nconditions on the evolution of our universe\footnote{This is part of a forthcoming paper.}, using the Bohmian\nformalism \cite{Guzman2007}. For this potential:\n\begin{equation}\label{f1}\n f(\lambda_{\sigma})=-\frac{\lambda_{\sigma}^{2}}{2}+ 2 \alpha^{2}, \;\;\lambda_{\sigma}^{\ast}=\pm 2 \alpha, \;\;f'(\lambda_{\sigma}^{\ast})=-{\lambda_{\sigma}^{\ast}}\n\end{equation}\nand\n\begin{equation}\label{f2}\n g(\lambda_{\phi})=-\frac{\lambda_{\phi}^{2}}{2}+ 2 \beta^{2}, \;\;\lambda_{\phi}^{\ast}=\pm 2 \beta, \;\;g'(\lambda_{\phi}^{\ast})=-{\lambda_{\phi}^{\ast}}.\n\end{equation}\nFrom Table \ref{tab2} and equation (\ref{f2}) we see that the condition\nensuring that point $P_8$ has a four-dimensional stable subspace is\nalways satisfied, due to the opposite signs of\n$\lambda_{\phi}^{\ast}$ and $g'(\lambda_{\phi}^{\ast})$. In order to\nachieve a successful scalar field dark matter domination era we\nneed $\lambda_{\sigma}^{\ast}=\pm\sqrt{3},$ since this is the only way to have a standard transient matter dominated solution ($P_4$). Recall that for the choice $\lambda_{\sigma}^{\ast}=\pm\sqrt{3},$ the standard quintessence dominated solution mimics dark matter ($w_{eff}=0$).\nImposing the condition $\lambda_{\sigma}^{\ast}=\pm\sqrt{3},$ we have as a degree of freedom the potential\nparameter $\alpha$ that can be adjusted using (\ref{f1}). Furthermore, we impose\none of the following conditions:\n\begin{equation}\n \lambda_{\sigma}^{\ast}=\sqrt{3},\;\lambda_\phi ^{\ast}\leq -\sqrt{6}, \; 1
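The closed form quoted in Eq. (\ref{f1}) can be checked directly: computing $\lambda_{\sigma}=-V_{\sigma}'/V_{\sigma}$ and $\Gamma_{\sigma}=V_{\sigma}V_{\sigma}''/(V_{\sigma}')^{2}$ from $V_{\sigma}=V_{0}\sinh^{2}(\alpha\sigma)$ and forming $f(\lambda_{\sigma})=\lambda_{\sigma}^{2}(\Gamma_{\sigma}-1)$ reproduces $-\lambda_{\sigma}^{2}/2+2\alpha^{2}$ at every $\sigma$ (the constants $V_0$, $\alpha$ below are arbitrary test values):

```python
import math

# Verify f(lambda) = lambda^2*(Gamma - 1) = -lambda^2/2 + 2*alpha^2
# for V_sigma(sigma) = V0 * sinh^2(alpha*sigma), as quoted in Eq. (f1).
V0, alpha = 1.3, 0.9   # arbitrary positive test constants

def V(s):   return V0 * math.sinh(alpha*s)**2
def dV(s):  return alpha * V0 * math.sinh(2*alpha*s)       # exact V'
def d2V(s): return 2 * alpha**2 * V0 * math.cosh(2*alpha*s)  # exact V''

for s in (0.3, 0.7, 1.5, 3.0):
    lam = -dV(s) / V(s)                    # roll parameter lambda_sigma
    Gamma = V(s) * d2V(s) / dV(s)**2       # tracker parameter Gamma_sigma
    f_direct = lam*lam * (Gamma - 1)
    f_closed = -lam*lam/2 + 2*alpha**2
    assert abs(f_direct - f_closed) < 1e-9 * max(1.0, abs(f_closed))
print("f(lambda) = -lambda^2/2 + 2*alpha^2 verified for the sinh^2 potential")
```

The same algebra with $\cosh^{2}(\beta\phi)$ yields Eq. (\ref{f2}), and setting $f=0$ gives the quoted roots $\lambda_{\sigma}^{\ast}=\pm 2\alpha$ with $f'(\lambda_{\sigma}^{\ast})=-\lambda_{\sigma}^{\ast}$.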