{"text":"\\section{Introduction}\nWe consider the stochastic Swift-Hohenberg equation on the whole real line.\nThis is one of the prototypes of pattern-forming equations, and its first instability serves \nas a toy model for the convective instability in Rayleigh-B\\'enard convection.\nIt is given by \n\\begin{equation}\n \\label{e:SHintro}\n \\partial_t u = -(1+\\partial_x^2)^2 u +\\nu\\varepsilon^2 u -u^3 + \\sigma \\varepsilon^{3\/2} \\xi\n\\end{equation}\nwith space-time white noise $\\xi$. \nHere $\\nu\\in\\mathbb{R}$ measures the distance from the bifurcation, which scales with $\\varepsilon^2$, \nand $\\sigma\\geqslant0$ measures the noise strength, which scales with $\\varepsilon^{3\/2}$, \nfor a small $0\\leqslant \\varepsilon\\ll 1$.\nWe will see later that the scaling is chosen such that close to the bifurcation both terms have an impact on the dynamics. \n\nDue to the presence of the noise we run into several problems. \nFirst, solutions have very poor regularity and are at most H\\\"older continuous. \nThus we need to consider weaker solution concepts like the mild formulation of the equation. \nMoreover, due to the translation invariance of the noise, solutions are in general immediately unbounded in space,\nand we need to work in spaces that allow for growth of solutions as $|x|\\to\\infty$.\nThese weighted spaces are not closed under pointwise multiplication, which is a serious problem in the construction of solutions\ndue to the cubic nonlinearity.\n\nOur main results show that \nclose to the bifurcation, i.e.
for small $\\varepsilon>0$, solutions of \\eqref{e:SHintro} are well approximated by a modulated wave \n\\begin{equation*}\n u (t,x) \\approx \\varepsilon A(\\varepsilon^2 t,\\varepsilon x) e^{ix} + c.c. \n\\end{equation*}\nwhere the amplitude $A$ solves a so-called modulation or amplitude equation, which is in our case a stochastic complex-valued Ginzburg-Landau equation\n\\[\n\\partial_T A = 4 \\partial_X^2 A +\\nu A -3A|A|^2 + \\eta \n\\]\nfor some complex-valued space-time white noise $\\eta$.\n\n\\subsection{Modulation equations for deterministic PDEs}\nThe Ginzburg-Landau equation as an effective amplitude equation for the \ndescription of pattern-forming systems close to the first instability \nwas first derived in the 1960s by Newell and Whitehead, cf.~\\cite{NW69}.\nThe mathematical justification of this approach beyond purely formal calculations \nwas carried out by Mielke and Melbourne together with coauthors, \neither with the help of a Lyapunov-Schmidt reduction (see \\cite{Mi92,Mel98,Mel00}),\nor with the construction of special solutions, cf.~\\cite{IMD89}. \nApproximation results showing that there are solutions of the pattern-forming system \nwhich behave as predicted by the Ginzburg-Landau equation have been established by various authors, for instance in \n\\cite{CE90,vH91,KSM92,Schn94a,Schn94c,Takac96}. \nMoreover, there are attractivity results by Eckhaus \\cite{Eck93} and Schneider \\cite{Schn95}, \nshowing that every small solution can be described after a certain time by the Ginzburg-Landau equation.
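The choice of scaling in the ansatz above can be motivated by a purely formal calculation; the following sketch (not a rigorous argument) indicates how the Ginzburg-Landau equation arises from \eqref{e:SHintro}.

```latex
% Purely formal sketch: insert u = eps A(T,X) e^{ix} + c.c. with slow
% variables T = eps^2 t and X = eps x into the Swift-Hohenberg equation.
% Since
\[
(1+\partial_x^2)\big( A(X)\, e^{ix} \big)
  = \big( 2i\varepsilon \partial_X A + \varepsilon^2 \partial_X^2 A \big) e^{ix}\;,
\]
% the linear part yields
\[
-(1+\partial_x^2)^2 \big( A\, e^{ix} \big)
  = \big( 4\varepsilon^2 \partial_X^2 A + \mathcal{O}(\varepsilon^3) \big) e^{ix}\;,
\]
% while u^3 contributes 3 eps^3 A|A|^2 e^{ix} plus a higher harmonic e^{3ix}.
% Collecting all terms of order eps^3 in front of e^{ix} (the noise scales
% accordingly, cf. \cite{BB:15}) yields the Ginzburg-Landau equation for A.
```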
\nVarious results followed in subsequent years: combining the approximation and attractivity results makes it possible to prove the upper semi-continuity of attractors~\\cite{MS:95,Schn99c}, \nshadowing by pseudo-orbits, and global existence results for the pattern-forming systems \\cite{Schn94b,Schn99b}.\nA number of approximation theorems have been proven in slightly modified situations, such as the\ndegenerate case of a vanishing cubic coefficient \\cite{BS07}, the description of the Turing-Hopf case\nby mean-field coupled Ginzburg-Landau equations \\cite{Schn97}, \nthe Hopf bifurcation at the Fourier wave number $k = 0$ \\cite{Schn98}, and \nthe time-periodic situation \\cite{SU07}. \nRecently, such results have been established for pattern-forming \nsystems with conservation laws as well, cf. \\cite{HSZ11,SZ13,DSSZ16}.\nLet us finally point out that this section only briefly summarizes those of the numerous deterministic results in the literature that are most closely related to the one presented here.\n\\subsection{SPDEs in weighted spaces on unbounded domains}\n\nThe theory of higher-order parabolic stochastic partial differential equations (SPDEs) \non unbounded domains with translation-invariant additive noise \nlike space-time white noise is not yet well studied, \nwhile for the wave equation with multiplicative noise there are many recent publications.
\nSee for example~\\cite{Kho14, DaSS09, Dal09}.\n\nOlder publications often treat only noise with a spatial cut-off or a decay condition at infinity, \nas for example Eckmann and Hairer~\\cite{EckHai2001}, where the cut-off is both in real and in Fourier space, or Funaki~\\cite{Fun1995}.\nFurthermore, Rougemont \\cite{Ro02} studied the stochastic Ginzburg-Landau equation using \nexponentially weighted spaces and relatively simple noise that is white in time, but bounded in space.\n\nIn many examples, using trace-class noise implies $L^2$-valued Wiener processes \nand thus a decay condition both of solutions and of the noise at infinity. \nThis leads to $L^2$-valued solutions, as for example \nby Brzezniak and Li~\\cite{BreLi2006} or by Kr\\\"uger and Stannat~\\cite{KruSta2014}, where an integral equation is considered.\nIn the next paragraph we will comment on the fact that a decay at infinity rules out the effect we want to study here \nusing modulation equations.\n\nThe stochastic Ginzburg-Landau equation in a weighted $L^2$-space was already studied by Bl\\\"omker and Han \\cite{BH:10}. \nThe existence and uniqueness result based on a Galerkin approximation is briefly sketched there \nand the asymptotic compactness of the stochastic dynamical system is shown.\n\nRecently, several publications have treated SPDEs with space-time white noise in weighted Besov spaces: \nsee for example R\\\"ockner, Zhu, and Zhu\n\\cite{RoeZhZh:P15} or Mourrat and Weber~\\cite{MoWe:P15}. \nThey work with the two-dimensional $\\Phi^4$-model, which is similar to the Ginzburg-Landau equation and \nfor which renormalization is needed to give a meaning to the two-dimensional equation with this choice of noise.\nIn order to construct solutions they consider approximations on the torus and then send the size of the domain to infinity,\nwhich is the method also used in this paper.
But the authors work directly in weighted Besov spaces, \nwhile we show our existence and uniqueness result in spaces with less regularity. \nMoreover, the result in Besov spaces relies heavily on properties of the heat semigroup, \nwhich do not seem to hold for fourth-order operators like the Swift-Hohenberg operator.\nFor example, we will see later that the operator is not dissipative in weighted $L^p$-spaces, while the Laplacian is.\nThus we cannot derive useful a-priori bounds for Swift-Hohenberg in $L^p$-spaces.\nThis is also the reason why our final approximation result is only valid in a weighted $L^2$-space, \nwhile the residual is bounded in spaces with H\\\"older regularity.\n\nLet us finally remark that spaces without weight like $L^2(\\mathbb{R})$ and the usual Sobolev spaces do not include constant functions and \nmodulated patterns, which appear close to the bifurcation and which we want to study here using modulation equations. \nIn order to include these special solutions one needs to consider weighted spaces; see for example~\\cite{BLW:13} or~\\cite{BBY:16}\nfor publications treating random attractors.\n\n \\subsection{Modulation equations for SPDEs}\n \n\nIn Bl\\\"omker, Hairer and Pavliotis \\cite{BHP:05} modulation equations for SPDEs on large domains were treated.\nThe results are quite similar to the ones presented here, but they hold only on large domains of size proportional to $1\/\\varepsilon$ for Swift-Hohenberg, \nso that the Ginzburg-Landau equation is posed on a domain of order $1$. \nThe main advantage is that one can still work with Fourier series, \nand only finitely many modes change stability at the bifurcation. \nMoreover, solutions of the amplitude equation are not unbounded in space and there is no need to consider weighted spaces.
\nThe drawback is that various constants depend on the size of the domain and the results do not extend to unbounded domains.\n\nThe first results for modulation equations for Swift-Hohenberg on the whole real line were presented by Klepel, Mohammed, and Bl\\\"omker \n\\cite{MBK:13,KBM:13}. Here the authors used spatially constant noise of strength of order $\\varepsilon$, which is stronger than the one treated here.\nAlthough the noise does not appear \ndirectly in the amplitude equation, due to nonlinear interaction and averaging \nadditional deterministic terms appear in the Ginzburg-Landau equation. \nThe main advantage of this spatially smooth noise is that one can work in spaces with much more spatial \nregularity than we have to use here.\nAs a consequence, solutions are still bounded in space and do not grow as $|x|\\to\\infty$.\n\n\nThe key step towards a full result for amplitude equations \non the whole real line with space-time white noise \nis by Bl\\\"omker and Bianchi~\\cite{BB:15}. There the full approximation result for linear SPDEs, namely the Swift-Hohenberg and Ginzburg-Landau equations without cubic terms,\nis established.
\nThis is very useful in the results presented here, as we use it to approximate \nthe stochastic convolutions in the mild formulation.\n\nLet us finally remark that a decay at infinity of the noise, and thus of the solution, leads to a completely different result.\nUnder the rescaling in space used to obtain the modulation equation,\nwe conjecture that one finally obtains a point forcing at the origin in the Ginzburg-Landau equation, \nwhich is an interesting question in itself.\n\n\n\n\n\n\\subsection{Outline of the paper and main results}\n\n \nIn Section \\ref{sec:set} we introduce basic notation and especially the weighted spaces we are going to work in.\nThe existence and uniqueness of solutions to the Swift-Hohenberg and the Ginzburg-Landau equations is \nbriefly sketched in Section~\\ref{sec:exuni} and again at the beginning of Section~\\ref{sec:reg}.\nThe main results are Theorems~\\ref{thm:exSH} and~\\ref{thm:exGL}, where the existence and uniqueness of solutions is proven \nfor the Swift-Hohenberg and the Ginzburg-Landau equations, respectively.\nFor the proof we use the approximation by finite domains with periodic boundary conditions.\n\nA key technical point is the result of Corollary \\ref{cor:maxregA} in Section \\ref{sec:reg}, \nwhere we show that the solution of the amplitude equation \nis H\\\"older continuous in space for every exponent smaller than $1\/2$, which is more regularity than we can show for solutions of the Swift-Hohenberg equation.\nThe main idea here is to introduce the standard transformation $B=A-\\mathcal{Z}$ \nusing the Ornstein-Uhlenbeck (OU) process $\\mathcal{Z}$ that solves the stochastic linear Ginzburg-Landau equation and is thus Gaussian.
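The effect of this transformation can be indicated by a purely formal calculation (anticipating the notation $\Delta_\nu = 4\partial_X^2+\nu$ from Section \ref{sec:reg}): subtracting the linear Ornstein-Uhlenbeck dynamics removes the white noise from the equation.

```latex
% Purely formal sketch: \mathcal{Z} solves the linear equation driven by the
% complex-valued space-time white noise \eta,
\[
\partial_T \mathcal{Z} = \Delta_\nu \mathcal{Z} + \eta \;, \qquad \mathcal{Z}(0)=0\;,
\]
% so subtracting it from the Ginzburg-Landau equation for A shows that
% B = A - \mathcal{Z} solves a PDE with random coefficients but no noise,
\[
\partial_T B = \Delta_\nu B - 3(B+\mathcal{Z})|B+\mathcal{Z}|^2 \;, \qquad B(0)=A(0)\;,
\]
% which can be treated pathwise by classical energy estimates.
```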
\n\nThe H\\\"older regularity of $\\mathcal{Z}$ is a well-known result (see Lemma~\\ref{lem:regZ}).\nThe key idea of the transformation is that $B$ solves a random PDE, so we can apply energy-type estimates in $L^p$- and $W^{1,p}$-spaces for any $p\\geq2$, \nwhich show that $B$ is more regular than $\\mathcal{Z}$ and thus $A$ is as regular as $\\mathcal{Z}$.\n\nLet us remark that these $L^p$-estimates are not available for the Swift-Hohenberg equation, where we only have $L^2$- or $H^1$-estimates.\nThus we do not know how to establish higher regularity for solutions in that case.\n \n \nThe main approximation result for the amplitude equation is Theorem \\ref{thm:res} in Section \\ref{sec:res}, \nwhere we bound the residual of the approximation uniformly for times up to order $\\varepsilon^{-2}$ in a weighted $C^0$-space. \n\nIn the final Section \\ref{sec:app} we establish in Theorem \\ref{thm:final} the approximation result,\n again uniformly for times up to order $\\varepsilon^{-2}$ but now only in a weighted $L^2$-space, as we use $L^2$-energy \n estimates for an equation for the error.\n\n\n\n \\section{Setting}\n \\label{sec:set}\n \n\n \n Consider the stochastic Swift-Hohenberg equation\n \\begin{equation}\n \\label{e:SH}\n \\tag{SH}\n \\partial_t u = {\\mathcal{L}}_\\nu u -u^3 +\\sigma \\partial_t W\\;, \\quad u(0)=u_0\n \\end{equation}\non $\\mathbb{R}$ with a standard cylindrical Wiener process $W$ in $L^2(\\mathbb{R})$,\nwhich means that $\\partial_t W$ is space-time white noise, and the operator ${\\mathcal{L}}_\\nu= -(1+\\partial_x^2)^2 + \\nu\\varepsilon^2 $, where $\\sigma$ and $\\nu$ are constants.\nIn the following, we will also consider ${\\mathcal{L}}_0= -(1+\\partial_x^2)^2 $.\n\nDue to the lack of regularity, \\eqref{e:SH} is not defined classically.
\nIn order to give a rigorous meaning, we use the standard transformation to a random PDE.\nWe define the stochastic convolution \n\\[\nZ(t)= W_{{\\mathcal{L}}_\\nu}(t) = \\int_0^t e^{(t-s){\\mathcal{L}}_\\nu}dW(s)\\;.\n\\]\nWe will see later that $Z$ is for any $\\varrho>0$ in the weighted space $C^0_\\varrho$ of continuous functions defined in the next section in \\eqref{e:C0rho}.\nLater, $\\mathcal{Z}$ will denote the analogous OU-process corresponding to the Ginzburg-Landau equation.\n\n\nIn order to give a meaning to~\\eqref{e:SH} we define $v=u-Z$ and consider weak solutions of \n\\begin{equation}\n \\label{e:SHt}\n \\partial_t v= {\\mathcal{L}}_\\nu v - (v+Z)^3\\;, \\quad v(0)=u_0\\;. \n\\end{equation}\n\n\n\\begin{definition}[Weak solution]\n\\label{def:weak}\nWe call an $L^3_{\\text{loc}}(\\mathbb{R})$-valued stochastic process $v$ with integrable trajectories a weak solution of \\eqref{e:SHt} \nif for all smooth and compactly supported functions $\\varphi$ one has with probability $1$ that for all $t\\in[0,T]$\n\\[\n\\int_\\mathbb{R} v(t)\\varphi dx = \\int_\\mathbb{R} u_0\\varphi dx + \\int_0^t\\int_\\mathbb{R} v(s) {\\mathcal{L}}_\\nu\\varphi \\,dx\\,ds - \\int_0^t\\int_\\mathbb{R}(v(s)+Z(s))^3\\varphi \\,dx\\,ds \\;. \n\\]\n\\end{definition}\n\n\\begin{remark}\nA sufficiently regular weak solution of~\\eqref{e:SHt} is a mild solution of~\\eqref{e:SHt}\ngiven by the integral equation\n\\[\nv(t)=e^{t{\\mathcal{L}}_\\nu}v(0) - \\int_0^t e^{(t-s){\\mathcal{L}}_\\nu}(v(s)+Z(s))^3 ds\n\\]\nwhich, under the substitution $u=v+Z$, corresponds to a mild solution of~\\eqref{e:SH}. \nThis concept is also known as Duhamel's formula or the variation of constants formula.\nThe equivalence of mild and weak solutions under relatively weak regularity assumptions can be \nfound for example in~\\cite{paz83,lun95}. \nThe transfer to our situation is straightforward. \n\\end{remark}\n\n\\begin{remark}\n The main problem with using mild solutions for existence and uniqueness is the following.
\n In weighted spaces $C^0_\\varrho$ of continuous functions with weights decaying to $0$ at infinity, which are defined in the next section, \n we cannot use the direct fixed-point argument for mild solutions, \n as the cubic nonlinearity does not map $C^0_\\varrho$ into itself: it maps\n $C^0_\\varrho$ to~$C^0_{3\\varrho}$, since the weight is cubed as well.\n Thus one can show that the right-hand side of the mild formulation cannot be a contraction in $C^0_\\varrho$.\n\n\n Similar problems appear for other weighted spaces, as solutions are allowed to be unbounded in space.\n Thus later in the paper we use the weak formulation to prove existence and uniqueness, \n and then the mild formulation to verify error estimates.\n\\end{remark}\n\n\n\\subsection{Spaces}\n\n\nFor $\\varrho\\in\\mathbb{R}$, denote by $C^0_\\varrho$ the space of continuous functions $v: \\mathbb{R} \\to \\mathbb{R}$ \nsuch that the following norm is finite\n\\begin{equation}\n \\label{e:C0rho}\n \\|v\\|_{C^0_\\varrho} = \\sup_{x\\in\\mathbb{R}} \\Big\\{ \\frac1{(1+x^2)^{\\varrho\/2}} |v(x)| \\Big\\} \\;.\n\\end{equation}\nFor $\\varrho>0$ this yields a monotonically increasing family of spaces of continuous functions with a growth condition at $\\pm\\infty$.\nSee also Bianchi, Bl\\\"omker \\cite{BB:15}.
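As a quick illustration of the norm \eqref{e:C0rho} (not needed in the sequel): bounded functions lie in every $C^0_\varrho$ with $\varrho\geqslant 0$, while polynomially growing functions require a sufficiently large $\varrho$.

```latex
% Two simple examples for the weighted supremum norm:
\[
\|\sin\|_{C^0_\varrho} \leqslant 1 \quad\text{for all } \varrho \geqslant 0\;,
\qquad
\sup_{x\in\mathbb{R}} \frac{|x|}{(1+x^2)^{1/2}} = 1\;,
\]
% so the identity x -> x belongs to C^0_varrho exactly for varrho >= 1,
% since |x| (1+x^2)^{-varrho/2} is unbounded for varrho < 1.
```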
\n\n\\begin{definition}[Weights]\n \\label{def:weight}\n We define for $\\varrho>0$ the weight function $w_\\varrho(x)= (1+x^2)^{-\\varrho\/2}$.\n We also define for $c>0$ the scaled weight function $w_{\\varrho,c}(x) = w_\\varrho(cx) = (1+c^2x^2)^{-\\varrho\/2}$\\;.\n\\end{definition}\nWe have the following properties:\n\\begin{equation}\\label{eq:propweight}\n|w_\\varrho'(x)| \\leqslant C w_\\varrho(x)\\;,\n\\quad \n|w_\\varrho^{(n)}(x)| \\leqslant C_n w_\\varrho(x)\n\\quad\\text{and}\\quad \n|w_{\\varrho,c}^{(n)}(x)| \\leqslant C_n c^n w_{\\varrho,c}(x)\n\\;.\n\\end{equation}\nMoreover, $w_\\varrho\\in L^1(\\mathbb{R})$ if and only if $\\varrho>1$.\n\nLet us remark that we have the equivalence of norms (see Lemma 2.1 of \\cite{BB:15})\n\\begin{equation}\n \\label{e:equivC0}\n c \\|v\\|_{C^0_\\varrho} \n \\leqslant \n \\sup_{L\\geqslant 1} \\{ L^{-\\varrho} \\|v\\|_{C^0([-L,L])}\\}\n \\leqslant C\n \\|v\\|_{C^0_\\varrho},\n\\end{equation}\nwith strictly positive constants $c$ and $C$.\n\nFollowing Bates, Lu, and Wang~\\cite{BLW:13}, we can also define\nweighted spaces of integrable functions\n\\[\nL^p_\\varrho = \\{ u \\in L^p_{\\text{loc}}(\\mathbb{R}) \\ :\\ uw_\\varrho^{1\/p} \\in L^p(\\mathbb{R})\\}\n\\]\nwith norm\n\\[\n\\|u\\|_{L^p_\\varrho} = \\Big(\\int_\\mathbb{R} w_\\varrho(x)|u(x)|^p dx\\Big)^{1\/p} \\;.\n\\]\nMoreover, we need weighted Sobolev spaces $W^{k,p}_\\varrho$ and $H^k_\\varrho:=W^{k,2}_\\varrho$ defined by the norm\n\\[\n\\|u\\|_{W^{k,p}_\\varrho} := \\Big( \\sum_{\\ell=0}^k \\|\\partial_x^\\ell u\\|^p_{L^p_\\varrho} \\Big)^{1\/p}\\;.
\n\\]\nAs $1 \\leqslant L^{\\varrho} w_\\varrho(x)$ on $[-L,L]$, it is easy to check that\n\\begin{equation}\n \\label{e:Wkpbound}\n \\sup_{L\\geqslant 1} \\{ L^{-\\varrho\/p} \\|v\\|_{W^{k,p}([-L,L])}\\}\n \\leqslant C \\|v\\|_{W^{k,p}_\\varrho}\\;.\n\\end{equation}\nIn general, this is not an equivalence of norms, as the opposite inequality fails.\nNote finally that for $\\varrho>1$ we have an integrable weight $w_\\varrho\\in L^1(\\mathbb{R})$ and thus \nby H\\\"older's inequality for all $k\\in\\mathbb{N}$, $p\\geqslant1$ and $\\delta>0$ the embedding\n\\[\nW^{k,p+\\delta}_\\varrho \\subset W^{k,p}_\\varrho\\;. \n\\]\nNote that this is false for $\\varrho<1$, which includes the unweighted case ($\\varrho=0$).\n\nWe also \ndefine weighted H\\\"older spaces $C^{0,\\eta}_\\kappa$ of locally H\\\"older continuous functions such that the following norm is finite:\n\\begin{equation}\n \\label{def:wHoe}\n \\|A\\|_{ C^{0,\\eta}_\\kappa} = \\sup\\{ L^{-\\kappa}\\|A\\|_{C^{0,\\eta}[-L,L]} \\ : \\ L>1\\}\\;.\n\\end{equation}\nThis is the natural space for solutions of the SPDE, as the stochastic convolution $Z$ \nwill be in such spaces.
See for example Lemma~\\ref{lem:regZ} later.\n\n\n\n\n\n\\section{Existence and Uniqueness of solutions}\n\\label{sec:exuni}\n\n\nHere in the presentation we mainly focus on the Swift-Hohenberg equation and state later \nthe analogous result for the Ginzburg-Landau equation without proof, as the proofs are very similar.\nMoreover, there is already the result of Mourrat and Weber \\cite{MoWe:P15} for the two-dimensional real-valued Ginzburg-Landau \n(or Allen-Cahn) equation, which is similar from a technical point of view, although it is proven in Besov spaces.\nThe main result of this section is:\n\n\\begin{theorem}\n \\label{thm:exSH} \nFor all $u_0\\in L^2_\\varrho$ with $\\varrho>3$ and all $T>0$ there is a stochastic process $v$ such that $\\mathbb{P}$-almost surely \n\\[\nv\\in L^\\infty(0,T,L^2_\\varrho) \\cap L^2(0,T,H^2_\\varrho) \\cap L^4(0,T,L^4_\\varrho) \n\\]\nand $v$ is a weak solution of \\eqref{e:SHt} in the sense of Definition \\ref{def:weak}. \nMoreover, for any other such weak solution $\\tilde{v}$ we have\n\\[\n\\mathbb{P}\\Big( \\sup_{t\\in[0,T]}\\|v(t)- \\tilde{v}(t)\\|_{L^2_\\varrho}=0 \\Big) =1\\;.\n\\]\n\\end{theorem}\n\n\\begin{remark} \\label{rem:rho3}\nAs we are approximating by periodic solutions, the weight $w_\\varrho$ with $\\varrho>3$ has to decay sufficiently fast to guarantee \nthat all boundary terms at $\\pm\\infty$ arising in the integration by parts formulas \nin the following proof vanish.\n\\end{remark}\n\n\nFor the relatively straightforward proof of Theorem \\ref{thm:exSH} we could follow some ideas of \\cite{BH:10} for the Ginzburg-Landau equation,\nwhere a Galerkin method based on an orthonormal basis of $L^2_\\varrho$ was used.\nBut here we consider the approximation using finite domains and periodic boundary conditions.
\nThis is a fairly standard approach also presented in~\\cite{MoWe:P15} for the $\\Phi^4$-model, which is similar to the Ginzburg-Landau equation.\nNevertheless, the approach of \\cite{MoWe:P15} in Besov spaces does not seem to work for the Swift-Hohenberg equation, \nas we are for example not able to establish a-priori bounds in Besov spaces.\n\n\nLet $v^{(n)}$ be a $2n$-periodic solution of~\\eqref{e:SHt} on $[-n,n]$ with initial condition \n$v^{(n)}(0)=u_0|_{[-n,n]} $ and forcing $Z^{(n)}=Z|_{[-n,n]}$ both $2n$-periodically extended.\n\nBy standard parabolic theory there is for all $n\\in\\mathbb{N}$ a $2n$-periodic solution \n\\[\nv^{(n)}\n \\in L^\\infty(0,T,L^2([-n,n])) \\cap L^2(0,T,H^2([-n,n])) \\cap L^4(0,T,L^4([-n,n]))\\;,\n\\]\nwhich extends by periodicity to the whole real line $\\mathbb{R}$.\n\nUsing a weight $\\varrho>3$, so that all integrals and integrations by parts are well defined,\nwe obtain:\n\\begin{align}\n\\label{e:aprio}\n \\frac12\\partial_t \\|v^{(n)}\\|^2_{L^2_\\varrho} \n & = \\int_\\mathbb{R} w_\\varrho v^{(n)} \\partial_t v^{(n)} dx \\nonumber\\\\\n &\\leqslant \\int_\\mathbb{R} w_\\varrho v^{(n)} {\\mathcal{L}}_\\nu v^{(n)} dx - \\int_\\mathbb{R} w_\\varrho v^{(n)}(v^{(n)}+Z^{(n)})^3 dx \\\\\n &\\leqslant \\int_\\mathbb{R} w_\\varrho v^{(n)} {\\mathcal{L}}_\\nu v^{(n)} dx - \\frac12 \\int_\\mathbb{R} w_\\varrho |v^{(n)}|^4 dx + C \\int_\\mathbb{R} w_\\varrho |Z^{(n)}|^4 dx \\nonumber \\;.\n\\end{align}\nNow we first use that $Z$ is uniformly bounded in $C^0_\\gamma$ for any small $\\gamma>0$. 
See Lemma \\ref{lem:regZ} below.\nThus the $L^4_\\varrho$-norm of $Z$ is finite for $\\varrho>1$, and we now want to uniformly bound $\\|Z^{(n)}\\|^4_{L^4_\\varrho}$ in the formula \\eqref{e:aprio} above by $\\|Z\\|^4_{L^4_\\varrho}$.\nFor this, a simple calculation first verifies for $k\\in\\mathbb{Z}$ and $n\\in\\mathbb{N}$ \n\\[\n\\sup_{|x|\\leqslant n} \\frac{w_\\varrho(x+2nk)}{w_\\varrho(x)} \n= \\left[ \\sup_{|z|\\leqslant 1}\\frac{1+z^2n^2}{1+(z+2k)^2n^2} \\right]^{\\varrho\/2} \\leqslant \\left[ \\frac{2}{(2|k|-1)^2} \\right]^{\\varrho\/2} \\;.\n\\]\nThus we obtain by periodicity for $\\varrho>1$\n\\begin{align*}\n \\|Z^{(n)}\\|^4_{L^4_\\varrho}\n = \\int_\\mathbb{R} w_\\varrho |Z^{(n)}|^4 dx \n & = \\sum_{k\\in\\mathbb{Z}} \\int_{(2k-1)n}^{(2k+1)n} w_\\varrho(x) |Z(x)|^4 dx \\\\\n & = \\sum_{k\\in\\mathbb{Z}} \\int_{-n}^n w_\\varrho(x+2nk) |Z(x)|^4 dx \\\\\n& \\leqslant \\sum_{k\\in\\mathbb{Z}} 2^{\\varrho\/2} |2|k|-1|^{-\\varrho} \\int_{-n}^n w_\\varrho(x) |Z(x)|^4 dx \\\\\n& \\leqslant 2^{1+\\varrho\/2} \\sum_{k\\in\\mathbb{N}_0} |2k-1|^{-\\varrho} \\|Z\\|^4_{L^4_\\varrho}\\;.\n\\end{align*}\n\nTo proceed with \\eqref{e:aprio}, we need a bound on the quadratic form of the operator~${\\mathcal{L}}_\\nu$. For this we use the following lemma (compare to Lemma 3.8 of Mielke, Schneider~\\cite{MS:95}).\n\n\\begin{lemma}\n\\label{lem:spec}\n For any weight $w_\\varrho$ with $\\varrho>0$ given by Definition~\\ref{def:weight} we have \n \\[\n \\int_\\mathbb{R} w_\\varrho v {\\mathcal{L}}_0 v\\, dx \\leqslant - \\frac{C_2}{1+2C_2 } \\| v''\\|^2_{L^2_\\varrho} \n + C_2 (3 +\\frac52 C_2) \\| v\\|_{L^2_\\varrho}^2 \\;.\n \\]\n\\end{lemma}\n\n\n\\begin{remark}\nThis is not sufficient for the approximation result later, as we need $C_2 = O(\\varepsilon^2)$, which is achieved \nif we consider $w_{\\varrho,\\varepsilon}$ instead of $w_\\varrho$.
\n\\end{remark}\n\n\\begin{proof}\nWe have to prove the Lemma first for smooth compactly supported or periodic $v$ and then also extend by \ncontinuity to any $v\\in H^2_\\varrho$.\nResults like this are standard.\nIn order to not overload the subsequent presentation with indices, we do not recall this approximation step.\nThe same proof as presented below would hold for the approximation, \nand one just needs to check in the final estimate that we can pass to the limit. \n\nIntegration by parts and H\\\"older's inequality yield\n\\begin{multline*}\n \\int_\\mathbb{R} w_\\varrho v {\\mathcal{L}}_0 v\\, dx =\\\\\n \\begin{aligned}\n &= - \\int_\\mathbb{R} w_\\varrho v^2\\, dx - 2 \\int_\\mathbb{R} w_\\varrho v v''\\, dx + \\int_\\mathbb{R} w_\\varrho' v v''' \\, dx + \\int_\\mathbb{R} w_\\varrho v' v''' \\, dx\\nonumber\\\\\n &= - \\int_\\mathbb{R} w_\\varrho v^2\\, dx - 2 \\int_\\mathbb{R} w_\\varrho v v''\\, dx - \\int_\\mathbb{R} w_\\varrho'' v v'' \\, dx -2 \\int_\\mathbb{R} w_\\varrho' v' v'' \\, dx - \\int_\\mathbb{R} w_\\varrho v'' v'' \\, dx \\nonumber \\\\\n &= - \\int_\\mathbb{R} w_\\varrho v^2\\, dx - 2 \\int_\\mathbb{R} w_\\varrho v v''\\, dx - \\int_\\mathbb{R} w_\\varrho'' v v'' \\, dx + \\int_\\mathbb{R} w_\\varrho'' (v')^2 \\, dx - \\int_\\mathbb{R} w_\\varrho v'' v'' \\, dx \\nonumber \\\\\n &\\leqslant - \\| v\\|^2_{L^2_\\varrho} - \\|v''\\|^2_{L^2_\\varrho}\n - \\int_\\mathbb{R} (2 w_\\varrho+ w_\\varrho'') v v'' \\, dx + C_2\\int_\\mathbb{R} w_\\varrho |v'|^2 \\, dx \\nonumber \\;.\n \\end{aligned}\n \\end{multline*}\nNow we use the following interpolation inequality\n\\begin{align*} \n \\int_\\mathbb{R} w_\\varrho (v')^2 \\, dx \n & = - \\int_\\mathbb{R} w_\\varrho' vv' \\, dx - \\int_\\mathbb{R} w_\\varrho vv'' \\, dx \\\\\n & = \\frac12 \\int_\\mathbb{R} w_\\varrho'' v^2 \\, dx - \\int_\\mathbb{R} w_\\varrho vv'' \\, dx \\\\\n & \\leqslant \\frac{C_2}{2} \\| v\\|_{L^2_\\varrho}^2 + \\| v\\|_{L^2_\\varrho}\\| v''\\|_{L^2_\\varrho} 
\n\\end{align*}\nto obtain:\n \\begin{multline*} \n \\int_\\mathbb{R} w_\\varrho v {\\mathcal{L}}_0 v\\, dx\\leqslant\\\\\n \\begin{aligned}\n &\\leqslant - (1- \\frac{C_2^2}{2}) \\| v\\|^2_{L^2_\\varrho} - \\|v''\\|^2_{L^2_\\varrho}\n - \\int_\\mathbb{R} (2 w_\\varrho+ w_\\varrho''+ C_2 w_\\varrho) v v'' \\, dx \\nonumber\\\\\n &\\leqslant - (1- \\frac{C_2^2}{2}) \\| v\\|^2_{L^2_\\varrho} - \\|v''\\|^2_{L^2_\\varrho}\n + 2 (1 + C_2) \\| v\\|_{L^2_\\varrho}\\|v''\\|_{L^2_\\varrho} \\nonumber\\\\\n &\\leqslant - (1- \\frac{C_2^2}{2}-(1+C_2)\\delta) \\| v\\|^2_{L^2_\\varrho} - (1-(1+C_2)\\delta^{-1})\\|v''\\|^2_{L^2_\\varrho} \\nonumber\\\\\n & = - \\frac{C_2}{1+2C_2 } \\| v''\\|^2_{L^2_\\varrho} + C_2 (3 +\\frac52 C_2) \\| v\\|_{L^2_\\varrho}^2 \\nonumber \\;,\n \\end{aligned}\n\\end{multline*}\nwhere we used Young's inequality with $\\delta=1+2C_2$. This finishes the proof of the lemma.\n\\end{proof}\n\n \nGoing back to \\eqref{e:aprio} and using Lemma \\ref{lem:spec} \nwe obtain the following result for the $2n$-periodic approximation $v^{(n)}$.\n\n \\begin{lemma}\n \\label{lem:apprio}\n Let $u_0$ be in $L^2_\\varrho$ for some $\\varrho>3$. Then there are a small constant $c>0$ and a large constant $C>0$ such that for all $t>0$\n \\[\n \\partial_t \\|v^{(n)}\\|^2_{L^2_\\varrho} \n \\leqslant - c \\| v^{(n)}\\|^2_{H^2_\\varrho} - \\| v^{(n)}\\|_{L^4_\\varrho}^4 + C \\| v^{(n)}\\|_{L^2_\\varrho}^2 + C\\| Z\\|_{L^4_\\varrho}^4 \\;.\n \\]\n \\end{lemma}\n \n As already mentioned, this result is well known, at least on bounded domains.
\n Usually one would estimate $v^2-v^4$ by $C-v^2$ in order to obtain bounds on the $L^2$-norm that are uniform in time.\n But as we are only after the existence of solutions in this section, \n we keep the $L^4$-norm in order to exploit that regularity.\n \n \n The following corollary is standard for a-priori estimates as in Lemma \\ref{lem:apprio}.\n First, by neglecting \n the negative terms on the right-hand side and applying Gronwall's inequality, \n we obtain an $L^\\infty(0,T,L^2_\\varrho)$-bound.\n The final two estimates follow by integrating the inequality in Lemma \\ref{lem:apprio} in time.\n\n \n \\begin{corollary}\nUnder the assumptions of the previous Lemma~\\ref{lem:apprio} the sequence $\\{v^{(n)}\\}_{n\\in\\mathbb{N}}$ is uniformly bounded \n in $L^\\infty(0,T,L^2_\\varrho) \\cap L^2(0,T, H^2_\\varrho)\\cap L^4(0,T,L^4_\\varrho)$\nfor all $T>0$. Moreover, $\\{v^{(n)}\\}_{n\\in\\mathbb{N}}$ is also uniformly bounded \n in \n \\[L^\\infty(0,T,L^2([-L,L])) \\cap L^2(0,T, H^2([-L,L]))\\cap L^4(0,T,L^4([-L,L]))\n \\]\n for all $T>0$ and all $L>0$.\n \\end{corollary}\n\n Now we can finalize the proof of\nTheorem \\ref{thm:exSH}.
\n By consecutively taking subsequences \n we obtain a $v \\in L^\\infty(0,T,L^2_\\varrho) \\cap L^2(0,T, H^2_\\varrho)\\cap L^4(0,T,L^4_\\varrho)$\n such that \n \\[\nv^{(n_k)} \\rightharpoonup v\n\\quad\\text{in } L^2(0,T, H^2_\\varrho)\\cap L^4(0,T,L^4_\\varrho)\n \\]\n and \n \\[\nv^{(n_k)}\\stackrel{*}{\\rightharpoonup} v\n\\quad\\text{in } L^\\infty(0,T,L^2_\\varrho)\\;.\n \\]\nFurthermore, using a diagonal argument, \n \\[\nv^{(n_k)} \\rightharpoonup v\n\\quad\\text{in } L^2(0,T, H^2_\\text{loc})\\cap L^4(0,T,L^4_\\text{loc})\n \\]\n and \n \\[\nv^{(n_k)}\\stackrel{*}{\\rightharpoonup} v\n\\quad\\text{in } L^\\infty(0,T,L^2_\\text{loc})\\;.\n \\]\nBy compactness on bounded intervals this yields \n\\[\nv^{(n_k)} \\to v \\quad\\text{in every space } L^2([0,T],H^1([-L,L]))\n\\]\nand thus \n\\[\nv^{(n_k)}(t,x) \\to v(t,x) \\quad\\text{for almost all } (t,x)\\;.\n\\]\nThus, by passing to the limit in the weak formulation for $v^{(n)}$\nwe obtain for any $\\varphi\\in C^\\infty_c(\\mathbb{R})$ (smooth and compactly supported)\nthat\n\\[\n\\langle v(t) ,\\varphi\\rangle = \\langle u_0 ,\\varphi\\rangle \n- \\int_0^t \\langle (1+\\partial_x^2) v(s), (1+\\partial_x^2)\\varphi \\rangle ds \n+ \\nu\\varepsilon^2 \\int_0^t \\langle v(s), \\varphi \\rangle ds \n- \\int_0^t \\langle ( v(s)+Z(s))^3 , \\varphi \\rangle ds\\;.\n\\]\nThis implies that $v$ is a weak solution of\n\\[\n\\partial_t v= {\\mathcal{L}}_\\nu v - (v+Z)^3\\;, \\quad v(0)=u_0\\;.\n\\]\nFurthermore, by the regularity of $v$ we can take the scalar product with $v$ here. \nThis will be used in the proof of uniqueness.\n\n\\begin{remark}\nOne needs to be careful here, as the resulting \nlimit (i.e., the solution) is in general not a measurable random variable.\nThis is due to the fact that we take subsequences that might depend on the given realization of $Z$.
\nUniqueness, which is proved in the next step, \nenforces that the whole sequence $v^{(n)}$ converges to the solution $v$ and thus the limit is a measurable random variable.\n\\end{remark}\n\n\nFor uniqueness consider two weak solutions $v_1$ and $v_2$ \nin $L^\\infty(0,T,L^2_\\varrho) \\cap L^2(0,T, H^2_\\varrho)\\cap L^4(0,T,L^4_\\varrho)$ with $\\varrho>3$.\nDefine \n\\[d=v_1-v_2\\;,\n\\]\nwhich solves \n\\[\n\\partial_t d= {\\mathcal{L}}_\\nu d - (v_1+Z)^3 + (v_2+Z)^3\\;, \\quad d(0)=0\\;.\n\\]\nBy the regularity of $d$ we have $\\partial_t d \\in L^2(0,T,H^{-2}_\\varrho)$; thus we can take the $L^2_\\varrho$-scalar product with $d$ \nto obtain \n\\begin{align*}\n \\frac12 \\partial_t \\|d\\|^2_{L^2_\\varrho}\n &= \\int_\\mathbb{R} w_\\varrho d[ {\\mathcal{L}}_\\nu d - (v_1+Z)^3 + (v_2+Z)^3] dx\\\\\n &= \\int_\\mathbb{R} w_\\varrho d[ {\\mathcal{L}}_\\nu d - (d+v_2+Z)^3 + (v_2+Z)^3] dx\\\\\n &= \\int_\\mathbb{R} w_\\varrho d[ {\\mathcal{L}}_\\nu d - d^3 - 3 d^2(v_2+Z) - 3d(v_2+Z)^2] dx\\\\\n &\\leqslant \\int_\\mathbb{R} w_\\varrho d {\\mathcal{L}}_\\nu d \\, dx - \\int_\\mathbb{R} w_\\varrho [d^4 + 3d^2(v_2+Z)^2] \\, dx - 3 \\int_\\mathbb{R} w_\\varrho d^3(v_2+Z) dx\\\\ \n &\\leqslant - c \\|d\\|^2_{H^2_\\varrho} + C \\|d\\|^2_{L^2_\\varrho} \n\\end{align*}\nusing that $3|d|^3|v_2+Z| \\leqslant d^4 + \\frac94 d^2(v_2+Z)^2$ and Lemma \\ref{lem:spec}.
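The elementary inequality used in the last step (with $b=v_2+Z$) follows from completing the square:

```latex
% Young-type inequality used above: for real d and b,
\[
0 \leqslant \Big( d^2 - \tfrac32\, d b \Big)^2
  = d^4 - 3 d^3 b + \tfrac94\, d^2 b^2 \;,
\]
% hence 3 d^3 b <= d^4 + (9/4) d^2 b^2, and therefore
\[
- d^4 - 3 d^2 b^2 + 3 d^3 b \leqslant - \tfrac34\, d^2 b^2 \leqslant 0 \;,
\]
% which is why the cubic terms can be absorbed in the estimate.
```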
Neglecting now all negative terms and using Gronwall's inequality yields (as $d(0)=0$) that
\[
d(t)=0\quad \text{for all }t\geqslant0 \;,
\]
and thus uniqueness of solutions.



\section{Additional regularity for the Ginzburg-Landau equation}
\label{sec:reg}


At the moment we need a very strong weight for the existence and uniqueness of solutions,
and related results like those of \cite{MoWe:P15} also rely on Besov spaces with integrable weights.
Recall the amplitude equation for the complex-valued amplitude $A$
\begin{equation}
\label{e:GL}
 \partial_T A =4\partial_X^2 A + \nu A -3A|A|^2 + \partial_T \mathcal{W}\;, \qquad A(0)=A_0\;,
 \tag{GL}
\end{equation}
with complex-valued space-time white noise $ \partial_T \mathcal{W}$.
We again use the standard substitution
\[
B=A- \mathcal{Z}
\]
with the stochastic convolution for $\Delta_\nu = 4\partial_X^2+\nu$ defined by
\begin{equation*}
 \mathcal{Z}(T) =\mathcal{W}_{\Delta_\nu}(T)
 = \int_0^T {\mathrm{e}}^{(T-s)\Delta_\nu} d \mathcal{W}(s)\;.
\end{equation*}
Then $B$ solves
\begin{equation}
 \label{e:GLt}
 \partial_T B =4\partial_X^2 B + \nu B -3(B+\mathcal{Z})|B+\mathcal{Z}|^2\;, \qquad B(0)=A_0\;.
\end{equation}
In the regularity results of this section, we will weaken the weight as much as possible.
Moreover, we show spatial H\"older regularity, which is the most we can hope for,
as we are limited by the regularity of the stochastic convolution~$\mathcal{Z}$.
See Lemma \ref{lem:regZ} below.

The key idea is to use energy estimates together with a classical bootstrap argument:
\begin{itemize}
\item Using the $L^2_\varrho$-energy estimate we obtain
 $A-\mathcal{Z}\in L^\infty(0,T,L^2_\varrho) \cap L^2(0,T,H^1_\varrho) \cap L^4(0,T,L^4_\varrho)$
 in the proof of existence in Theorem \ref{thm:exGL}.
\item Using the $L^q_\varrho$-norm we derive $A \in L^\infty (0,T,L^q_\varrho)$ in Lemma \ref{lem:5.1}.
\item The $H^1_\varrho$-norm yields $ A-\mathcal{Z} \in L^\infty (0,T,H^1_\varrho) \cap L^2 (0,T,H^2_\varrho)$
in Lemma \ref{lem:apH1}.
\item Sobolev embedding yields H\"older regularity $A\in L^\infty(0,T, C^0_\kappa)$ for arbitrarily small weight $\kappa>0$.
See Theorem \ref{thm:apHoeld}.
\item Using the $W^{1,p}_\varrho$-norm we derive $A - \mathcal{Z} \in L^\infty(0,T, W^{1,2p}_\varrho)$
in Lemma \ref{lem:apW1p}.
\item The final result, again by Sobolev embedding, is $ A \in L^\infty(0,T,C^{0,\eta}_\kappa )$
for all H\"older exponents $\eta\in(0,1/2)$ and arbitrarily small weight $\kappa>0$.
See Corollary \ref{cor:maxregA}.
\end{itemize}

This procedure only works for the amplitude equation, not for the Swift-Hohenberg equation.
For example, for the $L^p$-estimate we need $\langle u^{p-1}, \Delta u \rangle_{L^2_\varrho} \leqslant c \|u\|^p_{L^p_\varrho}$, which holds for
the Laplacian for any $p\geqslant2$, but
for the Swift-Hohenberg operator only for $p=2$.

Let us now start with the bootstrap argument.
The proof of existence and uniqueness with a strong weight is the same as for the Swift-Hohenberg equation before;
we only need the slightly weaker assumption $\varrho>2$ in this case.
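To illustrate why the $L^p$-estimate above is restricted to the Laplacian, here is a sketch for real-valued $u$ and even integer $p$, where for simplicity we ignore all terms containing the derivative $w_\varrho'$ of the weight. For the Laplacian, one integration by parts yields
\[
\int_\mathbb{R} w_\varrho u^{p-1} u''\, dx \approx -(p-1)\int_\mathbb{R} w_\varrho u^{p-2}(u')^2\, dx \leqslant 0\;,
\]
while for the fourth-order part of the Swift-Hohenberg operator two integrations by parts give
\[
\int_\mathbb{R} w_\varrho u^{p-1} \partial_x^4 u\, dx \approx (p-1)\int_\mathbb{R} w_\varrho u^{p-2}(u'')^2\, dx + (p-1)(p-2)\int_\mathbb{R} w_\varrho u^{p-3}(u')^2 u''\, dx\;.
\]
The mixed term on the right-hand side has no definite sign unless $p=2$, where its prefactor $(p-1)(p-2)$ vanishes.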
The precise theorem is:
\begin{theorem}
 \label{thm:exGL} 
For $T>0$ and all $A_0\in L^2_\varrho$ with $\varrho>2$ there is a complex-valued stochastic process $B$ such that $\mathbb{P}$-almost surely
\[
B\in L^\infty(0,T,L^2_\varrho) \cap L^2(0,T,H^1_\varrho) \cap L^4(0,T,L^4_\varrho) 
\]
and $B$ is a weak solution of \eqref{e:GLt}. 
Moreover, for any other such weak solution $\tilde{B}$ we have
\[
\mathbb{P}( \sup_{t\in[0,T]}\|B(t)- \tilde{B}(t)\|_{L^2_\varrho}=0 ) =1\;.
\]
\end{theorem}
Note that in the previous theorem we only assumed $\varrho>2$ for the weight.
This is due to the fact that for the Laplacian we need one integration by parts less. See Remark \ref{rem:rho3}.


For $B$ we can, in the following, go all the way up to H\"older exponent $1$ and even show $W^{1,p}_\varrho$-regularity,
while for the amplitude $A$ we are limited by the following lemma on the regularity of the stochastic convolution $\mathcal{Z}$.
\begin{lemma}\label{lem:regZ}
For all $\eta<1/2$, $T>0$ and all sufficiently small weights $\kappa>0$ one has $\mathbb{P}$-almost surely 
\[
\mathcal{Z} \in L^\infty(0,T,C^{0,\eta}_\kappa )\;.
\]
Moreover, for all $p>0$ there exists a constant $C_p$ such that
\[
	\mathbb{E}[\sup_{[0,T]}\|\mathcal{Z}\|^p_{C^{0,\eta}_\kappa}]\leqslant C_p\;.
\]
\end{lemma}

\begin{proof}[Sketch of Proof]
We refrain from giving all the lengthy details of this proof here.
More details on the estimates used can for example be found 
in Lemma~2.4 and Lemma~3.1 in~\cite{BB:15}, where all tools necessary to prove this 
lemma are presented.

The proof of regularity for the stochastic convolution is fairly standard and based on the proof 
of the Kolmogorov test for H\"older continuity of stochastic processes.
First note that by H\"older's inequality it is enough to verify the claim for large $p$.
For spatial regularity we consider the embedding of $C^{0,\gamma}([-L,L])$ into
$W^{\alpha,p}([-L,L])$ for $\gamma<\alpha<1/2$ and $p\to\infty$.
Then we can use the explicit representation of these norms in terms of integrated H\"older quotients.
For the bound in time, we can use the celebrated factorization method of Da~Prato, Kwapie\'n and Zabczyk~\cite{DKZ87}.
\end{proof}

 
Let us start with a standard energy estimate for the $L^{2p}_\varrho$-norm.
Here and in all other energy estimates, we need to perform these estimates
for the approximating sequence from the proof of existence,
and then pass to the limit later.
For simplicity of presentation, we do not state the index $n$ in the estimate.
The proof for the approximating sequence is the same as the one presented below; one just needs to check in the final estimate
whether one can pass to the limit or not.
\begin{lemma}\label{lem:5.1}
Let $A$ be such that $B=A-\mathcal{Z}$ is a weak solution of \eqref{e:GLt} given by Theorem \ref{thm:exGL} and fix $T>0$.
If $\varrho>1$ and $q\geqslant2$ is such that $A(0)\in L^q_\varrho$,
then $\mathbb{P}$-almost surely
\[
A, B\in L^\infty (0,T,L^q_\varrho)\;.
\]
Moreover, for all $q\geqslant 1$
there exists a constant $C_q$ such that
\[
	\mathbb{E}[\int_0^T \|A\|^q_{L^q_\varrho}ds]\leqslant C_q, \qquad \mathbb{E}[\int_0^T \|B\|^q_{L^q_\varrho}ds]\leqslant C_q.
\]
\end{lemma}

\begin{proof} In view of Lemma \ref{lem:regZ} it is sufficient to consider only $B$.
We use $q=2p$,
the notation $B'=\partial_X B$, and the estimate $\text{Re}(z) \leqslant |z|$ to obtain:
\begin{align*}
\frac1p \partial_T \|B\|^{2p}_{L^{2p}_\varrho} 
 = {}& \frac1p \partial_T \int_\mathbb{R} w_\varrho B^p \overline{B}^p dx \\
 = {}& 2\text{Re} \int_\mathbb{R} w_\varrho B^{p-1}\partial_T B \cdot \overline{B}^p dx\\
 = {}& 8\text{Re} \int_\mathbb{R} w_\varrho B^{p-1} B'' \cdot \overline{B}^p dx + 2\nu \int_\mathbb{R} w_\varrho |B|^{2p} dx\\
& - 6\text{Re} \int_\mathbb{R} w_\varrho B^{p-1} \overline{B}^p (B+\mathcal{Z}) |B+\mathcal{Z}|^2 dx \\
 \leqslant {}& - 8\text{Re} \int_\mathbb{R} w_\varrho' B^{p-1} B' \overline{B}^p dx 
+ 2\nu \|B\|^{2p}_{L^{2p}_\varrho} 
- 3 \|B\|^{2p+2}_{L^{2p+2}_\varrho} 
+ C \|\mathcal{Z}\|^{2p+2}_{L^{2p+2}_\varrho} \\
& - 8(p-1)\text{Re} \int_\mathbb{R} w_\varrho B^{p-2} (B')^2 \overline{B}^p dx 
- 8p \int_\mathbb{R} w_\varrho |B|^{2p-2} |B'|^2dx \\
\leqslant {}& 8C_1 \int_\mathbb{R} w_\varrho |B|^{2p-1} |B'| dx + 2\nu \|B\|^{2p}_{L^{2p}_\varrho} 
- 3 \|B\|^{2p+2}_{L^{2p+2}_\varrho} 
+ C \|\mathcal{Z}\|^{2p+2}_{L^{2p+2}_\varrho} \\
& - 8 \int_\mathbb{R} w_\varrho |B|^{2p-2} |B'|^2dx \\
\leqslant {}& (2C_1^2+2\nu) \|B\|^{2p}_{L^{2p}_\varrho} - 3 \|B\|^{2p+2}_{L^{2p+2}_\varrho} + C \|\mathcal{Z}\|^{2p+2}_{L^{2p+2}_\varrho} \\
\leqslant {}& C - 2 \|B\|^{2p+2}_{L^{2p+2}_\varrho} + C \|\mathcal{Z}\|^{2p+2}_{L^{2p+2}_\varrho}\;,
\end{align*}
where we used integration by parts, the weight property $|w_\varrho'|\leqslant C_1 w_\varrho$, and the fact that the weight is integrable in order to apply H\"older's inequality in the last step.
Integrating and taking expectations yields the claim.
\end{proof}
We also need $L^\infty(0,T,H^1_\varrho)$-regularity.
\begin{lemma}
\label{lem:apH1}
Let $A$ be such that $B=A-\mathcal{Z}$ is a weak solution of~\eqref{e:GLt} given by Theorem \ref{thm:exGL} and fix $T>0$.
If $\varrho>2$ and $A(0)\in H^1_\varrho \cap L^6_\varrho$,
then we have $\mathbb{P}$-almost
surely
\[
B= A-\mathcal{Z} \in L^\infty (0,T,H^1_\varrho) \cap L^2 (0,T,H^2_\varrho)\;.
\]
Moreover, there exists a constant $C$ such that
\[
\mathbb{E}[\int_0^T \|B\|^2_{H^2_\varrho}ds]\leqslant C, \qquad \mathbb{E}[\sup_{[0,T]}\|B\|^2_{H^1_\varrho}]\leqslant C\;.
\]
\end{lemma}

\begin{proof} Consider 
\begin{align*}
\frac12 \partial_T \|\partial_XB\|^{2}_{L^{2}_\varrho} 
={}& \frac12 \partial_T \int_\mathbb{R} w_\varrho B' \cdot \overline{B'}\; dx \\
={}& \text{Re} \int_\mathbb{R} w_\varrho \partial_T B' \cdot \overline{B'}\; dx\\
={}& 4\text{Re} \int_\mathbb{R} w_\varrho B''' \cdot \overline{B'}\; dx + \nu \int_\mathbb{R} w_\varrho |B'|^{2}\; dx - 3 \text{Re} \int_\mathbb{R} w_\varrho \left(A |A|^{2}\right)' \cdot \overline{B'} \;dx \\
={}& - 4\text{Re} \int_\mathbb{R} w_\varrho' B'' \overline{B'} \; dx - 4 \int_\mathbb{R} w_\varrho |B''|^2 \; dx 
+ \nu \|B'\|^{2}_{L^{2}_\varrho} 
\\& + 3 \text{Re} \int_\mathbb{R} w_\varrho A |A|^{2} \cdot \overline{B''}\; dx 
+ 3 \text{Re} \int_\mathbb{R} w_\varrho' A |A|^{2} \cdot \overline{B'}\; dx \\
\leqslant {}& - 4\int_\mathbb{R} w_\varrho |B''|^2 \; dx + C_1 \int_\mathbb{R} w_\varrho |B''| |B'| \; dx + \nu \|B'\|^{2}_{L^{2}_\varrho}
\\&
+ 3 \int_\mathbb{R} w_\varrho |A|^{3} |B''| \; dx + C_1 \int_\mathbb{R} w_\varrho |A|^{3}|B'|\; dx \\
\leqslant {}& (\nu + C_1^2 +\frac14) \|B'\|_{L^2_\varrho}^2 + (\frac12-4) \|B''\|_{L^2_\varrho}^2 
 + (C_1^2+9) \|A\|_{L^6_\varrho}^6 \\
\leqslant {}& -\frac72 \|B''\|_{L^2_\varrho}^2 + C \|B'\|_{L^2_\varrho}^2 + C^\ast \|B\|_{L^6_\varrho}^6 + C^\ast \|\mathcal{Z}\|_{L^6_\varrho}^6\;,
\end{align*}
where we used Young's inequality $ab\leqslant \frac14a^2+b^2$, integration by parts, and the given form of the weight $ w_\varrho(x)$ and its properties \eqref{eq:propweight}.
In order to finish the proof, we first drop
the term with $\|B''\|_{L^2_\varrho}^2$ thanks to the negative constant in front of it and apply Gronwall's 
inequality to obtain the $L^\infty(0,T,H^1_\varrho)$-regularity. Secondly, we just integrate and take expectations to obtain the $L^2 (0,T,H^2_\varrho)$-regularity.
\end{proof}
Now we are aiming at space-time regularity for $B$, and thus for $A$, in $L^\infty(0,T,C^0_\kappa)$.
This is achieved for each fixed $t\in[0,T]$ by a pointwise interpolation of $C^0([-L,L])$ between $L^p([-L,L])$ and $H^1([-L,L])$.
But we need to be careful in the arguments, as the interpolation constants depend on the spatial domain size $L$.
We will prove:
\begin{theorem}
\label{thm:apHoeld}
Let $A$ be such that $B=A-\mathcal{Z}$ is a weak solution of \eqref{e:GLt} given for $T>0$ by Theorem \ref{thm:exGL}.
If $\varrho>2$, $A(0)\in H^1_\varrho$ and
$A(0)\in L^p_\varrho$ for all large~$p$, 
then $\mathbb{P}$-almost surely
\[ A, B \in L^\infty(0,T, C^0_\kappa)\quad \text{for all small }\kappa>0\;.
\]
Moreover, for all $p>0$ there exists a constant $C_p$ such that
\[
\mathbb{E}[\sup_{[0,T]}\|A\|^p_{C^{0}_\kappa}]\leqslant C_p\;,
\qquad \mathbb{E}[\sup_{[0,T]}\|B\|^p_{C^{0}_\kappa}]\leqslant C_p\;.
\]
\end{theorem}

\begin{proof}
For the bounded interval $I=[-1,1]$ we use for $1/2 > \alpha >1/p$ first the Sobolev embedding of $W^{\alpha,p}(I)$ into $C^0(I)$,
then interpolation between $W^{\alpha,p}$-spaces and finally 
the Sobolev embedding of $H^1(I)$ into $W^{1/2 ,p}(I)$ for all $p\in(1,\infty)$ 
to obtain 
\[
\|A\|_{C^0(I)} \leqslant C \|A\|_{W^{\alpha,p}(I)} \leqslant C \|A\|_{L^p(I)}^{1-2\alpha}\|A\|_{W^{1/2 ,p}(I)}^{2\alpha}
\leqslant C \|A\|_{L^p(I)}^{1-2\alpha}\|A\|_{H^1(I)}^{2\alpha}.
\]
Now we use scaling for $L\geqslant 1$ in order to derive the precise dependence on the domain size $L$
of the constant in the Sobolev embedding:
\[
	\begin{split}
		 \|B(L
\cdot )\|_{L^p(I)} &= \Big(\int_I |B(L x )|^p\; dx\Big)^{1/p} = \Big(\frac1L \int_{-L}^L |B(x )|^p\; dx\Big)^{1/p} \\
		 & = L^{-1/p} \|B \|_{L^p([-L,L])} \;.
	\end{split}
 \]
Thus we obtain 
 \begin{align*}
 \|B(L \cdot )\|_{H^1(I)}
 & = 
 L \|B'( L \cdot )\|_{L^2(I)} + \|B( L \cdot )\|_{L^2(I)}\\ 
 &=L^{1/2} \|B'\|_{L^2([-L,L])} + L^{-1/2} \|B\|_{L^2([-L,L])} \leqslant L^{1/2} \|B\|_{H^1([-L,L])}\;.
 \end{align*}
 %
 Moreover, we obtain using \eqref{e:Wkpbound}
 \begin{equation*}
\begin{split}
 \|B\|_{C^0([-L,L])} & = \|B(L\cdot)\|_{C^0([-1,1])} \\
 &\leqslant C \|B(L\cdot)\|_{L^p([-1,1])}^{1-2\alpha}\|B(L\cdot)\|_{H^1([-1,1])}^{2\alpha} \\
 &\leqslant C L^{-\frac1p(1-2\alpha)} \|B\|_{L^p([-L,L])}^{1-2\alpha} L^{\alpha} \|B\|_{H^1([-L,L])}^{2\alpha} \\
 &\leqslant C L^{-\frac1p(1-2\alpha)} L^{\alpha} L^{\varrho(1-2\alpha)/p} \|B\|_{L^p_\varrho}^{1-2\alpha} L^{\varrho\alpha} \|B\|_{H^1_\varrho}^{2\alpha} \\
 &\leqslant C L^{\frac1p(1-2\alpha)(\varrho-1)} L^{\alpha(1+\varrho)} \|B\|_{L^p_\varrho}^{1-2\alpha} \|B\|_{H^1_\varrho}^{2\alpha} \;.
\end{split}
 \end{equation*}
Now we can first choose $\alpha >0$ small and then $p>1/\alpha$ sufficiently large, so that for any given $\kappa>0$ there is a $C>0$
such that 
\begin{equation*}
\|B\|_{C^0([-L,L])} 
\leqslant 
 C L^\kappa \|B\|_{L^p_\varrho}^{1-2\alpha} \|B\|_{H^1_\varrho}^{2\alpha} \;.
\end{equation*}
Thus the claim for $B$ follows from the equivalent definition of the $C^0_\kappa$-norm (see~\eqref{e:equivC0}).
For $A=B+\mathcal{Z}$ we just use the fact that the stochastic convolution $\mathcal{Z}$ is more regular.

We can get the bounds for all sufficiently large moments from 
\[
\|B\|_{C^0_\kappa} 
\leqslant C \|B\|_{L^p_\varrho}^{1-2\alpha} \|B\|_{H^1_\varrho}^{2\alpha}
\]
by carefully choosing $\alpha>0$
sufficiently small and taking the supremum over $[0,T]$,
as we control all moments in $L^p_\varrho$ and second moments in $H^1_\varrho$.
\end{proof}
For H\"older continuity of $B$, and thus of $A$, we need to proceed with the bootstrap argument
and show $W^{1,2p}_\varrho$-regularity for $B$ first. 
Here our proof is again based on energy estimates, but now for $\|B'\|^{2p}_{L^{2p}_\varrho}$.
\begin{lemma}
\label{lem:apW1p}
Let $A$ be such that $B=A-\mathcal{Z}$ is a weak solution of \eqref{e:GLt} given for $T>0$ by Theorem \ref{thm:exGL}.
If $\varrho>2$, $p>2$, $A(0)\in W^{1,2p}_\varrho$ and
$A(0)\in L^{6p}_\varrho$,
then 
\[ 
B= A - \mathcal{Z} \in L^\infty(0,T, W^{1,2p}_\varrho)\;.
\]
Moreover, for all $p>2$ there exists a constant $C_p$ such that
\[
\mathbb{E}[\sup_{[0,T]}\|B\|^{2p}_{W^{1,2p}_\varrho}]\leqslant C_p.
\]
\end{lemma}

\begin{proof}
Recall
\[
\partial_T B= 4B'' +\nu B - 3|A|^2A\;.
\]
Then, using the same ideas as in Lemma~\ref{lem:5.1}, we obtain the following bound:
\begin{equation*}
	\begin{split}
	\partial_T \| B' \|^{2 p}_{L_{\varrho}^{2 p}} & = \partial_T \int_\mathbb{R} w_\varrho
	(B')^p (\overline{B}')^p d x\\
	& = 2 p \,\text{Re} \int_\mathbb{R} w_\varrho (B')^{p -
		1} (\overline{B}')^p \partial_T B' d x\\
	& = 8p \,\text{Re} \int_\mathbb{R} w_\varrho (B')^{p -
		1} (\overline{B}')^p B''' d x 
	 + 2p\nu \int_\mathbb{R} w_\varrho (B')^{p} (\overline{B}')^p d x\\
	& \qquad -6p \,\text{Re}\int_\mathbb{R} w_\varrho (B')^{p -
		1} (\overline{B}')^p (|A|^2A)' d x\;.
	\end{split}
\end{equation*}

Now the second term is $2p\nu \|B'\|^{2p}_{L^{2p}_\varrho}$, 
and it remains to control the first and third terms.
Let us start with the first one:
\begin{align*}
	8p \,\text{Re} \int_\mathbb{R} w_\varrho (B')^{p -
		1} (\overline{B}')^p B''' d x ={}& -8p \,\text{Re} \int_\mathbb{R} w_\varrho' (B')^{p -
		1} (\overline{B}')^p B'' d x \\
	& -8p(p-1) \,\text{Re} \int_\mathbb{R} w_\varrho (B')^{p -2}(B'')^2 (\overline{B}')^p d x \\
	& -8p^2 \,\text{Re} \int_\mathbb{R} w_\varrho (B')^{p -1} (\overline{B}')^{p-1} \overline{B}'' B'' d x \\
	\leqslant {}& 8pC_1^2\int_\mathbb{R} w_\varrho|B'|^{2p}d x - (8p-\frac{8p}{4})\int_\mathbb{R} w_\varrho |B'|^{2p-2}|B''|^2 d x\\
	 ={}& 8pC_1^2\|B'\|^{2p}_{L^{2p}_\varrho}- 6p\int_\mathbb{R} w_\varrho |B'|^{2p-2}|B''|^2 d x\;.
\end{align*}
Concerning the third term we have:
\begin{align*}
	-6p \,\text{Re}\int_\mathbb{R} w_\varrho (B')^{p -1} (\overline{B}')^p (|A|^2A)' d x 
	= {}& 6p \,\text{Re}\int_{\mathbb{R}} w_\varrho'(B')^{p-1}(\overline{B}')^{p}(|A|^2A) d x\\
	 &+6p(p-1)\,\text{Re}\int_{\mathbb{R}} w_\varrho(B')^{p-2}B''(\overline{B}')^{p}(|A|^2A) d x\\
	& +6p^2 \,\text{Re}\int_{\mathbb{R}} w_\varrho(B')^{p-1}(\overline{B}')^{p-1}\overline{B}''(|A|^2A) d x\\
	\leqslant{}& 6C_1 p \int_\mathbb{R} w_\varrho |B'|^{2p-1}|A|^3 d x \\
	& + (12p^2-6p) \int_{\mathbb{R}} w_\varrho|B'|^{2p-2}|B''| |A|^3 d x\\
	 \leqslant {}& C\int_\mathbb{R} w_\varrho|B'|^{2p}d x + C\int_\mathbb{R} w_\varrho|A|^{6p}d x\\
	&+ p \int_\mathbb{R} w_\varrho |B'|^{2p-2}|B''|^2 d x + C \int_\mathbb{R} w_\varrho |B'|^{2p-2} |A|^6 d x\\
	 \leqslant {}& C \|B'\|^{2p}_{L^{2p}_\varrho} + C \|A\|^{6p}_{L^{6p}_\varrho}+p \int_\mathbb{R} w_\varrho |B'|^{2p-2}|B''|^2 d x\;,
\end{align*}
where the constant $C$ depends only on $p$ and $C_1$.
So we can conclude, putting the estimates for all three terms together,
\begin{equation*}
\begin{split}
	\partial_T \| B' \|^{2 p}_{L_{\varrho}^{2 p}} & \leqslant C \|B'\|^{2p}_{L^{2p}_\varrho} + C
\|A\|^{6p}_{L^{6p}_\varrho}
-5p\int_{\mathbb{R}} w_\varrho |B'|^{2p-2}|B''|^2 d x\\
	&\leqslant C\|B'\|^{2p}_{L^{2p}_\varrho}+C \|A\|^{6p}_{L^{6p}_\varrho}\;,
	\end{split}
\end{equation*}
where we dropped a negative term.
 
Now we can finish the proof using Gronwall's inequality, where for the initial condition we need 
$A (0) \in W_{\varrho}^{1, 2p}$ for the $W^{1,2p}_\varrho$-bound on $B$, and 
furthermore $A (0) \in L^{6p}_{\varrho}$ for the ${L^{6p}_\varrho}$-bound on $A$.
The bound for the moments is again straightforward.
\end{proof}

Now we turn to the regularity in the weighted H\"older spaces defined in 
\eqref{def:wHoe}.


\begin{theorem}\label{thm:whreg}
Let $A$ be such that $B=A-\mathcal{Z}$ is a weak solution of \eqref{e:GLt} given for $T>0$ by Theorem \ref{thm:exGL}. 
If for some $\varrho>2$ and all sufficiently large $p\geqslant 2$ we have $A(0)\in W^{1,p}_\varrho $,
then for all $\eta\in(0,1)$
\[ 
 B \in L^\infty(0,T,C^{0,\eta}_\kappa ) \quad \text{for all sufficiently small }\kappa>0\;.
\]
Moreover, for all $p>0$ there exists a constant $C_p$ such that
\[
\mathbb{E}[\sup_{[0,T]}\|B\|^{p}_{C^{0,\eta}_\kappa}]\leqslant C_p.
\]
\end{theorem}
Using Lemma~\ref{lem:regZ} for the regularity of $\mathcal{Z}$ we obtain:
\begin{corollary}
\label{cor:maxregA}
 Under the assumptions of the previous theorem, for all $\eta\in(0,1/2)$
\[ 
 A \in L^\infty(0,T,C^{0,\eta}_\kappa ) \quad \text{for all sufficiently small }\kappa>0\;.
\]
Moreover, for all $p>0$ there exists a constant $C_p$ such that
\[
\mathbb{E}[\sup_{[0,T]}\|A\|^{p}_{C^{0,\eta}_\kappa}]\leqslant C_p.
\]
\end{corollary}

\begin{proof}[Proof of Theorem~\ref{thm:whreg}]
We proceed
by using the Sobolev embedding of $W^{\alpha,p}([-L,L])$ into $C^{0,\alpha-1/p}([-L,L])$ for $p+1>\alpha p>1$ and then an interpolation inequality.
As before, we need to take care of the scaling of the
constants with respect to $L$. Recall that $I=[-1,1]$. First,\n\\begin{equation*}\n\\|B\\|_{C^{0,\\alpha-1\/p}(I)} \n\\leqslant C\\|B\\|_{W^{\\alpha,p}(I)}\n\\leqslant C\\|B\\|^{1-\\alpha}_{L^p(I)}\\|B\\|^{\\alpha}_{W^{1,p}(I)}\\;.\n\\end{equation*}\nRescaling for $L\\geqslant1$ yields\n\\begin{equation*}\n\\begin{split}\n\\|B\\|_{C^{0,\\eta}[-L,L]}&=\\sup_{\\xi,\\zeta\\in [-L,L]}\\frac{|B(\\xi)-B(\\zeta)|}{|\\xi-\\zeta|^\\eta}+\\|B\\|_{C^0[-L,L]}\\\\\n&=\\sup_{x,y\\in I} \\frac{|B(xL)-B(yL)|}{|x-y|^\\eta L^\\eta}+\\|B(\\cdot L)\\|_{C^0(I)}\\\\\n &\\leqslant L^{-\\eta}\\|B(\\cdot L)\\|_{C^{0,\\eta}(I)} + \\|B(\\cdot L)\\|_{C^0(I)} \\;.\n\\end{split}\n\\end{equation*}\nNow we take $\\varsigma=\\alpha-1\/p$ (recall $\\alpha p>1$) and interpolate:\n\\begin{equation*}\n\\|B\\|_{C^{0,\\varsigma}[-L,L]} \\leqslant L^{-\\varsigma} \\|B(\\cdot L)\\|^\\alpha_{W^{1,p}(I)}\\cdot\\|B(\\cdot L)\\|^{1-\\alpha}_{L^{p}(I)} + \\|B(\\cdot L)\\|_{C^0(I)} \\;.\n\\end{equation*}\nNow we rescale back all the norms to the original length scale. 
For the first one we obtain\n\\begin{equation*}\n\\begin{split}\n \\|B(\\cdot L)\\|^p_{W^{1,p}(I)}&= L^p\\int_I |B'(Lx)|^p dx + \\int_I|B(Lx)|^p dx\\\\\n &= L^{p-1}\\int_{-L}^{L}|B'|^p dz + \\frac{1}{L} \\int_{-L}^{L}|B|^p dx\n \\leqslant L^{p-1}\\|B\\|^p_{W^{1,p}[-L,L]}\n\\end{split}\n\\end{equation*}\nand thus\n\\[\n \\|B(\\cdot L)\\|^\\alpha_{W^{1,p}(I)}\\leqslant L^{\\frac{p-1}{p}\\alpha}\\|B\\|^\\alpha_{W^{1,p}[-L,L]} \\;.\n\\]\nFor the second norm in the interpolated part we have, by a substitution,\n\\[\n\\|B(\\cdot L)\\|^p_{L^p(I)}\\leqslant L^{-1}\\|B\\|^p_{L^p[-L,L]}\n\\quad \\text{and}\\quad\n \\|B(\\cdot L)\\|^{1-\\alpha}_{L^{p}(I)} \\leqslant L^{-\\frac{1-\\alpha}{p}}\\|B\\|^{1-\\alpha}_{L^{p}[-L,L]} \\;.\n\\]\nWe can now put these estimates together and derive\n\\begin{equation*}\n \\begin{split}\n\t\\| B \\|_{C^{0, \\varsigma} [- L, L]} \n\t& \\leqslant L^{- \\varsigma} \\| B (\\cdot L)\\|^{\\alpha}_{W^{1, p}(I)} \\| B (\\cdot L) \\|^{1 - \\alpha}_{L^p(I)}+ \\|B(\\cdot L)\\|_{C^0[-1,1]}\\\\\n\t& \\leqslant L^{- \\varsigma} L^{\\frac{p - 1}{p} \\alpha} \\| B\\|^{\\alpha}_{W^{1, p} [- L, L]} \\cdot L^{- \\frac{1 - \\alpha}{p}} \\| B \\|^{1- \\alpha}_{L^p [- L, L]} + \\|B\\|_{C^0[-L,L]}\\\\\n\n\t& = \\| B \\|^{\\alpha}_{W^{1, p} [- L, L]} \\cdot \\| B \\|^{1 - \\alpha}_{L^p[- L, L]} + \\|B\\|_{C^0[-L,L]}\n\t\\end{split}\n\\end{equation*}\nwith $\\varsigma = \\alpha -\\frac{1}{p}$, \nwhere $\\varsigma \\in (0, 1)$, $\\alpha \\in (\\varsigma, 1)$ and $p > 1\/\\alpha$ sufficiently large. 
It is somewhat remarkable here that the constant above is $1$ and thus independent of $L$.\n\nNow by \\eqref{e:Wkpbound} we can change to the weighted space to\nobtain\n\\begin{equation*}\n\\begin{split}\n\t\\| B \\|_{C^{0, \\varsigma} [- L, L]} \n\t& \\leqslant \\| B \\|^{\\alpha}_{W^{1, p} [-L, L]} \\| B \\|^{1 - \\alpha}_{L^p [- L, L]} + \\|B\\|_{C^0[-L,L]}\\\\\n\t& \\leqslant \\| B \\|^{\\alpha}_{W^{1, p}_{\\varrho}} \\| B \\|^{1 -\\alpha}_{L^p_{\\varrho}} L^{\\varrho \/ p} + \\|B\\|_{C^0[-L,L]} \\;.\n\\end{split}\n\\end{equation*}\nNow for $\\varsigma \\in (0, 1)$, $\\varsigma =\n\\alpha - \\frac{1}{p}$, $\\alpha \\in (\\varsigma, 1)$ and $p > 1\/\\alpha$,\nusing the definition of the weighted H\\\"older norms from \\eqref{def:wHoe} \nand the equivalent representation of the $C^0_\\kappa$-norm (see \\eqref{e:equivC0}) we derive\n\\begin{equation*}\n\\begin{split}\n\\|B\\|_{C^{0,\\varsigma}_\\kappa} &= \\sup_{L\\geqslant1}\\{L^{-\\kappa}\\|B\\|_{C^{0,\\varsigma}[-L,L]}\\}\\\\\n&\\leqslant \\sup_{L\\geqslant1}\\{L^{\\varrho\/p-\\kappa}\\}\\| B\n\\|^{\\alpha}_{W^{1, p}_{\\varrho}} \\| B \\|^{1 - \\alpha}_{L^p_{\\varrho}} + \\|B\\|_{C^0_\\kappa},\n\\end{split}\n\\end{equation*}\nand as soon as we choose $p$ large enough that $\\varrho\/p-\\kappa\\leqslant 0$ we have finished the proof.\nOnce again the bounds on all the moments follow easily, as we have all moments of the terms on the right hand side.\n\\end{proof}\n\n\n\n\\section{Residual}\n\\label{sec:res}\n\nDefine the approximation \n\\begin{equation}\n\\label{def:uA}\nu_A(t,x) = \\varepsilon A(\\varepsilon^2 t, \\varepsilon x){\\mathrm{e}}^{ix} + c.c. 
\;,
\end{equation}
where $A$ is both a weak and a mild solution of the amplitude equation given by 
\begin{equation}
 \label{e:GLmild}
 A(T) = {\mathrm{e}}^{(4\partial_X^2+\nu) T } A(0) - 3\int_0^T {\mathrm{e}}^{(4\partial_X^2+\nu)(T-s) } A|A|^2(s) ds + \mathcal{Z}(T) \;.
\end{equation}
Define the residual
\begin{equation}
 \label{e:residual}
 \text{Res}(\varphi)(t) = \varphi(t) - {\mathrm{e}}^{t{\mathcal{L}}_\nu}\varphi(0) + \int_0^t {\mathrm{e}}^{(t-s){\mathcal{L}}_\nu} \varphi(s)^3 ds - \varepsilon^{3/2} W_{{\mathcal{L}}_\nu} (t) \;,
\end{equation}
which measures how close $u_A$ is to being a solution.
In this section we bound $\text{Res}(u_A)$;
this is a key step in proving the error estimate later.
First we plug in the definition of $u_A$ to obtain
\begin{align*}
\text{Res}(u_A)(t,\cdot) 
 = {}& \varepsilon \Big[A( \varepsilon^2 t, \varepsilon \cdot){\mathrm{e}}_1 - {\mathrm{e}}^{t{\mathcal{L}}_\nu}[A(0,\varepsilon \cdot){\mathrm{e}}_1] 
\\&
+ 3\varepsilon^2 \int_0^t {\mathrm{e}}^{(t-s){\mathcal{L}}_\nu} A|A|^2(\varepsilon^2 s ,\varepsilon \cdot) {\mathrm{e}}_1 ds \Big]\\
& + \varepsilon^3 \int_0^t {\mathrm{e}}^{(t-s){\mathcal{L}}_\nu}A^3(\varepsilon^2 s , \varepsilon \cdot ){\mathrm{e}}_3 ds
+ c.c.
- \varepsilon^{3/2} W_{{\mathcal{L}}_\nu} (t,\cdot)\;,
\end{align*}
where we used the notation 
\[
{\mathrm{e}}_n(x)={\mathrm{e}}^{inx}\;.
\]

Rescaling to the slow time-scale, we find
\begin{equation}
\label{e:Res1}
\begin{split}
 \text{Res}(u_A)(T\varepsilon^{-2}, \cdot) 
 = {}& \varepsilon A( T,\varepsilon \cdot ){\mathrm{e}}_1 - \varepsilon {\mathrm{e}}^{T\varepsilon^{-2}{\mathcal{L}}_\nu}[A(0,\varepsilon \cdot){\mathrm{e}}_1] 
 \\ &
 + \varepsilon \int_0^{T} {\mathrm{e}}^{(T-s)\varepsilon^{-2}{\mathcal{L}}_\nu} \left[A^3(s, \varepsilon \cdot){\mathrm{e}}_3 + 3A|A|^2(s, \varepsilon \cdot ) {\mathrm{e}}_1\right] ds 
 \\ &
 + c.c. - \varepsilon^{3/2} W_{{\mathcal{L}}_\nu} (T\varepsilon^{-2},\cdot)\;.
 \end{split}
\end{equation}
Now we need to transform this expression into the mild formulation \eqref{e:GLmild};
this will remove all the $\mathcal{O}(\varepsilon)$-terms.


\subsection{Stochastic Convolution}


The stochastic convolution in \eqref{e:Res1} can be replaced by (see \cite{BB:15})
\[ 
\varepsilon^{1/2} W_{{\mathcal{L}}_\nu} (t,x) \approx \mathcal{Z}(\varepsilon^2 t,\varepsilon x){\mathrm{e}}_1 +c.c. \quad\text{uniformly on $[0,T\varepsilon^{-2}]$ in } C^0_\gamma \;.
\]
The precise statement from \cite{BB:15} is:
\begin{theorem}
For all $T>0$, $\kappa >0$, $p>1$ and all sufficiently small $\gamma>0$ 
there are constants $C>0$ and $\varepsilon_0>0$ such that
\begin{equation*}
\mathbb{P}\Big(\sup_{[0,T\varepsilon^{-2}]}
\|\varepsilon^{ 1 / 2} W_{{\mathcal{L}}_\nu} (t, \cdot) - [ 
 \mathcal{Z} (\varepsilon^2 t, \varepsilon\,\cdot) {\mathrm{e}}_1 +
 \text{c.c.}]
\|_{C^0_{\gamma}}\geqslant\varepsilon^{1-\kappa}\Big)\leqslant C \varepsilon^p
\end{equation*}
for all $\varepsilon\in (0,\varepsilon_0)$.
\end{theorem}
We use the following short-hand notation in order to reformulate this theorem.
\begin{definition} 
	\label{def:0}
	We say that a real-valued stochastic process $X$ is 
 $\mathcal{O}(f_\varepsilon)$ with high probability, if there is a constant $C>0$ such that for all $p>1$ there is a constant $C_p>0$ such that for all $\varepsilon\in(0,1)$
\[ \mathbb{P}\Big(\sup_{[0,T]} |X| \geqslant C f_\varepsilon\Big) \leqslant C_p\varepsilon^p\;.
\]
\end{definition}
\begin{lemma} 
\label{lem:stoch}
We can write
	\[
		\varepsilon^{3/2} W_{{\mathcal{L}}_\nu} (t,\cdot) = \varepsilon \mathcal{Z}(\varepsilon^2 t, \varepsilon\cdot){\mathrm{e}}_1 +c.c. + E_s(t)
	\]
	where $\|E_s(\varepsilon^{-2}\cdot)\|_{C^0_{\gamma}}=\mathcal{O}(\varepsilon^{1-\kappa})$ for any $\kappa>0$ in the sense of the previous definition.
\end{lemma}


\subsection{Exchange Lemmas and estimates for the residual}


Let us now come back to \eqref{e:Res1}.
In the following we present lemmas to exchange the Swift-Hohenberg \nsemigroup generated by ${\\mathcal{L}}_\\nu={\\mathcal{L}}_0 + \\varepsilon^2 \\nu$ with \nthe Ginzburg-Landau semigroup generated by $\\Delta_\\nu=4\\partial_x^2 + \\nu$.\nThe first one is for the initial condition, which is the most difficult one, as we cannot allow for a pole in time.\nThe second one is for the term in \\eqref{e:Res1} that contains $A|A|^2$, while the third one shows that the term in \\eqref{e:Res1} with $A^3$ is negligible.\nAfter applying all the exchange Lemmas to \\eqref{e:Res1} we will see that in \\eqref{e:ResExch} below\nall of the remaining terms of order $\\mathcal{O}(\\varepsilon)$ \nwill cancel due to the mild formulation of the Amplitude Equation in \\eqref{e:GLmild}.\n\n\nWe will state all lemmas here and first prove the bound on the residual, before verifying the Exchange Lemmas.\n\n\\begin{lemma}[Exchange Lemma - IC]\n\\label{lem:lemma1} Let $A_0\\in C_\\kappa^{0,\\alpha}$ for some $\\alpha\\in(\\frac12,1)$ and sufficiently small $ \\kappa>0$. \n\tThen the following holds \n\t\\[\n\t\te^{t{\\mathcal{L}}_\\nu}[A_0(\\varepsilon \\cdot){\\mathrm{e}}_1]=(e^{\\Delta_\\nu T}A_0)(\\varepsilon \\cdot){\\mathrm{e}}_1 + E_1(A_0)\n\t\\]\n\twhere $T=\\varepsilon^2 t$, and the error $E_1$ is bounded uniformly in time for all small $\\kappa>0$ by\n\t\\[\n\t\t\\|E_1\\|_{C^0_\\kappa} \\leqslant C \\varepsilon^{\\alpha-\\frac12} \\|A_0\\|_{ C_\\kappa^{0,\\alpha}}\\;.\n\t\\]\n\\end{lemma}\n\n\\begin{remark}\nHere we allow some dependence on higher norms of the initial conditions, i.e. 
we assume more regularity for the initial condition in order to avoid the pole in time that appears 
in the exchange lemmas below.
\end{remark}
\begin{remark}
For the solution $A$ of the amplitude equation we showed in Section~\ref{sec:reg} that it splits into a more regular part $B$ and the Gaussian part $\mathcal{Z}$.
The process $B$ has the $W^{1,p}_\varrho$-regularity assumed for the initial condition $A(0)=A_0$ in the previous exchange lemma.
The term $\mathcal{Z}$ is less regular, but thanks to the fact that it is Gaussian, we can still prove the exchange lemma for initial conditions $A(0)$ which split into a more regular part and a Gaussian part (see the application of Lemma 3.5 in the proof of Theorem 4.2 of \cite{BB:15}).

Thus for our result we could take initial conditions that are as regular as the solution of the amplitude equation, but
in order to simplify the statement of the result we refrain from adding the less regular Gaussian part here.
\end{remark}
The following lemma is applied in \eqref{e:Res1} to the term in the residual associated to the nonlinearity $|A|^2A$,
in order to exchange the semigroups there.

\begin{lemma}[Exchange Lemma I]
\label{lem:lemma2}
	For any function $D \in C^{0,\alpha}_\kappa$ with small $\kappa>0$ and $\alpha\in(0,1)$, we have for all $t \in[0,T_0\varepsilon^{-2}]$
	\[
		e^{t{\mathcal{L}}_\nu}[D(\varepsilon \cdot){\mathrm{e}}_1] = (e^{\Delta_\nu T}D)(\varepsilon \cdot){\mathrm{e}}_1 + E_2(T,D)
	\]
	with $T=\varepsilon^2t$, where the error term $E_2$ is bounded by
	\[
	\|E_2(T,D) \|_{C^{0}_\kappa} \leqslant C \left[(\varepsilon^{2\gamma-1}+\varepsilon^{\alpha-\kappa} )T^{-1/2}+\varepsilon T^{-3/4}\right] \|D\|_{C^{0,\alpha}_\kappa}\;.
	\]
\end{lemma}
The third lemma is needed in \eqref{e:Res1} for the term in the residual associated to $A^3$, which should be small.
\begin{lemma}[Exchange Lemma II]
\label{lem:lemma3}
	For any function
$D \\in C^{0,\\alpha}_\\kappa$ for all $\\kappa>0$ and $\\alpha\\in(0,1)$, we have for all $t \\in[0,T_0\\varepsilon^{-2}]$\n\t\\[\n\t\te^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_3] = E_3(T,D) \n\t\\]\t\n\twhere for any $\\gamma\\in[1\/8,1\/2)$ \n\tthe error term $E_3$ on the slow time-scale $T=\\varepsilon^2t$\n\tis bounded by \n\t\\[\\|E_3(T, D)\\|_{C^{0}_\\kappa} \\leqslant C ( \\varepsilon^{2\\gamma-1\/2}+ \\varepsilon^{\\alpha-\\kappa})T^{-1\/2}\n\t\\|D\\|_{C^{0,\\alpha}_\\kappa}\\;.\n\t\\]\n\\end{lemma}\n\nNow we apply all Exchange Lemmas \\ref{lem:lemma1}, \\ref{lem:lemma2}, and \\ref{lem:lemma3} \ntogether with the result for the stochastic convolution from Lemma \\ref{lem:stoch} \nto the definition of $\\text{Res}(u_A)$ from~\\eqref{e:Res1} to obtain\n\\begin{equation}\n\\label{e:ResExch}\n\\begin{split}\n\t\\text{Res}(u_A)(t) ={}& \\varepsilon \\Big[A(\\varepsilon \\cdot, \\varepsilon^2 t ) - {\\mathrm{e}}^{\\Delta_\\nu T}[A(\\varepsilon \\cdot, 0)]\n\t\\\\ &\n\t+ 3\\varepsilon^2 \\int_0^t {\\mathrm{e}}^{(T-\\varepsilon^2s)\\Delta_\\nu} A|A|^2(\\varepsilon^2 s,\\varepsilon \\cdot ) ds - \\mathcal{W}_{\\Delta_\\nu}(\\varepsilon^2 t, \\varepsilon\\cdot)\\Big]{\\mathrm{e}}_1 + c.c.\\\\\n\t& + \\varepsilon [ E_1(A_0)+ E_s] \n\t+ \\varepsilon \\int_0^T \\left[E_2\\left(T-S, A|A|^2\\right)+ E_3\\left(T-S, A^3\\right)\\right] \\; dS.\n\\end{split}\n\\end{equation}\n\nWith this representation we are done. 
By substituting $S=s\\varepsilon^2$ in the integral, \nwe obtain that the whole bracket $[\\cdots]{\\mathrm{e}}_1$ is the mild formulation of the Ginzburg-Landau equation (see~\\eqref{e:GLmild}) and thus cancels.\nUsing the bounds on the error terms and the regularity of $A$, we obtain the main result.\n\nNote that all poles from the error terms are integrable and that we choose $\\alpha, \\gamma<1\/2$, arbitrarily close to $1\/2$.\n\n\\begin{theorem}[Residual]\n\\label{thm:res}\nLet $A$ be a solution of the amplitude equation \\eqref{e:GL} and assume that there is a $\\varrho>2$ such that for all $p>1$ one has $A(0)\\in W^{1,p}_{\\varrho}$.\nThen for the approximation $u_A$ defined in \\eqref{def:uA} and the residual defined in \\eqref{e:residual} we have\nfor all small $\\kappa>0$ that\n\\begin{equation}\n\\label{e:thmmain}\n\\| \\text{\\rm Res}(u_A)( \\varepsilon^{-2}\\;\\cdot)\\|_{C^0_\\kappa} =\\mathcal{O}(\\varepsilon^{\\frac32 - 2\\kappa }) \\;.\n\\end{equation}\n\\end{theorem}\n\n\\begin{remark}\n\tNote that under the assumptions of the previous theorem, \n\tby the regularity results in Section \\ref{sec:reg}, we have for all small $\\kappa>0$, $\\gamma\\in(0,1)$ and $\\alpha\\in(0,\\frac12)$ that $A(0)\\in C^{0,\\gamma}_\\kappa$ and $A\\in L^\\infty([0,T], C^{0,\\alpha}_\\kappa)$.\n\\end{remark}\n\n\n We remark without proof that one could replace $- 2\\kappa $ on the right hand side of \n \\eqref{e:thmmain} by an arbitrarily small $\\delta>0$. But since $\\kappa$ is also small,\n we state this simpler but weaker version. \n\n\n\n\\subsection{Fourier Estimates}\n\n\n\n\nNow we present three results that have the same focus, as\nthey all bound convolution operators with a kernel such that the support of the Fourier transform is bounded away from $0$. 
\nThe bounds are established in weighted H\\\"older norms and are the backbone of the proofs of the exchange lemmas.\n \nIn the first one, Lemma \\ref{lem:guidos}, we consider a smooth projection onto a region in Fourier space far away from the origin.\nUsing H\\\"older regularity, we show that this is an operator with small norm, when considered from $C^{0,\\alpha}_\\kappa$ to $C^0_\\kappa$.\nLater in Lemma \\ref{lem:guidoext} we modify the proof to give bounds on convolution operators using the $H^2$-norm of the Fourier transform of the kernel.\nIn Corollary \\ref{cor:new} we finally modify the result even further, showing that we do not need the $L^2$-norm of the Fourier transform of the kernel. \nWhile Lemma \\ref{lem:guidoext} will be sufficient for most of the estimates used in the proofs of the exchange lemmas, on one occasion we need Corollary \\ref{cor:new}.\n\n\\begin{lemma}\\label{lem:guidos}\n\tLet $\\widehat{P}:\\mathbb{R}\\to [0,1]$ be a smooth function with bounded support, such that $0\\notin\\textup{supp}(\\widehat{P})$, and let $P$ be its inverse Fourier transform. Let also $D\\in \\mathcal{C}^{0,\\alpha}_\\kappa$, with $\\alpha\\in (0,1)$, $\\kappa>0$. \n\tThen \n\t\\[\n\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\leqslant C\\varepsilon^\\alpha\\|D\\|_{C^{0,\\alpha}_\\kappa}.\n\t\\]\n\\end{lemma}\n\n\\begin{proof}\n\tLet us define $\\widehat{G}=1-\\widehat{P}$. 
Then, by taking the inverse Fourier transform we have for $x\\in \\mathbb{R}$\n\t\\[\n\t\tP(x)+G(x)=\\sqrt{2\\pi}\\delta_0(x).\n\t\\]\n\tNote also that $\\widehat{G}(0)=1$.\n\t\n\tNow \n\t\\begin{equation*}\n\t\\begin{split}\n\t\tP\\ast D(\\varepsilon\\cdot) & = \\sqrt{2\\pi}D(\\varepsilon\\cdot) - G\\ast D(\\varepsilon\\cdot)\\\\\n\t\t& = D(\\varepsilon\\cdot)\\int_{\\mathbb{R}}G(z)dz - \\int_{\\mathbb{R}}G(z)D(\\varepsilon(\\cdot-z))dz\\\\\n\t\t& = \\varepsilon^\\alpha\\int_{\\mathbb{R}}G(z)|z|^\\alpha\\frac{D(\\varepsilon\\cdot)-D(\\varepsilon(\\cdot-z))}{\\varepsilon^\\alpha|z|^\\alpha}dz.\n\t\\end{split}\n\t\\end{equation*}\n\t%\n\tLet us consider\n\t\\[\n\t\t\\begin{split}\n\t\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0[-L,L]} & \\leqslant \\varepsilon^\\alpha\\int_{\\mathbb{R}}|G(z)||z|^\\alpha\\|D\\|_{C^{0,\\alpha}[-L_z,L_z]}dz\\\\\n\t\t\t& \\leqslant \\varepsilon^\\alpha\\int_{\\mathbb{R}}|G(z)||z|^\\alpha L_z^\\kappa\\|D\\|_{C^{0,\\alpha}_\\kappa}dz,\n\t\t\\end{split}\n\t\\]\n\twhere $L_z=\\max\\{\\varepsilon L+\\varepsilon |z|,2\\}$. We can now divide both sides by $L^\\kappa$ to obtain\n\t\\begin{equation}\\label{e:so}\n\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\leqslant \\varepsilon^\\alpha\\int_{\\mathbb{R}}|G(z)||z|^\\alpha \\Big(\\frac{L_z}{L}\\Big)^\\kappa dz \\|D\\|_{C^{0,\\alpha}_\\kappa}.\n\t\\end{equation}\n\tNow recall $\\varepsilon\\in (0,1)$ and $L>1$, so we derive\n\t\\[\n\t\t\\frac{L_z}{L}=\\max\\Big\\{\\varepsilon + \\varepsilon\\frac{|z|}{L},\\frac{2}{L}\\Big\\}\\leqslant 2+|z| \\;.\n\t\\]\n\tGoing back to \\eqref{e:so}\n\t\\[\n\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\leqslant \\varepsilon^\\alpha\\int_{\\mathbb{R}}|G(z)||z|^\\alpha (2+|z|)^\\kappa dz \\|D\\|_{C^{0,\\alpha}_\\kappa}.\n\t\\]\n\tThe integral is actually finite and bounded by a constant $C_{\\alpha,\\kappa}$, as $\\widehat{G}$ is sufficiently smooth. 
\n\tIndeed, every derivative of $\\widehat{G}$ has bounded support, so $G$ decays sufficiently fast for the integral to exist.\n\n\\end{proof}\n\nA simple modification of the previous proof yields the following lemma:\n\n\\begin{lemma}\n \\label{lem:guidoext} \n\tLet $\\widehat{P}:\\mathbb{R}\\to [0,1]$ be a smooth function and $P$ its inverse Fourier transform. \n\tLet also $D\\in \\mathcal{C}^{0,\\alpha}_\\kappa$, with $\\alpha\\in (0,1)$, $\\kappa\\in(0,1\/2)$. \n\tThen \n\t\\[\n\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\leqslant C \\varepsilon^\\alpha \\big[ \\|\\widehat{P}\\|_{L^2}^{1-\\alpha} \\|\\widehat{P}'\\|_{L^2}^\\alpha + \\|\\widehat{P}''\\|_{L^2} \\big]\\|D\\|_{C^{0,\\alpha}_\\kappa} + C|\\widehat{P}(0)|\\|D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\;.\n\t\\]\n\\end{lemma}\n\n\\begin{proof}\n First note that \n \\[\n[ P\\ast D(\\varepsilon\\cdot)](x) = \\int_{\\mathbb{R}} P(y) [D(\\varepsilon (x-y)) - D(\\varepsilon x) ] dy + \\sqrt{2\\pi}\\,\\widehat{P}(0) D(\\varepsilon x)\\;.\n \\]\n Then we can proceed as in the proof before to obtain \n \\begin{equation}\n \\label{e:cent}\n \\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\n\\leqslant C\\varepsilon^\\alpha \\int_{\\mathbb{R}}|P(z)||z|^\\alpha (2+|z|)^\\kappa dz \\|D\\|_{C^{0,\\alpha}_\\kappa} + C|\\widehat{P}(0)|\\|D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\\;.\n \\end{equation}\nNow we bound the integral. For $|z|\\geqslant1$, as $\\alpha+\\kappa < 3\/2$,\n\\[\n\\begin{split}\n \\int_{\\{|z|\\geqslant1\\}} |P(z)||z|^{\\alpha+\\kappa} dz \n & \\leqslant \\Big(\\int_{\\mathbb{R}}|P(z)|^2 |z|^4 dz \\Big)^{\\frac12} \n \\Big(\\int_{\\{|z|\\geqslant1\\}} |z|^{2(\\alpha+\\kappa-2)} dz\\Big)^{\\frac12}\\\\\n & \\leqslant C \\|\\widehat{P}''\\|_{L^2}\\;,\n\\end{split}\n\\]\nwhere we used Plancherel's theorem in the last step. 
For $|z|\\leqslant 1$ we have \n\\[\n\\begin{split}\n \\int_{\\{|z|\\leqslant 1\\}} |P(z)||z|^{\\alpha} dz \n & \\leqslant C \\Big(\\int_{\\{|z|\\leqslant 1\\}} |P(z)|^2|z|^{2\\alpha} dz\\Big)^{1\/2} \\\\\n&\\leqslant C \\Big(\\int_{\\mathbb{R}}|P(z)|^2 dz \\Big)^{(1-\\alpha)\/2} \n \\Big(\\int_{\\mathbb{R}} |P(z)|^2 |z|^{2} dz\\Big)^{\\alpha\/2}\\\\\n & \\leqslant C \\|\\widehat{P}\\|_{L^2}^{1-\\alpha} \\|\\widehat{P}'\\|_{L^2}^\\alpha\\;,\n\\end{split}\n\\]\nwhere we used first the Cauchy-Schwarz inequality, then H\\\"older's inequality with $p=1\/(1-\\alpha)$ and $q=1\/\\alpha$, and finally Plancherel's theorem.\n\nPutting together all estimates finishes the proof.\n\\end{proof}\n\nUnfortunately, the previous two lemmas are not sufficient for the proof of Lemma~\\ref{lem:lemma1}.\nWhen the support of $\\widehat{P}$ is unbounded we have problems with the $L^2$-norm of $\\widehat{P}$, \nwhile higher derivatives are easier to bound. \n\n\\begin{corollary}\n \\label{cor:new}\n Let $\\widehat{P}:\\mathbb{R}\\to [0,1]$ be a smooth function, $P$ its inverse Fourier transform, \n and suppose that there is some $\\delta>0$ such that $\\text{supp}(\\widehat{P}) \\cap (-\\delta,\\delta)=\\emptyset$. \n Fix $\\alpha\\in (0,1)$, $\\kappa\\in(0,1\/2)$ and suppose that there is a $\\gamma>1$ such that $\\alpha+\\kappa+\\gamma\/2 \\in(1,2)$. 
\n \nThen for all $D\\in \\mathcal{C}^{0,\\alpha}_\\kappa$ we have\n\t\\[\n\t\t\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\n\t\t\\leqslant C \\varepsilon^\\alpha \\|\\widehat{P}'\\|_{L^2}^{2-\\alpha-\\kappa-\\frac{\\gamma}2} \\|\\widehat{P}''\\|_{L^2}^{\\alpha+\\kappa+\\frac{\\gamma}2-1 } \\|D\\|_{C^{0,\\alpha}_\\kappa}\\;.\n\t\\]\n\\end{corollary}\n\n\n\\begin{proof}\n\tNote first that $\\widehat{P}(0)=0$, as the support of $\\widehat{P}$ avoids the origin. Thus from~\\eqref{e:cent} we obtain, using H\\\"older's inequality,\n\\[\n\\begin{split} \n\\|P\\ast D(\\varepsilon\\cdot)\\|_{C^0_\\kappa}\n& \\leqslant C\\varepsilon^\\alpha \\int_{\\mathbb{R}}|P(z)||z|^\\alpha (2+|z|)^\\kappa dz \\|D\\|_{C^{0,\\alpha}_\\kappa} \\\\\n&\\leqslant C\\varepsilon^\\alpha\\int_{\\{|z|\\geqslant \\delta\\}} |P(z)||z|^{\\alpha+\\kappa} dz\\|D\\|_{C^{0,\\alpha}_\\kappa} \\\\\n& \\leqslant C\\varepsilon^\\alpha \\Big(\\int_{\\{|z|\\geqslant \\delta\\}} |P(z)|^2|z|^{2\\alpha+2\\kappa+\\gamma} dz\\Big)^{1\/2} \\|D\\|_{C^{0,\\alpha}_\\kappa}\\\\\n& \\leqslant C\\varepsilon^\\alpha \\Big(\\int_{\\mathbb{R}} |zP(z)|^2|z|^{2\\alpha+2\\kappa+\\gamma-2} dz\\Big)^{1\/2} \\|D\\|_{C^{0,\\alpha}_\\kappa}.\n\\end{split}\n\\]\nNow as the exponent $2\\alpha+2\\kappa+\\gamma-2\\in(0,2)$, \nwe can use H\\\"older's inequality to bound the integral above by the integral over $|z|^2|P(z)|^2$ and $|z|^4|P(z)|^2$, \nwhich in turn gives the $L^2$-norm of $\\widehat{P}'$ and $\\widehat{P}''$.\nWe obtain\n\\[\\begin{split} \n\\int_{\\mathbb{R}} |zP(z)|^2|z|^{2\\alpha+2\\kappa+\\gamma-2} dz\n&\\leqslant \\Big( \\int_{\\mathbb{R}} |zP(z)|^2 dz \\Big)^{2-\\alpha-\\kappa-\\frac{\\gamma}2} \n\\Big( \\int_{\\mathbb{R}} |z^2P(z)|^2 dz \\Big)^{\\alpha+\\kappa+\\frac{\\gamma}2-1 }\\\\\n&\\leqslant \\|\\widehat{P}'\\|_{L^2}^{2(2-\\alpha-\\kappa-\\frac{\\gamma}2)} \\|\\widehat{P}''\\|_{L^2}^{2(\\alpha+\\kappa+\\frac{\\gamma}2-1) },\n\\end{split}\\]\nwhich implies the claim.\n\\end{proof}\n\n\n\\subsection{Applications of Fourier estimates}\n\n\nNow we rephrase the bounds of the previous subsection to bound 
operators given by a Fourier multiplier, as for example\nin the statement of the Exchange Lemmas~\\ref{lem:lemma1}, \\ref{lem:lemma2}, and \\ref{lem:lemma3}. \nAnother example we have in mind is the bound on the semigroup generated by the Swift-Hohenberg operator, which is presented later in Corollary \\ref{cor:SGbound}.\n\nIn the first step we use regularity of the kernel to bound the operator.\n\\begin{lemma}\n\\label{lem:ours}\n\tLet $m>\\frac{1}{2}$ and $\\mathcal{H}\\cdot=H\\star\\cdot$ be an operator such that the Fourier transform\n\t$\\hat{H}$ of the kernel $H$ is in $H^m(\\mathbb{R})$.\n\tFor any $\\kappa \\in (0, m -\\frac12)$ there is a constant $C$ such that \n\t for all $u\\in C^0_\\kappa$ we have \n\t\\[ \n\t\\|\\mathcal{H} u\\|_{C^0_\\kappa} \\leqslant C \\|\\hat{H}\\|_{H^m(\\mathbb{R})} \\| u\\|_{C^0_\\kappa} \\;.\n\t\\]\n\\end{lemma}\n\n\\begin{proof} By the definition of the convolution\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\mathcal{H} u (x) &= \\int_{\\mathbb{R}} H(x-y) u(y) dy = \\int_{\\mathbb{R}} H(z) u(x-z) dz \\\\\n\t&= \\int_{\\mathbb{R}} H(z) (1+(x-z)^2)^{\\kappa\/2} w_\\kappa(x-z) u(x-z) dz.\n\t\\end{split}\n\t\\end{equation*}\n\tNow we use\n\t\\[1+(x-z)^2 \\leqslant 2(1+z^2)(1+x^2)\\]\n\tto obtain\n\t\\begin{equation*}\n\\begin{split}\n\t|\\mathcal{H} u (x)|\n\t&\\leqslant C(1+x^2)^{\\kappa\/2} \\int_{\\mathbb{R}}|H(z)| (1+z^2)^{\\kappa\/2} dz \\| u\\|_{C^0_\\kappa}\\\\\n\t& \\leqslant C(1+x^2)^{\\kappa\/2} \\Big( \\int_{\\mathbb{R}}(1+z^2)^{-m+\\kappa} dz\\Big)^{\\frac{1}{2}} \\Big( \\int_{\\mathbb{R}}|H(z)|^2 (1+z^2)^{m} dz\\Big)^{\\frac{1}{2}} \\| u\\|_{C^0_\\kappa}.\n\\end{split}\n\t\\end{equation*}\n\t%\n\tWe finish the proof by noting that $ m -\\kappa >\\frac12$ by assumption, and that by Plancherel's theorem\n\t\\[\n\t\\int_{\\mathbb{R}}|H(z)|^2 (1+z^2)^{m} dz =\\|\\hat{H}\\|^2_{H^m(\\mathbb{R})} \\;.\\qedhere\n\t\\]\n\\end{proof}\nIn order for the previous lemma to be useful in our case, we have to control the $H^m$-norm of the Fourier transform of the kernel. 
\nThis is straightforward for the Swift-Hohenberg semigroup \nif we add a smooth projection on bounded Fourier domains. \n\\begin{lemma}\\label{lem:thisone}\n\tFix $m\\in[0,1)$ and $\\ell\\in\\mathbb{Z}$. Consider $\\widehat{P} :\n\t\\mathbbm{R} \\rightarrow [0, 1]$ smooth with $\\textup{supp} (\\widehat{P}) \\subset [\\ell - 2 \\delta, \\ell + 2\n\t\\delta]$ for some $0<\\delta < 1 \/ 2$. Then it holds that\n\t\\[\n\t\t\\sup_{t \\in [0, T_0 \\varepsilon^{-2}]} \\|\\widehat{P}e^{\\lambda_\\nu t}\\|_{H^m}\\leqslant C \\varepsilon^{-\\max\\{0,m-\\frac12\\}}\n\t\\]\n\twhere $\\lambda_\\nu(k)=-(1-k^2)^2+\\nu\\varepsilon^2= - (1 - k)^2 (1+ k)^2 + \\nu \\varepsilon^2$ is the Fourier-symbol of the operator ${\\mathcal{L}}_\\nu$.\n\\end{lemma}\n\n\\begin{corollary}\n\\label{cor:SGbound}\nConsider the Fourier-projection $\\mathcal{P}=P\\star\\cdot$ with $\\widehat{P}$ as in the lemma above.\nThen for $\\kappa \\in (0, \\frac14)$ and $m=\\frac12+\\kappa$ we obtain \n\n\t\\[\n\t\t\\|e^{\\mathcal{L}_\\nu t}\\mathcal{P}u\\|_{C^0_\\kappa}\\leqslant C\\|\\widehat{P}e^{\\lambda_\\nu t}\\|_{H^m}\\|u\\|_{C^0_\\kappa}\n\t\t\\leqslant C \\varepsilon^{-\\kappa} \\|u\\|_{C^0_\\kappa}\n\t\\]\n\tfor all $t\\in[0,T_0\\varepsilon^{-2}]$.\n\\end{corollary}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:thisone}]\nFor the proof we only focus on the most complicated case $\\ell=1$, i.e. with $\\textup{supp}(\\widehat{P})\\subset[1-2\\delta,1+2\\delta]$. 
The \ncase $\\ell=-1$ follows almost verbatim, and for $|\\ell|\\not=1$ the proof is actually much simpler, \nas $\\lambda_\\nu \\leqslant -c <0$ is strictly negative there and we obtain exponentially small terms.\n\n A straightforward calculation shows\n\t\\begin{equation*}\n\t\\begin{split}\n\t\t\\| \\widehat{P} e^{\\lambda_\\nu t} \\|^2_{H^1} & = \\| \\widehat{P} (\\cdot - 1)\n\t\te^{\\lambda_\\nu (\\cdot - 1) t} \\|^2_{H^1 (- 2 \\delta, 2 \\delta)}\\\\\n\t\t& \\leqslant C\n\t\t\\int_{- 2 \\delta}^{2 \\delta} e^{- 2 (2 - k)^2 k^2 t} d k \\\\\n\t\t& \\qquad + C \\int_{- 2 \\delta}^{2 \\delta} 4 (2 (k - 2) k^2 + (k - 2)^2 k)^2 t^2\n\t\te^{- 2 (2 - k)^2 k^2 t} d k\\\\\n\t\t& \\leqslant C \\int_{- 2 \\delta}^{2 \\delta}\n\t\te^{- C_{\\delta} k^2 t} d k + C \\int_{- 2 \\delta}^{2 \\delta} (k^2 + k)^2\n\t\tt^2 e^{- C_{\\delta} k^2 t} d k\\\\\n\t\t& \\leqslant C + C \\int_{- 2 \\delta}^{2 \\delta} k^2\n\t\tt^2 e^{- C_{\\delta} k^2 t} d k.\n\t\\end{split}\n\t\\end{equation*}\n\tNow we have to consider two cases, depending on $t$. First if $t \\leqslant 1$, then\n\talso the second integral can be bounded by a constant $C$. If $t > 1$, then we can\n\tcontinue with the substitution $l = \\sqrt{t} k$, which gives\n\t$d l = \\sqrt{t} d k$, and we derive\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\\| \\widehat{P} e^{\\lambda_\\nu t} \\|^2_{H^1} & \\leqslant C + C \\int_{- 2\n\t\t\t\\delta \\sqrt{t}}^{2 \\delta \\sqrt{t}} l^2 \\sqrt{t} e^{- Cl^2} d l\n\t \\leqslant C + C \\sqrt{t} \\int_{- \\infty}^{\\infty} l^2 e^{- Cl^2} d l\\\\\n\t\t& \\leqslant C \\sqrt{t} \\;.\n\t\t\\end{split}\n\t\\end{equation*}\n\tThus\n\t\\[ \\| \\widehat{P} e^{\\lambda_\\nu t} \\|_{H^1}^2 \\leqslant \\left\\{\n\t\\begin{array}{ll}\n\tC \\sqrt{t} \\;, & t \\geqslant 1\\\\\n\tC \\;, & t \\leqslant 1\n\t\\end{array} \\right. 
\\;.\\]\n\tIn a similar way, we can consider the bounds in $L^2$:\n\t\\begin{equation*}\n\t\t\\| \\widehat{P} e^{\\lambda_\\nu t} \\|^2_{L^2} \\leqslant C \\int_{- 2\n\t\t\t\\delta}^{2 \\delta} e^{- C_{\\delta} k^2 t} d k \n\t\t = C \\int_{- 2 \\delta \\sqrt{t}}^{2 \\delta \\sqrt{t}} \\frac{1}{\\sqrt{t}}\n\t\te^{- C_{\\delta} l^2} d l \n\t\t \\leqslant \\left\\{\n\t\\begin{array}{ll}\n\tC t^{-1\/2} \\;, & t \\geqslant 1\\\\\n\tC \\;,& t \\leqslant 1\n\t\\end{array} \\right. \n\t\t\\;.\n\t\\end{equation*}\n\tWe finally get to the bounds in $H^m$ by interpolation:\n\t\\begin{equation*}\n\t\t\\| \\widehat{P} e^{\\lambda_\\nu t} \\|_{H^m} \n\t\t \\leqslant C \\| \\widehat{P}e^{\\lambda_\\nu t} \\|^{1 - m}_{L^2} \\| \\widehat{P} e^{\\lambda_\\nu t} \\|^m_{H^1}\n\t\t \\leqslant \\left\\{ \\begin{array}{ll}\n\t\t\tC\\;,& t \\leqslant 1\\\\\n\t\t\tCt^{- \\frac{1}{4}+\\frac{m}{2}}\\;,& t \\geqslant 1\n\t\t\\end{array} \\right\\} \\leq C \\varepsilon^{-\\max\\{0,m-\\frac12\\}}\n\t\\end{equation*}\n\tfor all $t\\leqslant T_0\\varepsilon^{-2}$.\n\\end{proof}\n\n\n\\subsection{Proof of Exchange Lemma II}\n\nFor the proof of Lemma~\\ref{lem:lemma3} we write the differences of semigroups as convolution operators.\n\nFirst we define a smooth Fourier-multiplier that cuts out regions around $\\pm1$ in Fourier space, \nwhere the eigenvalues of the Swift-Hohenberg operator are close to $0$.\nFix a small $\\delta>0$ independent of $0<\\varepsilon\\ll 1$ and consider a smooth function $\\widehat{P}:\\mathbb{R}\\to[0,1]$ such that \n$\\textup{supp}(\\widehat{P})=[-1-2\\delta, -1+2\\delta]\\cup[1-2\\delta, 1+2\\delta]$\nand $\\widehat{P}=1$ on $[-1-\\delta, -1+\\delta]\\cup[1-\\delta, 1+\\delta]$.\nWe define $\\hat{Q}=1-\\widehat{P}^2$ and let $\\mathcal{Q}=I-\\mathcal{P}^2$.\n\n\nNow we obtain\n\\[ \ne^{T \\varepsilon^2 \\mathcal{L}_{\\nu}} [D (\\varepsilon \\cdot) \\mathrm{e}_3] \n=\ne^{T \\varepsilon^2 \\mathcal{L}_{\\nu}} \\mathcal{P} [\\mathcal{P}D\n(\\varepsilon \\cdot) 
\\mathrm{e}_3] + e^{T \\varepsilon^2 \\mathcal{L}_{\\nu}}\n\\mathcal{Q} [D (\\varepsilon \\cdot) \\mathrm{e}_3]\n\\]\nand we bound the two terms separately. For the first term we use the semigroup\nestimate from Corollary \\ref{cor:SGbound} (with $\\ell=\\pm1$) \ntogether with Lemma \\ref{lem:guidos}.\nNote that for the application we need to split the estimate\ninto two terms: one concentrated around $1$ and the other around $-1$.\nWe obtain\n\\[ \n\\| e^{T \\varepsilon^2 \\mathcal{L}_{\\nu}} \\mathcal{P} [\\mathcal{P}D\n(\\varepsilon \\cdot) \\mathrm{e}_3] \\|_{C_{\\kappa}^0}\n\\leqslant C \\varepsilon^{-\\kappa}\n\\| \\mathcal{P}D\n(\\varepsilon \\cdot) \\mathrm{e}_3 \\|_{C_{\\kappa}^0}\n\\leqslant C\n\\varepsilon^{\\alpha-\\kappa} \\| D \\|_{C_{\\kappa}^{0, \\alpha}}\\;.\n\\]\nFor the second term we need some more work. We start by writing\n\\[ \ne^{T \\varepsilon^2 \\mathcal{L}_{\\nu}} \\mathcal{Q} [D (\\varepsilon \\cdot)\n\\mathrm{e}_3] = (\\mathcal{H}_T D) (\\varepsilon \\cdot) \\mathrm{e}_3\n\\]\nwhere the convolution operator $\\mathcal{H}_T=H_T\\star\\cdot$ has a kernel $H_T$ with Fourier transform\n\\[ \n{\\hat{H}_T} (k) = e^{\\nu T} e^{- T \\varepsilon^{- 2} (4 + k \\varepsilon)^2 (2 + k\n\t\\varepsilon)^2} \\hat{Q} (3 + \\varepsilon k) \\;.\n\\]\nIn view of Lemma \\ref{lem:ours} we only need to bound the $H^1$-norm of ${\\hat{H}_T}$.\nTherefore, we split the $H^1$-norm into two different areas in Fourier space\n\\[ \\| {\\hat{H}_T} \\|^2_{H^1 (\\mathbbm{R})} \\leqslant 2 \\| {\\hat{H}_T} \\|^2_{H^1 \\left( \\left[ -\n\t\\frac{c}{\\varepsilon}, \\frac{c}{\\varepsilon} \\right] \\right)} + 2 \\| {\\hat{H}_T}\n\\|^2_{H^1 \\left( \\left[ - \\frac{c}{\\varepsilon}, \\frac{c}{\\varepsilon}\n\t\\right]^C \\right)}\\;.\n\t\\]\nNote first that both $\\hat{Q}$ and $\\hat{Q}'$ are bounded smooth functions independent of $\\varepsilon$.\nThen we use in the first term that $\\varepsilon | k | \\leqslant C$ and that $T$ is bounded.\n\nThus ${\\hat{H}_T}$ is 
uniformly bounded on $[ -\\frac{c}{\\varepsilon}, \\frac{c}{\\varepsilon}]$ by $ C e^{- TC_0 \\varepsilon^{- 2}}$ \nwhere $- C_0$ is the level at which we cut out the two bumps around $- 2 \/ \\varepsilon$ and $- 4 \/\\varepsilon$; note that $\\hat{Q}(3+\\varepsilon k)$ vanishes identically there.\nWith similar arguments we show that the derivative ${\\hat{H}_T}'$ is uniformly bounded by\n$ C \\varepsilon (1+T\\varepsilon^{-2}) e^{- TC_0 \\varepsilon^{- 2}}$. Thus\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\| {\\hat{H}_T} \\|^2_{H^1 \\left( \\left[ - \\frac{c}{\\varepsilon}, \\frac{c}{\\varepsilon}\n\t\t\\right] \\right)}\n\t\t& \\leqslant C \\int_{-\\frac{c}{\\varepsilon}}^{\\frac{c}{\\varepsilon}} (1 + \\varepsilon^2+ T^2 \\varepsilon^{-2}) e^{- 2TC_0 \\varepsilon^{- 2}} dk\\\\\n\t\t& \\leqslant C \\varepsilon^{- 1} (1 + T^2 \\varepsilon^{- 2}) e^{-2 TC_0\n\t\t\\varepsilon^{- 2}}\\\\\n\t\t& \\leqslant C \\varepsilon^{- 1} e^{- TC_0 \\varepsilon^{- 2}}\\\\\n\t\t& \\leqslant C \\varepsilon^{4 \\gamma - 1} T^{- 2 \\gamma}\\;,\n\t\\end{split}\n\\end{equation*}\nwhere we used first that $xe^{-x} \\leqslant 1$ and then that $e^{-x} \\leqslant C_\\gamma x^{-\\gamma}$ for $x>0$.\nThe final weakening of the estimate is not necessary at this point, but it is still sufficient for our purposes, \nas other terms in the estimate are only bounded by this weaker rate. \n\n\nNow we have to consider the case $\\varepsilon | k | > c$ \nwhen we are away from the bumps. In this case, by adjusting $c$ we can use that $\\hat{Q}$ is constant equal to $1$.\nMoreover, the bound for negative and positive $k$ is the same, so we restrict ourselves to the case with $k>c\/\\varepsilon$. 
\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\| {\\hat{H}_T} \\|^2_{H^1 \\left( \\left[ \\frac{c}{\\varepsilon}, + \\infty \\right)\n\t\t\\right)} & \\leqslant \\int_{\\frac{c}{\\varepsilon}}^{\\infty} e^{- 2 T\n\t\t\\varepsilon^{- 2} (4 + k \\varepsilon)^2 (2 + k \\varepsilon)^2} dk \\\\\n\t\t&\\quad + \\int_{\\frac{c}{\\varepsilon}}^{\\infty} e^{- 2 T \\varepsilon^{- 2} (4 +\n\t\tk \\varepsilon)^2 (2 + k \\varepsilon)^2} (T \\varepsilon^{- 2} (4 + \\varepsilon k) (2+ \\varepsilon k) (3 +\\varepsilon k) \\varepsilon)^2 dk\\\\\n\t& \\leqslant \\frac{1}{\\varepsilon} \\int_c^{\\infty} e^{- 2\n\t\tT \\varepsilon^{- 2} (4 + k)^2 (2 + k )^2} dk \\\\\n\t& \\quad + \\frac{C}{\\varepsilon} \\int_c^{\\infty} e^{- 2 T \\varepsilon^{- 2} (4\n\t\t+ k )^2 (2 + k )^2} (T \\varepsilon^{- 1} (4 + k) \n\t(2 + k) (3 + k))^2 dk \\;.\n\t\\end{split}\n\\end{equation*}\nNow we use that $(4 + k)^2 (2 + k)^2 \\geqslant k^4$ and $(4 + k) (2 + k) (3\n+ k) \\leqslant C (k^3 + 1) \\leqslant Ck^3$, for $k \\geqslant c$ with an $\\varepsilon$-independent constant $C$, so that\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\| {\\hat{H}_T} \\|^2_{H^1 \\left( \\left[ \\frac{c}{\\varepsilon}, + \\infty \\right)\n\t\t\\right)} & \\leqslant \\frac{C}{\\varepsilon} \\int_c^{\\infty} e^{- CT\n\t\t\\varepsilon^{- 2} k^4} dk + CT^2 \\varepsilon^{- 3} \\int_c^{\\infty} k^3 e^{-\n\t\tCT \\varepsilon^{- 2} k^4} dk\\\\\n\t\t& =\n\t\t\\frac{C}{\\varepsilon} (T \\varepsilon^{- 2})^{- \\frac{1}{4}} \\int_{c (T\n\t\t\\varepsilon^{- 2})^{\\frac{1}{4}}}^{\\infty} e^{- Ck^4} dk \\\\\n\t\t& \\quad+ CT^2 \\varepsilon^{- 3} (T \\varepsilon^{- 2})^{- \\frac{1}{4}} \n\t\t\\int_{c (T \\varepsilon^{- 2})^{\\frac{1}{4}}}^{\\infty} k^3 (T \\varepsilon^{-\n\t\t2})^{- \\frac{3}{4}} e^{- Ck^4} dk\\\\\n\t\t& \\leqslant \\frac{C}{\\varepsilon} (T \\varepsilon^{- 2})^{- \\frac{1}{4}} \n\t\t\\int_{c (T \\varepsilon^{- 2})^{\\frac{1}{4}}}^{\\infty} e^{- Ck^4} dk + T\n\t\t\\varepsilon \\int_{0}^{\\infty} k^3 
e^{-\n\t\tCk^4} dk \\\\\n\t\t& \\leqslant \\frac{C}{\\varepsilon} (T \\varepsilon^{- 2})^{- \\frac{1}{4}} \n\t\\int_{c (T \\varepsilon^{- 2})^{\\frac{1}{4}}}^{\\infty} e^{- Ck^4} dk + C\\varepsilon \\;.\n\t\\end{split}\n\\end{equation*}\nFor the remaining term we use that for $\\alpha \\geqslant 0$ \n\\[ \n\t\\sup_{z>0}\\Big\\{z^{\\alpha} \\int_z^{\\infty} e^{- ck^4} dk\\Big\\} <\\infty,\n\\]\nto obtain for $\\gamma= (1+\\alpha)\/8 \\geqslant 1\/8 $ \n\\[\n\\| {\\hat{H}_T} \\|^2_{H^1 \\left( \\left[ \\frac{c}{\\varepsilon}, + \\infty \\right)\n\t\t\\right)} \n\t\t \\leqslant C \\varepsilon^{4\\gamma - 1} T^{- 2 \\gamma}+ C\\varepsilon \\;.\n\\]\nNote finally, that for $\\gamma<1\/2$ and bounded $T$ \nwe have $\\varepsilon \\leqslant C \\varepsilon^{4\\gamma - 1} T^{- 2 \\gamma}$ and \nwe can neglect the $ C\\varepsilon$ in the estimate above.\n\n\n\\subsection{Proof of Exchange Lemma I}\n\n\n\nThe proof of the Exchange Lemma I stated in Lemma~\\ref{lem:lemma2} is similar to the one for the Exchange Lemma II in Lemma~\\ref{lem:lemma3}, but requires additional arguments.\nWe start again by smoothly projecting in Fourier-space, but now in $k=1$ and $k=3$.\n\nFix a small $\\delta>0$ and consider for $\\ell\\in\\mathbb{Z}$ a smooth function $\\hat{P_\\ell}:\\mathbb{R}\\to[0,1]$ such that \n$\\textup{supp}(\\hat{P_\\ell})=[-\\ell-2\\delta, -\\ell+2\\delta]\\cup[\\ell-2\\delta, \\ell+2\\delta]$\nand $\\widehat{P}_\\ell=1$ on $[-\\ell-\\delta, -\\ell+\\delta]\\cup[\\ell-\\delta, \\ell+\\delta]$.\n\nNow we can rewrite:\n\\begin{multline*}\n\te^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 = \\mathcal{P}_3^2 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] + \\mathcal{P}_1^2 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1]\\\\ \n\t- \\mathcal{P}_1^2 (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 
+(1-\\mathcal{P}_1^2-\\mathcal{P}_3^2)e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] -(1-\\mathcal{P}_1^2)(e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1.\n\\end{multline*}\nNow the first term on the right hand side is bounded the same way as the second term in the proof of the Exchange Lemma II (Lemma~\\ref{lem:lemma3}) in the previous section.\nAlso the last two terms can be controlled in a similar way as the first term in the proof of Lemma~\\ref{lem:lemma3}.\nWe only need the semigroup\nestimate from Corollary \\ref{cor:SGbound} (now for $\\ell=\\pm1$ and $\\ell=\\pm3$) and Lemma \\ref{lem:guidos}.\n\n\nLet us focus on the missing two terms:\n\\[\n\t\\mathcal{P}_1^2 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - \\mathcal{P}_1^2 (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 =: \\mathcal{H}D,\n\\]\nwith $\\mathcal{H}\\cdot=H\\star\\cdot$ such that $\\textup{supp}(\\hat{H})\\subset (-2\\delta\/\\varepsilon, 2\\delta\/\\varepsilon)$ and \n\\[\n\\begin{split}\n\t\\hat{H} & = \\widehat{P}(\\varepsilon\\cdot)[e^{T\\varepsilon^{-2}\\lambda_\\nu(1+l\\varepsilon)}-e^{-4l^2T+\\nu T}]\\\\\n\t& = \\widehat{P}(\\varepsilon\\cdot)e^{-4Tl^{2}+\\nu T}[e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2}-1].\n\\end{split}\n\\]\nIn view of Lemma \\ref{lem:ours} it is enough to show that $\\|\\hat{H}\\|_{H^1(\\mathbb{R})}$ is small. 
Thus we need the derivative\n\\begin{multline*}\n\\frac{d}{dl}\\hat{H} = \\varepsilon\\widehat{P}^\\prime(\\varepsilon\\cdot)e^{-4Tl^{2}+\\nu T}[e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2}-1]-8Tl \\widehat{P}(\\varepsilon\\cdot)e^{-4Tl^{2}+\\nu T}[e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2}-1]\\\\\n-4T\\varepsilon l^2(3+\\varepsilon l) \\widehat{P}(\\varepsilon\\cdot)e^{-4Tl^{2}+\\nu T}e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2}.\n\\end{multline*}\nNow we collect the common exponential factor, and then we Taylor expand at $l=0$.\n To get the estimate in $H^1$ we bound both $\\hat{H}$ and $\\frac{d}{dl}\\hat{H}$ in $L^2$.\n \nActually we can do better and provide point-wise estimates (and not just $L^2$). First of all we observe that by means of a Taylor expansion and the triangle inequality\n\\[\n |1-e^z|\\leqslant |z|e^{|z|},\n\\]\nand in our case $z = -4Tl^{3}\\varepsilon-l^4T\\varepsilon^2$, with $|z|\\leqslant 8T\\delta l^2 + 4\\delta^2Tl^2\\leqslant 2Tl^2$ \nif $|l| \\leqslant 2\\delta\/\\varepsilon$ for some sufficiently small fixed $\\delta$ (e.g.\\ $\\delta\\leqslant 1\/5$).\n\nIf we consider $\\hat{H}$ we have the following bound:\n\\begin{equation*}\n\t\\begin{split}\n\t\t|\\hat{H}| & \\leqslant C_{T_0}e^{-2Tl^2}\\left(|4Tl^3\\varepsilon|+ |l^4T\\varepsilon^2|\\right)\\\\\n\t\t& \\leqslant C_{T_0}T^{-1\/2}e^{-Tl^2}\\left(4\\varepsilon+ |l|\\varepsilon^2\\right)\\\\\n\t\t& \\leqslant C_{T_0}T^{-1\/2}e^{-Tl^2}\\left(4+2\\delta\\right)\\varepsilon\\;,\n\t\\end{split}\n\\end{equation*}\nwhere we used the inequality\n\\[\n\te^{-Tl^2}\\left(Tl^2\\right)^{3\/2}\\leqslant C.\n\\]\n\nIn the same way we can bound the derivative of $\\hat{H}$:\n\\[\n\t\\hat{H}^\\prime = \\left(\\varepsilon\\widehat{P}^\\prime - 8Tl\\widehat{P}\\right)e^{-4Tl^{2}+\\nu T}[e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2}-1] - 4T\\varepsilon l^2(3+\\varepsilon l)\\widehat{P}\\,e^{-4Tl^{2}+\\nu T}e^{-4Tl^{3}\\varepsilon-l^4T\\varepsilon^2},\n\\]\nwhere we have almost $\\hat H$ with different prefactors that we can bound by using the previous one on $\\hat{H}$ and the 
fact that $\\varepsilon |l|\\leqslant 2\\delta$, so that\n\\[\n\t|\\hat{H}^\\prime|\\leqslant C(1+T|l|)T^{-1\/2}e^{-Tl^2}\\varepsilon.\n\\]\nNow we use that $\\sqrt{T}|l|e^{-\\frac12 Tl^2}\\leqslant C$, so\n\\[\n\t|\\hat{H}^\\prime|\\leqslant C(1+\\sqrt{T})T^{-1\/2}e^{-\\frac12 Tl^2}\\varepsilon\\;.\n\\]\nThus using $T\\leqslant T_0$\n\\[\n\t\\begin{split}\n\t\t\\|\\hat{H}\\|_{H^1} & \\leqslant CT^{-1\/2}\\varepsilon\\|e^{-\\frac12 Tl^2}\\|_{L^2}\\\\\n\t\t& \\leqslant C T^{-3\/4}\\varepsilon \\;.\n\t\\end{split}\n\\]\n \n \n\n\\subsection{Proof of Exchange Lemma IC}\n\n\n\nThe idea behind this proof is almost the same as before, but the proof itself is technically slightly different, relies on Corollary~\\ref{cor:new}, and does not need $L^2$-estimates on the kernel.\n\nWe start again by smoothly projecting in Fourier-space, but onto the modes $k=\\pm1$ and $k=\\pm3$.\n\nFix a small $\\delta>0$ and consider a smooth function $\\widehat{P}:\\mathbb{R}\\to[0,1]$ such that \n$\\textup{supp}(\\widehat{P})=[-2\\delta, 2\\delta]$\nand $\\widehat{P}=1$ on $[-\\delta, \\delta]$.\nFor $\\ell\\in\\mathbb{Z}$ we then define \n\\[\\widehat{P}_\\ell:\\mathbb{R}\\to[0,1], \\qquad \\widehat{P}_\\ell(k)=\\widehat{P}(k-\\ell).\\]\nNow we can rewrite:\n\\begin{equation}\n\\label{e:ELIC}\n\\begin{split}\n\te^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 \n\t = & \\mathcal{P}_3^2 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] \\\\\n\t& + \\mathcal{P}_1 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - \\mathcal{P}_1 (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 \\\\\n\t& +(1-\\mathcal{P}_1-\\mathcal{P}_3^2)e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] \\\\ \n\t&- (1-\\mathcal{P}_1)(e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1.\n\\end{split}\n\\end{equation}\n\\subsubsection{First term}\nNow the first term is bounded 
the same way as the second term in the proof of the Exchange Lemma II (Lemma~\\ref{lem:lemma3}).\nWe only need the semigroup\nestimate from Corollary \\ref{cor:SGbound} and Lemma \\ref{lem:guidos} to obtain\n\\[ \n\\| e^{T \\varepsilon^2 \\mathcal{L}_{\\nu}} \\mathcal{P}_3^2 [D(\\varepsilon \\cdot) \\mathrm{e}_1] \\|_{C_{\\kappa}^0}\n\\leqslant C\n\\varepsilon^{\\alpha-\\kappa} \\| D \\|_{C_{\\kappa}^{0, \\alpha}}\\;.\n\\]\n\\subsubsection{Second term}\nWe can write the second term in view of Lemma \\ref{lem:guidoext}:\n\\[\n \\mathcal{P}_1 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - \\mathcal{P}_1 (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1 = \\mathcal{H}_T [D(\\varepsilon \\cdot)]\\, {\\mathrm{e}}_1\n\\]\nwhere the convolution operator $\\mathcal{H}_T\\cdot = H_T\\star\\cdot$ has a kernel $H_T$ with Fourier transform\n\\[\n\\begin{split}\n \\hat{H}_T(k)& = \\widehat{P}(k)[ e^{-T \\varepsilon^{-2}k^2( k+2)^2} - e^{-4k^2 \\varepsilon^{-2} T}]e^{\\nu T}\n \\\\& = \\widehat{P}(k)[ e^{-T\\varepsilon^{-2}(4k^3+k^4)} - 1]e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T},\n \\end{split}\n\\]\nwhere $\\hat{H}_T(0)=0$.\nNow we bound the $L^2$-norms of $\\hat{H}_T$, $\\hat{H}^\\prime_T$, and $\\hat{H}''_T$, and apply the results in Lemma~\\ref{lem:guidoext}.\nWe can get the following point-wise bound, \nusing the support of $\\widehat{P}$ together with the mean-value theorem and $\\delta<1\/2$:\n\\begin{equation}\n\\label{e:*}\n\\begin{split}\n |\\hat{H}_T(k)|& \\leqslant C |\\widehat{P}(k)| T\\varepsilon^{-2}|4k^3+k^4| e^{-k^2 \\varepsilon^{-2} T}\n \\\\& \\leqslant C |\\widehat{P}(k)| T\\varepsilon^{-2}|k|^3 e^{-k^2 \\varepsilon^{-2} T}.\n \\end{split}\n\\end{equation}\n\nWe use that for $a>0$ and $\\xi>0$\n\\[\n\\begin{split}\n\\int_0^{2\\delta} k^a e^{-\\xi k^2} dk & \n = \\int_0^{2\\delta\\sqrt{\\xi}} \\xi^{-1\/2-a\/2} k^a e^{- k^2} dk \n \\\\& \\leqslant C \\min\\{ \\xi^{-1-a}\\ , \\ 1 \\}^{1\/2}.\n \\end{split}\n\\]\nThus for the $L^2$-norm we 
integrate the squared inequality \\eqref{e:*} \nand use the previous estimate with $a=6$ and $\\xi=T\\varepsilon^{-2}$ to obtain\n\\[\n\\|\\hat{H}_T\\|_{L^2} \\leqslant C (T\\varepsilon^{-2})\\min\\{ (T\\varepsilon^{-2})^{-7}\\ , \\ 1 \\}^{1\/4} \\leqslant C \\;.\n\\]\n\nFor the first derivative\n\\[\n\\begin{split}\n \\hat{H}'_T (k)\n =& \\widehat{P}'(k)[ e^{-T\\varepsilon^{-2}(4k^3+k^4)} - 1]e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T} \\\\\n & + \\widehat{P}(k) (-T\\varepsilon^{-2}(12k^2+4k^3)) e^{-T \\varepsilon^{-2}(4k^3+k^4)} e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T} \\\\\n & + \\widehat{P}(k)[ e^{-T\\varepsilon^{-2}(4k^3+k^4)} - 1] (-8k \\varepsilon^{-2} T) e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T}.\n \\end{split}\n\\]\nAs before, \n\\[\n\\begin{split}\n |\\hat{H}'_T (k)|\n \\leqslant & C |\\widehat{P}'(k)| T\\varepsilon^{-2} |k|^3 e^{-4k^2 \\varepsilon^{-2} T} \n + C |\\widehat{P}(k)| [ (T\\varepsilon^{-2}) k^2 + (T\\varepsilon^{-2})^2 k^4 ]e^{-k^2 \\varepsilon^{-2} T} \\\\\n \\leqslant & C_\\delta |\\widehat{P}'(k)| T\\varepsilon^{-2} e^{- \\delta^2 \\varepsilon^{-2} T} \n + C |\\widehat{P}(k)| (T\\varepsilon^{-2}) k^2 e^{-k^2 \\varepsilon^{-2} T} .
\n \\end{split}\n\\]\nThus for the $L^2$-norm\n\\[\n\\|\\hat{H}'_T\\|_{L^2} \\leqslant C+ C (T\\varepsilon^{-2})\\min\\{ (T\\varepsilon^{-2})^{-5}\\ , \\ 1 \\}^{1\/4} \\leqslant C \\;.\n\\]\n\nFor the second derivative we obtain similarly\n\\[\n\\begin{split}\n |\\hat{H}''_T (k)|\n \\leqslant & \n C_\\delta |\\widehat{P}'' (k)| T\\varepsilon^{-2} e^{- \\delta^2 \\varepsilon^{-2} T} \\\\\n & + C_\\delta |\\widehat{P}'(k)| [T\\varepsilon^{-2} + (T\\varepsilon^{-2})^2] e^{- \\delta^2 \\varepsilon^{-2} T} \\\\\n &+ C |\\widehat{P}(k)| [ (T\\varepsilon^{-2}) |k| + (T\\varepsilon^{-2})^2 |k|^3 + (T\\varepsilon^{-2})^3 |k|^5] e^{-k^2 \\varepsilon^{-2} T} ,\n \\end{split}\n\\]\nso for the $L^2$-norm\n\\[\n\\|\\hat{H}''_T\\|_{L^2} \\leqslant C+ C (T\\varepsilon^{-2})\\min\\{ (T\\varepsilon^{-2})^{-3}\\ , \\ 1 \\}^{1\/4} \\leqslant C \\varepsilon^{-1\/2} \\;.\n\\]\nNow Lemma \\ref{lem:guidoext} yields: \n \\[\n\\| \\mathcal{P}_1 e^{t{\\mathcal{L}}_\\nu}[D(\\varepsilon \\cdot){\\mathrm{e}}_1] - \\mathcal{P}_1 (e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1\\|_{C^0_\\kappa}\n\\leqslant C\\varepsilon^\\alpha (1+\\varepsilon^{-1\/2}) \\|D\\|_{C^{0,\\alpha}_\\kappa}.\n \\]\n\n %\n \\subsubsection{Final two terms}\n Let us now turn to the last two terms in~\\eqref{e:ELIC}, for which we need Corollary \\ref{cor:new}.\nBoth are bounded in a similar way, and we focus only on the last one.\nFor the other one, we cut out a small part in the middle and then bound the infinite rest as done here.
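Before continuing, note that the three $L^2$-bounds above all reduce to the elementary observation that $\xi\mapsto \xi\,\min\{\xi^{-b},1\}^{1\/4}$ (with $\xi=T\varepsilon^{-2}$) is uniformly bounded for $b=7$ and $b=5$, while for $b=3$ it grows like $\xi^{1\/4}$, which for $T\leqslant T_0$ yields the factor $\varepsilon^{-1\/2}$. This can be sanity-checked numerically; the snippet below is a plain illustration and not part of the proof.

```python
import numpy as np

def bound(xi, b):
    # the function  xi * min(xi^(-b), 1)^(1/4)  from the L^2-estimates above
    return xi * np.minimum(xi ** (-b), 1.0) ** 0.25

xi = np.logspace(-6, 8, 2000)   # grid of values of xi = T * eps^(-2)

# b = 7 and b = 5: uniformly bounded in xi (the maximum sits at xi = 1)
assert bound(xi, 7).max() <= 1.0 + 1e-12
assert bound(xi, 5).max() <= 1.0 + 1e-12

# b = 3: grows like xi^(1/4) for xi >= 1, i.e. like eps^(-1/2) when T <= T_0
large = xi[xi >= 1.0]
assert np.allclose(bound(large, 3), large ** 0.25)
```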
\nRecall that the argument is slightly asymmetric, as we only have a $\\mathcal{P}_1$ but a $\\mathcal{P}_3^2$.\n\nWe have\n\\[ \n(1-\\mathcal{P}_1)(e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1\n= \\mathcal{H}_T [D(\\varepsilon \\cdot)] \\cdot {\\mathrm{e}}_1,\n\\]\nwith the convolution operator $ \\mathcal{H}_T = H_T\\star$ with Fourier transform\n\\[\n \\hat{H}_T(k) = \\hat{Q}(k) e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T},\n\\]\nwhere $\\hat{H}_T(0)=0$ and we defined here $\\hat{Q}(k)=1-\\widehat{P}(k)$, which is slightly different from the $\\hat{Q}$ defined before, but it has the same properties. It is smooth, has support outside of $[-\\delta,\\delta]$, and is constant outside $[-2\\delta,2\\delta]$.\nThe unbounded support of $\\hat{Q}$ is the key point in the argument for this Exchange Lemma: the $L^2$-norm of $\\hat{H}_T$ is not small uniformly in $T$, \nso we need to use Corollary~\\ref{cor:new} instead of Lemma~\\ref{lem:guidoext}.\n\nNow \n\n\\begin{align*}\n \\hat{H}_T'(k) &= \\hat{Q}'(k) e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T} - \\hat{Q}(k)T\\varepsilon^{-2}8k e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T},\\\\\n \\hat{H}_T''(k) &= \\hat{Q}''(k) e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T} -16\\hat{Q}'(k)T\\varepsilon^{-2}k e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T} \n \\\\&\\qquad+ \\hat{Q}(k) [ (8T\\varepsilon^{-2}k )^2 - 8T\\varepsilon^{-2} ] e^{-4k^2 \\varepsilon^{-2} T}e^{\\nu T}. \n \\end{align*}\n\nNow we use that on the support of $\\hat{Q}'$ we have $|k|\\in[\\delta,2\\delta]$ and the bounds already used many times before, to derive\n\\[\n\\begin{split}\n |\\hat{H}_T'(k)| & \\leqslant C|\\hat{Q}'(k)| e^{-4\\delta^2 \\varepsilon^{-2} T} + C|\\hat{Q}(k)| T\\varepsilon^{-2} |k| e^{-4k^2 \\varepsilon^{-2} T},\\\\\n |\\hat{H}_T''(k)| & \\leqslant C|\\hat{Q}''(k)| e^{-4\\delta^2 \\varepsilon^{-2} T} +C|\\hat{Q}'(k)| \\varepsilon^{-2}T e^{-4 \\delta^2 \\varepsilon^{-2} T} \\\\ &\\qquad+ C |\\hat{Q}(k)| T\\varepsilon^{-2}(1+T\\varepsilon^{-2}k^2) e^{-2k^2 \\varepsilon^{-2} T}.
\n \\end{split}\n\\] \nThus we can write\n\\[\n\\begin{split}\n \\|\\hat{H}_T'\\|^2_{L^2} \n & \\leqslant C + C T^2\\varepsilon^{-4} \\int_{\\delta}^{\\infty} k^2e^{-8k^2 \\varepsilon^{-2} T}dk \\\\\n & \\leqslant C + C T^{1\/2}\\varepsilon^{-1} \\int_{\\delta T^{1\/2} \\varepsilon^{-1} }^{\\infty} k^2e^{-8k^2}dk \\leqslant C,\n \\end{split}\\]\n and similarly\n \\[\n \\|\\hat{H}_T''\\|_{L^2} \n \\leqslant C .\n\\] \nUsing Corollary \\ref{cor:new} we obtain\n\\[\n\\| (1-\\mathcal{P}_1)(e^{\\Delta_\\nu T}D)(\\varepsilon \\cdot){\\mathrm{e}}_1\\|_{C_\\kappa^0} \\leqslant C \\varepsilon^\\alpha \\|D\\|_{ C_\\kappa^{0,\\alpha}},\n\\]\nwhich concludes the proof.\n\n\\section{Approximation}\n\\label{sec:app}\n\nIn this section we present the proof of our main approximation result, using the bound on the residual derived in the sections above.\nAs the result should hold for very long times of order $\\varepsilon^{-2}$, we need to rely on the sign of the cubic nonlinearity \nand energy-type estimates. But as the Swift-Hohenberg operator does not allow for straightforward $L^p$-estimates, \nwe have to restrict the final result to $L^2$-spaces. \n\nLet us recall the main setting: $A$ is a mild solution of the amplitude equation~\\eqref{e:GL},\nwhere we assume that there is a $\\varrho>2$ such that for all $p>0$ one has $A(0)\\in W^{1,p}_{\\varrho}$,\nand $u$ is a solution of the Swift-Hohenberg equation~\\eqref{e:SH}.\n\nIn order to prove our main result,\nwe need to bound the error\n\\[\n R(t) =u(t)-u_A(t) \n\\]\nbetween $u$ and the approximation $u_A$ defined in \\eqref{def:uA}.
Using the definition \nof the residual from \\eqref{e:residual} and the mild formulation for the Swift-Hohenberg equation, we obtain\n\\begin{equation}\n \\label{e:derR}\n R(t) = {\\mathrm{e}}^{t{\\mathcal{L}}_\\nu}R(0) + \\int_0^t {\\mathrm{e}}^{(t-s){\\mathcal{L}}_\\nu} [u_A^3-(u_A+R)^3] ds + \\text{Res}(u_A)(t)\\;.\n\\end{equation}\nAs the residual $\\text{Res}$ is not differentiable in time, \nwe cannot proceed with $L^2$-energy estimates as in the deterministic case, but the proof is still very similar.\n\nSubstituting $D = R - \\text{Res}$, we first obtain (note that $\\text{Res}(0)=0$)\n\\[\nD(t) = {\\mathrm{e}}^{t{\\mathcal{L}}_\\nu}R(0) + \\int_0^t {\\mathrm{e}}^{(t-s){\\mathcal{L}}_\\nu} [u_A^3-(D+u_A+\\text{Res})^3] ds\n\\]\nand thus\n\\[\n\\partial_t D = {\\mathcal{L}}_\\nu D -(D+u_A+\\text{Res}(u_A))^3 + u_A^3 \\;.\n\\]\nNow we can use $L^2_{\\varrho,\\varepsilon}$-energy estimates \n\\[\n \\frac12\\partial_t \\| D\\|^2_{L^2_{\\varrho,\\varepsilon}} \n= \\langle {\\mathcal{L}}_\\nu D, D \\rangle_{L^2_{\\varrho,\\varepsilon}} \n- \\int_{\\mathbb{R}}{ w_{\\varrho,\\varepsilon}} D[ (D+u_A+\\text{Res}(u_A))^3 - u_A^3] \\;dx \\;.\n\\]\nWe choose the weight (see Definition \\ref{def:weight}) for some $\\varrho>1$ as\n\\[\n w_{\\varrho,\\varepsilon}(x) := \\frac1{(1+|\\varepsilon x|^2)^{\\varrho\/2}},\n\\]\nwhich is integrable with $\\| w_{\\varrho,\\varepsilon}\\|_{L^1}=C\\varepsilon^{-1}$.
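The treatment of the cubic nonlinearity below rests on the binomial expansion of $(D+u_A+\text{Res})^3-u_A^3$ and on Young's inequality with the particular choice $\delta=8\/15$. Both are elementary facts that can be sanity-checked numerically; the following snippet is a plain consistency check (illustrative only, not part of the proof).

```python
import numpy as np

rng = np.random.default_rng(0)
D, u, R = rng.standard_normal((3, 10_000))   # samples playing the roles of D, u_A, Res

# binomial expansion used for the cubic nonlinearity:
# (D + u + R)^3 - u^3 = (D+R)^3 + 3 u (D+R)^2 + 3 u^2 (D+R)
lhs = (D + u + R) ** 3 - u ** 3
rhs = (D + R) ** 3 + 3 * u * (D + R) ** 2 + 3 * u ** 2 * (D + R)
assert np.allclose(lhs, rhs)

# Young's inequality with the choice delta = 8/15:
# 3 D^3 u <= (3 delta / 2) D^4 + (3 / (2 delta)) D^2 u^2 = (4/5) D^4 + (45/16) D^2 u^2
assert np.all(3 * D ** 3 * u <= 0.8 * D ** 4 + (45 / 16) * D ** 2 * u ** 2 + 1e-9)
```

The constants $4\/5$ and $45\/16$ are exactly the coefficients appearing in the critical-term estimate below; their product satisfies $2\sqrt{(4\/5)(45\/16)}=3$, so the inequality is sharp.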
\nAlso recall that by Lemma \\ref{lem:spec}\n\\[\\langle {\\mathcal{L}}_\\nu D, D \\rangle_{L^2_{\\varrho,\\varepsilon}} \\leqslant C \\varepsilon^2 \\| D\\|^2_{L^2_{\\varrho,\\varepsilon}} \\;.\n\\]\nFor the nonlinearity we use a straightforward modification of the standard dissipativity result for the cubic in $L^2$-spaces, \nwhich states that \n\\[ \\langle -(D+u_A)^3+u_A^3, D \\rangle_{L^2_\\varrho} \\leqslant 0 \\;.\n\\]\nBut here we have the additional term $\\text{Res}(u_A)$ that we need to take care of.\nUsing Young's inequality several times, we obtain\n\\begin{multline*}\n - [ (D+u_A+\\text{Res}(u_A))^3 - u_A^3] \\cdot D \\\\\n \\begin{aligned}\n \t= {} & - [ (D+\\text{Res})^3+ 3 u_A(D+\\text{Res})^2+ 3u_A^2(D+\\text{Res}) ] \\cdot D \\\\\n \t= {} & - D^4 - 3 D^2 \\text{Res}^2 - 3u_A^2D^2 \\\\\n \t& - 3 D^3 \\text{Res} - 3 D^3 u_A -6 D^2u_A\\text{Res} \n \t- D \\text{Res}^3- 3D u_A^2 \\text{Res} - 3 D u_A \\text{Res}^2 \\\\\n \t\\leqslant {}& C u_A^2 \\text{Res}^2 + C \\text{Res}^4 \\;.\n \\end{aligned}\n\\end{multline*}\nThe critical terms in the estimates above are\n\\[\n6 D^2u_A\\text{Res} \\leqslant \\delta D^2u_A^2 + \\delta D^4 + C_\\delta \\text{Res}^4 \n\\]\nand, with $\\delta=8\/15$,\n\\[\n3D^3u_A \\leqslant \\frac32 \\delta D^4 + \\frac3{2\\delta} D^2u_A^2 = \\frac45 D^4 + \\frac{45}{16} D^2u_A^2\\;.\n\\]\nIn summary we obtain\n\\[\n\\partial_t \\| D\\|^2_{L^2_{\\varrho,\\varepsilon}} \n\\leqslant \nC \\varepsilon^2 \\| D\\|^2_{L^2_{\\varrho,\\varepsilon}} \n+ C \\| u_A\\|^2_{L^4_{\\varrho,\\varepsilon}} \\|\\text{Res}\\|^2_{L^4_{\\varrho,\\varepsilon}} \n+ C \\|\\text{Res}\\|^4_{L^4_{\\varrho,\\varepsilon}} \\;.\n\\]\nThus by Gronwall's inequality (or the comparison principle for ODEs) we obtain directly\n\\[\n\\| D(t) \\|^2_{L^2_{\\varrho,\\varepsilon}} \n\\leqslant \ne^{C t\\varepsilon^2} \\| R(0) \\|^2_{L^2_{\\varrho,\\varepsilon}} \n+ \\int_0^t e^{C (t-s)\\varepsilon^2} \\|\\text{Res}\\|^2_{L^4_{\\varrho,\\varepsilon}}
(\\|\\text{Res}\\|^2_{L^4_{\\varrho,\\varepsilon}} + \\| u_A\\|^2_{L^4_{\\varrho,\\varepsilon}}) ds\n\\]\nand we have established the following result.\n\\begin{lemma}\nLet $A$ and $u$ be given as in the beginning of the section.\nFor the error $R$ given in \\eqref{e:derR} we obtain \n\\[ \\sup_{[0,T_0\\varepsilon^{-2}]} \\| R-\\text{\\rm Res}\\|_{L^2_{\\varrho,\\varepsilon}} \n\\leqslant C \\| R(0) \\|_{L^2_{\\varrho,\\varepsilon}} \n+ C\\varepsilon^{-1} \\sup_{[0,T_0\\varepsilon^{-2}]} \\Big[ \\|\\text{\\rm Res}\\|_{L^4_{\\varrho,\\varepsilon}} (\\|\\text{\\rm Res}\\|_{L^4_{\\varrho,\\varepsilon}} + \\| u_A\\|_{L^4_{\\varrho,\\varepsilon}}) \\Big]\\;.\n\\]\n\\end{lemma}\nBy assumption, $A(0)\\in W^{1,p}_\\varrho$ for all $p>0$, so by Corollary \\ref{cor:maxregA} we have\n\\[\n\t\\mathbb{E}\\sup_{[0,T]}\\|A\\|^p_{C^0_\\kappa}\\leqslant C_p\\qquad \\forall p>1, \\forall \\kappa>0,\n\\]\nwhere the bound is of interest for ``large'' $p$ and ``small'' $\\kappa$.\n\nThen, by the Chebyshev inequality, we have, in the sense of Definition~\\ref{def:0},\n\\[\n\t\\sup_{[0,T]}\\|A\\|_{C^0_\\kappa}=\\mathcal{O}(\\varepsilon^{-\\delta}), \\qquad \\forall \\delta>0,\n\\]\nand thus\n\\[\n\t\\sup_{[0,T_0\\varepsilon^{-2}]}\\|u_A\\|_{C^0_{\\varrho,\\varepsilon}}=\\mathcal{O}(\\varepsilon^{1-\\delta}).\n\\]\n\nNote that due to the $\\varepsilon$ scaling in the weight we have \n\\begin{lemma}\n Let $A(0)\\in W^{1,p}_\\varrho$ for all $p>0$.
Then for all $p>1$, $\\varrho>1$ and $\\delta>0$ \n\\[\\sup_{[0,T_0\\varepsilon^{-2}]}\\|u_A\\|_{L^p_{\\varrho,\\varepsilon}} = \\mathcal{O} ( \\varepsilon^{1 - 1\/p-\\delta} ).\n\\]\n\\end{lemma}\n\n\\begin{proof}\n\tThe claim follows from a simple scaling argument based on a substitution:\n\t\\begin{equation*}\n\t\t\\|u_A\\|_{L^p_{\\varrho,\\varepsilon}} \\leqslant C\\varepsilon \\|A(\\varepsilon\\cdot)\\|_{L^p_{\\varrho,\\varepsilon}} = C\\varepsilon^{1-1\/p}\\|A\\|_{L^p_{\\varrho}},\n\t\\end{equation*}\n\tand we can conclude by noting that $\\|A\\|_{L^p_{\\varrho}}=\\mathcal{O}(1)$, with the meaning given in Definition~\\ref{def:0}.\n\\end{proof}\nBy the result on the residual in Theorem~\\ref{thm:res} we have, for all small $\\kappa>0$,\n\\[\n\t\\sup_{[0,T_0\\varepsilon^{-2}]}\\|\\text{Res}(u_A)\\|_{C^0_\\kappa}=\\mathcal{O}(\\varepsilon^{3\/2-2\\kappa}),\n\\]\nthus\n\\[\n\t\\sup_{[0,T_0\\varepsilon^{-2}]}\\|\\text{Res}(u_A)\\|_{L^p_{\\varrho,\\varepsilon}}=\\mathcal{O}(\\varepsilon^{3\/2-1\/p-2\\kappa}).\n\\]\n\nIn conclusion,\n\\[\n\t\\begin{split}\n\t\\sup_{[0,T_0\\varepsilon^{-2}]}\\|R\\|_{L^2_{\\varrho,\\varepsilon}} \n\t& \\leqslant \\sup_{[0,T_0\\varepsilon^{-2}]}\\|R-\\text{Res}(u_A)\\|_{L^2_{\\varrho,\\varepsilon}}\n\t+\\sup_{[0,T_0\\varepsilon^{-2}]}\\|\\text{Res}(u_A)\\|_{L^2_{\\varrho,\\varepsilon}}\\\\\n\t&\\leqslant C\\|R(0)\\|_{L^2_{\\varrho,\\varepsilon}} + \\mathcal{O}(\\varepsilon^{1-\\delta-2\\kappa}),\n\t\\end{split}\n\\]\nwhere we used once more Definition~\\ref{def:0} for the $\\mathcal{O}(\\varepsilon^{1-\\delta-2\\kappa})$ term.\nThis finishes the estimate on the error. Setting $2\\kappa=\\delta$, we have established:\n\\begin{theorem}\\label{thm:final} Let \n\t $A$ be a solution of the amplitude equation \\eqref{e:GL}\n\t on $[0,T_0]$ such that there is a $\\varrho>2$ so that $A(0)\\in W^{1,p}_\\varrho$ for all $p>1$.
\n\t Let $u$ be the solution to the Swift-Hohenberg equation~\\eqref{e:SH} and $u_A$ the approximation built through $A$, which is defined in \\eqref{def:uA}.\n\t \n\tThen for all $\\delta>0$, $q>0$ there exist constants $C$ and $C_{q,\\delta}$ such that\n\t\\[\n\t\t\\mathbb{P}(\\sup_{[0,T_0\\varepsilon^{-2}]}\\|u-u_A\\|_{L^2_{\\varrho,\\varepsilon}}\\leqslant C\\|u(0)-u_A(0)\\|_{L^2_{\\varrho,\\varepsilon}} + C\\varepsilon^{1-2\\delta})\\geqslant 1-C_{q,\\delta}\\varepsilon^q,\n\t\\]\n\twhere the weight $ w_{\\varrho,\\varepsilon}(x) = (1+|\\varepsilon x|^2)^{-\\varrho\/2}$ \n\t(see Definition \\ref{def:weight}) for some $\\varrho>1$.\n\\end{theorem}\n\n\\paragraph{Acknowledgments}\nL.A.B. and D.B. were supported by DFG-funding BL535-9\/2 ``Mehrskalenanalyse stochastischer partieller Differentialgleichungen (SPDEs)'', and would also like to thank the M.O.P.S. program for providing continuous support during the development of this research.\n\n\\bibliographystyle{abbrv}\n\n\\section{Introduction}\nQuantum walks (QWs) are dynamical tools that are used to control the motion of a quantum particle in space and time. Due to their potential applications in quantum information and the physical sciences, they have been at the focus of much research work since their first introduction \\cite{aharonov}. Owing to their quantum mechanical resources, i.e., quantum superposition, quantum interference, and entanglement, QWs hold the promise of new algorithms for computations on quantum computers \\cite{deutsch1992rapid,divincenzo1995quantum,farhi1998quantum, kempe,ambainis2003quantum,computation1,computation2, chandrashekar2010discrete,venegas2012quantum}.
In physics, QWs provide a versatile platform to simulate various physical phenomena, e.g., topological phases \\cite{Kitagawa2010, Kitagawa2012, Kitagawa2012a, Asboth2012, Asboth2013, Asboth2014, Tarasinski2014, Asboth2015, Obuse2015, Cedzich2016BECorspnds, Groh2016, Xiao2017, Zhan2017, Sajid2019}, Anderson localization \\cite{disorder1, Ahlbrecht2011, Ahlbrecht2011a, disorder2, disorder3}, Bloch oscillations \\cite{Genske2013, Cedzich2013, Arnault2020}, molecular binding \\cite{Ahlbrecht2012, Lahini2012, Krapivsky2015}, and the Hofstadter spectrum \\cite{SajidThesis, Cedzich2020}, to name just a few. Due to their broad spectrum of applications, QWs have been realized in experiments using different physical systems, e.g., neutral atoms trapped in optical lattices \\cite{karski, robens}, trapped ions on a line \\cite{exp2, Xue2009, zahringer}, photons in free space \\cite{broome, schreiber}, correlated photons on continuously evanescently coupled waveguides \\cite{exp1}, and integrated photonics \\cite{sansoni}.\n\nQWs exhibit features different from their classical counterparts owing to their quantum mechanical resources.\nIn a quantum walk (QW) a quantum particle, which can exist in a superposition of several quantum states, moves to explore several paths simultaneously. \nQuantum interference takes place when different trajectories cross each other. As a result the probability distribution of a QW is strikingly different from that of a classical random walk.
For example, the variance of a quantum walk grows quadratically with the number of steps of the walk, compared to the linear growth of a classical random walk \\cite{kempe,venegas2012quantum}.\nQWs have shown speedups in comparison to their classical counterparts and, hence, are useful tools to design new fast algorithms for computation on a quantum computer \\cite{ambainis2003quantum,search1}, and to simulate and control certain quantum computational tasks \\cite{chandrashekar2010discrete}.\nThis has sparked great interest in engineering different types of QWs and investigating their properties in different settings, in the context of controlling and manipulating a desired quantum state for quantum computation and simulation.\nIn this vein, QWs have been investigated extensively, which has resulted in the exploration of various types of walks. This includes QWs with decoherence \\cite{decoherent,Romanelli2005,Kendon,Zhang2013,albertinDecoherence2014}, QWs with a time-dependent coin \\cite{Banuls2006,Xue2015,Cedzich2016,panahiyan,Katayama2020}, QWs with a position-dependent coin \\cite{Wojcik2004,Uzma2020}, and QWs with different types of phase defects \\cite{zhang2014one,Zhang2014TW0,Farooq2020}.\n\nIn this paper we engineer QW protocols (one standard QW protocol and one split-step protocol) which imprint a time and spin-dependent phase shift (TSDPS) onto the wave function of a quantum particle undergoing a QW on a one-dimensional (1D) lattice. We draw inspiration from previous works \\cite{Xue2015,Cedzich2016,panahiyan,Katayama2020,zhang2014one,Zhang2014TW0,Farooq2020,Cedzich2019}, where a desired time evolution of a QW is achieved either by making the coin parameter time or position dependent, or by introducing different types of spatial phase defects into the wavefunction of the quantum particle.
Our engineered QWs share common features, e.g., complete and partial revivals in the probability distribution, with the QWs investigated in \\cite{Xue2015,Cedzich2016,Katayama2020,Farooq2020}. We use the phase factor ($\\phi$) as a control knob to manipulate the evolution of a QW. By numerically computing the probability distribution $P(x,n)$ and the standard deviation $\\sigma(n)$ of the walk, and the probability of the particle to return to its initial position $P(x=x_{\\text{i}},n)$, as functions of the number of steps of the walk, we show revivals in the evolution of QWs with TSDPS.\nFor rational values of the phase factor, i.e., $\\phi\/2\\pi=p\/q$, where $p$ and $q$ are mutually coprime integers, periodic revivals occur in the QW driven by the standard protocol, where the period of the revivals depends on the denominator $q$. In this case the quantum particle takes a finite excursion in the lattice and then comes back to its initial position. \nFor an irrational $\\phi\/2\\pi$, the QW shows partial revivals with irregular periods, and the walker remains localized in a small region of the lattice, hence showing no transport. The periods of the partial revivals in this case are not regular due to the incommensurability of $\\phi\/2\\pi$ with the step size of the QW. \nIn the case of a QW driven by the split-step protocol, partial revivals occur for rational values of $\\phi\/2\\pi$, with revival periods twice those of the standard QW.\n\nIn experiments, imperfections are inevitable and, hence, imprinting an exact rational or irrational $\\phi\/2\\pi$ is challenging. From this perspective we investigate the robustness of revivals against imperfections in the imprinted phase factor. We show with our numerical results that in the presence of linear random noise (random fluctuations) in the imprinted phase factor, signatures of revivals persist for small values of the noise parameter in the case of $\\phi\/2\\pi=p\/q$.
Increasing the strength of the noise parameter results in the suppression of revivals for both rational and irrational $\\phi\/2\\pi$. In this case the quantum particle starts to spread out, showing quantum transport.\n\nThe rest of the paper is organized as follows. In Sec.~\\ref{QW_TSDPS} we introduce our system, the standard QW protocol, the coin operator, and the shift operator with TSDPS. We present our numerical results for revivals and partial revivals in the evolution of the QW driven by the standard protocol for rational and irrational values of $\\phi\/2\\pi$, respectively. In Sec.~\\ref{SPLIT_STEP} we introduce the split-step protocol with TSDPS, and investigate revivals in the probability distribution for rational values of $\\phi\/2\\pi$. Section~\\ref{QW_TSDPS_Noise} presents the effects of random fluctuations in the phase factor on revivals in the probability distribution of the QW driven by the standard protocol. We summarize our results and conclude with a brief outlook in Sec.~\\ref{conc}.\n\n\\section{Quantum Walks with a Time and Spin-dependent Phase Shift} \\label{QW_TSDPS}\nWe consider a single quantum particle (also called a walker) with two internal degrees of freedom undergoing a QW on a 1D lattice. The internal states of the particle (also called spin states due to their analogy with those of a spinor particle) are represented by basis vectors $\\{\\ket{s}:\\ s\\in \\{\\uparrow, \\downarrow\\}\\}$ which span a two-dimensional Hilbert space $\\mathcal{H}^s$. The position states of the walker are represented by the lattice coordinates $ x$ with basis vectors $\\{ \\ket{x}: x \\in \\mathbb{Z} \\}$ spanning a 1D Hilbert space $\\mathcal{H}^x$. The quantum particle undergoing a quantum walk resides in the Hilbert space that is the tensor product of the two Hilbert spaces, i.e., $\\mathcal{H}^s \\otimes \\mathcal{H}^x$.
For simplicity, we use dimensionless units by assuming the lattice constant and the time duration of a single step of the QW to be equal to 1.\n\nThe 1D standard protocol of the QW with TSDPS consists of a set of unitary operators. We call this protocol the ``walk operator'', which is defined as\n\\begin{align}\n\\label{eq:_1}\n\\hat{W}_{\\phi}(n) = \\hat{S}_x(\\phi n) \\, \\hat{C},\n\\end{align}\nwhere $\\hat{C}$ is known as the coin operator \nand $\\hat{S}_x(\\phi n)$ as the spin-dependent shift operator. The coin operator acts on the internal states of the walker and rotates its spin state in the two-dimensional Hilbert space $\\mathcal{H}^s$. In this work we employ the so-called Hadamard coin, which is defined as\n\\begin{align}\n\\label{eq:_coin}\n\\hat{C} = \\frac{1}{\\sqrt{2}} \\begin{pmatrix} 1 & 1 \\\\ 1 & -1 \\end{pmatrix} \\otimes \\sum_x \\ket{x}\\bra{x}.\n\\end{align}\nThe spin-dependent shift operator $\\hat{S}_x(\\phi n)$ translates the walker by one lattice site to the right or to the left depending on its internal state, and imprints a step and spin-dependent phase shift $\\pm\\phi n$ on the walker's wavefunction. The shift operator is defined as\n\\begin{align} \\label{eq:shift-x}\n \\hat{S}_x (\\phi n) = & \\sum_{x} \n \\Big[ \\exp\\big[i \\phi n\\big] \\ket{\\uparrow}\\bra{\\uparrow} \\otimes \\ket{x+1}\\bra{x} \\nonumber \\\\ & \n + \\exp\\big[ - i \\phi n\\big] \\ket{\\downarrow}\\bra{\\downarrow} \\otimes \\ket{x-1}\\bra{x} \\Big],\n\\end{align} \nwhich shifts the walker in the spin-up (spin-down) state to the right (left). A single application of the walk operator given in Eq.~(\\ref{eq:_1}) to the state of the walker constitutes one step of the QW.\nThe evolution of the walk is obtained by applying the walk operator repeatedly to the initial state $\\ket{\\Psi_{\\text{i}}}$ of the walker.
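As a concrete numerical illustration of one step of the walk, the state can be stored as a complex array of amplitudes $\psi[s,x]$, to which the Hadamard coin and the phase-imprinting shift are applied in turn. The sketch below uses an ad hoc lattice size and periodic boundaries (both assumptions for illustration only, not tied to any particular experimental implementation):

```python
import numpy as np

def step(psi, phi, n):
    """One step of the walk operator: Hadamard coin, then the spin-dependent
    shift imprinting the time and spin-dependent phase exp(+-i phi n).
    psi has shape (2, L): row 0 = spin up, row 1 = spin down."""
    up, down = psi
    coined_up = (up + down) / np.sqrt(2)      # Hadamard coin
    coined_down = (up - down) / np.sqrt(2)
    out = np.empty_like(psi)
    out[0] = np.exp(1j * phi * n) * np.roll(coined_up, 1)      # spin up moves right
    out[1] = np.exp(-1j * phi * n) * np.roll(coined_down, -1)  # spin down moves left
    return out

L = 64                                  # ad hoc lattice size (periodic boundaries)
psi = np.zeros((2, L), dtype=complex)
psi[0, L // 2] = 1.0                    # walker at the centre, spin up

psi = step(psi, phi=2 * np.pi / 100, n=1)
prob = np.abs(psi) ** 2
assert np.isclose(prob.sum(), 1.0)                 # the step is unitary
assert np.isclose(prob[:, L // 2 + 1].sum(), 0.5)  # half of the weight one site right
assert np.isclose(prob[:, L // 2 - 1].sum(), 0.5)  # half of the weight one site left
```

Here `np.roll` implements the shift with periodic boundary conditions, which is harmless as long as the wavepacket never reaches the lattice edge; the imprinted phases drop out of the single-step probabilities and only become relevant through interference over many steps.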
After $n$ steps of the walk (where $n\\in \\mathbb{N}$) the final state of the walker can be written as\n\\begin{equation} \\label{eq:final-state}\n\\ket{\\Psi_n(\\phi)}=\\hat{W}_{\\phi}(n)\\cdots\\hat{W}_{\\phi}(2)\\,\\hat{W}_{\\phi}(1) \\ket{\\Psi_{\\text{i}}}.\n\\end{equation}\nFor a fixed value of $\\phi$ the factor $n$ ensures the time dependence of the phase factor that is imprinted on the wavefunction of the walker at each step of the walk. The spatial probability distribution $P(x,n)$ is obtained by tracing out the coin degrees of freedom, i.e.,\n\\begin{equation} \\label{eq:prob-dist}\nP(x,n)=\\sum_{s\\in\\{\\uparrow,\\downarrow\\}} |\\braket{s,x|\\Psi_n(\\phi)}|^2.\n\\end{equation}\nFor rational values of the phase factor, i.e., $\\phi\/2\\pi=p\/q$, the walk operator $\\hat{W}_{\\phi}(n)$ is periodic, i.e., $\\hat{W}_{\\phi}(n+qr) = \\hat{W}_{\\phi}(n)$ for all $r\\in\\mathbb{N}$. As a result the evolution of the walk (and hence, $P(x,n)$) is also periodic \\cite{Cedzich2016}, i.e., $P(x,n+qr) =P(x,n)$. To investigate revivals in the evolution of the QW we compute the return probability $P(x=x_{\\text{i}},n)$ of the walker to its initial position $x_{\\text{i}}$, and the standard deviation $\\sigma(n)$ of the walk. After $n$ steps of the QW, $P(x=x_{\\text{i}},n)$ and $\\sigma(n)$ are obtained using the following expressions,\n\\begin{equation} \\label{eq:prob-init-state}\nP(x=x_{\\text{i}},n)=\\sum_{s\\in\\{\\uparrow,\\downarrow\\}} |\\braket{s,x_{\\text{i}}|\\Psi_n(\\phi)}|^2,\n\\end{equation}\n\\begin{equation} \\label{eq:standard-deviation}\n\\sigma(n)= \\sqrt{ \\langle x^2 \\rangle - \\langle x \\rangle^2}.\n\\end{equation}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=170mm]{TDPD2PiBy200S_100_49ym.eps} \\\\\n \\includegraphics[width=170mm]{TDPD2PiBy200S_100_49AntSym1.eps} \\\\\n \\includegraphics[width=170mm]{Simga_RtrnProb_2Piby200_100_49B.eps}\n\\caption{(First two rows).
Probability distribution of the QW with TSDPS in the position-time plane for rational values of $\\phi\/2\\pi$ (red, white, and blue colors indicate the maximum ($1.0$), moderate ($\\sim0.1$), and minimum ($\\sim0$) probabilities, respectively).\nFirst row: the evolution of the walk with a quantum particle initially prepared in the initial state $\\Psi_{i1}$. The values of $\\phi\/2\\pi$ are indicated in the inset.\nSecond row: the evolution of the walk with a quantum particle initially prepared in the initial state $\\Psi_{i2}$. The values of $\\phi\/2\\pi$ remain the same in a given column.\nThe difference in the probability distribution for the two initial states $\\Psi_{i1}$ and $\\Psi_{i2}$ can be noticed by a careful inspection.\nThird row: the evolution of the return probability of the walker to its initial position (solid curve), and the standard deviation (dotted curve) of the walk up to $200$ steps of the walk. \nThe behaviors of $P(x=x_{\\text{i}}, n)$ and $\\sigma(n)$ are the same for both $\\Psi_{i1}$ and $\\Psi_{i2}$, and hence the results are shown only for the first choice of the initial state $\\Psi_{i1}$.\n(a) For $\\phi\/2\\pi=1\/200$ and (b) $\\phi\/2\\pi=1\/100$ complete revivals occur after $n=q=200$ and $n=q=100$ steps of the walk, respectively. The return probability is equal to $1$, as can be clearly seen from the corresponding subfigures (g) and (h) in the third row. (c) For $\\phi\/2\\pi=1\/49$ there occurs a partial revival when $n$ is an odd integral multiple of $q=49$ (i.e., $P(x=x_{\\text{i}}, n)<1$), and a complete revival occurs when $n$ is an even integral multiple of $q=49$. The occurrence of revivals, their periodicities, and the boundedness of the standard deviation are independent of the initial state of the walker.}\n\\label{fig:a}\n\\end{figure*}\nHere $\\langle x \\rangle$ represents the average or expected value of the walker position $x$.
Like $P(x,n)$, for $\\phi\/2\\pi=p\/q$ the return probability and the standard deviation of the walk are also periodic, i.e., $P(x=x_{\\text{i}},n+qr)=P(x=x_{\\text{i}},n)$ and $\\sigma(n+qr)=\\sigma(n)$. If a walker initially spreads out from its initial position $x=x_{\\text{i}}$ and after a certain number $n$ of steps of the walk the return probability $P(x=x_{\\text{i}}, n)$ becomes equal to $1$ (correspondingly, $\\sigma(n)$ becomes zero), we call this a complete revival. Similarly, if the walker returns to its initial position $x=x_{\\text{i}}$ but with $P(x=x_{\\text{i}},n)<1$ (correspondingly, $\\sigma(n)>0$), we call it a partial revival.\nIn the following sections we will investigate revivals in the probability distribution of a QW with TSDPS driven by the standard protocol (given in Eq.~(\\ref{eq:_1})), and in a QW with TSDPS driven by the split-step protocol (given in Eq.~(\\ref{eq:_SS_W_operator})) for various values of $\\phi\/2\\pi$.\n\n\n\\subsection{Standard protocol with TSDPS for a rational phase factor} \\label{QW_TSDPS_Rational}\nIn this section we investigate the evolution of a QW driven by the standard protocol of Eq.~(\\ref{eq:_1}) for rational values of the phase factor, i.e., $\\phi\/2\\pi=p\/q$, where $p$ and $q$ are mutually coprime integers. \nWe consider a walker initially residing at the origin of a 1D lattice, and consider two possibilities for the internal state of the walker, i.e., an equal superposition of the two spin states, and a spin-up state only. The two initial states of the walker can be written as\n\\begin{align} \\label{eq:initial-state}\n\\begin{aligned}\n\\ket{\\Psi_{i1}} &=\\frac{1}{\\sqrt{2}} \\Big( \\ket{\\uparrow} + i \\ket{\\downarrow} \\Big) \\otimes \\ket{0}, \\\\\n\\ket{\\Psi_{i2}} & = \\ket{\\uparrow} \\otimes \\ket{0}.\n\\end{aligned}\n\\end{align}\nWe evolve these initial states by repeatedly applying the walk operator given in Eq.~(\\ref{eq:_1}) for a large number of steps.
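A minimal sketch of this evolution loop, computing $P(x,n)$, the return probability $P(x=x_{\text{i}},n)$, and $\sigma(n)$ for the initial state $\Psi_{i1}$, can be written as follows (lattice size and number of steps are ad hoc choices for speed; the code is illustrative only):

```python
import numpy as np

L, steps = 128, 30                 # ad hoc lattice size and number of steps
phi = 2 * np.pi / 100              # rational phase factor, phi/2pi = 1/100
x = np.arange(L) - L // 2          # lattice coordinates; the walker starts at x = 0

# initial state Psi_i1 = (|up> + i |down>)/sqrt(2) localized at the origin
psi = np.zeros((2, L), dtype=complex)
psi[0, L // 2] = 1 / np.sqrt(2)
psi[1, L // 2] = 1j / np.sqrt(2)

sigma, p_return = [], []
for n in range(1, steps + 1):
    up, down = psi
    cu, cd = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)   # Hadamard coin
    psi = np.array([np.exp(1j * phi * n) * np.roll(cu, 1),        # shift + phase
                    np.exp(-1j * phi * n) * np.roll(cd, -1)])
    P = (np.abs(psi) ** 2).sum(axis=0)       # P(x, n): coin degrees traced out
    assert np.isclose(P.sum(), 1.0)          # norm is conserved at every step
    p_return.append(P[L // 2])               # return probability P(x = x_i, n)
    sigma.append(np.sqrt((P * x ** 2).sum() - (P * x).sum() ** 2))

assert np.isclose(sigma[0], 1.0)              # after one step the walker sits at x = +-1
assert all(p < 1e-12 for p in p_return[::2])  # P(x_i, n) = 0 for odd n (parity)
```

The last assertion illustrates the parity argument used below: since every step moves the walker by exactly one site, the probability at the initial position vanishes after any odd number of steps.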
To demonstrate revivals in the walk we calculate the probability distribution $P(x,n)$ of the walk in the position-time plane, the probability of the walker to return to its initial position, $P(x=x_{\\text{i}}, n)$, and the standard deviation $\\sigma(n)$ of the walk using Eqs.~(\\ref{eq:prob-dist}), (\\ref{eq:prob-init-state}), and (\\ref{eq:standard-deviation}), respectively.\n\nIn Fig.~\\ref{fig:a}, we show our numerically simulated results of the evolution of $P(x,n)$ in the position-time plane for $n=200$ steps of the walk for $\\phi\/2\\pi=1\/200$ (Fig.~\\ref{fig:a}(a)), $\\phi\/2\\pi=1\/100$ (Fig.~\\ref{fig:a}(b)), and $\\phi\/2\\pi=1\/49$ (Fig.~\\ref{fig:a}(c)). The evolution shown in Figs.~\\ref{fig:a}(a)-(c) is computed with the initial state $\\Psi_{i1}$ as defined in Eq.~(\\ref{eq:initial-state}). Revivals in the probability distribution are apparent in all three cases, with the period of revival equal to $n=q$ for even values of $q$ (Fig.~\\ref{fig:a}(a) and (b)) and $n=2q$ for odd values of $q$ (Fig.~\\ref{fig:a}(c)). The difference in the periods of the revivals for the even and odd values of $q$ is due to the fact that after an odd number of steps of the walk the probability of the walker at its initial position is zero \\cite{Cedzich2016}. In the case of odd $q$ the probability distribution shows partial revivals when $n$ is an odd multiple of $q$, and complete revivals when $n$ is an even multiple of $q$. In all three cases, i.e., Figs.~\\ref{fig:a}(a)-(c), the walker initially takes an excursion in a finite region of the 1D lattice when $n<q$ and then returns to its initial position. For the initial state $\\Psi_{i1}$ the probability remains higher in the region $x>0$ (let us call it the lower branch of $P(x,n)$) throughout the evolution of the walk. In the case of $\\Psi_{i2}$ higher probabilities alternate between the upper ($x<0$) and the lower ($x>0$) branches of the probability distribution, i.e., $P(x,n)$ remains higher in a single branch for a certain range of $n$ and then switches to the other branch.
\n\nIn order to give a clear demonstration of complete and partial revivals in the QW with TSDPS, in Figs.~\\ref{fig:a}(g)-(i) we show the evolution of the return probability $P(x=x_{\\text{i}}, n)$ of the walker to its initial position, and the standard deviation $\\sigma(n)$ of the walk. Both initial states $\\Psi_{i1}$ and $\\Psi_{i2}$ show identical behavior of $P(x=x_{\\text{i}}, n)$ and $\\sigma(n)$; we therefore show these results only for the first choice of the initial state, i.e., $\\Psi_{i1}$. The values of the phase factor remain the same in a given column. For $\\phi\/2\\pi=1\/200$ and $\\phi\/2\\pi=1\/100$ complete revivals are clearly visible, i.e., $P(x=x_{\\text{i}}, n)=1$ when $n$ is an integral multiple of $q$. In the case of $\\phi\/2\\pi=1\/49$, there are partial revivals when $n$ is an odd integral multiple of $q$, as $P(x=x_{\\text{i}}, n)<1$, and complete revivals occur when $n$ is an even integral multiple of $q$, as $P(x=x_{\\text{i}}, n)=1$. The extent of the region in which the walker remains bounded during its evolution can be estimated from the standard deviation, as the two are directly related. For a smaller value of the phase factor $\\phi\/2\\pi$ (e.g., $\\phi\/2\\pi=1\/200$) the walker has a large excursion, i.e., $\\sigma(n) \\sim 26 $, and a smaller excursion ($\\sigma(n) \\sim 7 $) for a larger phase factor (e.g., $\\phi\/2\\pi=1\/49$).\nIn general, the TSDPS inhibits the ballistic expansion of the QW and confines the walker to a finite region of the lattice.\nThe extent of the bounded region decreases with increasing $\\phi\/2\\pi$.
This behaviour is apparent from both the probability distribution and the standard deviation of the walk.\n\\begin{figure*}[t]\n \\centering \n \\includegraphics[width=57mm]{TDPD_Golden_Ratio_Sym2.eps}\n \\includegraphics[width=57mm]{RtrnProb_Golden_Ratio.eps} \n \\includegraphics[width=57mm]{Sigma_Golden_Ratio.eps} \\\\ \n\t\\includegraphics[width=57mm]{TDPD_Golden_Ratio_AntSym1.eps} \n \\caption{Evolution of the QW with TSDPS and with the irrational phase factor $\\phi\/2\\pi = (\\sqrt{5}-1)\/2$ (red, white, and blue colors indicate maximum ($\\sim 1$), moderate ($\\sim 0.1$), and minimum ($\\sim 0$) probabilities, respectively). (a) and (d) show the probability distribution $P(x, n)$ of the walk in the position-time plane for $n=200$ and with the symmetric (asymmetric) initial state $\\Psi_{i1}$ ($\\Psi_{i2}$) given in Eq.~(\\ref{eq:initial-state}). (b) and (c) respectively show the long time evolution of the return probability $P(x=x_{\\text{i}}, n)$ and the standard deviation of the walker with the initial state $\\Psi_{i1}$ for $n=1000$. From (a) and (d) it is clear that the probability distribution is no longer periodic for the irrational value of $\\phi\/2\\pi$. The walker is completely localized in a finite region of the 1D lattice, which is also apparent from (c), which shows that $\\sigma(n)$ has a finite upper bound, i.e., $\\sigma<3$. The return probability in (b) shows a number of partial revivals with unpredictable periods in the long time evolution of the walk. $P(x=x_{\\text{i}}, n)$ of the walker is bounded away from 0, i.e., $P(x=x_{\\text{i}}, n) \\ge 0.32$. \n By carefully inspecting (a) and (d) one can notice that the probability distributions for the two choices of the initial states are not identical. However, the return probability and the standard deviation show similar behavior for the two choices of the initial states. 
We have, therefore, shown $P(x=x_{\\text{i}}, n)$ and $\\sigma(n)$ for the first choice of the initial state, i.e., $\\Psi_{i1}$. }\n\\label{fig:b}\n\\end{figure*}\n\\subsection{Standard protocol with TSDPS for an irrational phase factor} \\label{QW_TSDPS_Irrational}\nWe now consider the case of an irrational phase factor $\\phi\/2\\pi$. A well-known irrational number is the golden ratio $(\\sqrt{5} - 1)\/2$. The walk operator (given in Eq.~(\\ref{eq:_1})) with the phase factor equal to the golden ratio is no longer periodic, as $(\\sqrt{5} - 1)\/2$ is incommensurate with the number of steps of the walk. However, the probability distribution of the walker still shows a number of partial revivals in its long time evolution. In Fig.~\\ref{fig:b} we show the evolution of the probability distribution $P(x,n)$ for $n=200$ for both choices of the initial states ($\\Psi_{i1}$ and $\\Psi_{i2}$), the evolution of the return probability $P(x=x_{\\text{i}}, n)$ of the walker with initial state $\\Psi_{i1}$ to its initial position for $n=1000$, and the standard deviation $\\sigma(n)$ of the walker with initial state $\\Psi_{i1}$ for $n=1000$. Figure~\\ref{fig:b}(a) shows that the walker remains completely localized in a small region of the lattice throughout its evolution. Figure~\\ref{fig:b}(b) shows a number of revivals with unpredictable periods in the long time evolution of the walk. These revivals are not strictly complete, as the return probability $P(x=x_{\\text{i}}, n)$ is never exactly equal to unity for $n>1$. Note that $P(x=x_{\\text{i}}, n)$ is bounded away from $0$, i.e., $P(x=x_{\\text{i}}, n) \\ge 0.32$ for all $n$, showing that there is a significant probability that the walker remains at its initial position during the evolution of the walk. The reason for this behavior is that the initial state of the walker has a significant overlap with a bound state \\cite{Cedzich2013}. 
Figure~\\ref{fig:b}(c) shows the evolution of the standard deviation of the walk, which has an upper bound, i.e., $\\sigma(n)<3$ for all values of $n$. This shows that the walker takes a very short excursion during the evolution of the walk and remains localized in a small region of the 1D lattice. Figure~\\ref{fig:b}(d) shows the same evolution of the walk as in Fig.~\\ref{fig:b}(a) but with the second choice of the initial state, i.e., $\\Psi_{i2}$. By a careful inspection of Fig.~\\ref{fig:b}(a) and Fig.~\\ref{fig:b}(d) one can notice that the probability distributions for the two initial states are not identical. However, the return probability, the standard deviation, and the localization behaviour of the walk are the same for the two initial states. We therefore show the return probability and the standard deviation for the first initial state, i.e., $\\Psi_{i1}$, only. \n\\section{Split-Step Protocol with TSDPS for a rational phase factor} \\label{SPLIT_STEP}\nWe now investigate revivals in a QW with a TSDPS driven by a split-step protocol. We consider only rational values of the phase factor, i.e., $\\phi\/2\\pi=p\/q$. The walk operator for this protocol is defined as,\n\\begin{align}\n\\label{eq:_SS_W_operator}\n\\hat{W}_{ss, \\phi}(n) = \\hat{S}_{x}^{\\downarrow}(\\phi n) \\, \\hat{C}_2 \\, \\hat{S}_{x}^{\\uparrow}(\\phi n) \\, \\hat{C}_1,\n\\end{align}\n\\begin{figure*}[t]\n \\centering \n \\includegraphics[width=71mm]{TDPD2PiBy100_Split_Step_400Steps.eps}\n \\includegraphics[width=75mm]{Simga_RtrnProb_2Piby100_Split_Step_400Steps.eps} \\\\\n \\includegraphics[width=71mm]{TDPD2PiBy49_Split_Step_400Steps.eps}\n \\includegraphics[width=75mm]{Simga_RtrnProb_2Piby49_Split_Step_400Steps.eps}\n\\caption{Evolution of the split-step QW with TSDPS with rational phase factor $\\phi\/2\\pi$ (red, white, and blue colors indicate maximum ($\\sim 1$), moderate ($\\sim 0.1$), and minimum ($\\sim 0$) probabilities, respectively). 
(a) and (c) show the probability distribution $P(x, n)$ of the walk in the position-time plane for $\\phi\/2\\pi=1\/100$ and $\\phi\/2\\pi=1\/49$, respectively. (b) and (d) show the evolution of the return probability $P(x=x_{\\text{i}}, n)$ (indicated by the blue solid curve) and $\\sigma(n)$ (indicated by the orange dotted curve) corresponding to (a) and (c), respectively. \nIt is clear from the probability distribution that the walker remains bounded in a finite region of the lattice, as was the case for the QW with TSDPS driven by the standard protocol. The return probability remains smaller than unity in both (b) and (d), showing partial revivals. The periods of the partial revivals are double those of the standard-protocol case (compare with Fig.~\\ref{fig:a}). With each successive revival the return probability decreases and the standard deviation increases with the number of steps of the walk. The standard deviation has an upper bound due to the fact that the walker remains bounded during the evolution of the walk.}\n\\label{fig:Split_Step}\n\\end{figure*}\nwhich consists of two coin operators $\\hat{C}_1$ and $\\hat{C}_2$, and two shift operators $\\hat{S}_{x}^{\\uparrow}(\\phi n)$ and $\\hat{S}_{x}^{\\downarrow}(\\phi n)$. We consider both coin operators to be Hadamard coins as defined in Eq.~(\\ref{eq:_coin}).\nThe shift operator $\\hat{S}_{x}^{\\uparrow} (\\phi n)$ shifts only the spin-up state of the walker to the right by a unit length and imprints a TSDPS on its wavefunction, leaving the spin-down state unchanged. 
Mathematically, it is defined as\n\\begin{align} \\label{eq:shift-x-up}\n \\hat{S}_{x}^{\\uparrow} (\\phi n) = & \\sum_{x}\n \\Big[ \\exp\\big[i \\phi n\\big] \\ket{\\uparrow}\\bra{\\uparrow} \\otimes \\ket{x+1}\\bra{x} \\nonumber \\\\ & \n + \\ket{\\downarrow}\\bra{\\downarrow} \\otimes \\ket{x}\\bra{x}\\Big].\n\\end{align} \nSimilarly, the operator $\\hat{S}_{x}^{\\downarrow}(\\phi n)$ shifts only the spin-down state of the walker to the left by a unit length and imprints a TSDPS on its wavefunction, leaving the spin-up state unchanged. It is defined as\n\\begin{align} \\label{eq:shift-x-down}\n \\hat{S}_{x}^{\\downarrow} (\\phi n) = & \\sum_{x}\n \\Big[ \\ket{\\uparrow}\\bra{\\uparrow} \\otimes \\ket{x}\\bra{x} \\nonumber \\\\ & \n + \\exp\\big[- i \\phi n\\big] \\ket{\\downarrow}\\bra{\\downarrow} \\otimes \\ket{x-1}\\bra{x}\\Big].\n\\end{align} \nTo investigate revivals in the QW driven by the split-step protocol with TSDPS, we consider a walker in the initial state $\\Psi_{i1}$ defined in Eq.~(\\ref{eq:initial-state}), and evolve it through the walk operator defined in Eq.~(\\ref{eq:_SS_W_operator}) for a large number of steps. We numerically compute the probability distribution $P(x,n)$, the return probability $P(x=x_{\\text{i}},n)$ to the initial position, and the standard deviation $\\sigma(n)$ of the walker, as for the QW driven by the standard protocol.\n\nIn Fig.~\\ref{fig:Split_Step} we show our numerical results for $P(x,n)$, $P(x=x_{\\text{i}},n)$, and $\\sigma(n)$ of the split-step QW with TSDPS. The evolution is carried out with the initial state $\\Psi_{i1}$ for $n=400$ steps of the walk. Figures~\\ref{fig:Split_Step}(a) and (c) show $P(x,n)$ in the position-time plane for $\\phi\/2\\pi=1\/100$ and $\\phi\/2\\pi=1\/49$, respectively. The probability distributions show periodic behavior. 
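The split-step protocol above translates directly into code. The following sketch implements $\hat{W}_{ss,\phi}(n) = \hat{S}_{x}^{\downarrow}(\phi n)\,\hat{C}_2\,\hat{S}_{x}^{\uparrow}(\phi n)\,\hat{C}_1$ with both coins Hadamard, as in the text; the symmetric initial state localized at $x=0$ is an assumption, since Eq.~(\ref{eq:initial-state}) is not reproduced in this excerpt:

```python
import numpy as np

def split_step_qw(steps, phi):
    """Split-step QW with TSDPS: W_ss(n) = S_down(phi n) C2 S_up(phi n) C1.

    Both coins are Hadamard; the initial state (|up> + i|down>)/sqrt(2)
    at x = 0 is an assumption of this sketch.
    """
    n_sites = 2 * steps + 1              # large enough to avoid wrap-around
    x0 = n_sites // 2
    psi = np.zeros((2, n_sites), dtype=complex)
    psi[0, x0] = 1 / np.sqrt(2)
    psi[1, x0] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    x = np.arange(n_sites) - x0
    P_return, sigma = [], []
    for n in range(1, steps + 1):
        psi = H @ psi                                         # coin C1
        psi[0] = np.roll(psi[0], 1) * np.exp(1j * phi * n)    # S_up: up-state right
        psi = H @ psi                                         # coin C2
        psi[1] = np.roll(psi[1], -1) * np.exp(-1j * phi * n)  # S_down: down-state left
        P = np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2
        P_return.append(P[x0])
        var = max(P @ x ** 2 - (P @ x) ** 2, 0.0)             # guard round-off
        sigma.append(np.sqrt(var))
    return np.array(P_return), np.array(sigma)

P_ret, sig = split_step_qw(steps=200, phi=2 * np.pi / 100)
```

Note that, unlike the standard protocol, a full split-step can leave the walker in place, so the return probability need not vanish at odd $n$.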
with $\\alpha_{i1} \\geq \\cdots \\geq \\alpha_{ir} > 0$, $\\alpha_{i,r+1} = \\cdots =\n\\alpha_{in} = 0$ and $r=1,\\ldots,n$ where $r=r(i)$.\nWe set\n\\[\nU_{i \\eta_i} = [s_{i1},\\ldots,s_{i\\eta_i}],\\qquad V_{i \\eta_i} = [d_{i1},\\ldots, d_{i\\eta_i}]\\qquad\n\\mbox{and}\\qquad \\Sigma_{i \\eta_i} = \\mbox{diag}(\\alpha_{i1},\\ldots,\\alpha_{i\\eta_i}),\n\\]\nwhere $U_{i\\eta_i}\\in {\\mathbb R}^{m\\times \\eta_i}$, $V_{i\\eta_i}\\in {\\mathbb R}^{n\\times \\eta_i}$ and $\\Sigma_{i\\eta_i}\\in {\\mathbb R}^{\\eta_i \\times \\eta_i}$.\n Now we def\\\/ine $K_{i\\eta_i} \\in{\\mathbb R}^{m\\times n}$ and ${\\mathcal K}_{i\\eta_i}:L^{2}(\\Omega,{\\mathbb R}^{n})\\rightarrow L^{2}(\\Omega,{\\mathbb R}^{m})$ by\n\\begin{gather}\n\\label{trsvd}\nK_{i\\eta_i} = U_{i\\eta_i}\\Sigma_{i \\eta_i}V_{i\\eta_i}^{T}\\qquad\\mbox{and}\\qquad [{\\mathcal K}_{i\\eta_i}({\\boldsymbol w}_i)](\\omega) = K_{i\\eta_i}[{\\boldsymbol w}_i(\\omega)],\n\\end{gather}\nrespectively, for any ${\\boldsymbol w}_i\\in L^{2}(\\Omega,{\\mathbb R}^{n})$.\n\n\\begin{theorem}\n\\label{sol1}\nLet ${\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p$ be determined by Lemma~{\\rm \\ref{lemma1}}.\nThen the vector $f^0$ and operators ${\\mathcal F}_1^0, \\ldots, {\\mathcal F}_p^0$,\nsatisfying \\eqref{min1}--\\eqref{con1}, are determined by\n\\begin{gather}\n\\label{sol-f0}\nf^0 = E[{\\boldsymbol x}] - \\sum_{k=1}^p F^0_k E[{\\boldsymbol v}_k]\\qquad \\mbox{and}\\qquad\n{\\mathcal F}_1^0 = {\\mathcal K}_{1\\eta_1}, \\quad \\ldots, \\quad {\\mathcal F}_p^0 = {\\mathcal K}_{p\\eta_p}.\n\\end{gather}\nThe accuracy associated with transform ${\\mathcal T}_p^0$, determined by \\eqref{th1}\n and \\eqref{sol-f0}, is given by\n\\begin{gather}\n\\label{er1}\nE[\\|{\\boldsymbol x} - {\\mathcal T}_p^0({\\boldsymbol y})\\|^2] =\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{k=1}^p \\sum_{j=1}^{\\eta_k} \\alpha^2_{kj}.\n\\end{gather}\n\\end{theorem}\n\n\\begin{proof} The functional $J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p)$ is written 
as\n\\begin{gather}\nJ(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) = \\mbox{tr}\\Bigg[E_{xx}\n- E[{\\boldsymbol x}]f^T - \\sum_{i=1}^p E_{xv_i}F_i^T - f E[{\\boldsymbol x}^T] + ff^T\n+ f\\sum_{i=1}^p E[{\\boldsymbol v}_i^T]F_i^T\\nonumber \\\\\n\\phantom{J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) =}{}\n- \\sum_{i=1}^p F_i E_{v_ix}\n+ \\sum_{i=1}^p F_iE[{\\boldsymbol v}_i]f^T + E\\Bigg(\\sum_{i=1}^p {\\mathcal F}_i\n({\\boldsymbol v}_i) \\Bigg[\\sum_{k=1}^p {\\mathcal F}_k ({\\boldsymbol v}_k)\\Bigg]^T\\Bigg)\\Bigg]. \\label{jqa}\n\\end{gather}\nWe recall (see Section \\ref{summ}) that here and below, $F_i$\nis def\\\/ined by $[{\\mathcal F}_i({\\boldsymbol v}_i)](\\omega) = F_i[{\\boldsymbol v}_i(\\omega)]$\nso that, for example, $E[{\\mathcal F}_k({\\boldsymbol v}_k){\\boldsymbol x}^T] = F_k E_{v_k x}$.\nIn other words, the right hand side in (\\ref{jqa}) is a function of $f$, ${\\mathcal F}_1, \\ldots ,{\\mathcal F}_p$.\n\nLet us show that $J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p)$ can be represented as\n\\begin{gather}\n\\label{j012}\nJ(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) = J_0 + J_1 + J_2,\n\\end{gather}\nwhere\n\\begin{gather}\n\\label{j0}\nJ_0 = \\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{i=1}^p \\|{\\mathbb E}_{xv_i}\\|^2,\n\\\\\n\\label{j12}\nJ_1 = \\|f - E[{\\boldsymbol x}] + \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i]\\|^2\\qquad\\mbox{and}\\qquad\nJ_2 =\\sum_{i=1}^p \\|F_i - {\\mathbb E}_{xv_i}\\|^2.\n\\end{gather}\nIndeed, $J_1$ and $J_2$ are rewritten as follows\n\\begin{gather}\nJ_1 = \\mbox{tr} \\Bigg(ff^T - fE[{\\boldsymbol x}^T] + \\sum_{i=1}^p f E[{\\boldsymbol v}_i^T]F_i^T + E[{\\boldsymbol x}] E[{\\boldsymbol x}^T]\n - E[{\\boldsymbol x}]f^T - \\sum_{i=1}^p E[{\\boldsymbol x}] E[{\\boldsymbol v}_i^T] F_i^T \\nonumber\\\\\n\\phantom{J_1 =}{} + \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i]f^T\n- \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] E[{\\boldsymbol x}^T] + \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] \\sum_{k=1}^p E[{\\boldsymbol v}_k^T] 
F_k^T\\Bigg)\\label{j11}\n\\end{gather}\nand\n\\begin{gather}\n\\label{j2}\nJ_2 = \\sum_{i=1}^p \\mbox{tr}\\, (F_i - {\\mathbb E}_{xv_i})(F_i^T - {\\mathbb E}_{v_i x})\n= \\sum_{i=1}^p \\mbox{tr} \\,(F_i F_i^T - F_i {\\mathbb E}_{v_i x} - {\\mathbb E}_{xv_i}F_i^T + {\\mathbb E}_{xv_i} {\\mathbb E}_{v_ix}).\n\\end{gather}\nIn (\\ref{j2}), $\\sum\\limits_{i=1}^p \\mbox{tr}\\, (F_i F_i^T)$ can be represented in the form\n\\begin{gather}\n\\label{aea}\n\\sum_{i=1}^p \\mbox{tr} \\,(F_i F_i^T) = \\mbox{tr}\\Bigg[E\\Bigg(\\sum_{i=1}^p F_i {\\boldsymbol v}_i \\sum_{k=1}^p {\\boldsymbol v}_k^T F_k^T\\Bigg)\n\\Bigg] - \\mbox{tr}\\Bigg(\\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] \\sum_{k=1}^p E[{\\boldsymbol v}_k^T] F_k^T\\Bigg)\n\\end{gather}\nbecause\n\\begin{gather}\n\\label{v-ort}\nE[{\\boldsymbol v}_i {\\boldsymbol v}_k^T] - E[{\\boldsymbol v}_i] E[ {\\boldsymbol v}_k^T] = \\left \\{ \\begin{array}{@{}cc}\n{\\mathbb O}, & i\\neq k,\\\\\n I, & i=k \\end{array} \\right.\n\\end{gather}\n due to the orthonormality of vectors ${\\boldsymbol v}_1, \\ldots, {\\boldsymbol v}_p$.\n\nThen\n\\begin{gather}\nJ_0 + J_1 + J_2 = \\mbox{tr}(E_{xx} - E[{\\boldsymbol x}]E[{\\boldsymbol x}^T]) - \\sum_{i=1}^p \\mbox{tr} [{\\mathbb E}_{xv_i}{\\mathbb E}_{v_ix}]\\label{jj012}\\\\\n\\phantom{J_0 + J_1 + J_2 =}{} + \\mbox{tr} \\Bigg(ff^T - fE[{\\boldsymbol x}^T] + \\sum_{i=1}^p f E[{\\boldsymbol v}_i^T]F_i^T\n+ E[{\\boldsymbol x}] E[{\\boldsymbol x}^T] - E[{\\boldsymbol x}]f^T \\nonumber\\\\\n\\phantom{J_0 + J_1 + J_2 =}{} - \\sum_{i=1}^p E[{\\boldsymbol x}] E[{\\boldsymbol v}_i^T] F_i^T + \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i]f^T\n - \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] E[{\\boldsymbol x}^T] \\nonumber\\\\\n\\phantom{J_0 + J_1 + J_2 =}{} + \\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] \\sum_{k=1}^p E[{\\boldsymbol v}_k^T] F_k^T\\Bigg)\n + \\mbox{tr}\\Bigg[E\\Bigg(\\sum_{i=1}^p F_i {\\boldsymbol v}_i \\sum_{k=1}^p {\\boldsymbol v}_k^T F_k^T\\Bigg)\\Bigg] \\nonumber\\\\\n\\phantom{J_0 + J_1 + J_2 =}{} 
- \\mbox{tr}\\Bigg(\\sum_{i=1}^p F_i E[{\\boldsymbol v}_i] \\sum_{k=1}^p E[{\\boldsymbol v}_k^T] F_k^T\\Bigg)\\nonumber\\\\\n\\phantom{J_0 + J_1 + J_2 =}{} - \\sum_{i=1}^p \\mbox{tr} (F_i E_{v_i x} - F_i E[{\\boldsymbol v}_i] E[{\\boldsymbol x}^T] + E_{xv_i}F_i^T-E[{\\boldsymbol x}]E[{\\boldsymbol v}_i^T]F_i^T - {\\mathbb E}_{xv_i} {\\mathbb E}_{v_ix}) \\nonumber\\\\\n\\phantom{J_0 + J_1 + J_2}{} = J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p).\\label{nj012}\n\\end{gather}\nHence, (\\ref{j012}) is true. Therefore,\n\\begin{gather}\n J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) =\n\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 - \\sum_{k=1}^p \\|{\\mathbb E}_{xv_k}\\|^2\n+ \\|f - E[{\\boldsymbol x}] + \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k]\\|^2 \\nonumber \\\\\n\\phantom{J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) =}{}+ \\sum_{k=1}^p \\|F_k - {\\mathbb E}_{xv_k}\\|^2.\\label{proof2}\n\\end{gather}\nIt follows from (\\ref{proof2}) that the constrained minimum\n(\\ref{min1})--(\\ref{con1}) is achieved if $f=f^0$ with $f^0$ given by (\\ref{sol-f0}),\nand if $F_k^0$ is such that\n\\begin{gather}\n\\label{min-k}\nJ_k(F_k^0) = \\min_{F_k} J_k(F_k) \\qquad \\mbox{subject to} \\quad \\mbox{rank}\\, (F_k) = \\eta_k,\n\\end{gather}\nwhere $J_k(F_k) = \\|F_k - {\\mathbb E}_{xv_k}\\|^2$.\nThe solution to (\\ref{min-k}) is given \\cite{gol1} by\n\\begin{gather}\n\\label{proof3}\nF_k^0 = K_{k\\eta_k}.\n\\end{gather}\nThen\n\\[\nE[\\|{\\boldsymbol x} - {\\mathcal T}_p^0({\\boldsymbol y})\\|^2]\n=\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{k=1}^p (\\|{\\mathbb E}_{xv_k}\\|^2 - \\|K_{k\\eta_k} - {\\mathbb E}_{xv_k}\\|^2).\n\\]\nHere \\cite{gol1},\n\\begin{gather}\n\\label{kr}\n \\|{\\mathbb E}_{xv_k}\\|^2 = \\sum_{j=1}^{r} \\alpha^2_{kj}\\qquad\\mbox{and}\\qquad \\|K_{k\\eta_k} - {\\mathbb E}_{xv_k}\\|^2 =\n \\sum_{j=\\eta_k +1}^{r} \\alpha^2_{kj}\n\\end{gather}\nwith $r=r(k)$. Thus, (\\ref{er1}) is true. 
The theorem is proved.\n\\end{proof}\n\n\\begin{corollary}\\label{corollary1}\nLet ${\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p$ be determined by Lemma~{\\rm \\ref{lemma1}}.\nThen the vector $\\hat{f}$ and operators $\\hat{\\mathcal F}_1, \\ldots, \\hat{\\mathcal F}_p$\nsatisfying the unconstrained problem \\eqref{min1}, are determined by\n\\begin{gather}\n\\label{sol-q}\n\\hat{f} = E[{\\boldsymbol x}] - \\sum_{k=1}^p \\hat{F}_k E[{\\boldsymbol v}_k]\\qquad \\mbox{and}\\qquad\n\\hat{\\mathcal F}_1 = {\\mathcal E}_{x v_1}, \\quad \\ldots, \\quad \\hat{\\mathcal F}_p = {\\mathcal E}_{x v_p}\n\\end{gather}\nwith $\\hat{\\mathcal F}_k$ such that $[\\hat{\\mathcal F}_k({\\boldsymbol v}_k)](\\omega) = \\hat{F}_k{\\boldsymbol v}_k(\\omega)$ where\n$\\hat{F}_k\\in{\\mathbb R}^{m\\times n}$ and $k=1,\\ldots,p$.\n\nThe accuracy associated with transform $\\hat{{\\mathcal T}}_p$ given by\n\\begin{gather}\n\\label{fr1}\n\\hat{{\\mathcal T}}_p({\\boldsymbol y})= \\hat{f} + \\sum _{k=1}^p \\hat{\\mathcal F}_k({\\boldsymbol v}_k)\n\\end{gather}\nis such that\n\\begin{gather}\n\\label{er2}\nE[\\|{\\boldsymbol x} - \\hat{{\\mathcal T}}_p({\\boldsymbol y})\\|^2] =\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{k=1}^p \\|{\\mathbb E}_{xv_k}\\|^2.\n\\end{gather}\n\\end{corollary}\n\n\\begin{proof} The proof follows directly from (\\ref{proof2}).\n\\end{proof}\n\n\n\\subsubsection[The case when matrix ${\\mathbb E}_{v_k v_k}$ is not invertible for $k=1,\\ldots,p$]{The case\nwhen matrix $\\boldsymbol{{\\mathbb E}_{v_k v_k}}$ is not invertible for $\\boldsymbol{k=1,\\ldots,p}$}\n\\label{det-fk}\n\nWe write $A_k\\in {\\mathbb R}^{m\\times n}$ for an arbitrary matrix, and\ndef\\\/ine operators ${\\cal A}_k: L^2(\\Omega,{\\mathbb R}^{n})\\rightarrow L^2(\\Omega,{\\mathbb R}^{m})$ and\n${\\mathcal E}_{v_k v_k}, {\\mathcal E}^{\\dag}_{v_k v_k}, ({\\mathcal E}^{1\/2}_{v_k v_k})^{\\dag} :L^{2}(\\Omega,{\\mathbb R}^{n})\\rightarrow\nL^{2}(\\Omega,{\\mathbb R}^{n})$ similarly to those in (\\ref{qk}) 
and (\\ref{ge}).\n\n\nFor the case under consideration (matrix ${\\mathbb E}_{v_k v_k}$ is not invertible),\nwe introduce the SVD of ${\\mathbb E}_{xv_k}({\\mathbb E}^{1\/2}_{v_k v_k})^{\\dag}$,\n\\begin{gather}\n\\label{svd2}\n U_k \\Sigma_k V^{T}_k = {\\mathbb E}_{xv_k}({\\mathbb E}^{1\/2}_{v_k v_k})^{\\dag},\n\\end{gather}\n where, as above,\n$U_k\\in {\\mathbb R}^{m\\times n}$, $V_k\\in {\\mathbb R}^{n\\times n}$\nare orthogonal and $\\Sigma_k\\in {\\mathbb R}^{n\\times n}$ is diagonal,\n\\begin{gather}\n\\label{sqd3}\nU_k = [s_{k1},\\ldots,s_{kn}],\\qquad V_k =[d_{k1},\\ldots,d_{kn}] \\qquad\\mbox{and}\n\\qquad \\Sigma_k = \\mbox{diag}(\\beta_{k1},\\ldots,\\beta_{kn})\n\\end{gather}\n with $\\beta_{k1} \\geq\n\\cdots \\geq \\beta_{kr} > 0$, $\\beta_{k,r+1} = \\cdots = \\beta_{kn} = 0$, $r=1,\\ldots,n$ and\n$r=r(k)$.\n\nLet us set\n\\begin{gather}\nU_{k \\eta_k} = [s_{k1},\\ldots,s_{k\\eta_k}],\\qquad V_{k\\eta_k} = [d_{k1},\\ldots, d_{k\\eta_k}]\\qquad\n\\mbox{and}\\nonumber\\\\\n\\Sigma_{k\\eta_k} = \\mbox{diag}\\,(\\beta_{k1},\\ldots,\\beta_{k\\eta_k}),\\label{tr-svd}\n\\end{gather}\nwhere $U_{k\\eta_k}\\in {\\mathbb R}^{m\\times \\eta_k}$, $V_{k\\eta_k}\\in\n{\\mathbb R}^{n\\times \\eta_k}$ and $\\Sigma_{k\\eta_k}\\in {\\mathbb R}^{\\eta_k \\times \\eta_k}$.\n Now we def\\\/ine $G_{k\\eta_k} \\in{\\mathbb R}^{m\\times n}$ and ${\\mathcal G}_{k\\eta_k}:L^{2}\n (\\Omega,{\\mathbb R}^{n})\\rightarrow L^{2}(\\Omega,{\\mathbb R}^{m})$ by\n\\begin{gather}\n\\label{trsvd2}\nG_{k\\eta_k} = U_{k\\eta_k}\\Sigma_{k\\eta_k}V_{k\\eta_k}^{T}\\qquad\\mbox{and}\\qquad [{\\mathcal G}_{k\\eta_k}({\\boldsymbol w}_k)](\\omega) = G_{k\\eta_k}[{\\boldsymbol w}_k(\\omega)],\n\\end{gather}\nrespectively, for any ${\\boldsymbol w}_k\\in L^{2}(\\Omega,{\\mathbb R}^{n})$.\n\nAs noted before, we write ${\\cal I}$ for the identity operator.\n\n\\begin{theorem}\n\\label{sol2}\nLet ${\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p$ be determined by Lemma {\\rm \\ref{ort3}}.\nThen $f^0$ and 
${\\mathcal F}_1^0, \\ldots, {\\mathcal F}_p^0$,\n satisfying \\eqref{min1}--\\eqref{con1}, are determined by\n\\begin{gather}\n\\label{sol-f02}\nf^0 = E[{\\boldsymbol x}] - \\sum_{k=1}^p F^0_k E[{\\boldsymbol v}_k]\n\\end{gather}\nand\n\\begin{gather}\n\\label{sol-f12}\n{\\mathcal F}^0_1 = {\\mathcal G}_{1\\eta_1}({\\mathcal E}^{1\/2}_{v_1 v_1})^{ \\dag}\n+ {\\cal A}_1[{\\cal I} - {\\mathcal E}_{v_1 v_1}^{1\/2}({\\mathcal E}^{1\/2}_{v_1 v_1})^{\\dag}],\\\\\n\\cdots \\cdots \\cdots\\cdots \\cdots \\cdots \\cdots \\cdots \\cdots \\cdots \\cdots \\cdots \\cdots \\cdots\\nonumber \\\\\n\\label{sol-fp2}\n{\\mathcal F}^0_p = {\\mathcal G}_{p\\eta_p}({\\mathcal E}^{1\/2}_{v_p v_p})^{ \\dag}\n+ {\\cal A}_p[{\\cal I} - {\\mathcal E}_{v_p v_p}^{1\/2}({\\mathcal E}^{1\/2}_{v_p v_p})^{\\dag}],\n\\end{gather}\nwhere for $k=1, \\ldots,p,$ ${\\cal A}_k$ is any linear operator such that $\\mbox{\\rm rank}\\,\n {\\mathcal F}^0_k \\leq \\eta_k$\\footnote{In particular, ${\\cal A}_k$ can be chosen as the zero operator.}.\n\nThe accuracy associated with transform ${\\mathcal T}_p^0$ given by \\eqref{th1}\nand \\eqref{sol-f02}--\\eqref{sol-fp2} is such that\n\\begin{gather}\n\\label{er12}\nE[\\|{\\boldsymbol x} - {\\mathcal T}_p^0({\\boldsymbol y})\\|^2] =\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{k=1}^p \\sum_{j=1}^{\\eta_k} \\beta^2_{kj}.\n\\end{gather}\n\\end{theorem}\n\n\n\\begin{proof} For ${\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p$ determined by Lemma \\ref{ort3}, $J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p)$\nis represented by (\\ref{jqa}) as well.\nLet us consider $J_0$, $J_1$ and $J_2$ given by\n\\begin{gather}\n\\label{j02}\nJ_0 = \\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\\sum_{k=1}^p \\|{\\mathbb E}_{xv_k}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag} \\|^2,\n\\\\\n\\label{j122}\nJ_1 = \\|f - E[{\\boldsymbol x}] + \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k]\\|^2\\qquad\\mbox{and}\\qquad\nJ_2 =\\sum_{k=1}^p \\|F_k{\\mathbb E}_{v_kv_k}^{1\/2} - {\\mathbb E}_{xv_k}({\\mathbb 
E}_{v_kv_k}^{1\/2})^{\\dag}\\|^2.\n\\end{gather}\nTo show that\n\\begin{gather}\n\\label{jff}\nJ(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) = J_0 + J_1 + J_2\n\\end{gather}\nwith $J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p)$ def\\\/ined by (\\ref{jqa}), we use the relationships (see \\cite{tor1})\n\\begin{gather}\n\\label{eee1}\n{\\mathbb E}_{xv_k}{\\mathbb E}_{v_k v_k}^\\dag {\\mathbb E}_{v_k v_k}\n= {\\mathbb E}_{xv_k} \\qquad\\mbox{and}\\qquad {\\mathbb E}_{v_k v_k}^{\\dag}{\\mathbb E}_{v_k v_k}^{ 1\/2}\n= ({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag}.\n\\end{gather}\nThen\n\\begin{gather}\nJ_1 = \\mbox{tr} \\Bigg(ff^T - fE[{\\boldsymbol x}^T] + \\sum_{k=1}^p f E[{\\boldsymbol v}_k^T]F_k^T + E[{\\boldsymbol x}] E[{\\boldsymbol x}^T]\n - E[{\\boldsymbol x}]f^T - \\sum_{k=1}^p E[{\\boldsymbol x}] E[{\\boldsymbol v}_k^T] F_k^T\\nonumber\\\\\n \\phantom{J_1 =}{} + \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k]f^T\n- \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k] E[{\\boldsymbol x}^T] + \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k] \\sum_{i=1}^p E[{\\boldsymbol v}_i^T] F_i^T\\Bigg)\\label{j112}\n\\end{gather}\nand\n\\begin{gather}\nJ_2 = \\sum_{k=1}^p \\mbox{tr}\n(F_k - {\\mathbb E}_{xv_k}{\\mathbb E}_{v_kv_k}^\\dag){\\mathbb E}_{v_kv_k}(F_k^T\n- {\\mathbb E}_{v_kv_k}^\\dag{\\mathbb E}_{v_k x})\\nonumber\\\\\n\\phantom{J_2 }{} = \\sum_{k=1}^p \\mbox{tr} (F_k {\\mathbb E}_{v_kv_k} F_k^T\n- F_k {\\mathbb E}_{v_k x} - {\\mathbb E}_{xv_k}F_k^T + {\\mathbb E}_{xv_k}{\\mathbb E}_{v_kv_k}^\\dag {\\mathbb E}_{v_kx}),\n\\label{j22}\n\\end{gather}\nwhere\n\\begin{gather}\n\\label{aea2}\n\\sum_{k=1}^p \\mbox{tr} (F_k {\\mathbb E}_{v_kv_k} F_k^T)= \\mbox{tr}\\Bigg[E\\Bigg(\\sum_{k=1}^p F_k\n{\\boldsymbol v}_k \\sum_{i=1}^p {\\boldsymbol v}_i^T F_i^T\\Bigg)\\Bigg] -\\mbox{tr}\\Bigg(\\sum_{k=1}^p F_k E[{\\boldsymbol v}_k] \\sum_{i=1}^p E[{\\boldsymbol v}_i^T] F_i^T\\Bigg)\n\\end{gather}\nbecause\n\\begin{gather}\n\\label{v-ort2}\nE[{\\boldsymbol v}_i {\\boldsymbol v}_k^T] - E[{\\boldsymbol v}_i] E[ 
{\\boldsymbol v}_k^T] = {\\mathbb O}\\qquad\\mbox{for}\\quad i\\neq k\n\\end{gather}\n due to orthogonality of the vectors ${\\boldsymbol v}_1, \\ldots, {\\boldsymbol v}_p$.\nOn the basis of (\\ref{eee1})--(\\ref{aea2}) and similarly to (\\ref{jj012})--(\\ref{nj012}),\nwe establish that (\\ref{jff}) is true.\n Hence,\n\\begin{gather}\n J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) =\n\\|{\\mathbb E}_{xx}^{1\/2}\\|^2 -\n\\sum_{k=1}^p \\|{\\mathbb E}_{xv_k}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag} \\|^2\n+ \\|f - E[{\\boldsymbol x}] \\nonumber \\\\\n\\label{proof22}\n\\phantom{J(f,{\\mathcal F}_1, \\ldots ,{\\mathcal F}_p) =}{}\n+ \\sum_{k=1}^p F_k E[{\\boldsymbol v}_k]\\|^2+ \\sum_{k=1}^p \\|F_k{\\mathbb E}_{v_k v_k}^{1\/2} - {\\mathbb E}_{xv_k}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag}\\|^2.\n\\end{gather}\nIt follows from (\\ref{proof22}) that\nthe constrained minimum (\\ref{min1})--(\\ref{con1}) is achieved\nif $f=f^0$ with $f^0$ given by (\\ref{sol-f02}), and $F_k^0$ is such that\n\\begin{gather}\n\\label{min-k2}\nJ_k(F_k^0) = \\min_{F_k} J_k(F_k) \\qquad \\mbox{subject to} \\quad \\mbox{rank} \\,(F_k) = \\eta_k,\n\\end{gather}\nwhere $J_k(F_k) = \\|F_k{\\mathbb E}_{v_k v_k}^{1\/2} - {\\mathbb E}_{xv_k}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag}\\|^2$.\nThe minimum in (\\ref{min-k2}) is attained \\cite{gol1} if\n\\begin{gather}\n\\label{me}\nF_k{\\mathbb E}_{v_k v_k}^{1\/2} = G_{\\eta_k}.\n\\end{gather}\nThe matrix equation (\\ref{me}) has the general solution \\cite{ben1}\n\\begin{gather}\n\\label{proof312}\nF_k = F^0_k =G_{\\eta_k}({\\mathbb E}_{v_k v_k}^{1\/2})^\\dag\n+ A_k[I - {\\mathbb E}_{v_k v_k}^{1\/2} ({\\mathbb E}_{v_k v_k}^{1\/2})^\\dag]\n\\end{gather}\nif and only if\n\\begin{gather}\n\\label{proof42}\nG_{\\eta_k}({\\mathbb E}_{v_k v_k}^{1\/2})^\\dag {\\mathbb E}_{v_k v_k}^{1\/2}= G_{\\eta_k}.\n\\end{gather}\n The latter is satisf\\\/ied on the basis of the 
following\n derivation\\footnote{Note that the matrix\n$I - {\\mathbb E}_{v_k v_k}^{1\/2} ({\\mathbb E}_{v_k v_k}^{1\/2})^\\dag$\nis simply a projection onto the null space of ${\\mathbb E}_{v_k v_k}$\n and can be replaced by $I - {\\mathbb E}_{v_k v_k} ({\\mathbb E}_{v_k v_k})^\\dag$.}.\n\nAs an extension of the technique used in proving Lemmata~1\nand~2 in \\cite{tor1}, it can be shown that for any matrices $Q_1, Q_2\\in{\\mathbb R}^{m\\times n}$,\n\\begin{gather}\n\\label{proof52}\n{\\cal N}(Q_1)\\subseteq {\\cal N}(Q_2)\\quad \\Rightarrow \\quad Q_2(I- Q_1^\\dag Q_1) = \\mathbb O,\n\\end{gather}\n where ${\\cal N}(Q_i)$ is the null space of $Q_i$ for $i=1,2$. With regard to the equation under consideration,\n\\begin{gather}\n\\label{proof62}\n{\\cal N}([{\\mathbb E}_{v_k v_k}^{1\/2}]^\\dag)\n\\subseteq {\\cal N}({\\mathbb E}_{x v_k}[{\\mathbb E}_{v_k v_k}^{1\/2}]^\\dag).\n\\end{gather}\n The def\\\/inition of $G_{\\eta_k}$ implies that\n\\[\n{\\cal N}({\\mathbb E}_{x v_k}[{\\mathbb E}_{v_k v_k}^{1\/2}]^\\dag)\n\\subseteq {\\cal N}(G_{\\eta_k})\\qquad\\mbox{and then}\\qquad {\\cal N}([{\\mathbb E}_{v_k v_k}^{1\/2}]^\\dag)\n \\subseteq {\\cal N}(G_{\\eta_k}).\n\\]\n On the basis of (\\ref{proof52}), the latter implies\n $G_{\\eta_k}[I-({\\mathbb E}_{v_k v_k}^{1\/2})^\\dag {\\mathbb E}_{v_k v_k}^{1\/2}]={\\mathbb O}$,\n i.e.\\ (\\ref{proof42}) is true. 
Hence, (\\ref{proof312}) and (\\ref{sol-f12})--(\\ref{sol-fp2}) are true as well.\n\nNext, similar to (\\ref{kr}),\n\\begin{gather}\n\\label{g-e}\n \\|{\\mathbb E}_{xv_k}({\\mathbb E}_{v_kv_k}^{1\/2})^\\dag\\|^2 -\\|G_{\\eta_k}\n - {\\mathbb E}_{xv_k}({\\mathbb E}_{v_kv_k}^{1\/2})^\\dag\\|^2 =\n \\sum_{j=1}^{\\eta_k} \\beta^2_{kj}.\n\\end{gather}\nThen (\\ref{er12}) follows from (\\ref{proof22}), (\\ref{proof312}), (\\ref{sol-f02}) and (\\ref{g-e}).\n\\end{proof}\n\n\\begin{remark}\\label{remark4}\nThe known reduced-rank transforms based on the Volterra polynomial structure\n\\cite{yam2,tor3,tor4} require the computation of a covariance matrix similar\nto ${\\mathbb E}_{vv}$, where ${\\boldsymbol v} = [{\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p]^T$,\nbut for $p=N$ where $N$ is large (see Sections \\ref{intr} and \\ref{summ}).\nThe relationships (\\ref{jj012})--(\\ref{min-k}) and (\\ref{j112})--(\\ref{min-k2})\nillustrate the nature of the proposed method and its dif\\\/ference from the\ntechniques in \\cite{yam2,tor3,tor4}: due to the structure (\\ref{t3})\nof the transform ${\\mathcal T}_p$, the procedure for f\\\/inding $f^0$,\n${\\mathcal F}_1^0$, $\\ldots$, ${\\mathcal F}_p^0$ avoids direct computation of ${\\mathbb E}_{vv}$\nwhich could be troublesome due to large $N$. 
If operators ${\\mathcal Q}_1,\\ldots,{\\mathcal Q}_p$\nare orthonormal, as in Theorem~\\ref{sol1}, then (\\ref{v-ort}) is true and\nthe covariance matrix ${\\mathbb E}_{vv}$ is reduced to the identity.\nIf operators ${\\mathcal Q}_1,\\ldots,{\\mathcal Q}_p$ are orthogonal,\nas in Theorem \\ref{sol2}, then (\\ref{v-ort2}) holds and the covariance matrix ${\\mathbb E}_{vv}$\nis reduced to a block-diagonal form with non-zero blocks ${\\mathbb E}_{v_1v_1}, \\ldots, {\\mathbb E}_{v_pv_p}$\nso that\n\\[\n {\\mathbb E}_{vv}=\\left [ \\begin{array}{cccc}\n{\\mathbb E}_{v_1v_1} & {\\mathbb O} & \\ldots & {\\mathbb O} \\\\\n{\\mathbb O} & {\\mathbb E}_{v_2v_2} & \\ldots & {\\mathbb O} \\\\\n\\ldots & \\ldots & \\ldots & \\ldots \\\\\n{\\mathbb O} & {\\mathbb O} & \\ldots & {\\mathbb E}_{v_p v_p}\n\\end{array} \\right ]\n\\]\n with ${\\mathbb O}$ denoting the zero block. As a result,\n the procedure for f\\\/inding $f^0$, ${\\mathcal F}_1^0, \\ldots, {\\mathcal F}_p^0$\n is reduced to $p$ separate rank-constrained problems (\\ref{min-k}) or (\\ref{min-k2}).\n Unlike the methods in \\cite{yam2,tor3,tor4}, the operators ${\\mathcal F}^0_1,\\ldots,{\\mathcal F}^0_p$\n are determined with {\\em much smaller} $m\\times n$ and $n\\times n$ matrices\n given by the simple formulae (\\ref{sol-f0}) and (\\ref{sol-f02})--(\\ref{sol-fp2}).\n This implies a reduction in computational work compared with that required by the approach\n in \\cite{tor3,tor4,how2}.\n\\end{remark}\n\n\\begin{corollary}\\label{corollary2} Let ${\\boldsymbol v}_1,\\ldots,{\\boldsymbol v}_p$ be determined by Lemma {\\rm\\ref{ort3}}.\nThen the vector $\\bar{f}$ and operators $\\bar{\\mathcal F}_1, \\ldots, \\bar{\\mathcal F}_p$,\nsatisfying the unconstrained minimum \\eqref{min1}, are determined by\n\\begin{gather}\n\\label{sol-fc0}\n\\bar{f} = E[{\\boldsymbol x}] - \\sum_{k=1}^p \\bar{F}_k E[{\\boldsymbol v}_k]\n\\end{gather}\nand\n\\begin{gather}\n\\label{sol-fc2}\n\\bar{\\mathcal F}_1 = {\\mathcal 
E}_{xv_1}{\\mathcal E}_{v_1 v_1}^{ \\dag}\n+ {\\cal A}_1[{\\cal I} - {\\mathcal E}_{v_1 v_1}{\\mathcal E}_{v_1 v_1}^{\\dag}],\\\\\n\\cdots \\cdots\\cdots \\cdots\\cdots \\cdots\\cdots \\cdots\\cdots \\cdots\\cdots \\cdots \\nonumber \\\\\n\\label{sol-fpc2}\n\\bar{\\mathcal F}_p = {\\mathcal E}_{xv_p}{\\mathcal E}_{v_p v_p}^{ \\dag}\n+ {\\cal A}_p[{\\cal I} - {\\mathcal E}_{v_p v_p}{\\mathcal E}_{v_p v_p}^{\\dag}].\n\\end{gather}\nThe associated accuracy for transform $\\bar{{\\mathcal T}}_p$, defined by\n\\[\n\\bar{{\\mathcal T}}_p({\\boldsymbol y}) = \\bar{f} + \\sum _{k=1}^p \\bar{\\mathcal F}_k({\\boldsymbol v}_k),\n\\]\n is given by\n\\begin{gather}\n\\label{er-c2}\nE[\\|{\\boldsymbol x} - \\bar{{\\mathcal T}}_p({\\boldsymbol y})\\|^2] =\\|{\\mathbb E}_{xx}^{1\/2}\\|^2\n-\\sum_{k=1}^p \\|{\\mathbb E}_{xv_k}({\\mathbb E}^{1\/2}_{v_k v_k})^{ \\dag}\\|^2.\n\\end{gather}\n\\end{corollary}\n\n\\begin{proof} It follows from (\\ref{proof22}) that the unconstrained minimum (\\ref{min1})\nis achieved if $f$ is def\\\/ined by~(\\ref{sol-fc0}) and if ${F}_k$ satisf\\\/ies the equation\n$\nF_k{\\mathbb E}_{v_k v_k}^{1\/2} - {\\mathbb E}_{xv_k}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag} = {\\mathbb O}\n$\nfor each $k=1,\\ldots,p$.\nSimilar to (\\ref{me})--(\\ref{proof312}), its general solution is given by\n\\[\nF_k = \\bar{F}_k ={\\mathbb E}_{xv_k}{\\mathbb E}_{v_k v_k}^{\\dag}\n+ A_k[I - {\\mathbb E}_{v_k v_k}{\\mathbb E}_{v_k v_k}^{\\dag}]\n\\]\nbecause ${\\mathbb E}_{v_k v_k}^{1\/2}({\\mathbb E}_{v_k v_k}^{1\/2})^{\\dag}\n ={\\mathbb E}_{v_k v_k} {\\mathbb E}_{v_k v_k}^{\\dag}$.\nWe def\\\/ine $\\bar{\\mathcal F}_k$ by $[\\bar{\\mathcal F}_k({\\boldsymbol w}_k)](\\omega)\n= \\bar{F}_k[{\\boldsymbol w}_k(\\omega)]$ for all $k=1,\\ldots,p$, and then (\\ref{sol-fc2})--(\\ref{sol-fpc2})\nare true. 
The relation (\\ref{er-c2}) follows from (\\ref{proof22}) and (\\ref{sol-fc0})--(\\ref{sol-fpc2}).\n\\end{proof}\n\n\\begin{remark} The dif\\\/ference between the transforms given by Theorems \\ref{sol1} and \\ref{sol2} is that\n${\\mathcal F}^0_k$ by~(\\ref{sol-f0}) (Theorem~\\ref{sol1}) does not contain a factor\nassociated with $({\\mathbb E}_{v_kv_k}^{1\/2})^\\dag$ for all $k=1,\\ldots,p$.\nA similar observation is true for Corollaries \\ref{corollary1} and \\ref{corollary2}.\n\\end{remark}\n\n\\begin{remark}\nThe transforms given by Theorems \\ref{sol1} and \\ref{sol2} are not unique due to arbitrary operators\n${\\mathcal A}_1, \\ldots, {\\mathcal A}_p$. A natural particular choice\nis ${\\mathcal A}_1 = \\cdots = {\\mathcal A}_p = \\mathbb O$.\n\\end{remark}\n\n\n\\subsubsection[Compression procedure by ${\\mathcal T}^0_p$]{Compression procedure by $\\boldsymbol{{\\mathcal T}^0_p}$}\n\\label{sec5.2.3}\n\n\nLet us consider transform ${\\mathcal T}^0_p$ given by (\\ref{th1}),\n(\\ref{sol-f02})--(\\ref{sol-fp2}) with $A_k={\\mathbb O}$ for $k=1,\\ldots,p$ where $A_k$\nis the matrix given in (\\ref{proof312}).\nWe write $[{\\mathcal T}^0_p({\\boldsymbol y})](\\omega) = T^0_p(y)$\nwith $T^0_p:{\\mathbb R}^n\\rightarrow {\\mathbb R}^m$.\n\nLet\n\\[\nB^{(1)}_k = S_{k\\eta_k}V_{k\\eta_k}D_{k\\eta_k}^{T}\\qquad\\mbox{and}\n\\qquad B^{(2)}_k = D_{k\\eta_k}^{T}({\\mathbb E}^{1\/2}_{v_k v_k})^{\\dag}\n\\]\nso that $B^{(1)}_k\\in{\\mathbb R}^{m\\times \\eta_k}$\nand $B^{(2)}_k\\in{\\mathbb R}^{\\eta_k\\times n}$. Here, $\\eta_1$, $\\ldots$, $\\eta_p$\nare determined by (\\ref{con1}). Then\n\\[\nT^0_p(y) = f + \\sum_{k=1}^p B^{(1)}_k B^{(2)}_k v_k,\n\\]\nwhere $v_k = {\\boldsymbol v}_k(\\omega)$ and $B^{(2)}_k v_k\\in{\\mathbb R}^{\\eta_k}$ for $k=1,\\ldots,p$\nwith $\\eta_1 + \\cdots + \\eta_p$.\n\\begin{equation*}\n\\begin{CD}\n @>d_0>> M^{p,q+1} @>d_0>> M^{p+1,q+1} @>d_0>> \\\\\n @. @AAd_1A @AAd_1A @. \\\\\n @>d_0>> M^{p,q} @>d_0>> M^{p+1,q} @>d_0>> \\\\\n @. @AAd_1A @AAd_1A @. 
\n\\end{CD}\n\\end{equation*}\nThe associated \\emph{total complex} is defined as $\\mathrm{Tot}^n M\\colonequals\\oplus_{p+q=n}M^{p,q}$, with total differential $d\\colonequals d_0+d_1$. A double complex allows for two filtrations, namely,\n\\begin{equation}\nF_{\\rm I}^p(\\mathrm{Tot}^n\\ M) = \\oplus_{r\\geqslant p}M^{r,n-r}\\;, \\qquad F_{\\rm II}^p(\\mathrm{Tot}^n\\ M) = \\oplus_{r\\geqslant p}M^{n-r,r}~.\n\\end{equation}\nThese filtrations are bounded in each dimension if for each $n$ only a finite number of $M^{p,q}$ with $n=p+q$ are non-zero.\n\nCorrespondingly, we can consider two spectral sequences converging to $H^*(\\mathrm{Tot}\\ M, d)$ whose first terms are\n\\begin{eqnarray}\n&_{\\rm I} E^{p,q}_1 \\cong H^{p,q}(M,d_1)\\;, \\qquad & _{\\rm I} E^{p,q}_2 \\cong H^{p,q}(H^{*,*}(M,d_1),d_0)\\\\\n&_{\\rm II} E^{p,q}_1 \\cong H^{p,q}(M,d_0)\\;, \\qquad & _{\\rm II} E^{p,q}_2 \\cong H^{p,q}(H^{*,*}(M,d_0),d_1)~.\n\\end{eqnarray}\nNote that here one can show that the first term of the spectral sequence is equal to the one mentioned in the more general case above. Higher differentials $d_{r+1}$ for $r\\geqslant 1$ are defined by $d_{r+1}x=d_1 y$ where $y$ is defined by $d_0 y=d_r x$. Such a $y$ can be proven to always exist, so that the higher differentials are always well-defined.\n\n\\paragraph{Example} As a simple example of the utility of spectral sequences, let us reproduce a proof of the K\\\"unneth formula \\cite{deBoer:1992sy}. Consider a differential graded algebra $(\\Ab,d)$, \\ie, a graded algebra endowed with a differential $d$ of degree one satisfying the Leibniz rule. Let it have two graded subalgebras $\\Ab_1$ and $\\Ab_2$ which are respected by the differential, \\ie, $d\\Ab_i\\subseteq \\Ab_i$. Let us assume the multiplication map $m:\\Ab_1\\otimes\\Ab_2\\rightarrow \\Ab$ is an isomorphism of vector spaces. 
Then one can define the double complex $(M^{p,q}; d_0, d_1)$ by \n\\begin{equation}\nM^{p,q}\\colonequals m(\\Ab_1^p\\otimes\\Ab_2^q)~,\\quad d_0(a_1a_2)=d(a_1)a_2~,\\quad d_1(a_1a_2)=(-1)^{\\mathrm{deg}(a_1)}a_1 d(a_2)~.\n\\end{equation} \nAssume that this double complex is bounded in each dimension; then one can make use of the spectral sequence for the double complex as described above. One finds for the first couple of levels \n\\begin{equation}\nE^{p,q}_1\\cong m(\\Ab_1^p\\otimes H^q(\\Ab_2,d))~,\\qquad E^{p,q}_2 \\cong m(H^p(\\Ab_1,d)\\otimes H^q(\\Ab_2,d))~.\n\\end{equation} \nHigher differentials all manifestly vanish, so the spectral sequence terminates. At the level of vector spaces, the above-stated theorem implies that $H^*(\\Ab,d) \\cong m(H^*(\\Ab_1,d)\\otimes H^*(\\Ab_2,d))$. This statement can be extended to an isomorphism of algebras because $a_1a_2$ is a representative of an element in $H^*(\\Ab,d)$.\n\\subsection{Argyres-Seiberg duality}\n\\label{subapp:Argyres_Seiberg}\n\nFirst we describe in detail the check of Argyres-Seiberg duality at the level of chiral algebras described in Section \\ref{subsubsec:t3_chiral_algebra}. The first duality frame is that of SQCD, the chiral algebra of which was described in \\cite{Beem:2013sza}. There the generators of the chiral algebra were found to include a $\\widehat{\\mf{su}(6)}_{-3}\\times\\widehat{\\mf{u}(1)}$ affine current algebra with currents $J_i^j$ and $J$, along with baryonic and anti-baryonic operators $\\{b_{ijk},\\tilde b^{ijk}\\}$ of dimension $\\Delta=\\frac32$. 
The singular OPEs for these generators are as follows,\n{\\allowdisplaybreaks\n\\begin{equation}\n\\label{eq:scqcdopes}\n\\begin{alignedat}{3}\n&& J_i^j(z) J_k^l(0) \t &~~\\sim~& &-\\frac{3 (\\delta_i^l \\delta^j_k - \\text{trace})}{z^2} ~+~ \\frac{\\delta_k^j J_i^l(0)-\\delta_i^l J_k^j(0)}{z}~,\\\\\n&& J(z) J(0) \t\t\t &~~\\sim~& &-\\frac{18}{z^2}~,\\\\\n&& J_i^j(z) b_{k_1 k_2 k_3}(0) &~~\\sim~& &\\phantom{-}\\frac{3 \\delta_{[k_1|}^j b_{i |k_2 k_3]}(0) - \\frac{1}{2} \\delta_i^j b_{k_1 k_2 k_3}(0)}{z}~,\\\\\n&& J(z) b_{k_1 k_2 k_3}(0) &~~\\sim~& &\\phantom{-}\\frac{3b_{k_1 k_2 k_3}(0)}{z}~,\\\\\n&& J(z) \\tilde b^{k_1 k_2 k_3}(0) &~~\\sim~& &-\\frac{3\\tilde b^{k_1 k_2 k_3}(0)}{z}~,\\\\\n&& b_{i_1i_2i_3}(z)\\tilde b^{j_1j_2j_3}(0) &~~\\sim~& &\\phantom{-}\\frac{36\\,\\delta_{[i_1}^{[j_1} \\delta_{\\phantom{[}\\!i_2}^{\\phantom{[}\\!j_2} \\delta_{i_3]}^{j_3]}}{z^3} - \\frac{36\\, \\delta_{[i_1}^{[j_1} \\delta_{\\phantom{[}\\!i_2}^{\\phantom{[}\\!j_2}\\hat J_{i_3]}^{j_3]}(0)}{z^2} \\\\\n&& &~~\\phantom{\\sim}~& &\\phantom{-}\\qquad +\\frac{18\\, \t\\delta_{[i_1}^{[j_1} \t \\hat J_{\\phantom{[}\\!i_2}^{\\phantom{[}\\!j_2} \t \\hat J_{i_3]}^{j_3]}(0) - 18\\,\\delta_{[i_1}^{[j_1} \\delta_{\\phantom{[}\\!i_2}^{\\phantom{[}\\!j_2} \\partial \\hat J_{i_3]}^{j_3]}(0)}{z}~.\n\\end{alignedat}\n\\end{equation}}%\nAntisymmetrizations are performed with weight one, and lower (upper) indices $i,j,\\ldots$ transform in the fundamental (antifundamental) representation of $\\mf{su}(6)$. In the last line we have introduced the $\\mf{u}(6)$ current $\\hat J^i_j \\colonequals J^i_j + \\frac{1}{6} \\delta^i_j J$. \nIt was conjectured in \\cite{Beem:2013sza} that the SQCD chiral algebra is a $\\WW$-algebra with just these generators. This proposal passed a few simple checks. All the generators of the Hall-Littlewood chiral ring have been accounted for and the OPE closes. 
There is no additional stress tensor as a generator because the Sugawara stress tensor of the $\\mf{u}(6)$ current algebra turns out to do the job (this again implies a relation in the Higgs branch chiral ring of SQCD). The spectrum of the chiral algebra generated by these operators also correctly reproduces the low-order expansion of the superconformal index.\n\nOur aim in the remainder of this appendix is to reproduce this chiral algebra from the `exceptional side' of the duality using our proposal that the chiral algebra $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ is the current algebra $(\\,\\widehat{\\mf{e}}_6\\,)_{-3}$. The two free hypermultiplets contribute symplectic bosons $q_{\\alpha}$ and $\\tilde q^{\\alpha}$ with $\\alpha=1,2$ with singular OPE given by\n\\begin{equation}\n\\label{eq:app_symplectic_boson_OPE}\nq_\\alpha(z) \\tilde q^\\beta (0) \\sim \\frac{\\delta_\\alpha^\\beta}{z}~.\n\\end{equation}\nThe $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra should be re-expressed in terms of an $\\mf{su}(6)\\times\\mf{su}(2)$ maximal subalgebra, in terms of which the affine currents split as\n\\begin{equation}\n\\label{eq:e6_as_relabeling}\n\\{J_{A=1,\\ldots,78}\\} ~~\\Longrightarrow~~ \\{X^i_j,~ Y^{[ijk]}_\\alpha,~ Z_\\alpha^\\beta\\}~.\n\\end{equation}\nThe operators $X^i_j$ and $Z_\\alpha^\\beta$ are the affine currents of $\\mf{su}(6)$ and $\\mf{su}(2)$, respectively, with $X^i_i = Z_\\alpha^\\alpha = 0$. The additional operators $Y^{ijk}_\\alpha$ transform in the $(\\mathbf{20},\\mathbf{2})$ of $\\mf{su}(6)\\times\\mf{su}(2)$. 
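As a quick sanity check on this decomposition, the dimensions add up to $\dim\mf{e}_6=78$, and the Sugawara central charges balance across the duality. The sketch below is pure Python; the dual Coxeter numbers $h^\vee(\mf{e}_6)=12$ and $h^\vee(\mf{su}(6))=6$, as well as the standard values $c=-1$ per symplectic-boson pair and $c=-2$ per dimension-$(1,0)$ ghost pair, are inputs taken from the general literature rather than from the text above:

```python
from fractions import Fraction

def dim_su(n):
    # dimension of su(n)
    return n * n - 1

# branching e6 -> su(6) x su(2): X (adjoint of su(6)), Z (adjoint of su(2)),
# and Y in the (20, 2)
assert dim_su(6) + dim_su(2) + 20 * 2 == 78

def sugawara_c(k, dim_g, h_dual):
    # central charge of an affine current algebra at level k
    return Fraction(k * dim_g, k + h_dual)

c_e6  = sugawara_c(-3, 78, 12)   # (e6)_{-3}
c_su6 = sugawara_c(-3, 35, 6)    # su(6)_{-3}
assert c_e6 == -26 and c_su6 == -35

# exceptional side: e6 currents + two symplectic-boson pairs (c = -1 each)
#                   + three (b,c) ghost pairs of dimension (1,0) (c = -2 each)
# SQCD side:        su(6)_{-3} currents + the u(1) current (c = 1)
assert c_e6 + 2 * (-1) + 3 * (-2) == c_su6 + 1 == -34
```

The common value $-34$ is consistent with $-12$ times the four-dimensional central charge $c_{4d}=17/6$ of the SQCD duality frame.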
The nonvanishing OPEs amongst these operators are simply a rewriting of the $\\wh{\\mf{e}}_6$ current algebra,\n\\begin{eqnarray}\n\\label{eq:e6_ope_decomposition}\nX_i^j(z) X_k^l(0) &~\\sim~& - \\frac{3 (\\delta_i^l \\delta^j_k - \\text{trace})}{z^2} + \\frac{\\delta_k^j X_i^l(0) - \\delta_i^l X_k^j(0)}{z} \\nn\\\\\nZ_\\alpha^\\beta(z) Z_\\gamma^\\delta(0) &~\\sim~& - \\frac{3 (\\delta_\\alpha^\\delta \\delta^\\beta_\\gamma - \\text{trace})}{z^2} + \\frac{\\delta_\\gamma^\\beta Z_\\alpha^\\delta(0) - \\delta_\\alpha^\\delta Z_\\gamma^\\beta(0)}{z}\\nn\\\\\nX_i^j(z) Y^{klm}_\\alpha (0) &~\\sim~& - \\frac{3 \\delta_i^{[k} Y_\\alpha^{lm]j}(0)}{z} - \\text{trace}\\\\\nZ_\\alpha^\\beta(z) Y^{ijk}_\\gamma (0) &~\\sim~& \\phantom{-}\\frac{\\delta_\\gamma^\\beta Y^{ijk}_{\\alpha} (0)}{z} - \\text{trace}\\nn\\\\\nY^{ijk}_\\alpha (z) Y^{lmn}_\\beta (0) &~\\sim~& \\phantom{-}\\frac{\\e_{\\alpha \\beta} \\e^{ijklmn}}{z^2} +\\frac{\\e^{ijklmn} \\e_{\\alpha \\gamma} Z^\\gamma_\\beta(0) - 3 \\e_{\\alpha \\beta} \\e^{[ijklm|p} X^{|n]}_p(0)}{z}~.\\nn\n\\end{eqnarray}\nGluing amounts to introducing a dimension $(1,0)$ ghost system in the adjoint of $\\mf{su}(2)$ and restricting to the appropriate cohomology of the following BRST operator,\n\\begin{equation}\n\\label{eq:AS_BRST}\nJ_{\\rm BRST}=~c^\\alpha_\\beta (Z_\\alpha^\\beta - q_\\alpha \\tilde q^\\beta) -\\frac{1}{2} (\\delta_{\\alpha_1}^{\\alpha_6}\\delta_{\\alpha_3}^{\\alpha_2}\\delta_{\\alpha_5}^{\\alpha_4}-\\delta_{\\alpha_1}^{\\alpha_4}\\delta_{\\alpha_3}^{\\alpha_6}\\delta_{\\alpha_5}^{\\alpha_2}) c_{\\alpha_2}^{\\alpha_1}b_{\\alpha_4}^{\\alpha_3}c_{\\alpha_6}^{\\alpha_5}.\n\\end{equation}\nThe cohomology can be constructed level by level using the \\texttt{OPEdefs} package for \\texttt{Mathematica} \\cite{Thielemans:1991uw}. 
Up to dimension $h=\\frac32$, we find the following operators,\n\\begin{equation}\n\\label{eq:AS_generators}\nX^i_j~,\\qquad\nq_\\alpha\\tilde q^\\alpha~,\\qquad\n\\e_{ijklmn}\\tilde q^\\alpha Y^{lmn}_\\alpha~,\\qquad\n\\e^{\\alpha\\beta}q_\\alpha Y_\\beta^{ijk}~.\n\\end{equation}\nUp to normalizations, these can naturally be identified with the generators of the SQCD chiral algebra,\n\\begin{equation}\n\\label{eq:argyresseibergmatch}\nX_i^j \\simeq J_i^j~, \\qquad \n3 q_\\alpha \\tilde q^\\alpha \\simeq J~, \\qquad \n\\frac{1}{6} \\e_{ijk l m n} \\tilde q^\\alpha Y^{l m n}_\\alpha \\simeq b_{ijk}~, \\qquad\n \\e^{\\alpha \\beta} q_\\alpha Y_\\beta^{i j k} \\simeq \\tilde b^{ijk}~.\n\\end{equation}\nThe equations relating chiral algebra generators in the two duality frames are the same as the ones obtained in \\cite{Gaiotto:2008nz}, with the operators being viewed as generators of the Higgs branch chiral ring. In that work, establishing them at the level of the Higgs branch required a detailed understanding of the chiral ring relations on both sides. By contrast, to establish equivalence of the chiral algebras we need to check that the above operators have the same singular OPEs. Relations in the chiral ring will then show up automatically as null states.\n\nWith the \\texttt{OPEdefs} package we have also computed the OPEs of the composite operators in \\eqref{eq:AS_generators} and found perfect agreement with \\eqref{eq:scqcdopes}. Most of the OPEs are reproduced in a fairly trivial fashion. 
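As an illustration of one of the simpler matches, the $J(z)J(0)$ line of \eqref{eq:scqcdopes} already follows from the free-field OPE \eqref{eq:app_symplectic_boson_OPE} alone (a sketch; we use the standard bosonic reversed contraction $\tilde q^\beta(z)\, q_\alpha(0) \sim -\delta^\beta_\alpha/z$, which is not written out in the text). With $J \simeq 3\, q_\alpha \tilde q^\alpha$, the double contraction gives

```latex
J(z)\,J(0) \simeq 9\,\big(q_\alpha \tilde q^\alpha\big)(z)\,\big(q_\beta \tilde q^\beta\big)(0)
~\sim~ 9\,\frac{\delta_\alpha^\beta}{z}\Big({-}\frac{\delta^\alpha_\beta}{z}\Big)
~=~ -\frac{9\cdot 2}{z^2} ~=~ -\frac{18}{z^2}~,
```

while the two single contractions come with opposite signs and cancel (for bosons $:\!\tilde q^\alpha q_\alpha\!:\,=\,:\!q_\alpha \tilde q^\alpha\!:$), so no $1/z$ term is generated, in agreement with \eqref{eq:scqcdopes}.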
However, the simple pole in the baryon-antibaryon OPE can only be matched by realizing that there is a null state at level two in the $(\\,\\widehat{\\mf{e}}_6\\,)_{-3}$ algebra given by\n\\begin{equation}\n\\label{eq:t3nullstate}\nY^{ijk}_\\alpha Y^{abc}_\\beta \\e^{\\alpha \\beta} \\e_{abc l m n} + 108 \\partial X^{[i}_{[l} \\delta^j_m \\delta^{k]}_{n]} + 108 X^{[i}_{[l} X^j_m \\delta^{k]}_{n]} + \\frac{1}{72} Z_\\alpha^\\beta Z_\\beta^\\alpha \\delta^{[i}_{[l} \\delta^j_m \\delta^{k]}_{n]}~.\n\\end{equation}\n\nThus we have shown that using our proposal for the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra in the Argyres-Seiberg duality problem, one at least produces a self-contained $\\WW$-algebra that matches between the two sides of the duality. It would be nice to prove that this $\\WW$-algebra is the full chiral algebra. Indeed, if one could demonstrate this fact for the SQCD side of the duality, it seems likely that it wouldn't be too hard to prove that there can be no additional generators in the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra beyond the affine currents.\n\\subsection{Schur indices}\n\\label{subapp:cylinder_cap_index}\n\nAlthough they are only formally defined (there is no true four-dimensional SCFT associated to the cylinder and cap geometries), the reduction rules for the Schur index allow us to define an index for these geometries that must behave appropriately under gluing. Let us determine these indices.\n\n\\paragraph{Cylinder}\nUsing the general results given in Eqns. 
\\eqref{eq:SchurindexUVcurve} and \\eqref{eq:psi_max}, the index of the two-punctured sphere theory can be determined immediately\n\\begin{align}\\label{eq:indexcylinder}\n\\begin{split}\n\\II_\\text{cylinder}\\left(q;\\textbf{a},\\mathbf{b}\\right) &~=~ K_{\\mathrm{max.}}(\\textbf{a};q) K_{\\mathrm{max.}}(\\mathbf{b};q) \\sum_{\\mf{R}} \\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathbf{b})\\\\\n&~=~{\\rm PE}\\left[ \\frac{q}{1-q} \\left(\\chi_{\\mathrm{adj}}(\\textbf{a}) + \\chi_{\\mathrm{adj}}(\\mathbf{b})\\right)\\right] \\sum_{\\mf{R}} \\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathbf{b})~.\n\\end{split}\n\\end{align}\nUpon using the relation $\\sum_{\\mf{R}} \\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathbf{b}) = \\delta(\\textbf{a},\\mathbf{b}^{-1})$, where the delta function is defined with respect to the Haar measure, we can rewrite this index as\n\\begin{equation}\n\\II_\\text{cylinder}\\left(q;\\textbf{a},\\mathbf{b}\\right) = {\\rm PE}\\left[ \\frac{2q}{1-q} \\chi_{\\mathrm{adj}}(\\textbf{a}) \\right] \\delta(\\textbf{a},\\mathbf{b}^{-1}) = I_V^{-1}(\\textbf{a};q)\\ \\delta(\\textbf{a},\\mathbf{b}^{-1})~,\n\\end{equation}\nwhere $I_V$ is the vector multiplet index \\eqref{eq:vector_multiplet_index}. This makes it clear that when the gluing prescription for the index given in Eqn. 
\\eqref{eq:indexgluing} is applied, the index $\\II_{\\TT}(q; \\textbf{a},\\ldots)$ of a generic theory $\\TT$ containing a maximal puncture with fugacities $\\textbf{a}$ remains the same after gluing a cylinder to that maximal puncture\n\\begin{equation}\n\\int [d\\textbf{a}] \\Delta(\\textbf{a}) I_V(q;\\textbf{a})\\ \\II_{\\TT}(q; \\textbf{a},\\ldots)\\ \\II_\\text{cylinder}\\left(q;\\textbf{a}^{-1},\\mathbf{b}\\right) = \\II_{\\TT}(q; \\mathbf{b},\\ldots)~.\n\\end{equation}\nHere $[d\\textbf{a}] = \\prod_{j=1}^{\\text{rank} \\mf{g}}\\frac{da_j}{2\\pi i a_j}$ and $\\Delta(\\textbf{a})$ is the Haar measure.\n\nReturning to expression \\eqref{eq:indexcylinder}, we wish to rewrite the sum over representations. Let us therefore consider the regularized sum\n\\begin{equation}\n\\label{eq:regsumreps}\n\\sum_{\\mf{R}} u^{|\\mf{R}|}\\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathbf{b}) = {\\rm PE}\\left[ u\\ \\chi_\\mf{f}(\\textbf{a})\\chi_\\mf{f}(\\mathbf{b}) - u^{n} \\right]\\;,\n\\end{equation}\nwhere $|\\mf{R}|$ denotes the number of boxes in the Young diagram corresponding to the representation $\\mf{R}$ of $\\mf{g}=\\mf{su}(n).$ For $\\mf{g}=\\mf{su}(2)$ we have checked this equality exactly by performing the geometric sums and for $\\mf{su}(3),$ $\\mf{su}(4)$ and $\\mf{su}(5)$ in a series expansion in $u.$ In the limit $u\\rightarrow 1$ one can verify that the right hand side behaves as a $\\delta$-function with respect to the Haar measure, as expected. 
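The $\mf{su}(2)$ case of \eqref{eq:regsumreps} is also easy to check numerically order by order in $u$. The sketch below expands both sides with exact rational arithmetic; the fugacity values $a$, $b$ are arbitrary test points, not taken from the text:

```python
from fractions import Fraction

N = 6                                    # compare coefficients of u^0 .. u^{N-1}
a, b = Fraction(3, 2), Fraction(5, 7)    # arbitrary test fugacities

def chi(n, x):
    # su(2) character of the irrep with n boxes (dimension n+1):
    # x^n + x^(n-2) + ... + x^(-n)
    return sum(x**(n - 2*k) for k in range(n + 1))

# LHS: sum over irreps R of u^{|R|} chi_R(a) chi_R(b), truncated at order N
lhs = [chi(n, a) * chi(n, b) for n in range(N)]

def geom(c):
    # series coefficients of 1/(1 - c u)
    return [c**k for k in range(N)]

def mul(p, q):
    # truncated product of two power series in u
    r = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            r[i + j] += p[i] * q[j]
    return r

# RHS: PE[u chi_f(a) chi_f(b) - u^2]
#    = (1 - u^2) / prod over weights (1 - u a^{+-1} b^{+-1})
rhs = [Fraction(1)] + [Fraction(0)] * (N - 1)
for c in (a * b, a / b, b / a, 1 / (a * b)):
    rhs = mul(rhs, geom(c))
rhs = [rhs[k] - (rhs[k - 2] if k >= 2 else 0) for k in range(N)]

assert lhs == rhs
```

The same truncated-series comparison works for higher-rank $\mf{su}(n)$ once the characters and the $u^n$ subtraction in the plethystic exponential are adjusted accordingly.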
Consequently, the cylinder index can then be rewritten in a particularly useful form,\n\\begin{equation}\n\\II_\\text{cylinder}\\left(q;\\textbf{a},\\mathbf{b}\\right)={\\rm PE}\\left[ \\frac{q}{1-q} \\left(\\chi_{\\mathrm{adj}}(\\textbf{a}) +\\chi_{\\mathrm{adj}}(\\mathbf{b})\\right) + \\chi_\\mf{f}(\\textbf{a})\\chi_\\mf{f}(\\mathbf{b}) - 1\\right]~.\n\\end{equation}\nBy using $\\chi_{\\mathrm{adj}}(\\textbf{a}) = \\chi_{\\mf{f}}(\\textbf{a})\\chi_{\\mf{f}}(\\textbf{a}^{-1})-1 $ and the $\\delta$-function constraint, one can finally rewrite the index as\n\\begin{align}\\begin{split}\n\\II_\\text{cylinder}\\left(q;\\textbf{a},\\mathbf{b}\\right)&~=~{\\rm PE}\\left[ \\frac{q}{1-q} \\left(\\chi_{\\mathrm{adj}}(\\mathbf{b}) + \\left(\\chi_{\\mf{f}}(\\textbf{a})\\chi_{\\mf{f}}(\\mathbf{b})-1 \\right)\\right) + \\chi_\\mf{f}(\\textbf{a})\\chi_\\mf{f}(\\mathbf{b}) - 1\\right]~,\\\\\n&~=~ {\\rm PE}\\left[ \\frac{q}{1-q} \\chi_{\\mathrm{adj}}(\\mathbf{b}) + \\frac{1}{1-q}\\left(\\chi_{\\mf{f}}(\\textbf{a})\\chi_{\\mf{f}}(\\mathbf{b})-1 \\right)\\right]~.\n\\end{split}\\end{align}\nNote that this looks like the partition function of a finitely generated chiral algebra satisfying a single relation. Namely, it appears that the chiral algebra has one set of dimension one currents transforming in the adjoint of $\\mf{su}(n)$, in addition to a bifundamental field $g_{ab}$ of dimension zero subject to a dimension zero constraint in the singlet representation. Going further, using this interpretation of the index and reintroducing the fugacity $u$ as in \\eqref{eq:regsumreps}, we see that $u$ counts the power of the bifundamental generators in an operator, and the constraint should then involve $n$ bifundamental fields. 
A natural form for such a relation (after proper rescaling of the generators) is the following,\n\\begin{equation}\n\\label{eq:cylinderconstraint}\n\\frac{1}{n!}\\epsilon^{a_1a_2\\ldots a_n}\\epsilon^{b_1b_2\\ldots b_n} g_{a_1b_1}g_{a_2b_2}\\ldots g_{a_nb_n} = 1.\n\\end{equation}\nInterpreting $g_{ab}$ as a matrix, this is a unit determinant condition. This picture, guessed on the basis of the superconformal index, will be borne out in the qDS analysis below.\n\n\\paragraph{Cap} \nA similarly heuristic analysis is possible for the theory associated to a decorated cap, which is obtained by further partially closing a puncture in the cylinder theory. The index of the decorated cap theory takes the form\n\\begin{align}\n\\II_{\\text{cap}({\\Lambda})}\\left(q;\\textbf{a},\\mathbf{b}_{{\\Lambda}}\\right) &~=~ K_{\\mathrm{max.}}(\\textbf{a};q) K_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}},q) \\sum_{\\mf{R}} \\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathrm{fug}_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}};q))~,\\nn\\\\\n&~=~{\\rm PE}\\left[ \\frac{q}{1-q} \\chi_{\\mathrm{adj}}(\\textbf{a}) + \\sum_j \\frac{q^{j+1}}{1-q} \\mbox{Tr}_{\\RR_j^{(\\mathrm{adj})}}(\\mathbf{b}_{{\\Lambda}})\\right] \\sum_{\\mf{R}} \\chi_{\\mf{R}}(\\textbf{a}) \\chi_{\\mf{R}}(\\mathrm{fug}_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}};q))~,\\\\\n&~=~I_V^{-1\/2}(\\textbf{a};q)\\ K_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}},q)\\ \\delta(\\textbf{a}^{-1},\\mathrm{fug}_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}};q))~.\\nn\n\\end{align}\nAgain it is clear how gluing this index reduces the flavor symmetry of the puncture. 
Using \\eqref{eq:indexgluing} and the general expression for a class $\\SS$ index \\eqref{eq:SchurindexUVcurve} for some theory $\\TT$ of genus $g$ and containing $s$ punctures, of which the first is maximal with corresponding flavor fugacities $\\textbf{a}$, one obtains by gluing the cap to this maximal puncture\n\\begin{multline}\n\\int [d\\textbf{a}] \\Delta(\\textbf{a}) I_V(\\textbf{a};q)\\ \\II_{\\text{cap}({\\Lambda})}\\left(q;\\textbf{a}^{-1},\\mathbf{b}_{{\\Lambda}}\\right)\\ \\sum_{\\mf{R}} C_\\mf{R}(q)^{2g-2+s} K_{\\mathrm{max.}}(\\textbf{a};q) \\chi_{\\mf{R}}(\\textbf{a}) \\prod_{i=2}^s \\psi_{\\mf{R}}^{\\Lambda_i}({\\bf x}_{\\Lambda_i} ;q) \\\\ = \\sum_{\\mf{R}} C_\\mf{R}(q)^{2g-2+s} K_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}},q) \\chi_{\\mf{R}}(\\mathrm{fug}_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}};q)) \\prod_{i=2}^s \\psi_{\\mf{R}}^{\\Lambda_i}({\\bf x}_{\\Lambda_i} ;q)~,\n\\end{multline}\nwhere we have again used that $K_{\\mathrm{max.}}(\\textbf{a};q) = I_V^{-1\/2}(\\textbf{a};q).$\n\nAs in the previous paragraph we can rewrite the index in a suggestive fashion,\n\\begin{align}\n\\II_{\\text{cap}({\\Lambda})}\\left(q;\\textbf{a},\\mathbf{b}_{\\Lambda}\\right)\n&~=~ {\\rm PE}\\left[ \\sum_j \\frac{q^{j+1}}{1-q} \\mbox{Tr}_{\\RR_j^{(\\mathrm{adj})}}(\\mathbf{b}_{{\\Lambda}}) + \\frac{1}{1-q}\\left(\\chi_{\\mf{f}}(\\textbf{a})\\chi_{\\mf{f}}(\\mathrm{fug}_{{\\Lambda}}(\\mathbf{b}_{{\\Lambda}};q))-1 \\right)\\right]~.\n\\end{align}\nA natural interpretation of this index is as that of a chiral algebra with generators given by currents $J_{\\bar\\alpha}$ for $T_{\\bar\\alpha} \\in \\ker( ad_{{\\Lambda}(t_+)})$ with dimensions shifted by their ${\\Lambda}(t_0)$ weight. Moreover, for each $\\mf{su}(2)$ irrep in the decomposition \\eqref{eq:generaldecomposition} of the fundamental representation $\\mf{f}$ there are an additional $2j+1$ generators transforming in representation $\\mf{f}\\otimes\\RR_j^{(\\mf{f})}$ with dimensions $-j,-j+1,\\ldots,j$. 
The latter generators satisfy a singlet relation of dimension zero.\n\\subsection{qDS argument}\n\\label{subapp:cylinder_cap_qDS}\n\nNow that we have some intuition for the kinds of chiral algebras to expect, let us study the cylinder theory for $\\mf{g}=\\mf{su}(3)$ by fully closing a puncture in the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ theory. Full closure is achieved via the principal embedding $\\rho:\\mf{su}(2)\\rightarrow \\mf{g}$, which can be specified explicitly in components as \n\\begin{equation}\n\\rho(t_-) = 2(T_2^{\\ 1} + T_3^{\\ 2})~,\\qquad \\rho(t_0) = T_1^{\\ 1}-T_3^{\\ 3}~,\\qquad \\rho(t_+)=T_1^{\\ 2} + T_2^{\\ 3}~.\n\\end{equation}\nThe grading by $\\rho(t_0)$ is integral, with the negatively graded generators being $T_3^{\\ 1}$ with grade minus two and $T_2^{\\ 1},T_3^{\\ 2}$ with grade minus one. We should then impose the constraints\n\\begin{equation}\n\\label{eq:cylinderconstraints}\n\\left(J^{(1)}\\right)_2^{\\ 1} + \\left(J^{(1)}\\right)_3^{\\ 2} = 1\\;,\\qquad \\left(J^{(1)}\\right)_2^{\\ 1} - \\left(J^{(1)}\\right)_3^{\\ 2} = 0\\;, \\qquad \\left(J^{(1)}\\right)_3^{\\ 1}=0~.\n\\end{equation}\nUpon introducing three $(b,c)$-ghost systems -- $(b_2^{\\ 1},c_1^{\\ 2})$, $(b_3^{\\ 2},c_2^{\\ 3})$, and $(b_3^{\\ 1},c_1^{\\ 3})$ -- these first-class constraints are implemented by a BRST procedure via the current\n\\begin{equation}\nd(z) = (J^{(1)})_2^{\\ 1} c_1^{\\ 2}(z) + (J^{(1)})_3^{\\ 2}c_2^{\\ 3}(z) + (J^{(1)})_3^{\\ 1} c_1^{\\ 3}(z) - \\frac{1}{2} (c_1^{\\ 2}+c_2^{\\ 3})(z) - b_3^{\\ 1} c_1^{\\ 2}c_2^{\\ 3}(z)~.\n\\end{equation}\n\n\\begin{table}[t!]\n\\begin{center}\n\\begin{tabular}{c|l}\n\\hline\\hline\n\\text{dimension} & \\text{generators}\\\\\n\\hline\n0 & $\\WW_{3bc}, \\tilde{ \\WW}^{1bc} $\\\\\n1 & $\\WW_{2bc}, \\tilde{ \\WW}^{2bc}, ( \\JJ^{2})_{b_1}^{\\ b_2}, (\\JJ^{3})_{c_1}^{\\ c_2}$\\\\\n2 & $\\hat{ \\JJ}_{\\text{sum}}, \\WW_{1bc}, \\tilde{ \\WW}^{3bc}$\\\\\n3 &$ (\\hat{ \\JJ}^{1})_1^{\\ 
3}$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{tab:generatorscylinder}(Redundant) generators of the cylinder theory for $\\mf{g}=\\mf{su}(3)$.}\n\\end{table}\n\nThis cohomological problem is partly solved by following the same approach as that advocated in Subsec. \\ref{subsubsec:qDSspecseq}. The redundant generators of the reduced algebra are the tic-tac-toed versions of the currents $ (\\hat J^{1})_1^{\\ 3}$ and $\\hat J_{\\text{sum}} \\equiv (\\hat J^{1})_1^{\\ 2} + (\\hat J^{1})_2^{\\ 3},$ as well as of the generators $\\{(J^{2})_{b_1}^{\\ b_2},\\,(J^{3})_{c_1}^{\\ c_2},\\,W_{abc},\\,\\tilde W^{abc}\\}$. These currents can be seen arranged according to their dimensions in Table \\ref{tab:generatorscylinder}.\n\nThe explicit form of the tic-tac-toed generators of dimensions zero and one are fairly simple,\n\\begin{align}\n\\WW_{3bc}\t\t\t\t&~\\colonequals~ W_{3bc}~,\\\\\n\\tilde{\\WW}^{1bc}\t\t&~\\colonequals~ \\tilde W^{1bc}~,\\\\\n\\WW_{2bc}\t\t\t\t&~\\colonequals~ W_{2bc} + 2 W_{3bc}(\\hat J^{(1)})_3^{\\ 3}~,\\\\\n\\tilde{ \\WW}^{2bc}\t\t&~\\colonequals~ \\tilde W^{2bc} + 2 \\tilde W^{1bc} (\\hat J^{(1)})_1^{\\ 1}~,\\\\\n(\\JJ^{2})_{b_1}^{\\ b_2}\t&~\\colonequals~ (J^{2})_{b_1}^{\\ b_2}~,\\\\\n(\\JJ^{3})_{c_1}^{\\ c_2}\t&~\\colonequals~ (J^{3})_{c_1}^{\\ c_2}~.\n\\end{align}\nOn the other hand, the higher dimensional generators are quite complicated,\n{\\small\n\\begin{align}\\begin{split}\n\\hat{ \\JJ}_{\\text{sum}}&~\\colonequals~ (\\hat J^{1})_1^{\\ 2} + (\\hat J^{1})_2^{\\ 3} \\\\\n& - \\left(-2(2+k)\\partial(\\hat J^{1})_2^{\\ 2} - 4(2+k)\\partial(\\hat J^{1})_3^{\\ 3} +2(\\hat J^{1})_1^{\\ 1}(\\hat J^{1})_2^{\\ 2} + 2 (\\hat J^{1})_1^{\\ 1}(\\hat J^{1})_3^{\\ 3} + 2(\\hat J^{1})_2^{\\ 2}(\\hat J^{1})_3^{\\ 3} \\right)~,\n\\end{split}\\\\\n\\begin{split}\n\\WW_{1bc}&~\\colonequals~ W_{1bc} - \\left(2 W_{2bc}(\\hat J^{1})_1^{\\ 1} -2 W_{3bc}(\\hat J^{1})_2^{\\ 3} \\right)\\\\\n&+ \\left( -4\\left((\\hat J^{1})_1^{\\ 1} + (\\hat 
J^{1})_2^{\\ 2}\\right) W_{3bc} (\\hat J^{1})_3^{\\ 3} -\\frac{1}{3}(-20-12k)W_{3bc}\\partial(\\hat J^{1})_3^{\\ 3} - \\frac{8}{3} \\partial W_{3bc} (\\hat J^{1})_3^{\\ 3}\\right)~,\n\\end{split}\\\\\n\\begin{split}\n\\tilde{ \\WW}^{3bc}&~\\colonequals~ \\tilde W^{3bc} - \\left( 2\\tilde W^{2bc}(\\hat J^{1})_3^{\\ 3} + 2 \\tilde W^{1bc}(\\hat J^{1})_2^{\\ 3} \\right) +4(\\hat J^{1})_3^{\\ 3} \\tilde W^{1bc} (\\hat J^{1})_2^{\\ 2}\\\\\n& - \\tilde W^{1bc}\\partial\\left(-\\frac{4}{3}(\\hat J^{1})_1^{\\ 1} +(8+4k)(\\hat J^{1})_3^{\\ 3} \\right) - \\partial \\tilde W^{1bc}\\left( -\\frac{4}{3}(\\hat J^{1})_1^{\\ 1} -\\frac{4}{3}(\\hat J^{1})_3^{\\ 3}\\right)~,\n\\end{split}\\\\\n\\begin{split}\n(\\hat{ \\JJ}^{1})_1^{\\ 3}&~\\colonequals~ (\\hat J^{1})_1^{\\ 3} - \\left( 2(k+2)\\partial(\\hat J^{1})_1^{\\ 2} -2 (\\hat J^{1})_2^{\\ 3}\\left((\\hat J^{1})_2^{\\ 2}+(\\hat J^{1})_3^{\\ 3} \\right) -2(\\hat J^{1})_1^{\\ 2}\\left( (\\hat J^{1})_1^{\\ 1} + (\\hat J^{1})_2^{\\ 2} \\right) \\right)\\\\\n&+ 4(4+4k+k^2)\\partial^2(\\hat J^{1})_1^{\\ 1} - 4(2+k)(\\hat J^{1})_1^{\\ 1}\\partial (\\hat J^{1})_1^{\\ 1}+ 4(2+k)(\\hat J^{1})_1^{\\ 1}\\partial (\\hat J^{1})_2^{\\ 2}\\\\\n& -4 \\left( (\\hat J^{1})_1^{\\ 1} + (\\hat J^{1})_3^{\\ 3} \\right)\\left((\\hat J^{1})_2^{\\ 2}+(\\hat J^{1})_3^{\\ 3} \\right)\\left((\\hat J^{1})_1^{\\ 1}+(\\hat J^{1})_2^{\\ 2}\\right)~.\n\\end{split}\n\\end{align}}%\n\nOur next task should be to check for redundancies by computing null relations. This analysis is substantially complicated by the presence of dimension zero fields in the cohomology. This means that we don't have an algorithm for finding such redundancies that must terminate in principle. 
Instead, we use the nulls of $T_3$ to predict some of the nulls in the cylinder theory.\n\n\\paragraph{Dimension zero nulls}\nStarting with the $\\mathbf{(8,1,1)}$ nulls and specializing the indices to $(a_1,a_2)=(3,1)$ we obtain the null relation\n\\begin{equation}\n0=\\frac{1}{3}W_{3bc}\\tilde W^{1bc} + (J^{1})_{3}^{\\ a} (J^{1})_{a}^{\\ 1} - 3 \\partial (J^{1})_{3}^{\\ 1} = \\frac{1}{4} +\\frac{1}{3} {\\WW}_{3bc} \\tilde {\\WW}^{1bc} + d(\\ldots)~.\n\\end{equation}\nSimilarly, starting with the $\\mathbf{(8,8,1)}$ nulls and specializing the indices to $(a_1,a_2)=(3,1)$ we obtain the null relation\n\\begin{align}\\begin{split}\n0&~=~\\left(W_{3b_1c}\\tilde W^{1b_2c} - \\frac{1}{3}\\delta_{b_1}^{b_2}W_{3bc}\\tilde W^{1bc} \\right) + (J^{(1)})_{3}^{\\ 1} (J^{(2)})_{b_1}^{\\ b_2}~,\\\\\n &~=~ \\left({\\WW}_{3b_1c} \\tilde {\\WW}^{1b_2c} - \\frac{1}{3} \\delta_{b_1}^{b_2} {\\WW}_{3bc} \\tilde {\\WW}^{1bc} \\right) + d(\\ldots)~.\n\\end{split}\\end{align}\nSimilar nulls can be found by interchanging the second and third puncture. 
In summary, we have the relations\n\\begin{equation}\n\\label{WWtinverse}\n{\\WW}_{3b_1c} \\tilde {\\WW}^{1b_2c} = -\\frac{1}{4} \\delta_{b_1}^{b_2}\\;, \\qquad {\\WW}_{3bc_1} \\tilde {\\WW}^{1bc_2} = - \\frac{1}{4} \\delta_{c_1}^{c_2}~.\n\\end{equation}\nThis shows that, up to a rescaling, $\\WW_{3bc}(z)$ and $\\tilde\\WW^{1bc}(z)$ can be thought of as inverses of one another.\n\nNext, we look at the $\\mathbf{(\\bar{6},3,3)}$ nulls and specialize $a_1=a_2=1$, which gives us\n\\begin{align}\\begin{split}\n0&~=~2(J^{1})_{\\alpha_1}^{\\ 1} W_{\\alpha_2bc}\\epsilon^{\\alpha_1\\alpha_21} + \\tilde W^{1b_1c_1}\\tilde W^{1b_2c_2} \\epsilon_{bb_1b_2}\\epsilon_{cc_1c_2}~,\\\\ \n &~=~\\WW_{3bc} + \\tilde{\\WW}^{1b_1c_1}\\tilde{\\WW}^{1b_2c_2} \\epsilon_{bb_1b_2}\\epsilon_{cc_1c_2} + d(\\dots)~.\n\\end{split}\\end{align}\nSimilarly from the nulls in the $\\mathbf{(6,\\bar{3},\\bar{3})}$ we find\n\\begin{align}\\begin{split}\n0&~=~(J^{1})_{3}^{\\ \\ \\alpha_1} \\tilde W^{\\alpha_2bc}\\epsilon_{\\alpha_1\\alpha_23} +\\frac{1}{2} W_{3b_1c_1} W_{3b_2c_2} \\epsilon^{bb_1b_2}\\epsilon^{cc_1c_2}~,\\\\\n &~=~-\\frac{1}{2} \\tilde{\\WW}^{1bc} +\\frac{1}{2} \\WW_{3b_1c_1} \\WW_{3b_2c_2} \\epsilon^{bb_1b_2}\\epsilon^{cc_1c_2} + d(\\ldots)~.\n\\end{split}\\end{align}\nCombining these with the previous relations, we find that\n\\begin{equation}\n\\frac{1}{3!}\\tilde {\\WW}^{1bc}\\tilde {\\WW}^{1b_1c_1}\\tilde {\\WW}^{1b_2c_2} \\epsilon_{bb_1b_2}\\epsilon_{cc_1c_2} = -\\frac{1}{3!}\\tilde {\\WW}^{1bc}{\\WW}_{3bc} = - \\frac{1}{3!}{\\WW}_{3bc} \\tilde {\\WW}^{1bc} = \\frac{1}{8}~,\n\\end{equation}\nand \n\\begin{equation}\n\\frac{1}{3!}{\\WW}_{3bc}{\\WW}_{3b_1c_1} {\\WW}_{3b_2c_2} \\epsilon^{bb_1b_2}\\epsilon^{cc_1c_2} = \\frac{1}{3!}{\\WW}_{3bc}\\tilde {\\WW}^{1bc} = -\\frac{1}{8}~.\n\\end{equation}\nThese are conditions on the determinants of $\\WW_{3bc}$ and $\\tilde \\WW^{1bc}$ thought of as three-by-three matrices. 
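The relations \eqref{WWtinverse} together with these determinant conditions are the chiral-algebra version of a finite-dimensional identity: the $\epsilon\epsilon gg$ combination builds the cofactor matrix of a $3\times 3$ matrix, and contracting it back against $g$ returns $\det g$ times the identity, so at unit determinant the inverse is quadratic in $g$. A small exact-arithmetic sketch (the integer matrix is an arbitrary test point, not from the text):

```python
from fractions import Fraction
from itertools import permutations

def eps(p):
    # sign of the permutation p
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

g = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]          # generic test matrix

# det g via the Leibniz formula
det = sum(eps(p) * g[0][p[0]] * g[1][p[1]] * g[2][p[2]]
          for p in permutations(range(3)))

def cof(b, c):
    # cof_{bc} = (1/2) eps_{b b1 b2} eps_{c c1 c2} g_{b1 c1} g_{b2 c2}
    total = 0
    for pb in permutations(range(3)):
        for pc in permutations(range(3)):
            if pb[0] == b and pc[0] == c:
                total += eps(pb) * eps(pc) * g[pb[1]][pc[1]] * g[pb[2]][pc[2]]
    return Fraction(total, 2)

# sum_c g_{ac} cof_{bc} = det(g) delta_{ab}; for det g = 1 this expresses
# the inverse through terms quadratic in g
for A in range(3):
    for B in range(3):
        assert sum(g[A][C] * cof(B, C) for C in range(3)) == det * (A == B)
```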
Note that we used the relation $\\tilde {\\WW}^{1bc}{\\WW}_{3bc} = {\\WW}_{3bc} \\tilde {\\WW}^{1bc}$, which is true in cohomology:\n\\begin{equation}\n\\tilde {\\WW}^{1bc}{\\WW}_{3bc} = {\\WW}_{3bc} \\tilde {\\WW}^{1bc} -d(9 \\partial b_3^{\\ 1})~.\n\\end{equation}\nIf we now introduce rescaled operators $g_{bc} \\colonequals -2 {\\WW}_{3bc}$ and $\\tilde g^{bc} \\colonequals 2 \\tilde {\\WW}^{1bc}$, then $g$ and $\\tilde g$ have unit determinant and are inverses of one another. Because of the determinant condition, this also means that we can rewrite $\\tilde g$ in terms of positive powers of $g$, so only one needs to be considered as an honest generator of the chiral algebra.\n\n\\paragraph{Dimension one nulls}\nWe can continue the same analysis at dimension one. The second relation in the $\\mathbf{(3,3,3)}$ representation gives us\n\\begin{equation}\n(\\JJ^{2})_{b}^{\\ \\beta}\\WW_{3 \\beta c_1} = (\\JJ^{3})_{c_1}^{\\ \\gamma}\\WW_{3 b\\gamma}~.\n\\end{equation}\nBy taking the normal ordered product of both sides with $\\tilde {\\WW}^{1 b c_2}$ and re-ordering (ignoring BRST exact terms), we can make a sequence of replacements using the dimension zero relations of the previous paragraph and end up with the following derivation,\n{\\small\n\\begin{align}\n&&\\tilde {\\WW}^{1 b c_2}({\\JJ}^{(2)})_{b}^{\\ \\beta}{\\WW}_{3 \\beta c_1} &~=~ \\tilde {\\WW}^{1 b c_2}({\\JJ}^{(3)})_{c_1}^{\\ \\gamma}{\\WW}_{3 b\\gamma}\\nn\\\\\n&\\Longrightarrow&({\\JJ}^{(2)})_{b}^{\\ \\beta}{\\WW}_{3 \\beta c_1}\\tilde {\\WW}^{1 b c_2} +\\tfrac{8}{3}{\\WW}_{3 \\beta c_1}\\partial\\tilde {\\WW}^{1 \\beta c_2} &~=~ ({\\JJ}^{(3)})_{c_1}^{\\ \\gamma}{\\WW}_{3 b\\gamma}\\tilde {\\WW}^{1 b c_2} - \\tfrac{1}{3} {\\WW}_{3 \\beta c_1}\\partial\\tilde {\\WW}^{1 \\beta c_2} +\\delta_{c_1}^{c_2} {\\WW}_{3 \\beta \\gamma}\\partial\\tilde {\\WW}^{1 \\beta \\gamma}\\nn\\\\\n&\\Longrightarrow&({\\JJ}^{(2)})_{b}^{\\ \\beta}{\\WW}_{3 \\beta c_1}\\tilde {\\WW}^{1 b c_2} &~=~ ({\\JJ}^{(3)})_{c_1}^{\\ 
\\gamma}{\\WW}_{3 b\\gamma}\\tilde {\\WW}^{1 b c_2} - 3 \\left( {\\WW}_{3 \\beta c_1}\\partial\\tilde {\\WW}^{1 \\beta c_2} -\\tfrac{1}{3}\\delta_{c_1}^{c_2} {\\WW}_{3 \\beta \\gamma}\\partial\\tilde {\\WW}^{1 \\beta \\gamma} \\right)\\nn\\\\\n&\\Longrightarrow&({\\JJ}^{(2)})_{b}^{\\ \\beta}{\\WW}_{3 \\beta c_1}\\tilde {\\WW}^{1 b c_2} &~=~ -\\tfrac{1}{4} ({\\JJ}^{(3)})_{c_1}^{\\ c_2} - 3 \\left( {\\WW}_{3 \\beta c_1}\\partial\\tilde {\\WW}^{1 \\beta c_2} -\\tfrac{1}{3}\\delta_{c_1}^{c_2} {\\WW}_{3 \\beta \\gamma}\\partial\\tilde {\\WW}^{1 \\beta \\gamma} \\right)\\nn\\\\\n&\\Longrightarrow&({\\JJ}^{(2)})_{b}^{\\ \\beta}{ g}_{\\beta c_1}\\tilde { g}^{ b c_2} &~=~ ({\\JJ}^{(3)})_{c_1}^{\\ c_2} - 3 \\left( {g}_{\\beta c_1}\\partial\\tilde { g}^{ \\beta c_2} -\\tfrac{1}{3}\\delta_{c_1}^{c_2} { g}_{\\beta \\gamma}\\partial\\tilde { g}^{\\beta \\gamma} \\right)~.\\label{eq:JintermsofJ}\n\\end{align}}%\nThus we see that the current $\\JJ^{(3)}$ is not an independent generator. \n\nOther dimension one nulls can be obtained from the first equality in the $\\mathbf{(3,3,3)}$. Here we find\n\\begin{equation}\n(J^{1})_{3}^{\\ \\alpha}W_{\\alpha bc} = (J^{2})_{b}^{\\ \\beta}W_{3 \\beta c} ~\\Longrightarrow~ \\frac{1}{2}\\WW_{2bc}+\\frac{2}{3}\\partial \\WW_{3bc} = (\\JJ^{2})_{b}^{\\ \\beta}\\WW_{3 \\beta c}~,\n\\end{equation}\nwhich implies that the generator $\\WW_{2bc}$ is not independent. Similarly, from the $\\mathbf{(\\bar3,\\bar3,\\bar3)}$ relations one finds\n\\begin{equation}\n(J^{1})_{\\alpha}^{\\ 1}\\tilde W^{\\alpha bc} = (J^{2})_{\\beta}^{\\ b}\\tilde W^{1 \\beta c} ~\\Longrightarrow~ \\frac{1}{2}\\tilde{\\WW}^{2bc}-\\frac{2}{3}\\partial \\tilde{\\WW}^{1bc} = (\\JJ^{2})_{\\beta}^{\\ b}\\tilde {\\WW}^{1 \\beta c}~,\n\\end{equation}\nwhich implies that $\\tilde{\\WW}^{2bc}$ is not an independent generator. 
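Two finite-dimensional matrix identities underlie the manipulations above: the $\\epsilon$-contraction formula computing a three-by-three determinant, and the Cayley--Hamilton statement that a unit-determinant matrix has an inverse expressible in non-negative powers of itself (which is what allows $\\tilde g$ to be traded for composites of $g$). Both can be sanity-checked numerically. The following Python sketch is our own illustration: it treats $g$ as an ordinary commuting matrix, ignoring normal ordering and BRST-exact terms.

```python
import numpy as np

# Levi-Civita symbol in three dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

# (1/3!) eps^{b b1 b2} eps^{c c1 c2} M_{b c} M_{b1 c1} M_{b2 c2} = det M
contraction = np.einsum('abc,def,ad,be,cf->', eps, eps, M, M, M) / 6.0
assert np.isclose(contraction, np.linalg.det(M))

# Rescale to unit determinant (np.cbrt also handles a negative determinant).
g = M / np.cbrt(np.linalg.det(M))

# Cayley-Hamilton for det g = 1: the inverse is a polynomial in g,
#   g^{-1} = g^2 - (tr g) g + (1/2)((tr g)^2 - tr g^2) * 1.
c2 = (np.trace(g) ** 2 - np.trace(g @ g)) / 2.0
g_inv = g @ g - np.trace(g) * g + c2 * np.eye(3)
assert np.allclose(g_inv, np.linalg.inv(g))
```

In the chiral algebra itself these relations of course hold only up to the normal-ordering corrections and $d$-exact terms displayed in the derivations above.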
\n\nBased on the analysis of the index in Appendix \\ref{subapp:cylinder_cap_index}, we expect that all higher dimensional generators can be similarly related via null relations to composites of $\\JJ^{2}$ and $g_{bc} = -2\\WW_{3 b c}$. It would be interesting if this could be proven as a consequence of only the null states that are guaranteed to exist based on nulls of the unreduced theory, although such a simplification is not a necessary condition for the existence of the desired nulls.\n\\subsection{Reduction of \\texorpdfstring{$T_3$}{T3} to free hypermultiplets}\n\\label{subapp:e6_to_free}\n\nIn this appendix we provide some details about the reduction of the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra to free symplectic bosons. This corresponds to the subregular embedding $\\mf{su}(2)\\hookrightarrow\\mf{su}(3)$, which is given by\n\\begin{equation}\n\\Lambda(t_0) = \\frac12(T_1^{\\phantom{1}1} - T_3^{\\phantom{1}3})~,\\qquad\n\\Lambda(t_-) = T_3^{\\phantom{1}1}~,\\qquad\n\\Lambda(t_+) = T_1^{\\phantom{1}3}~.\n\\end{equation}\nThe grading on the Lie algebra by the Cartan element $\\Lambda(t_0)$ is half-integral. In order to arrive at first-class constraints, we introduce a different Cartan element $\\delta$ that gives an integral grading. More specifically, we have $\\delta = \\frac13 (T_1^{\\ 1} + T_2^{\\ 2} - 2T_3^{\\ 3})$. With respect to the $\\delta$-grading there are two negatively graded currents and we consequently impose the constraints $\\left(J^{1}\\right)_3^{\\ 1} = 1$ and $\\left(J^{1}\\right)_3^{\\ 2} = 0$. 
These are implemented via a BRST procedure with differential given by\n\\begin{equation}\nd(z) = \\left(\\left((J^{1})_3^{\\ 1} -1 \\right) c_1^{\\ 3} + (J^{1})_3^{\\ 2} c_2^{\\ 3}\\right)(z)~,\n\\end{equation}\nwhere the ghost pairs $b_3^{\\ 1}, c_1^{\\ 3} $ and $b_3^{\\ 2},c_2^{\\ 3}$ have the usual singular OPEs.\n\nImplementing the first step of the qDS procedure, one obtains the (redundant) generators of the chiral algebra at the level of vector spaces. Applying the tic-tac-toe procedure to produce genuine chiral algebra generators, we obtain the set of generators that were listed in Table \\ref{tab:T3_reduced_generators}. The explicit forms of these generators are given as follows,\n{\\allowdisplaybreaks\n\\begin{align}\n\\label{eq:reduced_generators_def}\n\\JJ_{\\mf{u}(1)} &~\\colonequals~ (\\hat J^{1})_1^{\\ 1}-2 (\\hat J^{1})_2^{\\ 2}+(\\hat J^{1})_3^{\\ 3}\\nn\\\\\n(\\hat {\\JJ}^{1})_1^{\\ 2} &~\\colonequals~ (\\hat J^{1})_1^{\\ 2}\\nn\\\\\n(\\hat {\\JJ}^{1})_1^{\\ 3} &~\\colonequals~ (\\hat J^{1})_1^{\\ 3} -\\left( -(k+1)\\partial (\\hat J^{1})_3^{\\ 3} +(\\hat J^{1})_1^{\\ 1} (\\hat J^{1})_3^{\\ 3} -(\\hat J^{1})_2^{\\ 1}(\\hat J^{1})_1^{\\ 2}\\right)\\nn\\\\\n(\\hat {\\JJ}^{1})_2^{\\ 3} &~\\colonequals~ (\\hat J^{1})_2^{\\ 3} -\\left( (k+2)\\partial (\\hat J^{1})_2^{\\ 1} +(\\hat J^{1})_3^{\\ 3} (\\hat J^{1})_2^{\\ 1} -(\\hat J^{1})_2^{\\ 2}(\\hat J^{1})_2^{\\ 1}\\right)\\nn\\\\\n{\\WW}_{1bc} &~\\colonequals~ W_{1bc} - W_{3bc} (\\hat J^{1})_1^{\\ 1}\\nn\\\\\n{\\WW}_{2bc} &~\\colonequals~ W_{2bc} - W_{3bc}(\\hat J^{1})_2^{\\ 1}\\\\\n{\\WW}_{3bc} &~\\colonequals~ W_{3bc}\\nn\\\\\n\\tilde {\\WW}^{1bc} &~\\colonequals~ \\tilde W^{1bc}\\nn\\\\\n\\tilde {\\WW}^{2bc} &~\\colonequals~ \\tilde W^{2bc}\\nn\\\\\n\\tilde {\\WW}^{3bc} &~\\colonequals~ \\tilde W^{3bc} - \\left(- \\tilde W^{1bc} (\\hat J^{1})_1^{\\ 1} - \\tilde W^{2bc} (\\hat J^{1})_2^{\\ 1} \\right)\\nn\\\\\n(\\JJ^{2})_{b_1}^{\\ b_2}&~\\colonequals~( J^{2})_{b_1}^{\\ 
b_2}\\nn\\\\\n(\\JJ^{3})_{c_1}^{\\ c_2}&~\\colonequals~( J^{3})_{c_1}^{\\ c_2}~\\nn.\n\\end{align}\n}\n\nThe generators $\\WW_{3bc}$ and $\\tilde\\WW^{1bc}$ have the correct charges and mutual OPE to be identified as the expected symplectic bosons. It follows that the reduction argument will be complete if we can show that at the specific value of the level of interest $k=-3$, all the other generators listed in Eqn. \\eqref{eq:reduced_generators_def} participate in a null state condition that allows them to be equated with composites of $\\WW_{3bc}$ and $\\tilde\\WW^{1bc}$. \n\nIndeed, we do find such relations to account for all additional generators. At level $h=1$, we find\n\\begin{align}\n\\JJ_{\\mf{u}(1)} \t\t\t&~=~ - {\\WW}_{3bc} \\tilde {\\WW}^{1bc}~,\\label{eq:nullJU1}\\\\ \n(\\JJ^{2})_{b_1}^{\\ b_2} &~=~ -\\left({\\WW}_{3b_1c} \\tilde {\\WW}^{1b_2c} - \\frac13 \\delta_{b_1}^{b_2} {\\WW}_{3bc} \\tilde {\\WW}^{1bc} \\right)~,\\label{eq:nullJSU3_2}\\\\\n(\\JJ^{3})_{c_1}^{\\ c_2} &~=~ -\\left({\\WW}_{3bc_1} \\tilde {\\WW}^{1bc_2} - \\frac13 \\delta_{c_1}^{c_2} {\\WW}_{3bc} \\tilde {\\WW}^{1bc} \\right)~,\\label{eq:nullJSU3_3}\\\\\n\\tilde{\\WW}_{2bc} \t\t&~=~ \\frac{1}{2} \\epsilon_{b b_1b_2} \\epsilon_{c c_1c_2} \\tilde {\\WW}^{1b_1c_1} \\tilde {\\WW}^{1b_2c_2}~,\\label{eq:nullWlower}\\\\\n\\tilde{\\WW}^{2bc} \t\t&~=~ -\\frac{1}{2} \\epsilon^{b b_1b_2} \\epsilon^{c c_1c_2} {\\WW}_{3b_1c_1} {\\WW}_{3b_2c_2}~.\\label{eq:nullWupper}\n\\end{align}\nAt dimension $h=3\/2$, one can find the null relations\n\\small\n\\begin{align}\n(\\hat{\\JJ}^{1})_1^{\\ 2} &~=~ \\frac16\\WW_{3b_1c_1}\\WW_{3b_2c_2}\\WW_{3b_3c_3}\\epsilon^{b_1b_2b_3}\\epsilon^{c_1c_2c_3}~,\\\\\n(\\hat{\\JJ}^{1})_2^{\\ 3} &~=~ -\\frac16\\tilde{\\WW}^{1b_1c_1}\\tilde{\\WW}^{1b_2c_2}\\tilde{\\WW}^{1b_3c_3}\\epsilon_{b_1b_2b_3}\\epsilon_{c_1c_2c_3}~,\\\\\n\\WW_{1bc}\t\t\t\t&~=~ 2\\partial\\WW_{3bc} + \\frac{5}{12}\\WW_{3b_1c_1}\\WW_{3b_2c_2}\\tilde{\\WW}^{1b_3c_3}\\epsilon^{\\beta 
b_1b_2}\\epsilon^{\\gamma c_1c_2}\\epsilon_{\\beta b b_3}\\epsilon_{\\gamma c c_3} -\\frac13 \\WW_{3(b(c}\\WW_{3b_1)c_1)}\\tilde{\\WW}^{1b_1c_1}~,\\\\\n\\tilde{\\WW}^{3bc}\t\t\t\t&~=~- \\partial\\tilde{\\WW}^{1bc} + \\frac13 \\tilde{\\WW}^{1b_1c_1}\\tilde{\\WW}^{1b_2c_2}{\\WW}_{3b_3c_3}\\epsilon_{\\beta b_1b_2}\\epsilon_{\\gamma c_1c_2}\\epsilon^{\\beta b b_3}\\epsilon^{\\gamma c c_3} -\\frac{2}{3}\\tilde{\\WW}^{1(b(c}\\tilde{\\WW}^{1b_1)c_1)}{\\WW}_{3b_1c_1}~.\n\\end{align}\n\\normalsize\nFinally, at dimension $h=2$, we find\n\\small\n\\begin{multline}\n(\\hat{\\JJ}^{1})_1^{\\ 3} = \\frac{14}{9}\\WW_{3bc}\\partial\\tilde{\\WW}^{1bc}\n-\\frac{8}{9}\\partial\\WW_{3bc}\\tilde{\\WW}^{1bc}\n+\\frac{2}{9}\\WW_{3(b_1(c_1}\\WW_{3b_2)c_2)}\\tilde{\\WW}^{1(b_1(c_1}\\tilde{\\WW}^{1b_2)c_2)}\\\\\n-\\frac{7}{36}\\WW_{3b_1c_1}\\WW_{3b_2c_2}\\tilde{\\WW}^{1b_3c_3}\\tilde{\\WW}^{1b_4c_4}\\epsilon^{b_1b_2b}\\epsilon^{c_1c_2c}\\epsilon_{b_3b_4b}\\epsilon_{c_3c_4c}~.\n\\end{multline}\n\\normalsize\n\nIt is interesting to see these null relations as a consequence of the nulls in the original chiral algebra. To that end, let us re-derive the dimension one nulls in this manner. Starting with the $\\mathbf{(8,1,1)}$ null states in Table \\ref{tab:relation_reductions} and specializing the indices to $(a_1,a_2)=(3,1)$, we find the null relation\n\\begin{equation}\n0=\\tfrac13 W_{3bc}\\tilde W^{1bc} + (J^{(1)})_{3}^{\\ a} (J^{(1)})_{a}^{\\ 1} + 3 \\partial (J^{(1)})_{3}^{\\ 1} = \\tfrac13 {\\WW}_{3bc} \\tilde {\\WW}^{1bc} +\\tfrac13 \\JJ_{\\mf{u}(1)} + d(\\ldots)~,\n\\end{equation}\nthus reproducing Eqn. \\eqref{eq:nullJU1}. 
Alternatively, starting with the null states in the $\\mathbf{(8,8,1)}$ and specializing the indices to $(a_1,a_2)=(3,1)$, we obtain the null relation\n\\begin{multline}\n0=\\left(W_{3b_1c}\\tilde W^{1b_2c} - \\frac13 \\delta_{b_1}^{b_2}W_{3bc}\\tilde W^{1bc} \\right) -\\frac13 \\beta_1 (J^{(1)})_{3}^{\\ 1} (J^{(2)})_{b_1}^{\\ b_2} \\\\\n = \\left({\\WW}_{3b_1c} \\tilde {\\WW}^{1b_2c} - \\frac13 \\delta_{b_1}^{b_2} {\\WW}_{3bc} \\tilde {\\WW}^{1bc} \\right)+ \\left(\\JJ^{(2)}\\right)_{b_1}^{\\ b_2} + d(\\ldots)~,\n\\end{multline}\nwhich precisely matches the null relation of Eqn. \\eqref{eq:nullJSU3_2}. Similarly, one can reproduce \\eqref{eq:nullJSU3_3}. It is straightforward to check that the null relations in Eqns. \\eqref{eq:nullWlower}-\\eqref{eq:nullWupper} can be obtained from the relations in the $\\mathbf{(\\bar{6},3,3)}$ and $\\mathbf{(6,\\bar{3},\\bar{3})}$ by specializing the indices appropriately.\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\nA large and interesting class of interacting quantum field theories is given by the \\emph{theories of class $\\SS$} \\cite{Gaiotto:2009we,Gaiotto:2009hg}. These are superconformal field theories (SCFTs) with half-maximal (\\ie, $\\NN=2$) supersymmetry in four dimensions. The most striking feature of this class of theories is that they assemble into vast duality webs that are neatly describable in the language of two-dimensional conformal geometry. This structure follows from the defining property of theories of class $\\SS$: they can be realized as the low energy limits of (partially twisted) compactifications of six-dimensional CFTs with $(2,0)$ supersymmetry on punctured Riemann surfaces.\n\nGeneric theories of class $\\SS$ are strongly interacting. (In many cases they possess generalized weak-coupling limits wherein the neighborhood of a certain limit point on their conformal manifold can be described by a collection of \\emph{isolated} strongly coupled SCFTs with weakly gauged flavor symmetries.) 
It is remarkable, then, that one can say much of anything about these theories in the general case. One classic and successful approach has been to restrict attention to the weakly coupled phases of these theories by, for example, studying the physics of Coulomb branch vacua at the level of the low energy effective Lagrangian and the spectrum of BPS states. Relatedly, one may utilize brane constructions of these theories to extract some features of the Coulomb branch physics \\cite{Witten:1997sc,Benini:2009gi}.\n\nAn alternative -- and perhaps more modern -- tactic is to try to constrain or solve for various aspects of these theories using consistency conditions that follow from duality. This approach was successfully carried out in \\cite{Gaiotto:2012xa} (building on the work of \\cite{Gadde:2009kb, Gadde:2010te, Gadde:2011ik, Gadde:2011uv}) to compute the superconformal index of a very general set of class $\\SS$ fixed points (see also \\cite{Lemos:2012ph,Mekareeya:2012tn} for extensions to even more general cases). Subsequently, the framework for implementing this approach to study the (maximal) Higgs branch was established in \\cite{Moore:2011ee}. The general aspiration in this sort of program is that the consistency conditions imposed by generalized $S$-duality and the (known) behavior of these theories under certain partial Higgsing and weak gauging operations may be sufficient to completely determine certain nice observables. In this sense the approach might be thought of as a sort of ``theory space bootstrap''. 
One expects that this approach has the greatest probability of success when applied to observables of class $\\SS$ theories that are protected against corrections when changing exactly marginal couplings, thus leading to objects that are labelled by topological data and have no dependence on continuous parameters.%\n\\footnote{Observables with a manageable dependence on the marginal couplings, such as $\\Rb^4_{\\epsilon_1\\epsilon_2}$ and $\\Sb^4$ partition functions, also provide natural settings for this type of argument.}\n\nA new class of protected observables for four-dimensional $\\NN=2$ SCFTs was introduced in \\cite{Beem:2013sza}. There it was shown that certain carefully selected local operators, restricted to be coplanar and treated at the level of cohomology with respect to a particular nilpotent supercharge, form a closed subalgebra of the operator algebra. Moreover their operator product expansions and correlation functions are meromorphic functions of the operator insertion points on the plane. This subalgebra consequently acquires the structure of a two-dimensional chiral algebra. The spectrum and structure constants of this chiral algebra are subject to a non-renormalization theorem that renders them independent of marginal couplings. 
The existence of this sector can formally be summarized by defining a map that associates to any $\\NN=2$ SCFT in four dimensions the chiral algebra that computes the appropriate protected correlation functions,\n\\begin{equation*}\n\\protect\\raisebox{1pt}{$\\chi$}~:~\\bigg\\{\\bigslant{\\text{$\\NN=2$ SCFTs}}{\\text{Marginal deformations}}\\bigg\\}~\\longrightarrow~\\bigg\\{\\text{Chiral algebras}\\bigg\\}~.\n\\end{equation*}\nChiral algebras with the potential to appear on the right hand side of this map are not generic -- they must possess a number of interesting properties that reflect the physics of their four-dimensional ancestors.\n\nIn this paper we initiate the investigation of chiral algebras that are associated in this manner with four-dimensional theories of class $\\SS$. For lack of imagination, we refer to the chiral algebras appearing in this fashion as \\emph{chiral algebras of class $\\SS$}. For a general strongly interacting SCFT, there is at present no straightforward method for identifying the associated chiral algebra. Success in this task would implicitly fix an infinite amount of protected CFT data (spectral data and three-point couplings) that is generally difficult to determine. However, given the rigid nature of chiral algebras, one may be optimistic that chiral algebras of class $\\SS$ can be understood in some detail by leveraging the constraints of generalized $S$-duality and the wealth of information already available about the protected spectrum of these theories. In the present work, we set up the abstract framework of this bootstrap problem in the language of generalized topological quantum field theory, and put into place as many ingredients as possible to define the problem concretely. We perform some explicit calculations in the case of theories of rank one and rank two, and formulate a number of conjectures for the higher rank case. 
One of our main results is a general prescription to obtain the chiral algebra of a theory with sub-maximal punctures given that of the related theory with all maximal punctures. We demonstrate that the reduction in the rank of a puncture is accomplished in the chiral algebra by quantum Drinfeld-Sokolov reduction, with the chiral algebra procedure mirroring the corresponding four-dimensional procedure involving a certain Higgsing of flavor symmetries.\n\nUltimately we believe that the bootstrap problem for chiral algebras of class $\\SS$ may prove solvable, and we hope that the existence of this remarkable structure will pique the interest of readers with a passion for vertex operator algebras. Characterizing these algebras should prove to be both mathematically and physically rewarding.\n\nThe organization of this paper is as follows. Section \\ref{sec:review} is a two-part review: first of the protected chiral algebra of $\\NN=2$ SCFTs, and then of $\\NN=2$ SCFTs of class $\\SS$. In Section \\ref{sec:TQFT}, we outline the structure of the chiral algebras of class $\\SS$, using the $A_1$ and $A_2$ cases as examples. We also take some steps to formalize the TQFT structure of the chiral algebras of class $\\SS$ so as to emphasize that the structures outlined here are susceptible to rigorous mathematical analysis. In Section \\ref{sec:reducing}, we describe the generalization of our story to the case of theories with sub-maximal punctures. In the process, we are led to consider the problem of quantum Drinfeld-Sokolov reduction for modules of affine Lie algebras. In Section \\ref{sec:cyl_and_cap}, we offer some comments on unphysical chiral algebras that are expected to exist at a formal level in order to complete the TQFT structure. A number of technical details having to do with rank two theories are included in Appendix \\ref{app:level_by_level}. Details having to do with unphysical cylinder and cap chiral algebras appear in Appendix \\ref{app:cylinders_and_caps}. 
Finally, in Appendix \\ref{app:spectral_sequences} we review the methods for computing the cohomology of a double complex using spectral sequences. These methods are instrumental to the analysis of Section \\ref{sec:reducing}.\n\\section{Background}\n\\label{sec:review}\n\nWe begin with a review of the two main topics being synthesized in this paper: the protected chiral algebras of $\\NN=2$ SCFTs and superconformal theories of class $\\SS$. Readers who have studied our first paper on protected chiral algebras \\cite{Beem:2013sza} should be fine skipping Section \\ref{subsec:chiral_review}, while those familiar with the class $\\SS$ literature (for example, \\cite{Gaiotto:2009we,Gadde:2009kb,Gaiotto:2012xa,Chacaltana:2010ks}) may safely skip Section \\ref{subsec:class_S_review}.\n\n\\subsection{Review of protected chiral algebras}\n\\label{subsec:chiral_review}\n\nThe observables we aim to study for class $\\SS$ fixed points are those described by the protected chiral algebras introduced in \\cite{Beem:2013sza} (see also \\cite{Beem:2014kka} for the extension to six dimensions). The purpose of this section is to provide a short overview of how those chiral algebras come about and the properties that were deduced for them in the original papers. We simply state the facts in this section; the interested reader is encouraged to consult the original work for explanations.\n\nThe starting point is the $\\NN=2$ superconformal algebra $\\mathfrak{su}(2,2|2)$. The fermionic generators of the algebra are Poincar\\'e supercharges $\\{\\mathcal{Q}^{\\II}_{\\alpha},\\tilde\\mathcal{Q}_{\\dot\\alpha \\JJ}\\}$ and special conformal supercharges $\\{\\SS^{\\alpha}_{\\II},\\tilde\\SS^{\\dot\\alpha\\JJ}\\}$. 
From these, one can form two interesting nilpotent supercharges that are mixtures of Poincar\\'e and special conformal supercharges,\n\\begin{equation}\n\\label{eq:square_supercharges}\n\\qq_{\\,1}\\ceq \\mathcal{Q}^1_{-}+\\tilde\\SS^{\\dot{-}2}~,\\qquad\\qq_{\\,2}\\ceq \\tilde\\mathcal{Q}_{\\dot{-}2}+\\SS_1^{-}~.\n\\end{equation}\nThese supercharges have the following interesting property. Let us define the subalgebra of the four-dimensional conformal symmetry algebra that acts on a plane $\\Rb^2\\subset\\Rb^4$ as $\\mf{sl}(2)\\times\\overline{\\mf{sl}(2)}$. Let us further denote the complexification of the $\\mf{su}(2)_R$ $R$-symmetry as $\\mf{sl}(2)_R$. These subalgebras have the following nice relationship to the supercharges $\\qq_{\\,i}$,\n\\begin{equation}\n\\label{eq:q_exact_commutators}\n[\\qq_{\\,i},\\mf{sl}(2)]=0~,\\qquad \\{\\qq_{\\,i},\\cdot\\}=\\mathrm{diag}\\left[\\overline{\\mf{sl}(2)}\\times\\mf{sl}(2)_R\\right]~.\n\\end{equation}\nIt follows from these relations that operators that are $\\qq\\,$-closed must behave as meromorphic operators in the plane. They have meromorphic operator product expansions (modulo $\\qq\\,$-exact terms) and their correlation functions are meromorphic functions of the positions. Restricting from the full $\\NN=2$ SCFT to $\\qq\\,$-cohomology therefore defines a two-dimensional chiral algebra. For a pedagogical discussion of chiral algebras, see \\cite{Bouwknegt:1992wg}.\n\nThe conditions for a local operator to define a nontrivial $\\qq\\,$-cohomology element were worked out in \\cite{Beem:2013sza}. It turns out that such operators are restricted to lie in the \\emph{chiral algebra plane}: $\\{x_3=x_4=0\\}$. 
When inserted at the origin, an operator belongs to a well-defined cohomology class if\nand only if it obeys the conditions \n\\begin{equation}\n\\label{schurconditions}\n\\hat h \\ceq \\frac{E-(j_1+j_2)}{2}-R = 0~,\\qquad \\ZZ \\ceq j_1-j_2+r = 0~.\n\\end{equation}\nUnitarity of the superconformal representation requires $\\hat h \\geqslant \\frac{ | \\ZZ | }{2}$, so the first condition actually implies the second. We refer to operators obeying $\\hat h = 0$ as \\emph{Schur operators}. All Schur operators are necessarily $\\mf{su}(2)_R$ highest weight states. Indeed, if the $\\mf{su}(2)_R$ raising generator did \\emph{not} annihilate a Schur operator, it would generate an operator with $\\hat h < 0$, which would violate unitarity.\n\nBecause $\\overline {\\mf{sl}(2)}$ does not commute with $\\qq$, a Schur operator that is naively translated away from the origin in the chiral algebra plane fails to remain $\\qq\\,$-closed. Rather, we translate operators using the twisted translation generator $\\widehat{L}_{-1}\\ceq \\overline{L}_{-1}+\\RR_{-}$, where $\\RR_{-}$ is the lowering operator of $\\mf{su}(2)_R$. As shown in Eqn.~\\eqref{eq:q_exact_commutators}, this is a $\\qq\\,$-exact operation. We find that local operators defining nontrivial $\\qq\\,$-cohomology classes can be written in the form\n\\begin{equation}\n\\label{eq:twisted_translated}\n\\OO(z,\\zb)\\ceq u_{\\II_1}(\\zb) \\cdots u_{\\II_k}(\\zb)\\OO^{\\{\\II_1\\cdots\\II_k\\}}(z,\\zb)~,\\qquad \\text{where}\\qquad u_{\\II}(\\zb)\\ceq\\binom{\\,1\\,}{\\zb}~.\n\\end{equation}\nHere $\\OO^{1\\cdots1}(0)$ is a Schur operator, and we are suppressing Lorentz indices. 
It is these \\emph{twisted-translated} Schur operators, taken at the level of cohomology, that behave as meromorphic operators in two dimensions,\n\\begin{equation}\n\\label{eq:twisted_translated_to_meromorphic}\n\\OO(z) \\ceq [\\OO(z,\\zb)]_{\\qq_{\\,i}}~.\n\\end{equation}\nWe now turn to a recap of the various types of four-dimensional operators that may satisfy the Schur condition, and thus participate in the protected chiral algebra.\n\n\\renewcommand{\\arraystretch}{1.55}\n\\begin{table}\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline \\hline\nMultiplet \t\t\t & $\\OO_{\\rm Schur}$ \t\t \t\t\t\t\t\t & $h\\ceq\\frac{E+j_1+j_2}{2}$ & $\\phantom{-}r$ \t\t \t& Lagrangian ``letters''\\\\ \n\\hline \n$\\hat\\BB_R$ \t\t & $\\Psi^{11\\dots 1}$ \t\t\t\t \t\t\t\t\t & $R$ \t\t \t\t\t\t & $\\phantom{-}0$ \t\t\t& $Q$, $\\tilde Q$ \\\\ \n\\hline\n$\\bar\\DD_{R(j_1,0)}$ & ${\\mathcal{Q}}^1_+\\Psi^{11\\dots 1}_{+\\dots+}$ \t\t\t\t\t & $R+j_1+1$ \t\t\t\t & $-j_1-\\frac12~$ \t& $Q$, $\\tilde Q$, $\\lambda^1_+$ \\\\\n\\hline\n$\\DD_{R(0,j_2)}$ \t & $\\wt{\\mathcal{Q}}^1_{\\dot +}\\Psi^{11\\dots 1}_{\\dot+\\dots\\dot+}$ & $R+j_2+1$\t\t\t\t & $\\phantom{-}j_2+\\frac12$ & $Q$, $\\tilde Q$, $\\tilde\\lambda^1_{\\dot+}$ \\\\\n\\hline\n$\\hat\\CC_{R(j_1,j_2)}$ & $\\mathcal{Q}^1_{+} \\wt{\\mathcal{Q}}^1_{\\dot+}\\Psi^{1\\dots1}_{+\\dots+\\,\\dot+\\dots\\dot+}$ & $R+j_1+j_2 +2$ & $\\phantom{-}j_2-j_1$ & $D_{+\\dot+}^n Q$, $D_{+\\dot+}^n \\tilde Q$, $D_{+\\dot+}^n \\lambda^1_+$, $D_{+\\dot+}^n \\tilde\\lambda^1_{\\dot+}$\\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{schurTable} This table summarizes the manner in which Schur operators fit into short multiplets of the $\\NN=2$ superconformal algebra. We use the naming conventions for supermultiplets of Dolan and Osborn \\cite{Dolan:2002zh}. For each supermultiplet, we denote by $\\Psi$ the superconformal primary. 
There is then a single conformal primary Schur operator ${\\OO}_{\\rm Schur}$, which in general is obtained by the action of some Poincar\\'e supercharges on $\\Psi$. The holomorphic dimension ($h$) and $U(1)_r$ charge ($r$) of ${\\OO}_{\\rm Schur}$ are determined in terms of the quantum numbers $(R,j_1,j_2)$ that label the shortened multiplet. We also indicate the schematic form that ${\\OO}_{\\rm Schur}$ can take in a Lagrangian theory by enumerating the elementary ``letters'' from which the operator may be built. We denote by $Q$ and $\\tilde Q$ the complex scalar fields of a hypermultiplet, by $\\lambda_{\\alpha}^\\II$ and $\\tilde \\lambda_{\\dot \\alpha}^\\II$ the left- and right-handed fermions of a vector multiplet, and by $D_{\\alpha \\dot \\alpha}$ the gauge-covariant derivatives. Note that while in a Lagrangian theory Schur operators are built from these letters, the converse is false -- not \\emph{all} gauge-invariant words of this kind are Schur operators. Only the special combinations with vanishing anomalous dimensions retain this property at finite coupling.} \n\\end{table}\n\n\\subsubsection{Taxonomy of Schur operators}\n\\label{subsubsec:schur_taxonomy}\n\nA Schur operator is annihilated by two Poincar\\'e supercharges of opposite chiralities ($\\mathcal{Q}_-^1$ and $\\widetilde \\mathcal{Q}_{2 \\dot -}$ in our conventions). A summary of the different classes of Schur operators, organized according to how they fit in shortened multiplets of the superconformal algebra, is given in Table \\ref{schurTable} (reproduced from \\cite{Beem:2013sza}). Let us briefly discuss each row in turn.\n\nThe first row describes half-BPS operators that are a part of the Higgs branch chiral ring. These have $E = 2R$ and $j_1 = j_2 = 0$. In a Lagrangian theory, operators of this type schematically take the form $QQ\\cdots\\tilde Q\\tilde Q$. A special case is when $R = 1$, in which case a conserved current is amongst the super-descendants of the primary. 
The half-BPS primary is then the ``moment map'' operator $\\mu_A$ which has dimension two and transforms in the adjoint representation of the flavor symmetry. The $\\mf{su}(2)_R$ highest weight state of the moment map is a Schur operator.\n\nThe operators in the second row are more general $\\NN=1$ chiral operators, obeying $E = 2 R+|r|$ and $r= -j_1 -\\frac12$. Together with the Higgs branch chiral ring operators (which can be regarded as the special case with $r=0$), they make up the so-called \\emph{Hall-Littlewood chiral ring}. These are precisely the operators that are counted by the Hall-Littlewood limit of the superconformal index \\cite{Gadde:2011uv}. In a Lagrangian theory, these operators are obtained by constructing gauge-invariant words out of $Q$, $\\tilde Q$, and the gaugino field $\\lambda^1_+$ (the bottom component of the field strength chiral superfield $W_\\alpha$ with $\\alpha = +$). In complete analogy, the third line describes $\\NN=1$ \\emph{anti}-chiral operators obeying $E= 2 R+|r|$, $r= j_2 +\\frac12$, which belong to the Hall-Littlewood anti-chiral ring. The second and third lines are CPT conjugate to each other. It is believed that $\\DD$ and $\\overline\\DD$ type operators are absent in any theory arising from a (generalized) quiver description with no loops (\\ie, an \\emph{acyclic quiver}). These are theories for which the Hall-Littlewood superconformal index matches the ``Hilbert series'' for the Higgs branch \\cite{Gadde:2011uv,Benvenuti:2010pq}. Equivalently, these are the theories for which the maximal Higgs branch is an honest Higgs branch, with no low-energy abelian gauge field degrees of freedom surviving.\n\nThe fourth line describes the most general type of Schur operators, which belong to supermultiplets that obey less familiar semi-shortening conditions. 
An important operator in this class is the conserved current for $\\mf{su}(2)_R$, which belongs to the $\\hat \\CC_{0(0, 0)}$ supermultiplet; this multiplet also contains the stress-energy tensor and is therefore universally present in any $\\NN=2$ SCFT. This current has one component with $E= 3$, $R=1$, $j_1 = j_2 = \\frac12$, which is a Schur operator.\n\nFinally, let us point out the conspicuous absence of half-BPS operators that belong to the \\emph{Coulomb} branch chiral ring (these take the form $\\mbox{Tr}\\,\\phi^k$ in a Lagrangian theory, where $\\phi$ is the complex scalar of the $\\NN=2$ vector multiplet). These operators are in many ways more familiar than those appearing above due to their connection with Coulomb branch physics. The protected chiral algebra is thus complementary to, rather than overlapping with, a Coulomb branch based analysis of class $\\SS$ physics.\n\n\\subsubsection{The $4d\/2d$ dictionary}\n\\label{subsubsec:4d_2d_dict}\n\nThere is a rich dictionary relating properties of a four-dimensional SCFT with properties of its associated chiral algebra. Let us briefly review some of the universal entries in this dictionary that were worked out in \\cite{Beem:2013sza}. Interested readers should consult that reference for more detailed explanations.\n\n\\subsubsection*{Virasoro symmetry}\n\nThe stress tensor in a four-dimensional $\\NN=2$ SCFT lives in the $\\hat \\CC_{0(0, 0)}$ supermultiplet, which contains as a Schur operator a component of the $\\mf{su}(2)_R$ conserved current $\\JJ^{(\\II\\JJ)}_{\\alpha\\dot\\alpha}$. The corresponding twisted-translated operator gives rise in cohomology to a two-dimensional meromorphic operator of dimension two, which acts as a two-dimensional stress tensor, $T(z)\\ceq [\\JJ_{+\\dot+}(z,\\bar z)]_{\\qq}$. As a result, the global $\\mf{sl}(2)$ symmetry that is inherited from four dimensions is always enhanced to a local Virasoro symmetry acting on the chiral algebra. 
From the current-current OPE, which is governed by superconformal Ward identities, one finds a universal expression for the Virasoro central charge,\n\\begin{equation}\n\\label{eq:cc_relation}\nc_{2d} = -12\\,c_{4d}~,\n\\end{equation}\nwhere $c_{4d}$ is the conformal anomaly coefficient of the four-dimensional theory associated to the square of the Weyl tensor. Note that the chiral algebra is necessarily non-unitary due to the negative sign in Eqn.~\\eqref{eq:cc_relation}. For a free hypermultiplet, for instance, $c_{4d}=\\frac{1}{12}$ reproduces the central charge $c_{2d}=-1$ of a pair of symplectic bosons.\n\n\\subsubsection*{Affine symmetry}\n\nSimilarly, continuous global symmetries of the four-dimensional SCFT (when present) are enhanced to local affine symmetries at the level of the associated chiral algebra. This comes about because the conserved flavor symmetry current sits in the $\\hat\\BB_1$ supermultiplet, whose bottom component is the moment-map operator discussed above. The $\\mf{su}(2)_R$ highest weight component of the moment map operator then gives rise to an affine current, $J_A(z)\\ceq [\\mu_A(z,\\bar z)]_{\\qq}$. The level of the affine current algebra is related to the four-dimensional flavor central charge by another universal relation,\n\\begin{equation}\n\\label{eq:kk_relation}\nk_{2d} = -\\frac12 k_{4d}~.\n\\end{equation}\n\n\\subsubsection*{Hall-Littlewood ring generators as chiral algebra generators}\n\nIdentifying chiral algebra generators is of crucial importance if one is to find an intrinsic characterization of any particular chiral algebra without reference to its four-dimensional parent. A very useful fact is that generators of the Hall-Littlewood chiral ring (and in particular those of the Higgs branch chiral ring) necessarily give rise to generators of the protected chiral algebra after passing to $\\qq\\,$-cohomology. This follows from $\\mf{su}(2)_R$ and $\\mf{u}(1)_r$ selection rules, which forbid such an operator from appearing in any non-singular OPEs. 
A special case is the aforementioned affine currents, which arise from Higgs branch moment map operators with $E=2R=2$. With the exception of theories with free hypermultiplets, these are always generators.\n \n\\subsubsection*{Exactly marginal gauging}\n\nGiven an SCFT $\\TT$ with a flavor symmetry $G$ that has flavor central charge $k_{4d}=4h^{\\vee}$, one may form a new family of SCFTs $\\TT_G$ by introducing an $\\NN=2$ vector multiplet in the adjoint representation of $G$ and gauging the symmetry. This specific value of the flavor central charge ensures that the gauge coupling beta function vanishes, so the procedure preserves conformal invariance.\n\nThere exists a corresponding procedure at the level of chiral algebras that produces the chiral algebra $\\protect\\raisebox{1pt}{$\\chi$}[\\TT_G]$ given that of the original theory $\\protect\\raisebox{1pt}{$\\chi$}[\\TT]$. In parallel with the introduction of a $G$-valued vector multiplet, one introduces a dimension $(1,0)$ ghost system $(b_A,c^A)$ with $A=1,\\ldots,\\dim G$. In the tensor product of this ghost system and the chiral algebra $\\protect\\raisebox{1pt}{$\\chi$}[\\TT]$, one may form a canonical nilpotent BRST operator given by\n\\begin{equation}\n\\label{eq:BRST_def}\nQ_{\\rm BRST} \\ceq \\oint\\frac{dz}{2\\pi i}\\,j_{\\rm BRST}(z)~,\\qquad j_{\\rm BRST}(z) \\ceq \\left(c^A\\left[J_A-\\frac12 f_{AB}^{\\phantom{AB}C}\\,c^B\\,b_C\\right]\\right)(z)~,\n\\end{equation}\nwhere the affine currents $J_A(z)$ are those associated with the $G$ symmetry of $\\protect\\raisebox{1pt}{$\\chi$}[\\TT]$, and $f_{AB}^{\\phantom{AB}C}$ are the structure constants for $G$. Nilpotency of this BRST operator depends on the precise value of the affine level $k_{2d}=-2h^{\\vee}$, and so the self-consistency of this procedure is intimately connected with the preservation of conformal invariance in four dimensions. 
The gauged chiral algebra is then obtained as the cohomology of the BRST operator relative to the $b\\,$-ghost zero modes,\n\\begin{equation}\n\\label{eq:gauged_cohomology}\n\\protect\\raisebox{1pt}{$\\chi$}[\\TT_G] = H^{\\star}_{\\rm BRST} \\left[\\psi\\in\\protect\\raisebox{1pt}{$\\chi$}[\\TT]\\otimes\\protect\\raisebox{1pt}{$\\chi$}_{(b,c)}~|~b^A_{0}\\psi=0\\right]~.\n\\end{equation}\n\n\\subsubsection*{Superconformal index}\n\nThe superconformal index of a superconformal field theory is the Witten index of the radially quantized theory, refined by a set of fugacities that keep track of the maximal set of charges commuting with each other and with a chosen supercharge. For our purposes, we consider the specialization of the index of an $\\NN=2$ SCFT known as the Schur index \\cite{Gadde:2011ik,Gadde:2011uv}. The trace formula for the Schur index reads\n\\begin{equation}\n\\label{eq:SchurSCI}\n\\II^{\\rm Schur}(q; {\\bf x}) = \\mbox{Tr}_{\\HH[\\Sb^3]}(-1)^F q^{E-R}\\prod_i {x_i}^{f_i}~,\n\\end{equation}\nwhere $F$ denotes the fermion number and $\\{f_i\\}$ the Cartan generators of the flavor group. The Schur index counts (with signs) precisely the operators obeying the condition \\eqref{schurconditions}. Moreover, for Schur operators $E-R$ coincides with the left-moving conformal weight $h$ (the eigenvalue of $L_0$),\n\\begin{equation}\n\\label{eq:schu_op_dimension}\nE-R ~=~ \\frac{E+j_1+j_2}{2} ~\\eqc~ h~.\n\\end{equation}\nIt follows that the graded character of the chiral algebra is identical to the Schur index,\n\\begin{equation}\n\\label{charschur}\n\\II_{\\chi}(q; {\\bf x}) ~\\ceq~ \\mbox{Tr}_{\\HH_{\\chi}}\\,(-1)^F q^{L_0} ~=~ \\II^{\\rm Schur}(q; {\\bf x})~,\n\\end{equation}\nwhere $\\HH_{\\chi}$ denotes the state space of the chiral algebra. 
Note that this object is not interpreted as an index when taken as a partition function of the chiral algebra, because (with the exception of chiral algebras associated to $\\NN=4$ theories in four dimensions) the protected chiral algebra itself is not supersymmetric.\n\\subsection{Review of theories of class \\texorpdfstring{$\\SS$}{S}}\n\\label{subsec:class_S_review}\n\nFour-dimensional superconformal field theories of class $\\SS$ may be realized as the low-energy limit of twisted compactifications of an $\\NN=(2,0)$ superconformal field theory in six dimensions on a Riemann surface, possibly in the presence of half-BPS codimension-two defect operators. The resulting four-dimensional theory is specified by the following data:\\footnote{We restrict our attention in this note to \\emph{regular} theories. A larger class of theories can be obtained by additionally allowing for \\emph{irregular} punctures \\cite{Witten:2007td}. Still more possibilities appear when the UV curve is decorated with outer automorphism twist lines \\cite{Tachikawa:2010vg,Chacaltana:2012ch}.} \n\\begin{itemize}\n\\item A simply-laced Lie algebra $\\mf{g} \\in \\{A_n, D_n, E_6, E_7, E_8\\}$. This specifies the choice of six-dimensional $(2,0)$ theory.\n\\item A (punctured) Riemann surface ${\\CC}_{g,s}$ known as the \\emph{UV curve}, where $g$ indicates the genus and $s$ the number of punctures. In the low energy limit, only the complex structure of ${\\CC}_{g,s}$ plays a role. The complex structure moduli of the curve are identified with exactly marginal couplings in the SCFT.\n\\item A choice of embedding $\\Lambda_i: \\mf{su}(2) \\to \\mf{g}$ (up to conjugacy) for each puncture $i=1,\\ldots,s$. These choices reflect the choice of codimension-two defect that is present at each puncture in the six-dimensional construction. The centralizer $\\mf{h}_{\\Lambda_i} \\subset \\mf{g}$ of the embedding is the global symmetry associated to the defect. 
The theory enjoys a global flavor symmetry algebra given by at least $\\oplus_{i=1}^s\\mf{h}_{\\Lambda_i}$.\\footnote{In some exceptional cases the global symmetry of the theory is enhanced due to the existence of additional symmetry generators that are not naturally associated to an individual puncture.}\n\\end{itemize}\nWhen necessary, we will label the corresponding four-dimensional SCFT as $\\TT[\\mf{g}; \\CC_{g,s}; \\{\\Lambda_i\\}]$. Because we are ultimately only interested in theories modulo their exactly marginal couplings, we will not keep track of a point in the complex structure moduli space of the UV curve.\n\nFor the sake of simplicity, we will restrict our attention to theories where $\\mf{g}$ is in the $A$ series. The generalization to $D$ and $E$ series theories (at least in the abstract discussion) should be possible to carry out without a great deal of additional difficulty. In the $A_{n-1}$ case -- \\ie, $\\mf{g}=\\mf{su}(n)$ -- the data at punctures can be reformulated as a partition of $n$: $[n_1^{\\ell_1}\\,n_2^{\\ell_2}\\,\\dots\\,n_k^{\\ell_k}]$ with $\\sum_i \\ell_i n_i = n$ and $n_i > n_{i+1}$. Such a partition indicates how the fundamental representation $\\mf{f}$ of $\\mf{su}(n)$ decomposes into irreps of $\\Lambda(\\mf{su}(2))$,\n\\begin{equation}\n\\label{eq:fundamental_decomposition_partition}\n\\mf{f} \\rightarrow \\bigoplus_{i=1}^{k} \\ell_i V_{\\frac12(n_i-1)}~,\n\\end{equation}\nwhere $V_j$ denotes the spin $j$ representation of $\\mf{su}(2)$. An equivalent description comes from specifying a nilpotent element $e$ in $\\mf{su}(n)$, \\ie, an element for which $\\left({\\rm ad}_e\\right)^r=0$ for some positive integer $r$. 
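To make this data concrete, here is a small sketch (ours, not from the literature) that builds the embedded Cartan generator $\\Lambda(t_0)$ and a nilpotent representative $e=\\Lambda(t_-)$ for a given partition of $n$, block-by-block from spin-$\\frac12(n_i-1)$ irreps (a standard choice, up to conjugation), and verifies the properties just described:

```python
import numpy as np

def su2_embedding_data(parts):
    """For a partition [n_1, n_2, ...] of n, return candidate images (t0, e)
    of the su(2) Cartan and lowering generators acting on the fundamental,
    assembled block-by-block from spin-(n_i-1)/2 irreps."""
    n = sum(parts)
    t0, e = np.zeros((n, n)), np.zeros((n, n))
    pos = 0
    for m in parts:
        j = (m - 1) / 2.0
        for i in range(m):
            t0[pos + i, pos + i] = i - j      # weights -j, ..., +j down the block
        for i in range(m - 1):
            e[pos + i, pos + i + 1] = 1.0     # lowering operator within the block
        pos += m
    return t0, e

for parts in ([1, 1, 1], [2, 1], [3]):        # the three embeddings for su(3)
    t0, e = su2_embedding_data(parts)
    assert abs(np.trace(t0)) < 1e-12                        # t0 lies in su(n)
    assert np.allclose(t0 @ e - e @ t0, -e)                 # [t0, e] = -e, so e = Lambda(t_-)
    assert not np.linalg.matrix_power(e, max(parts)).any()  # nilpotency: e^{n_1} = 0
```

For the partition $[1^n]$ this returns $e=0$ (the trivial embedding), while $[n]$ gives a regular nilpotent element consisting of a single block.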
The Jordan normal form of such a nilpotent element is given by\n\\begin{equation}\n\\label{eq:jordan_normal_form}\ne = \\bigoplus_{i=1}^{k} \\overbrace{J_{n_i}\\oplus\\cdots\\oplus J_{n_i}}^{\\ell_i~{\\rm times}}~,\n\\end{equation}\nwhere $J_m$ is the elementary Jordan block of size $m$, \\ie, a sparse $m\\times m$ matrix with only ones along the superdiagonal. Thus every nilpotent element specifies a partition of $n$ and vice versa. The $\\mf{su}(2)$ embedding comes from defining $\\mf{su}(2)$ generators $t_0, t_\\pm$ and demanding that $\\Lambda(t_-)=e$.\n\nThe trivial embedding is identified with the partition $[1^n]$ and leads to a defect with maximal flavor symmetry $\\mf{h} = \\mf{su}(n)$. A puncture labelled by this embedding is called \\emph{full} or \\emph{maximal}. The opposite extreme is the principal embedding, which has partition $[n^1]$. This embedding leads to $\\mf{h} = \\varnothing$, and the puncture is effectively absent. Another important case is the subregular embedding, with partition $[n-1,1]$, which leads to $\\mf{h} = \\mf{u}(1)$ (as long as $n>2$). Punctures labelled by the subregular embedding are called \\emph{minimal} or \\emph{simple}.\n\nThe basic entities of class $\\SS$ are the theories associated to thrice-punctured spheres, or \\emph{trinions}. The designations of these theories are conventionally shortened as \n\\begin{equation}\n\\label{eq:trinion_theory_label}\nT_n^{\\Lambda_1\\Lambda_2\\Lambda_3} \\ceq \\TT[\\mf{su}(n); \\CC_{0, 3}; \\{\\Lambda_1\\,\\Lambda_2\\,\\Lambda_3\\}]~.\n\\end{equation} \nFor the special case of all maximal punctures, the convention is to further define $T_n\\ceq T_n^{[1^n][1^n][1^n]}$. All of the trinion theories are isolated SCFTs -- they have no marginal couplings. For most of these theories, no Lagrangian description is known. An important class of exceptions are the theories with two maximal punctures and one minimal puncture: $T_n^{[1^n][1^n][n-1,1]}$. 
These are theories of $n^2$ free hypermultiplets, which in this context are naturally thought of as transforming in the bifundamental representation of $\\mf{su}(n)\\times\\mf{su}(n)$. In the case $n=2$, the minimal and maximal punctures are the same and the theory of four free hypermultiplets (equivalently, eight free half-hypermultiplets) is the $T_2$ theory. In this case the global symmetry associated to the punctures is $\\mf{su}(2)\\times\\mf{su}(2)\\times\\mf{su}(2)$, which is a subalgebra of the full global symmetry $\\mf{usp}(8)$.\n\nAt the level of two-dimensional topology, an arbitrary surface $\\CC_{g,s}$ can be assembled by taking $2g-2+s$ copies of the three-punctured sphere, or ``pairs of pants'', and gluing legs together pairwise $3g-3+s$ times. Each gluing introduces a complex \\emph{plumbing parameter}, and for a given construction of this type the plumbing parameters form a set of coordinates for a patch of the Teichm\\\"uller space of Riemann surfaces of genus $g$ with $s$ punctures. A parallel procedure is used to construct the class $\\SS$ theory associated to an arbitrary UV curve using the basic trinion theories. Starting with $2g-2+s$ copies of the trinion theory $T_n$, one glues along maximal punctures by gauging the diagonal subgroup of the $\\mf{su}(n)\\times\\mf{su}(n)$ flavor symmetry associated to the punctures. This introduces an $\\mf{su}(n)$ gauge group in the four-dimensional SCFT, and the marginal gauge coupling is related to the plumbing parameter. If one wants, the remaining maximal punctures can then be reduced to sub-maximal punctures using the Higgsing procedure described below.\\footnote{In terms of the low energy SCFT, the operations of Higgsing at external punctures and gauging of internal ones commute, so one may equally well think of gluing together trinions some of whose punctures are not maximal. 
Our presentation here is not meant to convey the full depth of what is possible in class $\\SS$.} To a given pants decomposition of a UV curve, one associates a ``weakly coupled'' frame of the corresponding SCFT in which the flavor symmetries of a collection of trinion theories are being weakly gauged. The equivalence of different pants decompositions amounts to $S$-duality. It is only in very special cases that a weakly coupled duality frame of this type will actually be described by a Lagrangian field theory.\n\nBy now, quite a few general facts are known about theories of class $\\SS$. Here we simply review some relevant ones while providing pointers to the original literature. The list is not meant to be comprehensive in any sense.\n\n\\medskip\n\\subsubsection*{Central charges}\n\nThe $a$ and $c$ conformal anomalies have been determined for all of the regular $A$-type theories in \\cite{Chacaltana:2010ks,Chacaltana:2012zy}. The answer takes the following form,\n\\begin{equation}\n\\label{eq:4dcentralcharges}\nc_{4d}=\\frac{2 n_v + n_h}{12}~, \\qquad a=\\frac{5 n_v + n_h}{24}~,\n\\end{equation}\nwhere \n\\begin{align}\\label{eq:nv_nh_defs}\n\\begin{split}\nn_v &~=~ \\sum_{i=1}^s n_v(\\Lambda_i) + (g-1)\\left( \\tfrac{4}{3}h^\\vee \\dim \\mf{g} + \\text{rank } \\mf{g} \\right)~, \\\\\nn_h &~=~ \\sum_{i=1}^s n_h(\\Lambda_i) + (g-1)\\left( \\tfrac{4}{3}h^\\vee \\dim \\mf{g} \\right)~,\n\\end{split}\\end{align}\nand\n\\begin{align}\\label{eq:nv_nh_defs_2}\n\\begin{split}\nn_v(\\Lambda) &~=~ 8\\left(\\rho\\cdot \\rho - \\rho \\cdot \\Lambda(t_0)\\right) + \\tfrac{1}{2}(\\text{rank } \\mf{g} - \\dim \\mf{g}_{0})~,\\\\\nn_h(\\Lambda) &~=~ 8\\left(\\rho\\cdot \\rho - \\rho \\cdot \\Lambda(t_0)\\right) + \\tfrac{1}{2} \\dim \\mf{g}_{\\frac{1}{2}}~.\n\\end{split}\\end{align}\nIn these equations, $\\rho$ is the Weyl vector of $\\mf{su}(n)$ and $h^\\vee$ is the dual Coxeter number, which is equal to $n$ for $\\mf{g}=\\mf{su}(n)$. 
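These expressions are straightforward to evaluate in examples. The following sketch (ours; valid only when all punctures are maximal, so that $\\Lambda(t_0)=0$, $\\mf{g}_0=\\mf{g}$ and $\\mf{g}_{\\frac{1}{2}}$ is trivial) reproduces the free-hypermultiplet counting of $T_2$ and the $T_3$ anomalies quoted later in this section:

```python
from fractions import Fraction as F

def anomalies_maximal(n, g, s):
    """(a, c_4d) for the su(n) class S theory on C_{g,s} with all punctures
    maximal, i.e. trivial embeddings: rho.Lambda(t0) = 0, g_0 = g, g_{1/2} = 0."""
    dim_g, rank, h_vee = n * n - 1, n - 1, n
    rho2 = F(h_vee * dim_g, 12)                 # |rho|^2 for su(n) (strange formula)
    nv_punct = 8 * rho2 + F(rank - dim_g, 2)    # n_v(Lambda) per maximal puncture
    nh_punct = 8 * rho2                         # n_h(Lambda) per maximal puncture
    nv = s * nv_punct + (g - 1) * (F(4, 3) * h_vee * dim_g + rank)
    nh = s * nh_punct + (g - 1) * F(4, 3) * h_vee * dim_g
    return (5 * nv + nh) / 24, (2 * nv + nh) / 12

assert anomalies_maximal(3, 0, 3) == (F(41, 24), F(13, 6))  # the T_3 theory
assert anomalies_maximal(2, 0, 3) == (F(1, 6), F(1, 3))     # T_2: four free hypers
```

Via Eqn.~\\eqref{eq:cc_relation}, these two values of $c_{4d}$ correspond to $c_{2d}=-4$ and $c_{2d}=-26$ respectively, matching the chiral algebras discussed below.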
The Freudenthal-de Vries strange formula states that $|\\rho|^2 = \\frac{h^\\vee}{12}\\dim\\mf{g}$, which is useful in evaluating these expressions. Additionally, the embedded Cartan generator $\\Lambda(t_0)$ has been used to define a grading on the Lie algebra, \n\\begin{equation}\n\\label{eq:lie_algebra_grading}\n\\mf{g} = \\bigoplus_{m\\in\\frac{1}{2}\\Zb} \\mf{g}_m~,\\qquad \\mf{g}_m\\ceq\\left\\{ t \\in \\mf{g}\\ |\\ {\\rm ad}_{\\Lambda(t_0)}t = mt \\right\\}~.\n\\end{equation}\nThis grading will make another appearance in Sec. \\ref{sec:reducing}.\n\nThe $\\mf{su}(n)$ flavor symmetry associated to a full puncture comes with flavor central charge $k_{\\mf{su}(n)}=2n$. This is a specialization of the general formula $k_{\\rm ADE}=2h^{\\vee}$. For a non-maximal puncture, the flavor central charge for a given simple factor $\\mf{h}_{\\rm simp}\\subseteq\\mf{h}$ is given by \\cite{Chacaltana:2012zy},\n\\begin{equation}\n\\label{eq:flavor_central_charge_reduced}\nk_{\\mf{h}_{\\rm simp}}\\delta_{AB}= 2 \\sum_j \\mbox{Tr}_{\\RR_j^{(\\text{adj})}}T_A T_B~,\n\\end{equation}\nwhere $T_A,T_B$ are generators of $\\mf{h}_{\\rm simp}$ satisfying the normalization $\\mbox{Tr}_{\\mf{h}_{\\rm simp}} T_A T_B = h^\\vee_{\\mf{h}_{\\rm simp}}\\delta_{AB}$ and we have introduced the decomposition of the adjoint representation of $\\mf{su}(n)$ into representations of $\\mf{h}_\\Lambda \\otimes\\Lambda(\\mf{su}(2))$,\n\\begin{equation}\n\\label{eq:adjdecomposition}\n\\mathrm{adj}_{\\mf{g}} = \\bigoplus_j \\RR_j^{(\\mathrm{adj})} \\otimes V_j~.\n\\end{equation}\nIn cases where there are global symmetries that extend the symmetries associated to punctures, the central charge can be deduced in terms of the embedding index.\n\n\\medskip\n\\subsubsection*{Higgs branch chiral ring and its relations}\n\nOperators in an $\\NN=2$ SCFT whose conformal dimension is equal to twice their $\\mf{su}(2)_R$ spin ($E=2R$) form a ring called the \\emph{Higgs branch chiral ring}. 
This ring is generally believed to be the ring of holomorphic functions (in a particular complex structure) on the Higgs branch of the moduli space of vacua of the theory. It is expected to be finitely generated, with the generators generally obeying nontrivial algebraic relations. For theories of class $\\SS$ the most general such relations have not been worked out explicitly to the best of our knowledge. However, certain cases of the relations can be understood.\n\nFor any puncture there is an associated global symmetry $\\mf{h}$, and the conserved currents for that global symmetry will lie in superconformal representations that include \\emph{moment map} operators $\\mu^A$, $A=1,\\ldots,\\dim\\mf{h}$, that belong to the Higgs branch chiral ring. Of primary interest to us are the relations that involve solely these moment map operators. Let us specialize to the case where all punctures are maximal, so $\\mf{h}_i=\\mf{g}$ for all $i=1,\\ldots,s$. There are then chiral ring relations given by\n\\begin{equation}\n\\label{eq:moment_map_relation}\n\\mbox{Tr}\\mu_1^k=\\mbox{Tr}\\mu_2^k=\\cdots=\\mbox{Tr}\\mu_s^k~,\\qquad k=1,2,\\ldots~.\n\\end{equation}\nThere are additional Higgs branch chiral ring generators for a general class $\\SS$ theory of the form\n\\begin{equation}\n\\label{eq:higgs_branch_extra_generators}\nQ_{(k)}^{\\II^{(k)}_1\\cdots\\II^{(k)}_s}~,\\qquad k=1,\\ldots,n-1~,\n\\end{equation}\nof dimension $E_k=2R_k$, with $R_k=\\frac12 k(n-k)(2g-2+s)$. The multi-indices $\\II^{(k)}$ index the $k$-fold antisymmetric tensor representation of $\\mf{su}(n)$. There are generally additional chiral ring relations involving these $Q_{(k)}$ operators, some of which mix them with the moment maps \\cite{Gadde:2013fma}. 
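The normalization of this dimension formula can be cross-checked in the two cases where the answer is known independently (a small sketch of ours, with the overall normalization fixed by the free-hypermultiplet case): for $T_2$ the extra generator is the free trifundamental hypermultiplet with $E=1$, while for $T_3$ the extra generators have $E=2$, degenerate with the moment maps.

```python
from fractions import Fraction as F

def E_Q(n, k, g, s):
    """E_k = 2 R_k for the Q_(k) generators, with R_k = k(n-k)(2g-2+s)/2
    (normalization fixed here by the free-hypermultiplet case; our cross-check)."""
    return 2 * F(k * (n - k) * (2 * g - 2 + s), 2)

assert E_Q(2, 1, 0, 3) == 1   # T_2: the free trifundamental, E = 1
assert E_Q(3, 1, 0, 3) == 2   # T_3: extra generators join the e_6 moment maps
assert E_Q(3, 2, 0, 3) == 2   # and their conjugates, also at E = 2
```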
The complete form of these extra relations has not been worked out -- a knowledge of such relations would characterize the Higgs branch of that theory as a complex algebraic variety, and such a characterization is presently lacking for all but a small number of special cases. We will not make explicit use of such additional relations in what follows.\n\n\\medskip\n\\subsubsection*{Higgsing and reduction of punctures: generalities}\n\nTheories with non-maximal punctures can be obtained by starting with a theory with maximal punctures and going to a particular locus on the Higgs branch \\cite{Benini:2009gi,Tachikawa:2013kta,Chacaltana:2012zy,Maruyoshi:2013hja}. The flavor symmetry associated to a puncture is reflected in the existence of the above-mentioned half-BPS moment map operators, $\\mu_A$, that transform in the adjoint representation of the flavor symmetry with corresponding index $A=1,\\ldots,n^2-1$. In reducing the flavor symmetry via Higgsing, one aims to give an expectation value to one of the $\\mu_i$'s, say $\\mu_1$, while keeping $\\vev{\\mu_{i \\neq 1} }=0$. Consistency with Eqn. \\eqref{eq:moment_map_relation} then requires that $\\vev{\\mbox{Tr}\\mu_1^k}=0$ for any $k$, or put differently, $\\vev{\\mu_1}$ is a nilpotent $\\mf{su}(n)$ matrix. Since any nilpotent element can be realized as the image of $t_-\\in\\mf{su}(2)$ with respect to some embedding $\\Lambda:\\mf{su}(2)\\hookrightarrow\\mf{su}(n)$, the relevant loci on the Higgs branch are characterized by such an embedding, where we have\n\\begin{equation}\n\\label{eq:invariant_vev}\n\\vev{\\mu_1} = v\\, \\Lambda(t_-)~.\n\\end{equation}\nThe expectation value breaks the $\\mf{su}(n)$ flavor symmetry associated with the puncture to $\\mf{h}_\\Lambda$, the centralizer of the embedded $\\mf{su}(2)$, as well as the $\\mf{su}(2)_R$ symmetry (and also conformal symmetry). 
It will be important in the following that a linear combination of the flavor and $\\mf{su}(2)_R$ Cartan generators remains unbroken,\\footnote{We suspect that not only this Cartan generator, but the full diagonal subalgebra of $\\mf{su}(2)_R$ and the embedded $\\mf{su}(2)$, is preserved on the sublocus of the Higgs branch in question. It should be possible to prove such a thing using the hyperk\\\"ahler structure on nilpotent cones described in \\cite{Swann}. We thank D. Gaiotto, A. Neitzke, and Y. Tachikawa for helpful conversations on this point.} namely\n\\begin{equation}\n\\label{eq:new_R_cartan}\n\\tilde R \\ceq R + J_0~,\\qquad J_0 \\ceq \\Lambda (t_0)~.\n\\end{equation}\nIndeed, the expectation value \\eqref{eq:invariant_vev} has $R=1$ (since $E=2R=2$ for the moment map) and $J_0$-charge $-1$, so it is neutral under $\\tilde R$. In such a vacuum, the low energy limit of the theory is described by the interacting class $\\SS$ SCFT with the same UV curve as the original theory, but with the first puncture replaced by a puncture of type $\\Lambda$. Additionally there will be decoupled free fields arising from the Nambu-Goldstone fields associated to the symmetry breaking \\cite{Maruyoshi:2013hja,Tachikawa:2013kta}. We identify $\\tilde R$ as the Cartan generator of the $\\mf{su}(2)_{\\tilde R}$ symmetry of the infrared fixed point.\n\nIt will prove useful to introduce notation to describe the breaking of $\\mf{su}(n)$ symmetry in greater detail. The generators of $\\mf{su}(n)$ can be relabeled according to the decomposition of Eqn. \\eqref{eq:adjdecomposition},\n\\begin{equation}\n\\label{eq:generator_expansion}\nT_A ~~\\Longrightarrow~~ T_{j,m;\\WW(\\RR_j)}~,\n\\end{equation}\nwhere $m=-j, -j+1,\\ldots,+j$ is the eigenvalue of the generator with respect to $\\Lambda(t_0)$, and $\\WW(\\RR_j)$ runs over the various weights of the representation $\\RR_j$ of $\\mf{h}_{\\Lambda}$. 
Expanding $\\mu_1$ around its expectation value, we have\n\\begin{equation}\n\\label{eq:mu_expansion}\n\\mu_1 = v\\, \\Lambda(t_-) + \\sum_j \\sum_{m=-j}^{+j} \\sum_{\\WW(\\RR_j)} (\\tilde\\mu_1)_{j,m;\\WW(\\RR_j)}T_{j,m;\\WW(\\RR_j)}~.\n\\end{equation}\nThe operators $(\\tilde\\mu_1)_{j,m;\\WW(\\RR_j)}$ parametrize the fluctuations of the moment map around this vacuum.\n\nTurning now to the chiral algebras of the trinion theories, for $n>3$ we are guaranteed to have non-linear chiral algebras. Indeed, for $n>3$ the stress tensor must be an independent generator of the chiral algebra. This is because the stress tensor can only be a composite of other chiral algebra operators with dimension $h\\leqslant1$. For an interacting theory there can be no chiral algebra operators of dimension $h=1\/2$, so the only possibility is that the stress tensor is a Sugawara stress tensor built as a composite of affine currents. This can only happen if the $\\mf{su}(n)^3$ symmetry is enhanced, since by Eqn.~\\eqref{eq:kk_relation} and $k_{\\mf{su}(n)}=2n$ the affine currents associated to the $\\mf{su}(n)$ symmetries are at the critical level and therefore do not admit a normalizable Sugawara stress tensor. Such an enhancement of the flavor symmetry only happens for the $n=3$ case, as will be discussed in greater detail below.\n\nLet us now consider the two simplest cases of trinion chiral algebras: $n=2$ and $n=3$. These are both exceptional in some sense compared to our expectations for generic $n$, which will ultimately make them easier to work with in our examples.\n\n\\medskip\n\\subsubsection{The \\texorpdfstring{$\\protect\\raisebox{1pt}{$\\chi$}[T_2]$}{Chi[T2]} chiral algebra}\n\\label{subsubsec:t2_chiral_algebra}\n\nIn the rank one case, the trinion SCFT is a theory of free hypermultiplets. 
This case is exceptional compared to the general free hypermultiplets discussed in Section \\ref{subsec:lagrangian_building_blocks} because for $\\mf{su}(2)$ the maximal puncture and minimal puncture are the same, so the minimal puncture also carries an $\\mf{su}(2)$ flavor symmetry. Instead of $n^2$ hypermultiplets transforming in the bifundamental of $\\mf{su}(n)\\times\\mf{su}(n)$, one describes the free fields as $2^3=8$ half-hypermultiplets transforming in the trifundamental representation of $\\mf{su}(2)^3$. Consequently the symplectic bosons describing this theory are organized into a trifundamental field $q_{abc}(z)$ with $a,b,c=1,2$, with OPE given by\n\\begin{equation}\n\\label{eq:trifundamental_OPE}\nq_{abc}(z)q_{a'b'c'}(w)\\sim\\frac{\\epsilon_{aa'}\\epsilon_{bb'}\\epsilon_{cc'}}{z-w}~.\n\\end{equation}\nEach of the three $\\mf{su}(2)$ subalgebras has a corresponding $\\wh{\\mf{su}(2)}_{-2}$ affine current algebra in the chiral algebra. For example, the currents associated to the first puncture are given by\n\\begin{align}\\label{eq:T2_affine_currents}\n\\begin{split}\nJ_1^{+}(z) &~\\ceq~ \\frac{1}{2}\\epsilon^{bb'}\\epsilon^{cc'}(q_{1bc}q_{1b'c'})(z)~,\\\\\nJ_1^{-}(z) &~\\ceq~ \\frac{1}{2}\\epsilon^{bb'}\\epsilon^{cc'}(q_{2bc}q_{2b'c'})(z)~,\\\\\nJ_1^{\\,0}(z)&~\\ceq~ \\frac{1}{4}\\epsilon^{bb'}\\epsilon^{cc'}\\Big[(q_{1bc}q_{2b'c'})(z)+(q_{2bc}q_{1b'c'})(z)\\Big]~.\n\\end{split}\\end{align}\nThe currents associated to the second and third punctures are constructed analogously. The stress tensor is now given by\n\\begin{equation}\n\\label{eq:T2_stress_tensor}\nT(z)\\ceq \\epsilon^{aa'}\\epsilon^{bb'}\\epsilon^{cc'}(q_{abc}\\partial q_{a'b'c'})(z)~,\n\\end{equation}\nwith corresponding Virasoro central charge given by $c_{2d}=-4$.\n\nIn this simple case it is easy to explicitly compare the Schur superconformal index for the $T_2$ theory with the vacuum character of the chiral algebra. The Schur index has appeared explicitly in, \\eg, \\cite{Gadde:2009kb}. 
It is given by a single plethystic exponential,\n\\begin{equation}\n\\label{eq:T2_index}\n\\II(q;{\\bf a},{\\bf b},{\\bf c})={\\rm PE}\\left[\\frac{q^{\\frac12}}{1-q}\\protect\\raisebox{1pt}{$\\chi$}_\\square(\\bf a)\\protect\\raisebox{1pt}{$\\chi$}_\\square(\\bf b)\\protect\\raisebox{1pt}{$\\chi$}_\\square(\\bf c)\\right]~.\n\\end{equation}\nThis is easily recognized as the vacuum character of the symplectic boson system defined here. The only point requiring comment is that no null states need to be removed from the freely generated character of the symplectic boson algebra. In the next example this simplifying characteristic will be absent.\n\nCrossing symmetry, or associativity of gluing, was investigated for this chiral algebra in \\cite{Beem:2013sza}. There it was proposed that the complete chiral algebra obtained when gluing two copies of $\\protect\\raisebox{1pt}{$\\chi$}[T_2]$ is the $\\wh{\\mf{so}(8)}$ affine current algebra at level $k_{\\mf{so}(8)}=-2$, and this proposal was checked up to level $h=5$. If the chiral algebra of the four-punctured sphere is precisely this current algebra, then the crossing symmetry relation is implied immediately. This is because the $\\mf{so}(8)$ current algebra has an automorphism as a consequence of triality that exchanges the $\\mf{su}(2)$ subalgebras in accordance with Figure \\ref{fig:tqft_topological_invariance}. If one could prove that the solution to the BRST problem for this gluing is the $\\wh{\\mf{so}(8)}$ current algebra, one would therefore have a proof of generalized $S$-duality at the level of the chiral algebra for all rank one theories of class $\\SS$. 
We hope that such a proof will turn out to be attainable in the future.\n\n\\medskip\n\\subsubsection{The \\texorpdfstring{$\\protect\\raisebox{1pt}{$\\chi$}[T_3]$}{Chi[T3]} chiral algebra}\n\\label{subsubsec:t3_chiral_algebra}\n\nThe $T_3$ theory is the rank-one $\\mf{e}_6$ theory of Minahan and Nemeschansky \\cite{Minahan:1996fg}. Before describing its chiral algebra, let us list a number of known properties of this theory.\n\\begin{enumerate}\n\\item[$\\bullet$]The $a$ and $c_{4d}$ anomaly coefficients are known to be given by $a=\\frac{41}{24}$ and $c_{4d}=\\frac{13}{6}$.\n\\item[$\\bullet$]The global symmetry is $\\mf{e}_6$, for which the flavor central charge is $k_{\\mf{e}_6}=6$. This is an enhancement of the $\\mf{su}(3)^3$ symmetry associated with the punctures. It can be understood as a consequence of the fact that the extra Higgs branch generators have dimension two in this case, which means that they behave as moment maps for additional symmetry generators.\n\\item[$\\bullet$]The Higgs branch of this theory is the $\\mf{e}_6$ one-instanton moduli space, which is the same as the minimal nilpotent orbit of $\\mf{e}_6$. This property follows immediately from the realization of this theory as a single D3 brane probing an $\\mf{e}_6$ singularity in F-theory.\n\\item[$\\bullet$]A corollary of this characterization of the Higgs branch is that the Higgs branch chiral ring is finitely generated by the moment map operators $\\mu_A$ for $A=1,\\ldots,78$, subject to the \\emph{Joseph relations} (see {\\it e.g.} \\cite{Gaiotto:2008nz}),\n$$\\restr{(\\mu\\otimes\\mu)}{\\bOn\\oplus\\mathbf{650}}=0~.$$\n\\item[$\\bullet$]The superconformal index of the $T_3$ theory was computed in \\cite{Gadde:2010te}. 
This leads to a formula for the Schur limit of the index given by \n\\begin{equation}\\begin{split}\n\\II_{T_3}(q)=1\\ &+\\ q\\ \\chi_{\\mathbf{[0,0,0,0,0,1]}}\\ \\\\&+\\ q^2\\ (\\chi_{\\mathbf{[0,0,0,0,0,2]}}+\\chi_{\\mathbf{[0,0,0,0,0,1]}}+1)\\ \\\\ \n&+\\ q^3\\ (\\chi_{\\mathbf{[0,0,0,0,0,3]}}+\\chi_{\\mathbf{[0,0,0,0,0,2]}}+\\chi_{\\mathbf{[0,0,1,0,0,0]}}+2\\ \\chi_{\\mathbf{[0,0,0,0,0,1]}}+1)\\ \\\\\n&+\\ q^4\\ (\\chi_{\\mathbf{[0,0,0,0,0,4]}} + \\chi_{\\mathbf{[0,0,0,0,0,3]}} + \\chi_{\\mathbf{[0,0,1,0,0,1]}} + 3\\ \\chi_{\\mathbf{[0,0,0,0,0,2]}}\\\\\n&\\phantom{+\\ q^4\\ ( }\\ + \\chi_{\\mathbf{[0,0,1,0,0,0]}}+ \\chi_{\\mathbf{[1,0,0,0,1,0]}} + 3\\ \\chi_{\\mathbf{[0,0,0,0,0,1]}} +2)\\ \\notag\\\\\n& + \\ldots\n\\end{split}\\end{equation}\nwhere we denoted the $\\mathfrak{e}_6$ representations by their Dynkin labels and suppressed the fugacity-dependence.\n\\end{enumerate}\n\nThe only chiral algebra generators that are guaranteed to be present on general grounds are the seventy-eight affine currents that descend from the four-dimensional moment map operators. The level of the affine current algebra generated by these operators will be $k=-3$. Note that this is \\emph{not} the critical level for $\\mf{e}_6$. The $\\mf{su}(3)^3$ symmetry associated to the punctures is enhanced, and criticality of the subalgebras does not imply criticality of the enhanced symmetry algebra. For this reason, it is possible to construct a Sugawara stress tensor for the current algebra that is properly normalized, and indeed the correct value of the central charge is given by\n\\begin{equation}\n\\label{eq:sugawara_central_charge_check}\nc_{2d}=-26=\\frac{-3\\dim(\\mf{e}_6)}{-3+h^\\vee_{\\mf{e}_6}}=c_{\\rm Sugawara}~.\n\\end{equation}\nOne then suspects that the chiral algebra does not have an independent stress tensor as a generator, but instead the Sugawara construction yields the true stress tensor. 
Indeed, this was proven in \\cite{Beem:2013sza} to follow from the saturation of certain unitarity bounds by the central charges of this theory.\n\nThis leads to a natural proposal for the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra that was already put forward in \\cite{Beem:2013sza}. The proposal is that the correct chiral algebra is simply the $\\widehat{\\mf{e}}_6$ affine current algebra at level $k=-3$. The singular OPEs of the seventy-eight affine currents are fixed to the canonical form,\\footnote{Our conventions are that the roots of $\\mf{e}_6$ have squared length equal to two.}\n\\begin{equation}\n\\label{eq:e6_OPE}\nJ_A(z)J_B(0) ~\\sim~ \\frac{-3\\,\\delta_{AB}}{z^2} + \\frac{f_{AB}^{\\phantom{AB}C}\\,J_C(0)}{z}~.\n\\end{equation}\nIt is natural to consider the subalgebra $\\mf{su}(3)^3\\subset \\mf{e}_6$ associated to the three punctures on the UV curve and to decompose the currents accordingly. The adjoint representation of $\\mf{e}_6$ decomposes as\n\\begin{equation}\n\\label{eq:e6_adjoint_decomposition}\n\\mathbf{78}~\\longrightarrow~(\\mathbf{8,1,1})+(\\mathbf{1,8,1})+(\\mathbf{1,1,8})+(\\mathbf{3,3,3})+(\\mathbf{\\bar{3},\\bar 3,\\bar 3})~.\n\\end{equation}\nThe affine currents are therefore rearranged into three sets of $\\mf{su}(3)$ affine currents along with one tri-fundamental and one tri-antifundamental set of dimension one currents,\n\\begin{equation}\n\\label{eq:e6_decomposed_current_list}\nJ_A(z)~\\longrightarrow~\\left\\{(J^{1})_{a}^{\\,a'}(z)~,~(J^{2})_{b}^{\\,b'}(z)~,~(J^{3})_{c}^{\\,c'}(z)~,~W_{abc}(z)~,~{\\wt W}^{abc}(z)\\right\\}~.\n\\end{equation}\nThe singular OPEs for this basis of generators are listed in Appendix \\ref{app:level_by_level}. 
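At this point several entries of the $4d\/2d$ dictionary can be cross-checked against one another (a short numerical sketch of ours, using only values quoted in this section):

```python
from fractions import Fraction as F

dim_e6, h_vee_e6 = 78, 12
c4d, k4d = F(13, 6), 6                 # T_3 data quoted above

c2d = -12 * c4d                        # Eqn. (eq:cc_relation)
k2d = F(-k4d, 2)                       # Eqn. (eq:kk_relation)
assert (c2d, k2d) == (-26, -3)

# Sugawara central charge of e6 at k = -3: c = k dim(g) / (k + h_vee)
assert (k2d * dim_e6) / (k2d + h_vee_e6) == c2d

# 78 = (8,1,1) + (1,8,1) + (1,1,8) + (3,3,3) + (3b,3b,3b)
assert 3 * 8 + 27 + 27 == dim_e6
```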
It is perhaps interesting to note that given this list of generators and the requirement that the $\\mf{su}(3)$ current algebras are all at the critical level, the only solution to crossing symmetry for the chiral algebra that includes no additional generators is the $\\wh{\\mf{e}}_6$ current algebra with $k=-3$. So the chiral algebra is completely inflexible once the generators and their symmetry properties are specified.\n\nA nice check of the whole story is that the Joseph relations are reproduced automatically by the chiral algebra. For the non-singlet relation, this follows in a simple way from the presence of a set of null states in the chiral algebra,\n\\begin{equation}\n\\label{eq: e6 nullprediction in 650}\n\\big\\|P^{AB}_{\\bf 650}(J_AJ_B)(z)\\big\\|^2=0 \\quad\\Longleftrightarrow\\quad \\restr{(\\mu\\otimes\\mu)}{\\bf 650}=0~,\n\\end{equation}\nwhere $P^{AB}_{\\bf 650}$ is a projector onto the ${\\bf 650}$ representation. These states are only null at this particular value of the level, so we see a close relationship between the flavor central charge and the geometry of the Higgs branch. Similarly, the singlet relation follows from the identification of the Sugawara stress tensor with the true stress tensor of the chiral algebra,\n\\begin{equation}\nT(z)=\\frac{1}{-3+h^\\vee}(J_AJ_A)(z) \\quad\\Longleftrightarrow\\quad \\restr{(\\mu\\otimes\\mu)}{\\bf 1}=0~.\n\\end{equation}\nSo in this relation we see that the geometry of the Higgs branch is further tied in with the value of the $c$-type central charge in four dimensions.\n\nNote that these successes at the level of reproducing the Higgs branch chiral ring relations follow entirely from the existence of an $\\wh{\\mf{e}}_6$ current algebra at level $k=-3$ in the chiral algebra. However, what is not necessarily implied is the absence of additional chiral algebra generators transforming as some module of the affine Lie algebra. 
We can test the claim that there are no additional generators by comparing the partition function of the current algebra to the Schur limit of the superconformal index for $T_3$ (\\cf\\ \\cite{Gadde:2010te}).\\footnote{Because the current algebra is entirely bosonic, the $\\Zb_2$ graded vacuum character is the same as the ungraded vacuum character. Indeed, it is a prediction of our identification of the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra that there are no cancellations in the Schur index between operators that individually contribute.} This comparison is made somewhat difficult by the fact that affine Lie algebras at negative integer level have complicated sets of null states in their vacuum module, and these must be subtracted to produce the correct index. The upshot is that up to level four, the vacuum character does indeed match the superconformal index. In order for this match to work, it is crucial that the $\\wh{\\mf{e}}_6$ current algebra has certain null states at the special value $k=-3$. In Table \\ref{Tab:T3_index}, we show the operator content up to level four of a generic $\\wh{\\mf{e}}_6$ current algebra along with the subtractions that occur at this particular value of the level. It is only after making these subtractions that the vacuum character matches the Schur index.
Thus we conclude that if there are any additional generators of the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra, they must have dimension greater than or equal to five.\n\n\\begin{table}\\small\n\\centering\n\\begin{tabular}{cl}\n\\hline\n\\hline\ndimension & $\\mf{e}_6$ representations with multiplicities $m_{\\rm generic}\\blue{\/m_{k=-3}}$\\\\\n\\hline\n$0$ & $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,0]}$\\\\\n$1$ & $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,1]}$\\\\\n$2$ & $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,2]}$,~ $1\\blue{\/0}\\times\\mathbf{[1,0,0,0,1,0]}$,~ $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,1]}$,~ $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,0]}$\\\\\n$3$ & $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,3]}$,~ $1\\blue{\/0}\\times\\mathbf{[1,0,0,0,1,1]}$,~ $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,2]}$,~ $2\\blue{\/1}\\times\\mathbf{[0,0,1,0,0,0]}$,\\\\\n\t& $2\\blue{\/0}\\times\\mathbf{[1,0,0,0,1,0]}$,~ $3\\blue{\/2}\\times\\mathbf{[0,0,0,0,0,1]}$,~ $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,0]}$\\\\\n$4$ & $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,4]}$,~ $1\\blue{\/0}\\times\\mathbf{[1,0,0,0,1,2]}$,~ $1\\blue{\/0}\\times\\mathbf{[2,0,0,0,2,0]}$,~ $1\\phantom{\/0}\\times\\mathbf{[0,0,0,0,0,3]}$,\\\\ \n\t& $2\\blue{\/1}\\times\\mathbf{[0,0,1,0,0,1]}$,~ $1\\blue{\/0}\\times\\mathbf{[0,1,0,1,0,0]}$,~ $3\\blue{\/0}\\times\\mathbf{[1,0,0,0,1,1]}$,~ $2\\blue{\/0}\\times\\mathbf{[1,1,0,0,0,0]}$,\\\\ \n\t& $2\\blue{\/0}\\times\\mathbf{[0,0,0,1,1,0]}$,~ $5\\blue{\/3}\\times\\mathbf{[0,0,0,0,0,2]}$,~ $3\\blue{\/1}\\times\\mathbf{[0,0,1,0,0,0]}$,~ $6\\blue{\/1}\\times\\mathbf{[1,0,0,0,1,0]}$,\\\\\n\t& $6\\blue{\/3}\\times\\mathbf{[0,0,0,0,0,1]}$,~ $3\\blue{\/2}\\times\\mathbf{[0,0,0,0,0,0]}$\\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{Tab:T3_index}The operator content of the $\\mf{e}_6$ current algebra up to dimension four. 
The first multiplicity is valid for generic values of the level, \\ie, any value of $k$ where null states are completely absent. The second multiplicity is valid for $k=-3$, and if no second multiplicity is given then the original multiplicity is also the correct one for $k=-3$. These latter multiplicities precisely reproduce the coefficients appearing in the Schur superconformal index for the $T_3$ theory.}\n\\end{table}\n\nA more refined test of our identification of the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra comes from the requirement of compatibility with Argyres-Seiberg duality \\cite{Argyres:2007cn}. The meaning of Argyres-Seiberg duality at the level of the chiral algebra is as follows. Introduce a pair of symplectic bosons transforming in the fundamental representation of an $\\mf{su}(2)$ flavor symmetry,\n\\begin{equation}\n\\label{eq:AS_sb_OPE}\nq_\\alpha(z) \\tilde q^\\beta (0) \\sim \\frac{\\delta_\\alpha^{\\phantom{a}\\beta}}{z}~,\\qquad \\alpha,\\beta=1,2~.\n\\end{equation}\nIn this symplectic boson algebra one can construct an $\\mf{su}(2)$ current algebra at level $k=-1$. Now take the $\\mf{e}_6$ current algebra and consider an $\\mf{su}(2)\\times\\mf{su}(6)\\subset\\mf{e}_6$ maximal subalgebra. The $\\mf{su}(2)$ current algebra coming from this subalgebra has level $k=-3$. Thus the combined level of the symplectic-boson-plus-$\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ system is $k_{tot}=-4$, and consequently this current algebra can be gauged in the manner described in Section \\ref{subsec:chiral_review} by introducing a $(b,c)$ ghost system in the adjoint of $\\mf{su}(2)$ and passing to the cohomology of the appropriate BRST operator. The resulting chiral algebra should be \\emph{identical} to the chiral algebra obtained by taking two copies of the $n=3$ free hypermultiplet chiral algebra of Section \\ref{subsec:lagrangian_building_blocks} and gauging a diagonal $\\mf{su}(3)$ current algebra. 
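The level bookkeeping behind this gauging is simple enough to check mechanically. A minimal sketch (our arithmetic; the nilpotency condition $k_{tot}=-2h^\\vee$ is taken here as the gauging criterion):

```python
# su(2) levels entering the Argyres-Seiberg gauging:
k_symplectic_bosons = -1   # su(2)_{-1} from the pair of symplectic bosons
k_from_e6 = -3             # su(2) x su(6) subalgebra of e6 at k = -3

k_tot = k_symplectic_bosons + k_from_e6
assert k_tot == -4

# The BRST gauging requires the matter-sector level to cancel the ghost
# contribution, i.e. k_tot = -2 h_vee, with h_vee(su(2)) = 2.
h_vee_su2 = 2
assert k_tot == -2 * h_vee_su2
```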
This comparison is detailed in Appendix \\ref{app:level_by_level}. \n\nAlthough we have not been able to completely prove the equivalence of these two chiral algebras (the BRST problem for this type of gauging is not easy to solve), we do find the following. On each side of the duality, we are able to determine the generators of dimensions $h=1$ and $h=3\/2$, which amount to a $\\widehat{\\mf{u}(6)}_{-6}$ current algebra in addition to a pair of dimension $h=\\frac32$ generators transforming in the tri-fundamental and tri-antifundamental representations of $\\mf{u}(6)$, with singular OPEs given by\n\begin{equation}\nb_{i_1i_2i_3}(z)\tilde b^{j_1j_2j_3}(0) \sim \frac{36\,\delta_{[i_1}^{[j_1} \delta_{\phantom{[}\!i_2}^{\phantom{[}\!j_2} \delta_{i_3]}^{j_3]}}{z^3} - \frac{36\, \delta_{[i_1}^{[j_1} \delta_{\phantom{[}\!i_2}^{\phantom{[}\!j_2}\hat J_{i_3]}^{j_3]}(0)}{z^2}+\frac{18\, \delta_{[i_1}^{[j_1} \hat J_{\phantom{[}\!i_2}^{\phantom{[}\!j_2}\hat J_{i_3]}^{j_3]}(0) - 18\,\delta_{[i_1}^{[j_1} \delta_{\phantom{[}\!i_2}^{\phantom{[}\!j_2} \partial \hat J_{i_3]}^{j_3]}(0)}{z}~.\n\end{equation}\nThus these operators, together with the $\\mf{u}(6)$ currents, form a closed $\\WW$-algebra which is common to both sides of the duality. We expect that these $\\WW$-algebras are in fact the entire chiral algebras in question. However, it should be noted that the existence of this $\\WW$-algebra actually follows from what we have established about the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra without any additional assumptions. That is to say, the possible addition of generators of dimension greater than four could not disrupt the presence of this $\\WW$-algebra. In this sense, the common appearance of this algebra can be taken as a check of Argyres-Seiberg duality that goes well beyond the check of \\cite{Gaiotto:2008nz} at the level of the Higgs branch chiral ring.
It not only implies a match of a much larger set of operators than just those appearing in the chiral ring, but it also amounts to a match of the three-point functions for those operators, which include the Higgs branch chiral ring operators.\n\nFinally, let us mention one last consistency check on the identification of $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ to which we will return in Section \\ref{subsec:examples}. When one of the three maximal punctures of the $T_3$ theory is reduced to a minimal puncture by Higgsing, the resulting theory is simply that of nine free hypermultiplets transforming in the bifundamental representation of the remaining $\\mf{su}(3)\\times\\mf{su}(3)$ flavor symmetry (along with a $\\mf{u}(1)$ baryon number symmetry associated to the minimal puncture). Therefore if we have correctly identified the $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ chiral algebra, then it should have the property that when the corresponding reduction procedure is carried out, the result is the symplectic boson chiral algebra of Section \\ref{subsec:lagrangian_building_blocks}. The proposal we have given will indeed pass this check, but we postpone the discussion until after we present the reduction procedure in Section \\ref{sec:reducing}.\n\n\\medskip\n\\subsubsection{A proposal for \\texorpdfstring{$\\protect\\raisebox{1pt}{$\\chi$}[T_n]$}{Chi[Tn]}}\n\\label{subsubsec:tn_chiral_algebra}\n\nWe have seen above that for ranks one and two, the trinion chiral algebras are finitely generated (in the chiral algebra sense) by currents that descend from four-dimensional generators of the Higgs branch chiral ring. We know from the results of \\cite{Beem:2013sza} that this cannot be a characterization that holds true for the chiral algebra of an \\emph{arbitrary} $\\NN=2$ SCFT. 
Moreover, in an interacting theory where the $\\mf{su}(n)^3$ symmetry is not enhanced to a larger global symmetry algebra, the chiral algebra stress tensor cannot be the Sugawara stress tensor of the dimension one currents. This follows from the fact that the $\\mf{su}(n)$ current algebras are at the critical level, so the Sugawara construction fails to produce an appropriate stress tensor. Therefore there must be at least one additional generator corresponding to the stress tensor. The results of \\cite{Lemos:2014lua} further suggest that there should be additional generators in one-to-one correspondence with the generators of the $\\WW_n$ algebra -- \\ie, generators of dimensions $3,\\ldots,n$. Aside from that, however, there is room to hope that there will be no additional generators for the trinion chiral algebras. One piece of partial evidence in favor of this suggestion is the absence of additional HL chiral ring generators on top of those generating the Higgs branch chiral ring. This follows from the fact that the $T_n$ theories have genus zero UV curves. Taking this as sufficient reason to formulate a conjecture, we propose the following:\\footnote{This conjecture is different from the one that appeared in the earliest version of this paper. It has been changed to reflect the results of \\cite{Lemos:2014lua}, where the original version of the conjecture was ruled out and replaced by the modified version that appears here.}\n\n\begin{conj}[$T_{n\geqslant3}$ chiral algebras]\n\label{conj:Tn}\nThe protected chiral algebra of the $T_n$ SCFT for any $n\geqslant3$ is a $\WW$-algebra with the following generators:\n\begin{enumerate}\n\item[$\bullet$] Three sets of $\mf{su}(n)$ affine currents at the critical level $k=-n$, realizing the $\mf{su}(n)^3$ symmetry.
\n\\item[$\\bullet$] One current of dimension $\\frac12\\ell(n-\\ell)$ transforming in the $(\\wedge^{\\ell},\\wedge^{\\ell},\\wedge^{\\ell})$ representation of $\\mf{su}(n)^3$ for each $\\ell=1,\\ldots,n-1$.\n\item[$\bullet$] Operators $W_i$, $i=1,\ldots,n-1$ of dimension $i+1$ that are $\mf{su}(n)^3$ singlets. The dimension two operator is identified as a stress tensor $W_1(z)\equiv T(z)$ with Virasoro central charge equal to $c_{2d}=-2n^3+3n^2+n-2$. In special cases some of these operators may be redundant.\n\end{enumerate}\n\end{conj}\n\n\noindent At any $n\geqslant4$, the very existence of such a $\WW$-algebra is quite nontrivial, since for a randomly chosen set of generators one doesn't expect to be able to solve the associated Jacobi identities. In fact, if the singular OPEs of such a $\\WW$-algebra can be chosen so that the algebra is associative, it seems likely that the requirements of associativity will \\emph{completely fix} the structure constants, rendering the chiral algebra unique. It is worth observing that precisely such uniqueness occurs in the case of the $T_3$ chiral algebra. The characterization given by the conjecture above for $n=3$ doesn't explicitly imply $\\mf{e}_6$ symmetry enhancement, but the unique chiral algebra satisfying the requirements listed is precisely the $\\wh{\\mf{e}}_6$ current algebra at level $k=-3$.
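The $n=3$ numerology of Conjecture \\ref{conj:Tn} can be checked directly against the $\\wh{\\mf{e}}_6$ proposal. A small sketch of this check (ours, using only quantities quoted above):

```python
# Central charge of the T_n conjecture: c_2d(n) = -2 n^3 + 3 n^2 + n - 2.
def c2d(n):
    return -2 * n**3 + 3 * n**2 + n - 2

# For n = 3 this reproduces the e6 Sugawara value k dim(g)/(k + h_vee)
# with k = -3, dim(e6) = 78, h_vee(e6) = 12.
assert (-3 * 78) % (-3 + 12) == 0
assert (-3 * 78) // (-3 + 12) == c2d(3) == -26

# Dimensions (1/2) l (n - l) of the (wedge^l, wedge^l, wedge^l) currents
# at n = 3: both equal one, matching the currents W_abc and W~^abc.
assert [l * (3 - l) // 2 for l in range(1, 3)] == [1, 1]
```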
A similar uniqueness result is currently under investigation for the $T_4$ chiral algebra \\cite{Lemos:2014lua}.\n\nBefore moving on, let us extrapolate a bit from Conjecture \\ref{conj:Tn} to make a further conjecture that, while not extremely well-supported, is consistent with everything we know at this time.\n\n\begin{conj}[Genus zero chiral algebras]\n\label{conj:genus_zero}\nThe protected chiral algebra of any class $\SS$ SCFT of type $A_n$ whose UV curve has genus zero is a $\WW$-algebra with singlet generators $W_i$, $i=1,\ldots, n$ of dimension $i+1$ and additional currents associated to Higgs branch chiral ring generators of the four-dimensional theory. In special cases some of the $W_i$ may be related to composites -- in particular, when the central charge is equal to its Sugawara value with respect to the affine currents, the stress tensor $W_1(z)$ is a composite.\n\end{conj}\n\n\noindent The modest evidence in favor of this proposal is that genus zero theories have honest Higgs branches with no residual $U(1)$ gauge fields in the IR, so they don't have any of the additional $\NN=1$ chiral ring generators discussed in Section \\ref{subsec:chiral_review}. Additionally, the examples of \\cite{Beem:2013sza} for which there were chiral algebra generators unrelated to four-dimensional chiral ring generators were genus one and genus two theories. It would be interesting to explore this conjecture further, even in the Lagrangian case.\n\section{Reduced punctures}\n\label{sec:reducing}\n\nThe $T_n$ building blocks outlined in Sec.\;\ref{subsec:building_blocks} only allow us to construct class $\SS$ chiral algebras associated to undecorated UV curves, while the inclusion of the free hypermultiplet chiral algebras of Sec.\\;\\ref{subsec:lagrangian_building_blocks} allows for decoration by minimal punctures only.
The purpose of this section is to develop the tools necessary to describe theories that correspond to UV curves with general non-trivial embeddings decorating some of their punctures.\n\nFrom the TQFT perspective, the most natural way to introduce the necessary additional ingredients is to find a chiral algebra associated to the decorated cap of Fig. \\ref{fig:dec_cap}. This turns out not to be the most obvious approach from a physical perspective since the cap doesn't correspond to any four-dimensional SCFT.%\n\footnote{It does however correspond to a true compactification of the six-dimensional $(2,0)$ theory \\cite{Gaiotto:2011xs}. We will return to the notion of such a decorated cap in Sec.\\;\\ref{subsec:decorated_cap}.}\nRather, it is more natural to develop a procedure for reducing a maximal puncture to a non-maximal one that mimics the Higgsing procedure reviewed in Sec.\\;\\ref{subsec:class_S_review}. Naively, the four-dimensional Higgsing prescription need not lead to a simple recipe for producing the chiral algebra of the Higgsed theory in terms of that of the original theory. This is because the Higgsing spontaneously breaks the superconformal symmetry that is used to argue for the very existence of a chiral algebra, with the theory only recovering superconformal invariance in the low energy limit. Consequently, one could imagine that the Higgsing procedure irrecoverably requires that we abandon the chiral algebraic language until reaching the far infrared.\n\nNevertheless, it turns out that the chiral algebra does admit its own Higgsing procedure that has the desired result. Such a procedure cannot literally amount to Higgsing in the chiral algebra, because quantum mechanically in two dimensions there are no continuous moduli spaces of vacua. The best that we can do is to try to impose a quantum-mechanical \\emph{constraint} on the chiral algebra.
A natural expectation for the constraint is that it should fix to a non-zero value the chiral algebra operator that corresponds to the Higgs branch chiral ring operator that gets an expectation value. This means imposing the constraint\n\\begin{equation}\n\\label{eq:positive_current_constraint}\nJ_{\\alpha_-}(z)=A~,\n\\end{equation}\nwhere $T_{\\alpha_-}=\\Lambda(t_-)$. Here $A$ is a dimensionful constant that will be irrelevant to the final answer as long as it is nonzero. We might also expect that we should constrain some of the remaining currents to vanish. A motivation for such additional constraints is that when expanded around the new vacuum on the Higgs branch, many of the moment map operators become field operators for the Nambu-Goldstone bosons of spontaneously broken flavor symmetry, and we want to ignore those and focus on the chiral algebra associated to just the interacting part of the reduced theory.\n\nThere happens to be a natural conjecture for the full set of constraints that should be imposed. This conjecture, which was already foreshadowed in \\cite{Beem:2013sza}, is as follows:\n\\begin{conj}\\label{conj:qDS}\nThe chiral algebra associated to a class $\\SS$ theory with a puncture of type $\\Lambda$ is obtained by performing quantum Drinfeld-Sokolov (qDS) reduction with respect to the embedding $\\Lambda$ on the chiral algebra for the theory where the same puncture is maximal.\n\\end{conj}\nQuantum Drinfeld-Sokolov in its most basic form is a procedure by which one obtains a new chiral algebra by imposing constraints on an affine Lie algebra $\\hat\\mf{g}$, with the constraints being specified by an embedding $\\Lambda:\\mf{su}(2)\\hookrightarrow\\mf{g}$. In the case of interest to us, the chiral algebra on which we will impose these constraints is generally larger than just an affine Lie algebra. Nevertheless, these constraints can still be consistently imposed in the same manner. 
This conjecture therefore amounts to a choice of the additional constraints beyond \\eqref{eq:positive_current_constraint} that should be imposed in order to reduce a puncture. It is interesting to note that the right set of constraints will turn out to fix only \\emph{half} of the currents that are expected to become Nambu-Goldstone bosons. We will see that the removal of the remaining Nambu-Goldstone bosons occurs in a more subtle manner.\n\nBefore delving into the details, we should make the observation that this answer is not unexpected in light of the pre-existing connections between non-maximal defects in the $(2,0)$ theory and qDS reduction \\cite{Alday:2010vg,Chacaltana:2012zy}. Though a sharp connection between the AGT story and the protected chiral algebra construction is still lacking, we take this as a positive indication that such a connection is there and remains to be clarified. We now turn to a more precise description of qDS reduction for chiral algebras with affine symmetry. We will first develop the general machinery for performing such a reduction in the cases of interest, whereafter we will perform a number of tests of the claim that this is the correct procedure for reducing the ranks of punctures in class $\\SS$ chiral algebras.\n\n\\bigskip\n\\input{.\/sections\/Section_4\/S4_1}\n\\bigskip\n\\input{.\/sections\/Section_4\/S4_2}\n\\bigskip\n\\input{.\/sections\/Section_4\/S4_3}\n\\bigskip\n\\input{.\/sections\/Section_4\/S4_4}\n\\bigskip\n\\bigskip\n\\subsection{Quantum Drinfeld-Sokolov for modules}\n\\label{subsubsec:qDSspecseq}\n\nQuantum Drinfeld-Sokolov reduction is a procedure for imposing a set of constraints given below in Eqn. \\eqref{eq:qDSconstraints} at the quantum level for an affine Lie algebra $\\hat\\mf{g}$ at any level. In the following discussion, we will closely follow the analysis of \\cite{deBoer:1993iz} (see also \\cite{deBoer:1992sy} for a similar discussion for finite dimensional algebras). 
Although traditionally the starting point for this procedure is a pure affine Lie algebra, our interest is in the case of a more general chiral algebra with an affine Lie subalgebra at the critical level. Said differently, we are interested in performing qDS reduction for nontrivial $\\hat{\\mf{g}}_{-h^\\vee}$ modules. We will utilize essentially the same spectral sequence argument as was used in \\cite{deBoer:1993iz}. Some basic facts about spectral sequences are collected in Appendix \\ref{app:spectral_sequences} for the convenience of the reader.\n\nThe general setup with which we are concerned is the following. We begin with a chiral algebra (for simplicity we take it to be finitely generated) with an $\\widehat{\\mf{su}(n)}_{k}$ affine subalgebra. We denote the generating currents of the affine subalgebra as $J_{A}(z)$, while the additional generators of the chiral algebra will be denoted as $\\{\\phi^i(z)\\}$, each of which transforms in some representation $\\mf{R}_i$ of $\\mf{su}(n)$.\n\nWe now choose some embedding $\\Lambda:\\mf{su}(2)\\hookrightarrow\\mf{su}(n)$, for which the images of the $\\mf{su}(2)$ generators $\\{t_0,t_+,t_-\\}$ will be denoted by $\\{\\Lambda(t_0),\\Lambda(t_+),\\Lambda(t_-)\\}$. The embedded Cartan then defines a grading on the Lie algebra,\n\begin{equation}\n\label{eq:cartan_grading_def}\n\mf{g} = \bigoplus_{m\in\frac12 \Zb}\mf{g}_m~,\qquad \mf{g}_m\ceq\left\{T_A \in \mf{g}~\vert~{\rm ad}_{\Lambda(t_0)}T_A = m\,T_A \right\}~.\n\end{equation}\nWhen the embedded Cartan is chosen such that some of the currents have half-integral grading, then some of the associated constraints are second-class and cannot be enforced by a straightforward BRST procedure. Fortunately, it has been shown that one may circumvent this problem by selecting an alternative Cartan generator $\\delta$ which exhibits integer grading and imposing the corresponding first-class constraints \\cite{Feher:1992ed,deBoer:1992sy,deBoer:1993iz}.
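As an illustration of the grading \\eqref{eq:cartan_grading_def} (our example, not worked out in the text), consider $\\mf{g}=\\mf{su}(3)$ with the embedding defined by the decomposition $\\mathbf{3}\\to\\mathbf{2}+\\mathbf{1}$, \\ie\\ $\\Lambda(t_0)={\\rm diag}(\\tfrac12,-\\tfrac12,0)$, which is the embedding relevant for reducing a maximal $\\mf{su}(3)$ puncture to a minimal one:

```python
from fractions import Fraction
from collections import Counter

# Grading of su(3) by ad_{Lambda(t0)} for the embedding 3 -> 2 + 1,
# i.e. Lambda(t0) = diag(1/2, -1/2, 0).
lam = [Fraction(1, 2), Fraction(-1, 2), Fraction(0)]

grades = Counter()
for i in range(3):          # off-diagonal generators E_ij carry
    for j in range(3):      # ad-eigenvalue lam_i - lam_j
        if i != j:
            grades[lam[i] - lam[j]] += 1
grades[Fraction(0)] += 2    # the two Cartan generators sit in g_0

assert sum(grades.values()) == 8                          # dim su(3)
assert grades[Fraction(0)] == 2
assert grades[Fraction(1, 2)] == grades[Fraction(-1, 2)] == 2
assert grades[Fraction(1)] == grades[Fraction(-1)] == 1

# Half-integral grades occur, so this is precisely a case where the
# alternative integral grading by delta is needed.
assert any(g % 1 != 0 for g in grades)
```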
We will adopt the convention that an index $\\alpha$ ($\\bar\\alpha$) runs over all roots with negative (non-negative) grading with respect to $\\delta$, while Latin indices run over all roots. The first-class constraints to be imposed are then as follows,\n\begin{equation}\n\label{eq:qDSconstraints}\nJ_{\alpha} = A\,\delta_{\alpha\alpha_-}~,\n\end{equation}\nwhere $\\Lambda(t_-) = T_{\\alpha_-}$. These constraints are imposed \\`a la BRST by introducing dimension $(1,0)$ ghost pairs $(c^\\alpha,b_\\alpha)$ in one-to-one correspondence with the generators $T_{\\alpha}$. These ghosts have the usual singular OPE\n\begin{equation}\n\label{eq:BRST_ghost_OPE}\nc^\alpha(z) b_\beta(0)\sim \frac{\delta^\alpha_{\phantom{\alpha}\beta}}{z}~,\n\end{equation}\nand allow us to define a BRST current\n\begin{equation}\n\label{eq:BRSTcurrent}\nd(z) = \left(J_{\alpha}(z) - A\,\delta_{\alpha\alpha_-}\right)c^{\alpha}(z) - \frac12 f_{\alpha\beta}^{\phantom{\alpha\beta}\gamma}(b_\gamma(c^\alpha c^\beta))(z)~.\n\end{equation}\nThe reduced chiral algebra is defined to be the BRST cohomology of the combined ghost\/matter system. Note that this definition is perfectly reasonable for the case where we are reducing not just the affine current algebra, but a module thereof. The presence of the module modifies neither the system of constraints nor the BRST differential, but as we shall see, the operators in the module will be modified in a nontrivial way in the constrained theory.\n\nThis cohomological problem can be solved via a modest generalization of the approach of \\cite{Feigin:1990pn,deBoer:1993iz}.
We first split the BRST current into a sum of two terms,\n\\begin{align}\\begin{split}\nd_0(z) &~=~ \\left(-A\\,\\delta_{\\alpha\\alpha_-}\\right) c^{\\alpha}(z)~,\\\\\nd_1(z) &~=~ J_{\\alpha}(z)c^{\\alpha}(z)-\\frac12 f_{\\alpha\\beta}^{\\phantom{\\alpha\\beta}\\gamma}(b_\\gamma(c^\\alpha c^\\beta))(z)~.\\label{eq:differential0}\n\\end{split}\\end{align}\nWe now introduce a bi-grading for the currents and ghosts so that the differentials $(d_0,d_1)$ have bi-grades $(1,0)$ and $(0,1)$, respectively,\n\\begin{alignat}{3}\n&\\text{deg }(J_A(z)) \t &~=~& (m,-m)~,\t\t\\qquad &&T_A\\in \\mf{g}_m~,\\notag\\\\\n&\\text{deg }(c^\\alpha(z)) &~=~& (-m,1+m)~, \t\\qquad &&T_\\alpha\\in \\mf{g}_m~,\\label{eq:bigrading}\\\\\n&\\text{deg }(b_\\alpha(z)) &~=~& (m,-m-1)~, \t\t\\qquad &&T_\\alpha\\in \\mf{g}_m\\notag~.\n\\end{alignat}\nThis bi-grading can also be extended to the additional generators $\\phi^i$. We decompose each such generator into weight vectors of $\\mf{su}(n)$ according to\n\\begin{equation}\n\\label{eq:phi_decomposition}\n\\phi^i=\\phi^i_It_I^{(\\mf{R}_i)}~,\\qquad I=1,\\ldots,\\dim\\mf{R}_i~,\n\\end{equation}\nwhere the $t_I^{(\\mf{R}_i)}$ form a weight basis for the representation $\\mf{R}_i$ with weights defined according to \n\\begin{equation}\n\\label{eq:general_rep_weight_basis}\nH_\\alpha\\cdot t_I^{(\\mf{R}_i)} = \\lambda^{(\\mf{R}_i)}_{I,\\alpha}\\, t_I^{(\\mf{R}_i)}~,\n\\end{equation}\nwhere $H_\\alpha$ is an element of the Cartan subalgebra of $\\mf{su}(n)$. 
Given the element $\\delta$ in terms of which our grading is defined, the bi-grading of the extra generators can be defined according to\n\begin{equation}\n\label{eq:bigradingextrafields}\n\text{deg }(\phi^i_I) = (\delta\cdot t_I^{(\mf{R}_i)},\,-\delta\cdot t_I^{(\mf{R}_i)})~.\n\end{equation}\nThe differentials $(d_0,d_1)$ are each differentials in their own right, that is, they satisfy\n\begin{equation}\n\label{eq:double_sequence_differentials}\nd_0^2=d_1^2=d_0d_1+d_1d_0=0~.\n\end{equation}\nTherefore they define a double complex on the Hilbert space of the ghost\/matter chiral algebra, which is the starting point for a spectral sequence computation of the cohomology.\n\nIt turns out that a simplification occurs if instead of trying to compute the cohomology of the double complex straight off, we first introduce ``hatted currents'' \\cite{Feigin:1990pn,deBoer:1993iz},\n\begin{equation}\n\label{eq:hattedcurrents}\n\hat{J}_A(z) = J_A(z) + f_{A\beta}^{\phantom{a\beta}\gamma}(b^{\phantom{a}}_\gamma c_{\phantom{a}}^{\,\beta})(z)~.\n\end{equation}\nLet us denote by $\\Ab_1$ the subalgebra generated by $b_\\alpha(z)$ and $\\hat{J}_\\alpha(z)$, and by $\\Ab_2$ the subalgebra generated by the remaining generators $c^\\alpha(z)$, $\\hat{J}_{\\bar\\alpha}(z)$, and $\\phi^i(z)$. One then finds that $d(\\Ab_1)\\subseteq\\Ab_1$ and $d(\\Ab_2)\\subseteq\\Ab_2$, with the generators of $\\Ab_1$ additionally obeying\n\begin{equation}\n\label{eq:trivial_cohomology}\nd(b_{\alpha}(z))= \hat{J}_{\alpha}(z)-A\delta_{\alpha\alpha_-}~,\qquad d(\hat{J}_{\alpha}(z))=0~.\n\end{equation}\nIt follows that the BRST cohomology of $\\Ab_1$ is trivial: $H^*(\\Ab_1,d)=\\Cb$.
From the K\\\"unneth formula (see Appendix \\ref{app:spectral_sequences}), it follows that the BRST cohomology of the chiral algebra is isomorphic to the cohomology of the smaller algebra $\\Ab_2$,\n\\begin{equation}\n\\label{eq:cohomology_simplification}\nH^*(\\Ab,d) \\cong H^*(\\Ab_2,d)~.\n\\end{equation}\nOur task then simplifies: we need only compute the cohomology of $\\Ab_2$. We will address this smaller problem by means of a spectral sequence for the double complex $(\\Ab_2,d_0,d_1)$.\n\nThe first step in the spectral sequence computation is to compute the cohomology $H^*(\\Ab_2,d_0)$. The only nontrivial part of this computation is the same as in the case without modules. This is because the additional generators $\\phi^i_{I}(z)$ have vanishing singular OPE with the $c$-ghosts, rendering them $d_0$-closed. Moreover, they can never be $d_0$-exact because the $b$-ghosts are absent from $\\Ab_2$. For the currents and ghosts, one first computes\n\\begin{equation}\n\\label{eq:d0_of_Jhat}\nd_0(\\hat{J}_{\\bar{\\alpha}}(z)) = -A f_{\\bar{\\alpha}\\beta}^{\\phantom{\\alpha\\beta}\\gamma} \\delta^{\\phantom{a}}_{\\gamma\\alpha_-}c^{\\,\\beta}(z) = - \\mbox{Tr}\\left({\\rm ad}_{\\Lambda(t_+)}T_{\\bar{\\alpha}}\\cdot T_\\beta\\right)c^{\\,\\beta}(z)~.\n\\end{equation}\nIt follows that $d_0(\\hat{J}_{\\bar\\alpha}(z))=0$ if and only if $T_{\\bar{\\alpha}}\\in \\ker({\\rm ad}_{\\Lambda(t_+)})$. The same equation implies that the $c^\\alpha(z)$ ghosts are $d_0$-exact for any $\\alpha$. Because the $d_0$-cohomology thus computed is supported entirely at ghost number zero, the spectral sequence terminates at the first step. 
At the level of vector spaces we find\n\begin{equation}\n\label{eq:vector_space_cohomology}\nH^*(\Ab,d) \cong H^*(\Ab_2,d_0)~,\n\end{equation}\nwith $H^*(\\Ab_2,d_0)$ being generated by the $\\phi^i_I(z)$ and by $J_{\\bar\\alpha}(z)$ for $T_{\\bar\\alpha}\\in\\ker({\\rm ad}_{\\Lambda(t_+)})$.\n\nIn order to improve this result to produce the vertex operator algebra structure on this vector space, we can construct representatives of these cohomology classes with the correct OPEs using the tic-tac-toe procedure. Letting $\\psi(z)$ be a generator satisfying $d_0(\\psi(z))=0$, the corresponding chiral algebra generator $\\Psi(z)$ is given by\n\begin{equation}\n\label{eq:tic-tac-toed_generator}\n\Psi(z) = \sum_l (-1)^l \psi_l(z)~,\n\end{equation}\nwhere $\\psi_l(z)$ is fixed by the condition\n\begin{equation}\n\label{eq:tic-tac-toe_condition}\n\psi_0(z) \ceq \psi(z)~,\quad d_1(\psi_l(z)) = d_0(\psi_{l+1}(z))~.\n\end{equation}\nIn the end, this procedure will give a collection of generators of the qDS reduced theory along with their singular OPEs, and it would seem that we are finished. However, it is important to realize that this may not be a minimal set of generators, in that some of the generators may be expressible as composites of lower dimension generators due to null states. The existence of null relations of this type is very sensitive to the detailed structure of the original chiral algebra. For example, the level of the current algebra being reduced plays an important role. In practice, we will find that for the class $\\SS$ chiral algebras \\emph{most} of the generators $\\Psi(z)$ produced by the above construction do in fact participate in such null relations.\n\nSome null states of the reduced theory can be deduced from the presence of null states in the starting chiral algebra. This can be an efficient way to generate redundancies amongst the naive generators of the qDS reduced theory like the ones described above.
Abstractly, we can understand this phenomenon as follows. Consider a null operator $N^K(z)$ that is present in the original $\\WW$-algebra, and that transforms in some representation $\\mathfrak{R}$ of the symmetry algebra that is being reduced. Given an embedding $\\Lambda,$ the representation $\\mathfrak{R}$ decomposes as in \\eqref{eq:generaldecomposition} under $\\mathfrak{g}_{\\Lambda}\\oplus \\Lambda(\\mathfrak{su}(2)).$ We can thus split the index $K$ accordingly and obtain $\\{N^{k_j,m_j}(z)\\}_{j\\geqslant 0},$ where $k_j$ is an index labeling the representation $\\mathcal{R}_j^{(\\mathfrak{R})}$ and $m_j$ labels the Cartan of the spin $j$ representation $V_j.$ For fixed values of the index $m_j$ we find an operator that will have proper dimension with respect to the new stress tensor \\eqref{eq:qDSstresstensor}. Moreover, since introducing a set of free ghost pairs naturally preserves the null property of the original operator and restricting oneself to the BRST cohomology does not spoil it either, we find that this operator is null in the qDS reduced theory. In practice, for each value of $m_j$ one chooses a representative of the BRST class $N^{k_j,m_j}(z) + d(\\ldots)$ that only involves the generators of the qDS reduced theory.\n\nThere are a couple of features of the qDS reduced theory that can be deduced without studying the full procedure in specific examples. These features provide us with the most general test of the conjecture that qDS reduction is the correct way to reduce the ranks of punctures in the chiral algebra. 
The first of these features is the Virasoro central charge of the reduced theory, a subject to which we turn presently.\n\\subsection{Virasoro central charge and the reduced stress tensor}\n\\label{subsec:reduced_central_charges}\n\nA useful feature of qDS reduction is that the stress tensor of a qDS reduced chiral algebra takes a canonical form (up to BRST-exact terms) in which it is written as a shift of the stress tensor of the unreduced theory,\n\\begin{equation}\n\\label{eq:qDSstresstensor}\nT = T_{\\star} - \\partial J_0 + \\partial b_\\alpha c^\\alpha - (1+\\lambda_\\alpha) \\partial (b_\\alpha c^\\alpha)~.\n\\end{equation}\nHere $T_{\\star}$ is the stress tensor of the unreduced theory, $J_0$ is the affine current of the $U(1)$ symmetry corresponding to $\\Lambda(t_0)$, and $\\lambda_\\alpha$ is the weight for $T_\\alpha$ with respect to $\\Lambda(t_0)$ as defined by Eqn. \\eqref{eq:general_rep_weight_basis}.%\n\\footnote{Note that in the case of half-integral gradings, the weights $\\lambda_\\alpha$ are defined with respect to $\\Lambda(t_0)$ and \\emph{not} with respect to the alternate Cartan element $\\delta$.}\nThe dimensions of the ghosts measured by this new stress tensor are $h_{b_\\alpha} = 1+\\lambda_\\alpha$ and $h_{c^\\alpha} = -\\lambda_\\alpha$. Meanwhile the dimensions of all remaining fields are simply shifted by their $J_0$ charge.\n\nThe central charge of the reduced theory can be read off from the most singular term in the self-OPE of the reduced stress tensor. 
The result is given by \\cite{Feher:1992ed}\n\\begin{align}\\label{eq:central_charge_shift}\n\\begin{split}\nc-c_{\\star}\t&~= \\left(\\dim\\mf{g}_0-\\frac12\\dim\\mf{g}_{\\frac12}-12\\left|\\sqrt{k+h^\\vee}\\Lambda(t_0)-\\frac{\\rho}{\\sqrt{k+h^\\vee}}\\right|^2\\right)-\\left(\\frac{k\\dim\\mf{g}}{k+h^\\vee}\\right)~,\\\\\n\t\t\t&~= \\dim\\mf{g}_0-\\frac12\\dim \\mf{g}_{\\frac{1}{2}}-12(k+h^\\vee)\\left|\\Lambda(t_0)\\right|^2+24\\Lambda(t_0)\\cdot\\rho-\\dim\\mf{g}~.\n\\end{split}\n\\end{align}\nHere $\\rho$ is the Weyl vector of $\\mf{su}(n)$, and in passing to the second line, we have used the Freudenthal-de Vries strange formula $|\\rho|^2 = \\frac{h^\\vee}{12}\\dim\\mf{g}$. In the cases of interest the level of the current algebra is always given by $k=-h^\\vee$ and there is a further simplification,\n\\begin{equation}\n\\label{eq:change2dcentralcharge}\nc=c_{\\star}+\\dim\\mf{g}_0-\\frac12\\dim\\mf{g}_{\\frac12}+24\\Lambda(t_0)\\cdot\\rho-\\dim\\mf{g}~.\n\\end{equation}\n\nThis shift of the two-dimensional central charge can be compared to our expectations based on the four-dimensional results in Eqns. \\eqref{eq:4dcentralcharges}-\\eqref{eq:nv_nh_defs_2}. The change of the four-dimensional central charge that occurs upon reducing a maximal puncture down to a smaller puncture labelled by the embedding $\\Lambda$ is given by\n\\begin{align}\\begin{split}\n\\label{eq:change4dcentralcharge}\n-12(c_{4d} - c_{4d,\\text{orig.}} ) &~=~ 2(n_v(\\text{max.}) - n_v(\\Lambda)) + (n_h(\\text{max.}) - n_h(\\Lambda))~,\\\\\n\t\t\t\t\t\t\t\t\t&~=~ \\dim \\mf{g}_0 - \\frac{1}{2}\\dim \\mf{g}_{\\frac{1}{2}} +24 \\Lambda(t_0)\\cdot \\rho -\\dim\\mf{g}~.\n\\end{split}\\end{align}\nThus, after accounting for the relation $c_{2d} = -12 c_{4d}$, the change in two-dimensional central charge induced by qDS reduction agrees precisely with the change in four-dimensional central charge induced by Higgsing. 
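As an arithmetic sanity check of Eqn.\,\\eqref{eq:change2dcentralcharge}, one can evaluate the shift for the two reductions treated in the examples below. The following sketch is ours rather than part of the original analysis; the grading data and the values of $\\Lambda(t_0)\\cdot\\rho$ are computed in the normalization where $|\\rho|^2 = \\frac{h^\\vee}{12}\\dim\\mf{g}$.

```python
# Numerical check of the critical-level central-charge shift. The grading
# data (dim g_0, dim g_{1/2}, Lambda(t0).rho) below are our own inputs.

def c_sugawara(k, dim_g, h_vee):
    """Sugawara central charge of an affine current algebra at level k."""
    return k * dim_g / (k + h_vee)

def c_shift(dim_g0, dim_g_half, lam_dot_rho, dim_g):
    """Shift c - c_* at critical level k = -h_vee."""
    return dim_g0 - 0.5 * dim_g_half + 24 * lam_dot_rho - dim_g

# so(8)_{-2}, principal reduction of one su(2) puncture -> chi[T_2].
c_star = c_sugawara(-2, 28, 6)            # = -14
c_T2 = c_star + c_shift(1, 0, 0.5, 3)     # su(2): dim g_0 = 1, no g_{1/2}
# Eight trifundamental symplectic bosons = four (beta,gamma) pairs, c = -1 each.
assert c_T2 == -4.0

# (e6)_{-3}, subregular reduction of one su(3) puncture -> free hypers.
c_star = c_sugawara(-3, 78, 12)           # = -26
c_free = c_star + c_shift(2, 2, 1.0, 8)   # su(3) subregular grading data
# Nine bifundamental hypermultiplets contribute c_2d = -12*(9/12) = -9.
assert c_free == -9.0
```

Both values match the expected chiral algebras of the reduced theories, in accordance with $c_{2d}=-12c_{4d}$.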
We take this as a strong indication that the qDS prescription for reducing chiral algebras is indeed the correct one.\n\\subsection{Reduction of the superconformal index}\n\\label{subsec:reduced_index}\n\nWe can now check that the qDS reduction procedure has an effect on the (graded) partition function of the chiral algebra that mimics the prescription for reducing the Schur superconformal index described in Sec.\\,\\ref{subsec:class_S_review}. As was reviewed above, the Schur limit of the superconformal index is equivalent to a graded partition function of the corresponding chiral algebra,\n\\begin{equation}\n\\label{eq:chiral_character}\n\\II_{\\chi}(q; {\\bf x}) ~\\ceq~ \\mbox{Tr}_{\\HH_{\\chi}}\\,(-1)^F q^{L_0} ~=~ \\II^{\\rm Schur}(q; {\\bf x})~.\n\\end{equation}\nComputing this graded partition function is straightforward for the qDS reduced theory owing to the fact that the BRST differential commutes with all of the fugacities ${\\bf x}$ that may appear in the index and has odd fermion number. This means that we can ignore the cohomological aspect of the reduction and simply compute the partition function of the larger Hilbert space obtained by tensoring the unreduced chiral algebra with the appropriate ghost system.\\footnote{There is a caveat to this argument, which is that if there are null states in the reduced theory that do not originate as null states in the parent theory, then their subtraction will not necessarily be accomplished by this procedure. We operate under the assumption that such spurious null states do not appear. This assumption appears to be confirmed by the consistency between this procedure and that discussed in Sec.\\,\\ref{subsec:class_S_review}.}\n\nThis simpler problem of computing the partition function of the larger Hilbert space parallels the index computation described in Sec.\\,\\ref{subsec:class_S_review}. 
There are again two steps -- the inclusion of the ghosts, and the specialization of fugacities to reflect the symmetries preserved by the BRST differential. Including the ghosts in the partition function before specializing the fugacities requires us to assign them charges with respect to the UV symmetries. This can be done in a canonical fashion so that upon specializing the fugacities the BRST current will be neutral with respect to the IR symmetries and have conformal dimension one. \n\nRecall that the ghost sector involves one pair of ghosts $(b_\\alpha,c^\\alpha)$ for each generator $T_{\\alpha}$ that is negatively graded with respect to $\\delta$. The charge assignments are then the obvious ones -- namely the charges of $b_\\alpha$ are the same as those of $T_\\alpha$ (let us call them $f_\\alpha$), while those of $c^{\\alpha}$ are minus those of $b_{\\alpha}$. With these charge assignments, the graded partition function of the reduced chiral algebra can be obtained as a specialization that mimics that which led to the superconformal index,\n\\begin{equation}\n\\label{eq:chiral_index_specialization}\n\\II_{\\chi_{\\Lambda}}(q;{\\bf x}_{\\Lambda})=\\lim_{{\\bf x} \\to {\\bf x}_\\Lambda}\\II_{\\chi}(q;{\\bf x})\\,\\II_{(b,c)_{\\Lambda}}(q;{\\bf x})~,\\qquad \\II_{(b,c)_{\\Lambda}}\\ceq{\\rm PE}\\left[-\\!\\!\\!\\!\\sum_{T_{\\alpha}\\in\\mf{g}_{<0}}\\left(\\frac{q\\,{\\bf x}^{f_{\\alpha}}}{1-q}+\\frac{{\\bf x}^{-f_{\\alpha}}}{1-q}\\right)\\right].\n\\end{equation}\nAs in the discussion of the index in Sec.\\,\\ref{subsec:class_S_review}, we can formally perform the specialization ignoring divergences that occur in both the numerator and the denominator as a consequence of constant terms in the plethystic exponent. In doing this, the flavor fugacities are replaced by fugacities for the Cartan generators of $\\mf{h}_\\Lambda$, while the $q$-grading is shifted by the Cartan element of the embedded $\\mf{su}(2)$. 
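All of these manipulations are phrased in terms of the plethystic exponential ${\\rm PE}[f](q)=\\exp\\big(\\sum_{n\\geqslant1}f(q^n)/n\\big)$. As a concrete illustration (ours, for a single fugacity, and not part of the original discussion), the operation can be implemented order by order as follows.

```python
from fractions import Fraction

def pe_coeffs(f_coeffs, order):
    """Taylor coefficients of PE[f](q) = exp(sum_n f(q^n)/n), given the
    coefficients of f (with f(0) = 0) as a list [f_1, f_2, ...]."""
    # log PE[f] = sum_{n>=1} f(q^n)/n ; collect its Taylor coefficients.
    log_c = [Fraction(0)] * (order + 1)
    for n in range(1, order + 1):
        for m, fm in enumerate(f_coeffs, start=1):
            if n * m > order:
                break
            log_c[n * m] += Fraction(fm, n)
    # Exponentiate the series: g' = (log g)' g, solved order by order.
    g = [Fraction(0)] * (order + 1)
    g[0] = Fraction(1)
    for k in range(1, order + 1):
        g[k] = sum(j * log_c[j] * g[k - j] for j in range(1, k + 1)) / k
    return g

# PE[q/(1-q)] is the partition generating function, prod_k 1/(1-q^k):
print([int(c) for c in pe_coeffs([1] * 8, 8)])   # -> [1, 1, 2, 3, 5, 7, 11, 15, 22]
# A fermionic (ghost-like) letter contributes with a minus sign:
print([int(c) for c in pe_coeffs([-1], 3)])      # -> [1, -1, 0, 0]
```

The sign flip in the last line illustrates how a fermionic letter truncates the bosonic tower, which is the mechanism at work in $\\II_{(b,c)_{\\Lambda}}$.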
This leads to the following formal expression for the contribution of the ghosts,\\footnote{For simplicity, we write the expression here for the case where $\\Lambda(t_0)$ provides an integral grading so there is no auxiliary $\\delta$. The case of half-integral grading can be treated with modest modifications.}\n\\begin{equation}\n\\label{eq:formal_character_specialized_ghosts}\n\\II_{(b,c)_{\\Lambda}}~~ ``=\" ~~{\\rm PE}\\left[ \\frac{-q}{1-q} \\sum_j \\protect\\raisebox{1pt}{$\\chi$}_{\\RR_j^{(\\mathrm{adj})}}^{\\mf{h}_{\\Lambda}}(\\mathbf{a}_{\\Lambda}) \\sum_{i=-j}^{-1}q^i -\\frac{1}{1-q}\\sum_j \\protect\\raisebox{1pt}{$\\chi$}_{\\RR_j^{(\\mathrm{adj})}}^{\\mf{h}_{\\Lambda}}(\\mathbf{a}^{-1}_{\\Lambda})\\sum_{i=1}^{j}q^i \\right]~.\n\\end{equation}\nAfter a small amount of rearrangement and the recognition that the representations $\\RR_j^{(\\mathrm{adj})}$ are pseudoreal, one finds that this exactly reproduces the formal denominator in Eqn.\\,\\eqref{eq:K_factor_fugacity_replacement_2}. Again, when the limit in Eqn.\\,\\eqref{eq:chiral_index_specialization} is taken carefully, the divergences in this formal denominator cancel against equivalent terms in the $K$-factors of the numerator to produce a finite result. It is interesting that in spite of the asymmetry between $b$ and $c$ ghosts in this procedure, they ultimately play the same role from the point of view of four-dimensional physics -- each ghost is responsible for cancelling the effect of a single Nambu-Goldstone boson from the index.\n\nBefore moving on to examples, we recall that in \\cite{Beem:2014kka} it was observed that the $K$-factor for a maximal puncture matches the character of the corresponding affine Lie algebra at the critical level, and it was conjectured that a similar statement would be true for reduced punctures. 
That is to say, the $K$-factor associated to the reduction of type $\\Lambda$ should be the character of the qDS reduction of type $\\Lambda$ of the critical affine Lie algebra. Given the analysis to this point, this statement becomes almost a triviality. The qDS reduction of the affine current algebra proceeds by introducing the same collection of ghosts as we have used here, and so the effect on the graded partition function is the introduction of the same ghost term given in Eqn.\\,\\eqref{eq:formal_character_specialized_ghosts} and the same specialization of fugacities. Thus, the identification of the $K$-factors given in Eqn.\\,\\eqref{eq:K-factor} with the character of the qDS reduction of the critical affine Lie algebra depends only on our ability to equate the index (\\ie, the partition function graded by $(-1)^F$) with the ungraded vacuum character. This is a simple consequence of the fact that the starting current algebra consists entirely of bosonic operators and the spectral sequence calculation of Sec.\\,\\ref{subsubsec:qDSspecseq} only found BRST cohomology elements at ghost number zero.\n\\subsection{Simple examples}\n\\label{subsec:examples}\n\nIn light of the analysis in Section \\ref{subsubsec:qDSspecseq}, the reduction problem admits an algorithmic solution subject to two conditions: (A) the starting chiral algebra should be finitely generated, \\ie, it should admit a description as a $\\WW$-algebra; (B) the $L_0$ operator of the reduced theory should have a positive definite spectrum. The latter condition must hold for any reduction whose endpoint corresponds to a physical class $\\SS$ theory, while the former condition is conjectured to hold for general class $\\SS$ theories and is certainly true in the simple examples considered below. Given these conditions, the procedure is as follows:\n\\begin{enumerate}\n\\item[$\\bullet$] List the (possibly redundant) generators of the qDS reduced chiral algebra at the level of vector spaces. 
These are given by the hatted currents $\\hat J_{\\bar \\alpha}$ for which $T_{\\bar{\\alpha}}\\in \\ker({\\rm ad}_{\\Lambda(t_+)})$, along with all of the additional generators $\\{\\phi_i\\}$.\n\\item[$\\bullet$] Apply the tic-tac-toe algorithm to construct genuine generators of the chiral algebra. The OPEs of these reduced chiral algebra generators can be computed directly using the OPEs of the original, unreduced fields.\n\\item[$\\bullet$] Compute the null states at each level up to that of the highest-dimensional generator in order to check for redundancy. Remove any redundant generators. What remains is a description of the reduced chiral algebra as a $\\WW$-algebra.\n\\end{enumerate}\nThis procedure is still morally a correct one when the two conditions listed above fail to be met, but in those cases the algorithm will not necessarily terminate in finite time. In the examples discussed in this subsection, both conditions above will indeed be satisfied, so this algorithm will be sufficient to determine the answer entirely.\n\nWe now consider a pair of simple cases in which the reduction can be performed quite explicitly. Our first example will be the complete closure of a single puncture in the rank one theory of a four-punctured sphere, which as we reviewed above has as its chiral algebra the affine Lie algebra $\\wh{\\mf{so}(8)}_{-2}$. The result of this closure is expected to be the $T_2$ theory (see Figure \\ref{fig:4_to_3_reduction}). The second example will be the partial reduction (corresponding to the semi-regular embedding) of one puncture in the $T_3$ theory to produce a theory of free bifundamental hypermultiplets, which should correspond to free symplectic bosons at the level of the chiral algebra. 
Details of the second calculation beyond what is included in this summary can be found in Appendix \\ref{subapp:e6_to_free}.\n\n\\bigskip\n\\subsubsection*{Reducing $\\wh{\\mf{so}(8)}_{-2}$ to $\\protect\\raisebox{1pt}{$\\chi$}[T_2]$}\n\nThe starting point for our first reduction is the affine Lie algebra $\\widehat{\\mf{so}(8)}_{-2}$. We first introduce a basis for the affine currents that is appropriate for class $\\SS$ and for the reduction we aim to perform. The adjoint of $\\mf{so}(8)$ decomposes into irreps of the $\\mf{su}(2)^{(1)}\\times\\mf{su}(2)^{(2)}\\times\\mf{su}(2)^{(3)}\\times\\mf{su}(2)^{(4)}$ symmetries associated to punctures according to\n\\begin{equation}\n\\label{eq:SO8currentalgebra}\n\\rep{28}_{\\mf{so}(8)} ~\\to~\n(\\rep3,\\rep1,\\rep1,\\rep1)\\oplus\n(\\rep1,\\rep3,\\rep1,\\rep1)\\oplus\n(\\rep1,\\rep1,\\rep3,\\rep1)\\oplus\n(\\rep1,\\rep1,\\rep1,\\rep3)\\oplus\n(\\rep2,\\rep2,\\rep2,\\rep2)~.\n\\end{equation}\nAccordingly, we assemble the twenty-eight affine currents into these irreps,\n\\begin{equation}\n\\label{eq:so8_relabelled_currents}\nJ_A(z)~\\to~ \\{J^{(1)}_{(a_1b_1)}(z)~,~J^{(2)}_{(a_2b_2)}(z)~,~J^{(3)}_{(a_3b_3)}(z)~,~J^{(4)}_{(a_4b_4)}(z)~,~J_{a_1a_2a_3a_4}(z)\\}~,\n\\end{equation}\nwhere $a_I,b_I$ are fundamental indices of $\\mf{su}(2)^{(I)}$. 
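As a quick bookkeeping check (ours, not part of the original discussion), the branching in Eqn.\,\\eqref{eq:SO8currentalgebra} saturates the dimension of the adjoint representation:

```python
# Four su(2) adjoints plus the quadrilinear (2,2,2,2) must fill out the
# adjoint of so(8), whose dimension is 8*7/2 = 28.
dims = [3, 3, 3, 3, 2 ** 4]
assert sum(dims) == 8 * 7 // 2 == 28
```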
In this basis, the OPEs of the affine Lie algebra are given by\n{\\small\n\\begin{align}\\label{eq:so8algebra}\n\\begin{split}\n\\makebox[1in][r]{$J^{(I)}_{ab}(z) J^{(J)}_{cd} (w)$}\t&~\\sim~ \\frac{- k (\\e_{ac} \\e_{bd} + \\e_{a d} \\e_{bc}) \\delta^{IJ}}{2(z-w)^2} + \\frac{f^{ef}_{ab ;cd} J^{(I)}_{ef} \\delta^{IJ}}{z-w}~,\\\\\n\\makebox[1in][r]{$J^{(1)}_{ab} (z) J_{c d e f}(w)$} \t&~\\sim~ \\frac{ \\e_{a c} J_{b d e f} + \\e_{b c} J_{a de f}}{2(z-w)}~,\\\\\n\\makebox[1in][r]{$J_{a b c d}(z) J_{efgh}(w)$} \t\t\t&~\\sim~ \\frac{k \\e_{ae} \\e_{bf} \\e_{cg} \\e_{dh}}{(z-w)^2} + \\frac{J^{(1)}_{ae} \\e_{bf} \\e_{cg} \\e_{dh} + \\e_{ae} J^{(2)}_{bf} \\e_{cg} \\e_{dh} + \\e_{ae} \\e_{bf} J^{(3)}_{cg} \\e_{dh}+ \\e_{ae} \\e_{bf} \\e_{cg} J^{(4)}_{dh}}{z-w}~,\n\\end{split}\\end{align}}%\nand similarly for the other $J^{(I)}$. Here the $\\mf{su}(2)$ structure constants are given by $f^{ef}_{ab;cd} = \\frac{1}{2} (\\e_{a c} \\delta_b^e \\delta_d^f + \\e_{bc} \\delta_a^e \\delta_d^f + \\e_{ad} \\delta_b^e \\delta_c^f + \\e_{bd} \\delta_a^e \\delta_c^f)$, and for our case of interest the level is fixed to $k=-2$. \n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[scale=.5]{.\/figures\/4_to_3_reduction.pdf}\n\\caption{Reduction from the $\\mf{so}(8)$ theory to $T_2$.}\n\\label{fig:4_to_3_reduction}\n\\end{figure}\n\nWe choose to close the first puncture, meaning that we perform qDS reduction on the current algebra generated by $J^{(1)}_{(ab)}$ with respect to the principal embedding,\n\\begin{equation}\n\\label{eq:so8_reduction_embedding}\n\\Lambda(t_+)= -T_{11}~,\\qquad\\Lambda(t_-)= T_{22}~,\\qquad \\Lambda(t_0)= -T_{(12)}~.\n\\end{equation}\nThe grading provided by $\\Lambda(t_0)$ is integral, so we can proceed without introducing any auxiliary grading. The only constraint to be imposed in this case is $J_{22}^{(1)}(z) = 1$. 
This is accomplished with the help of a single ghost pair $(c^{22},b_{22})$, in terms of which the BRST operator is given by\n\\begin{equation}\n\\label{eq:so8_brst_operator}\nd(z)=c^{22}(J^{(1)}_{22}-1)(z)~.\n\\end{equation}\nThe remaining three sets of $\\mf{su}(2)$ affine currents can be thought of as trivial modules of the reduced currents, while the quadrilinear currents provide a nontrivial module. In the language of the previous subsection we have\\footnote{We should note that there is something slightly unconventional about the reduction procedure here. In this example the entire starting chiral algebra is an affine current algebra, so one could in principle perform qDS reduction in the entirely standard manner. This is \\emph{not} what our prescription tells us to do. Instead, we treat a single $\\mf{su}(2)$ subalgebra as the target of the reduction, and the rest as modules. The two procedures are naively inequivalent, although we have not checked in detail whether they ultimately produce the same result.}\n\\begin{equation}\n\\label{eq:so8_extra_generators}\n\\{\\phi^i\\} = \\{J^{(2)}_{(a_2b_2)}~,J^{(3)}_{(a_3b_3)}~,J^{(4)}_{(a_4b_4)}~, J_{a_1a_2a_3a_4}\\}~.\n\\end{equation}\n\nThe reduced generators of step one are simply the hatted current $\\hat{J}^{(1)}_{{11}} = J^{(1)}_{{11}}$ along with the additional generators in \\eqref{eq:so8_extra_generators}. 
Applying the tic-tac-toe procedure produces true generators of the reduced chiral algebra,\n\\begin{align}\\label{eq:finalgeneratorsSO8}\n\\begin{split}\n\\makebox[11ex][l]{$\\hat{\\JJ}^{(1)}_{{11}}$} \t\t&~\\colonequals~\t\\hat J^{(1)}_{{11}} - \\hat J^{(1)}_{{12}} \\hat J^{(1)}_{{12}} - (k+1) \\partial(\\hat J^{(1)}_{{12}})~,\\\\\n\\makebox[11ex][l]{$(\\JJ_1)_{a_2a_3a_4}$}\t\t\t&~\\colonequals~\tJ_{{1}a_2a_3a_4} - \\hat J^{(1)}_{{12}} J_{{2}a_2a_3a_4}~,\\\\\n\\makebox[11ex][l]{$(\\JJ_2)_{a_2a_3a_4}$}\t\t\t&~\\colonequals~\tJ_{{2}a_2a_3a_4}~,\\\\\n\\makebox[11ex][l]{$\\JJ^{(I=\\{2,3,4\\})}_{a_Ib_I}$}\t&~\\colonequals~\tJ^{(I=\\{2,3,4\\})}_{a_Ib_I}~,\n\\end{split}\\end{align}\nwhere $\\hat J^{(1)}_{{12}} \\colonequals J^{(1)}_{{12}} + b_{{22}}c^{{2 2}}$. \n\nThe stress tensor of the reduced algebra takes the form given in Eqn. \\eqref{eq:qDSstresstensor}, where the original stress tensor was the Sugawara stress tensor of $\\widehat{\\mf{so}(8)}_{-2}$ and the generator of the embedded Cartan is $J_0 = - J^{(1)}_{{12}}$. We can then compute the conformal dimensions of the new generators and we find\n\\begin{align}\\begin{split}\n\\makebox[1.1in][l]{$[\\hat{\\JJ}^{(1)}_{{11}}]=2$~,}\\qquad &[\\JJ^{(I)}_{a_Ib_I}]=1~,\\\\\n\\makebox[1.1in][l]{$[(\\JJ_1)_{a_2a_3a_4}]=3\/2$~,} \\qquad &[(\\JJ_2)_{a_2a_3a_4}]=1\/2~.\n\\end{split}\\end{align}\nThe currents $\\JJ^{(I)}_{a_Ib_I}$ persist as affine currents of $\\mf{su}(2)$ subalgebras, so all of their singular OPEs with other generators are determined by the symmetry properties of the latter. 
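The pattern behind these dimensions is simply $h = 1 + \\lambda$, with $\\lambda$ the $\\Lambda(t_0)$-weight of the unreduced current, as dictated by Eqn.\,\\eqref{eq:qDSstresstensor}. A small sketch of this bookkeeping (the index conventions are ours):

```python
# Lambda(t0) = -T_{(12)} assigns weight +1/2 to the su(2)^{(1)} index value
# a = 1 and -1/2 to a = 2 (an assumption consistent with J_{22} having
# grade -1, so that its ghost-implemented constraint makes sense).
w = {1: 0.5, 2: -0.5}

def reduced_dim(su2_1_indices):
    """h = 1 + sum of Lambda(t0)-weights of the su(2)^{(1)} indices carried."""
    return 1 + sum(w[a] for a in su2_1_indices)

assert reduced_dim([1, 1]) == 2     # hat J^(1)_{11}
assert reduced_dim([1]) == 1.5      # (J_1)_{a2 a3 a4}
assert reduced_dim([2]) == 0.5      # (J_2)_{a2 a3 a4}
assert reduced_dim([]) == 1         # J^(I) for I = 2, 3, 4
```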
Explicit calculation determines the OPEs that are not fixed by symmetry to take the following form,\n{\\small\n\\begin{align}\\begin{split}\n\\hat{\\JJ}^{(1)}_{{11}}(z)\\hat{\\JJ}^{(1)}_{{11}}(0)\t&~\\sim~ -\\frac{1}{2}\\frac{(2+k)(1+2k)(4+3k)}{z^4} - \\frac{2(2+k)\\hat{\\JJ}^{(1)}_{{11}}(0)}{z^2}-\\frac{(2+k)\\partial \\hat{\\JJ}^{(1)}_{{11}}(0)}{z}\\\\\n\\hat{\\JJ}^{(1)}_{{11}}(z) (\\JJ_1)_{a_2a_3a_4}(0) \t&~\\sim~ -\\frac{1}{2}\\frac{(2+k)(1+2k)(\\JJ_2)_{a_2a_3a_4}(0)}{z^3}-\\frac{1}{4}\\frac{(7+2k)(\\JJ_1)_{a_2a_3a_4}(0)}{z^2} - \\frac{(\\hat{\\JJ}^{(1)}_{{11}}(\\JJ_2)_{a_2a_3a_4})(0)}{z}\\\\\n\\hat{\\JJ}^{(1)}_{{11}}(z) (\\JJ_2)_{a_2a_3a_4}(0) \t&~\\sim~ \\phantom{-}\\frac{1}{4}\\frac{(1+2k)(\\JJ_2)_{a_2a_3a_4}(0)}{z^2} + \\frac{(\\JJ_1)_{a_2a_3a_4}(0)}{z}\\\\\n(\\JJ_1)_{a_2a_3a_4}(z) (\\JJ_2)_{b_2b_3b_4}(0) \t\t&~\\sim~ -\\frac{1}{2}\\frac{(1+2k)\\epsilon_{a_2b_2}\\epsilon_{a_3b_3}\\epsilon_{a_4b_4}}{z^2}+\\frac{-\\frac{1}{2}((\\JJ_2)_{a_2a_3a_4} (\\JJ_2)_{b_2b_3b_4})(0) + \\mathfrak{J}_{a_2a_3a_4;b_2b_3b_4}(0)}{z}\\\\\n(\\JJ_2)_{a_2a_3a_4}(z) (\\JJ_2)_{b_2b_3b_4}(0) \t\t&~\\sim~ \\phantom{-}\\frac{\\epsilon_{a_2b_2}\\epsilon_{a_3b_3}\\epsilon_{a_4b_4}}{z}\\\\\n(\\JJ_1)_{a_2a_3a_4}(z) (\\JJ_1)_{b_2b_3b_4}(0) \t\t&~\\sim~ \\phantom{-}\\frac{3}{4}\\frac{(1+2k)\\epsilon_{a_2b_2}\\epsilon_{a_3b_3}\\epsilon_{a_4b_4}}{z^3}+\\frac{\\frac{1}{4}(3+2k)((\\JJ_2)_{a_2a_3a_4} (\\JJ_2)_{b_2b_3b_4})(0) -\\mathfrak{J}_{a_2a_3a_4;b_2b_3b_4}(0)}{z^2}\\\\\n&\\qquad\\qquad+\\frac{\\frac{1}{4}((\\JJ_2)_{a_2a_3a_4} \\partial(\\JJ_2)_{b_2b_3b_4})(0)+\\frac{1}{2}(1+k)(\\partial(\\JJ_2)_{a_2a_3a_4} (\\JJ_2)_{b_2b_3b_4})(0)}{z}\\\\\n&\\qquad\\qquad\\qquad-\\frac{1}{2}\\frac{ \\partial\\mathfrak{J}_{a_2a_3a_4;b_2b_3b_4}(0) }{z}~,\n\\end{split}\\end{align}\n}\nwhere \n\\begin{equation}\n\\mathfrak{J}_{a_2a_3a_4;b_2b_3b_4}(z)=\\JJ^{(2)}_{a_2b_2}(z)\\epsilon_{a_3b_3}\\epsilon_{a_4b_4} + 
\\JJ^{(3)}_{a_3b_3}(z)\\epsilon_{a_2b_2}\\epsilon_{a_4b_4}+\\JJ^{(4)}_{a_4b_4}(z)\\epsilon_{a_2b_2}\\epsilon_{a_3b_3}~,\n\\end{equation}\nand we have removed $d$-exact terms.\n\nWe expect the result of this reduction procedure to be the trifundamental symplectic boson algebra $\\protect\\raisebox{1pt}{$\\chi$}[T_2]$, and $(\\JJ_2)_{a_2a_3a_4}(z)$ has the correct dimension and OPE to be identified with the trifundamental generator $q_{a_2a_3a_4}$. In order to complete the argument, we need all of the remaining reduced generators to be expressible as composites of this basic generator. Indeed, it turns out to be a straightforward exercise to compute the null states in the reduced algebra at dimensions $h=1,\\frac32,2$ and to verify that null relations allow all the other generators to be written as normal ordered products of (derivatives of) $(\\JJ_2)_{a_2a_3a_4}(z)$. For example, we expect the $\\mf{su}(2)$ affine currents to be equivalent to the bilinear currents of Eqn. 
\\eqref{eq:T2_affine_currents}, and indeed there are null relations (only for $k=-2$) that allow us to declare such an equivalence,\n\\begin{align}\\label{eq:null_relation_free_currents}\n\\begin{split}\n\\tfrac12(\\JJ_2)_{abc}(\\JJ_2)_{a'b'c'}\\e^{bb'}\\e^{cc'} &~=~ \\JJ^{(2)}_{aa'}~,\\\\ \n\\tfrac12(\\JJ_2)_{abc}(\\JJ_2)_{a'b'c'}\\e^{aa'}\\e^{cc'} &~=~ \\JJ^{(3)}_{bb'}~,\\\\\n\\tfrac12(\\JJ_2)_{abc}(\\JJ_2)_{a'b'c'}\\e^{aa'}\\e^{bb'} &~=~ \\JJ^{(4)}_{cc'}~.\n\\end{split}\\end{align}\nAt dimensions $h=3\/2$ and $h=2$ there are additional null states for our special value of the level,\n\\begin{align}\n(\\JJ_1)_{bcd}\t\t\t&~=~ -\\tfrac{3}{2}\\partial (\\JJ_2)_{bcd} + \\tfrac{2}{3}(\\JJ_2)_{(b_1(c_1(d_1}(\\JJ_2)_{b_2)c_2)d_2)}(\\JJ_2)_{b_3c_3d_3} \\epsilon^{b_2b_3}\\epsilon^{c_2c_3}\\epsilon^{d_2d_3}~,\\\\\n\\begin{split}\n\\hat{\\JJ}^{(1)}_{{11}} &~=~ -\\tfrac{3}{4} (\\JJ_2)_{b_1c_1d_1} \\partial (\\JJ_2)_{b_2c_2d_2} \\e^{b_1b_2}\\e^{c_1c_2}\\e^{d_1d_2}\\\\ \n\t\t\t\t\t\t&~\\phantom{=}~ -\\tfrac{1}{6}(\\JJ_2)_{b_1c_1d_1}(\\JJ_2)_{(b_2(c_2(d_2}(\\JJ_2)_{b_3)c_3)d_3)}(\\JJ_2)_{b_4c_4d_4}\\e^{b_1b_2}\\e^{c_1c_2}\\e^{d_1d_2}\\e^{b_3b_4}\\e^{c_3c_4}\\e^{d_3d_4}~.\n\\end{split}\n\\end{align}\nThus all of the additional generators are realized as composites of the basic field $(\\JJ_2)_{abc}(z)$, and we have reproduced the $\\protect\\raisebox{1pt}{$\\chi$}[T_2]$ chiral algebra from qDS reduction of the $\\mf{so}(8)$ affine current algebra at level $k=-2$. We should re-emphasize that the redundancy amongst generators due to null states depends crucially on the precise value of the level. This is another instance of a general lesson that we have learned: the protected chiral algebras of $\\NN=2$ SCFTs realize very special values of their central charges and levels at which nontrivial cancellations tend to take place. 
We will see more of this phenomenon in the next example.\n\n\\bigskip\n\\subsubsection*{Reducing $(\\,\\wh{\\mf{e}}_6\\,)_{-3}$ to symplectic bosons}\n\nIn this case, our starting point is again an affine Lie algebra, this time $(\\wh{\\mf{e}}_6)_{-3}$. We are again led to decompose the adjoint representation of $\\mf{e}_6$ under the maximal $\\mf{su}(3)_1\\times\\mf{su}(3)_2\\times\\mf{su}(3)_3$ subalgebra associated to the punctures on the UV curve as was done in \\eqref{eq:e6_adjoint_decomposition}, leading to a basis of currents given by \\eqref{eq:e6_decomposed_current_list} subject to singular OPEs given by Eqn.~\\eqref{eq:e6_decomposed_current_OPEs}. Our aim is now to perform a partial reduction of the first puncture. Accordingly, we divide the generating currents as usual,\n\\begin{equation}\n\\label{eq:e6_generators_reduction_basis}\n(J^{1})_{a}^{\\phantom{a}a'}~, \\qquad\\qquad \\{\\phi_i\\} = \\{(J^{2})_{b}^{\\phantom{b}b'}~, (J^{3})_{c}^{\\phantom{c}c'}~, W_{abc}~, \\tilde W^{abc} \\}~,\n\\end{equation}\nwhere now $a,b,c$ are fundamental indices of $\\mf{su}(3)_{1,2,3}$, and the adjoint representation is described by a fundamental and an antifundamental index subject to a tracelessness condition.\n\nThe partial closing down to a minimal puncture is accomplished by means of the subregular embedding,\n\\begin{equation}\n\\Lambda(t_0) = \\frac12(T_1^{\\phantom{1}1} - T_3^{\\phantom{1}3})~,\\qquad\n\\Lambda(t_-) = T_3^{\\phantom{1}1}~,\\qquad\n\\Lambda(t_+) = T_1^{\\phantom{1}3}~.\n\\end{equation}\nThe grading induced by the embedded Cartan turns out to be half-integral in this case and must therefore be supplanted by the integral $\\delta$ grading. Under this grading, the generators $\\Lambda(t_-) = T_3^{\\phantom{1}1}$ and $T_{3}^{\\phantom{1}2}$ are negatively graded, both with grade minus one. 
The relevant constraints are thus $\\left(J^{1}\\right)_3^{\\ 1} = 1$ and $\\left(J^{1}\\right)_3^{\\ 2} = 0.$ The implementation of these constraints via the BRST procedure introduces two ghost pairs $b_3^{\\ 1}, c_1^{\\ 3} $ and $b_3^{\\ 2},c_2^{\\ 3}.$\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[scale=.5]{.\/figures\/3_to_hyper_reduction.pdf}\n\\caption{Reduction from the $\\mf{e}_6$ theory to free hypermultiplets.}\n\\label{fig:3_to_hyper_reduction}\n\\end{figure}\n\nIn the reduction of $\\protect\\raisebox{1pt}{$\\chi$}[T_3],$ one finds that the currents $(\\hat J^{1})_{\\bar \\alpha}$ with $T_{\\bar{\\alpha}}\\in \\ker({\\rm ad}_{\\Lambda(t_+)})$ are given by $(\\hat J^{1})_1^{\\ 2},(\\hat J^{1})_1^{\\ 3},(\\hat J^{1})_2^{\\ 3},$ and the current generating the reduced $\\mf{u}(1)$ symmetry\n\\begin{equation}\n\\JJ_{\\mf{u}(1)} = (\\hat J^{1})_1^{\\ 1}-2(\\hat J^{1})_2^{\\ 2}+(\\hat J^{1})_3^{\\ 3}\\;.\n\\end{equation}\nTogether with the additional generators in \\eqref{eq:e6_generators_reduction_basis}, these constitute the generators of the cohomology at the level of vector spaces. The tic-tac-toe procedure produces honest chiral algebra generators, which we denote by the calligraphic version of the same letter as the vector space generator. The quantum numbers of these redundant generators are summarized in Table \\ref{tab:T3_reduced_generators}. 
Their precise expressions can be found in Appendix \\ref{subapp:e6_to_free}.\n\\begin{table}[ht!]\n\\begin{center}\n\\begin{tabular}{ c|c|c|c|c }\n\\hline\\hline\n~~Generator~~ & Dimension & $U(1)$ & $SU(3)_2$ & ~~$SU(3)_3$~~ \\\\\n\\hline\n$\\JJ_{\\mf{u}(1)} $ \t\t\t\t\t& $1$ \t\t&$\\phantom{-}0$ \t\t \t& $\\bOn$ \t\t& $\\bOn$\t \t\\\\\n$(\\hat {\\JJ}^{1})_{1\\phantom{\\frac{1}{2}}}^{\\phantom{1}2}$ \t& $\\frac32$ & $\\phantom{-}3$\t\t \t& $\\bOn$ \t\t& $\\bOn$\t \t\\\\\n$(\\hat {\\JJ}^{1})_{1\\phantom{\\frac{1}{2}}}^{\\ph1 3} $ \t& $2$ \t\t& $\\phantom{-}0$ \t \t& $\\bOn$ \t\t& $\\bOn$\t\t\\\\\n$(\\hat {\\JJ}^{1})_{2\\phantom{\\frac{1}{2}}}^{\\ph1 3} $ \t& $\\frac32$ & $-3$ \t\t\t \t& $\\bOn$ \t\t& $\\bOn$\t\t\\\\\n${\\WW}_{1bc}$ \t\t\t\t\t\t\t\t& $\\frac32$ & $\\phantom{-}1$ \t& $\\bTh$ \t\t& $\\bTh$\t\t\\\\\n${\\WW}_{2bc}$ \t\t\t\t\t\t\t\t& $1$ \t\t& $-2$\t \t & $\\bTh$ \t\t& $\\bTh$\t\t\\\\\n${\\WW}_{3bc}$ \t\t\t\t\t\t\t\t& $\\frac12$ & $\\phantom{-}1$\t & $\\bTh$ \t\t& $\\bTh$\t\t\\\\\n$\\tilde{\\WW}^{1bc}$ \t\t\t\t\t\t& $\\frac12$ & $-1$ \t \t & $\\bar{\\bTh}$ \t& $\\bar{\\bTh}$\t\\\\\n$\\tilde{\\WW}^{2bc}$ \t\t\t\t\t\t& $1$ \t\t& $\\phantom{-}2$\t & $\\bar{\\bTh}$ \t& $\\bar{\\bTh}$\t\\\\\n$\\tilde{\\WW}^{3bc}$ \t\t\t\t\t\t& $\\frac32$ & $-1$\t \t & $\\bar{\\bTh}$ \t& $\\bar{\\bTh}$\t\\\\\n$(\\JJ^{2})_{b}^{\\phantom{b}b'}$ \t\t\t\t& $1$\t\t& $\\phantom{-}0$\t\t \t& $\\mathbf{8}$ \t& $\\bOn$\t\t\\\\\n$(\\JJ^{3})_{c}^{\\phantom{c}c'}$ \t\t\t\t& $1$\t\t& $\\phantom{-}0$\t\t \t& $\\bOn$ \t\t& $\\mathbf{8}$\t\\\\\n\\end{tabular}\n\\end{center}\n\\caption{The quantum numbers of redundant generators of the reduced $T_3$ chiral algebra.\\label{tab:T3_reduced_generators}}\n\\end{table}\n\nAgain, we see that there are dimension one half generators $(\\WW_3)_{bc}=W_{3bc}$ and $(\\tilde\\WW^1)^{bc}=\\tilde W^{1bc}$ that one naturally expects should be identified as the symplectic bosons of the reduced theory. 
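The dimensions and $U(1)$ charges in Table \\ref{tab:T3_reduced_generators} can be reproduced from the $\\Lambda(t_0)$ and $\\JJ_{\\mf{u}(1)}$ weights of the $\\mf{su}(3)_1$ indices alone. A sketch of this check (our conventions; lower and upper indices contribute with opposite signs):

```python
# su(3)_1 fundamental index weights under Lambda(t0) = diag(1/2, 0, -1/2)
# and under the u(1) generator diag(1, -2, 1), both read off from the text.
lam = {1: 0.5, 2: 0.0, 3: -0.5}
u1 = {1: 1, 2: -2, 3: 1}

def dim_and_charge(lower, upper):
    """h = 1 + (weights of lower su(3)_1 indices) - (weights of upper ones);
    the u(1) charge is assembled the same way from the u(1) weights."""
    h = 1 + sum(lam[a] for a in lower) - sum(lam[a] for a in upper)
    q = sum(u1[a] for a in lower) - sum(u1[a] for a in upper)
    return h, q

assert dim_and_charge([1], [2]) == (1.5, 3)    # (hat J^1)_1^2
assert dim_and_charge([1], [3]) == (2.0, 0)    # (hat J^1)_1^3
assert dim_and_charge([2], [3]) == (1.5, -3)   # (hat J^1)_2^3
assert dim_and_charge([2], []) == (1.0, -2)    # W_{2bc}
assert dim_and_charge([3], []) == (0.5, 1)     # W_{3bc}
assert dim_and_charge([], [1]) == (0.5, -1)    # tilde W^{1bc}
```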
Indeed, up to $d$-exact terms, the OPE for these generators is exactly what we expect from the desired symplectic bosons,\n\\begin{equation}\n\\label{eq:reduced_symplectic_boson_OPE}\n(\\WW_3)_{bc}(z) (\\tilde{\\WW}^1)^{b'c'}(0)\\sim \\frac{\\delta_{b}^{\\phantom{b}b'}\\delta_{c}^{\\phantom{c}c'}}{z}~.\n\\end{equation}\nThese generators thus have the correct dimensions, charges, and OPEs to be identified with the expected hypermultiplet generators. Again, by studying the null relations of the reduced chiral algebra at dimensions $h=1,\\frac32,2$ one finds that precisely when the level is $k=-3$, all of the higher dimensional generators in Table \\ref{tab:T3_reduced_generators} are related to composites of $(\\WW_3)_{bc}$ and $(\\tilde {\\WW}^1)^{bc}$ (see Appendix \\ref{subapp:e6_to_free}). In particular, one can verify that the $\\mf{u}(1)\\oplus \\mf{su}(3)_2 \\oplus \\mf{su}(3)_3$ currents are equal to their usual free field expressions modulo null states.\n\\section{Cylinders and Caps}\n\\label{sec:cyl_and_cap}\n\nThe procedure we have introduced for reducing punctures is sufficiently general that there is no obstacle to formally defining chiral algebras associated to unphysical curves such as the cylinder and (decorated) cap. These are unphysical curves from the point of view of class $\\SS$ SCFTs, although they have a physical interpretation in terms of theories perturbed by irrelevant operators that correspond to assigning a finite area to the UV curve \\cite{Gaiotto:2011xs}. It would be interesting to interpret the chiral algebras associated with these curves in terms of those constructions, although naively extrapolating away from conformal fixed points seems impossible. (There are other unphysical curves, such as a thrice-punctured sphere with two minimal punctures and one maximal puncture, and the chiral algebras for these can also be defined. We focus on cylinders and caps in this section as they are particularly natural objects in the TQFT.) 
\n\nThe chiral algebra associated to a cylinder is a particularly natural object to consider from the TQFT perspective because it corresponds to the identity morphism (when taken with one ingoing and one outgoing leg). When taken with two ingoing or two outgoing legs, it is the chiral algebra avatar of the evaluation and coevaluation maps, respectively, of an ordinary two-dimensional TQFT. Similarly, the chiral algebra of the undecorated cap is the chiral algebra version of the trace map.\n\nOn the whole, we have not been able to systematically solve the BRST problem for these theories in the general case. This is because, as we shall see, the chiral algebras involve dimension zero (or negative dimension) operators, which prevent us from applying the simple algorithm set forth in Sec. \\ref{sec:reducing}. Nevertheless, we are able to develop a compelling picture of the mechanics of the cylinder chiral algebra. It would be interesting from a purely vertex operator algebra point of view to construct these algebras rigorously.\n\n\\bigskip\n\\input{.\/sections\/Section_5\/S5_1}\n\\bigskip\n\\input{.\/sections\/Section_5\/S5_2}\n\\bigskip\\bigskip\n\\subsection{The (decorated) cap chiral algebra}\n\\label{subsec:decorated_cap}\n\nThe chiral algebra associated to a decorated cap can be defined by partially reducing one puncture of the cylinder chiral algebra. The resulting chiral algebra should have the interesting property that if you glue it to another class $\\SS$ chiral algebra using the standard gauging BRST construction, it effectively performs the appropriate qDS reduction on the original chiral algebra.\n\nIn trying to characterize this chiral algebra, one immediately encounters the problem that it includes operators of negative dimension. Namely, consider the first steps of the general reduction procedure as applied to the cylinder chiral algebra. 
The (potentially redundant) generators for the decorated cap labeled by an embedding $\\Lambda$ include the usual currents $\\hat J_{\\bar\\alpha}$ for $T_{\\bar\\alpha} \\in \\ker( ad_{{\\Lambda}(t_+)})$, the dimensions of which are shifted by their ${\\Lambda}(t_0)$ weight. However, there are additional generators coming from the dimension zero bifundamental fields $g_{ab}$ of the cylinder theory. In terms of the reduced symmetry associated with the decoration, these fields are reorganized as follows: for each irrep of $\\mf{su}(2)$ in the decomposition \\eqref{eq:generaldecomposition} of the fundamental representation there are $2j+1$ generators transforming in the representation $\\mf{f}\\otimes\\RR_j^{(\\mf{f})}$ with dimensions $-j,-j+1,\\ldots,j$. The dimension zero null relation corresponding to the determinant condition in the cylinder theory is expected to descend to the cap theory. The superconformal index (see App. \\ref{subapp:cylinder_cap_index}) supports this expectation, and further suggests that there may be no additional redundancies.\n\nThe existence of negative dimension operators makes this a rather exotic chiral algebra, and we will not explore it much further. Nevertheless, let us offer a couple of brief comments. In the description of the cap chiral algebra given in the previous paragraph, it is not immediately clear that an affine current algebra associated to the maximal puncture survives. However, one finds that the necessary dimension one currents can be constructed using the above fields in a manner similar to \\eqref{eqn: Rightcurrent_cylinder}, using only those elements of the left current algebra that survive in the cap chiral algebra. When gluing the cap to another theory $\\TT$, this current algebra will enter in the BRST current \\eqref{eqn: BRSTcurrentcylinder}. 
As in the case of the cylinder, the Gauss law constraint can be solved by constructing transferred fields, which, thanks to the nonzero conformal dimensions of the various components of $g_{ab}$, end up with their dimensions shifted correctly. It remains to verify that restricting to the BRST cohomology removes the transferred versions of the currents $J^{\\TT}_{A}$ for $T_{A} \\not\\in \\ker( ad_{{\\Lambda}(t_+)})$.\n\\subsection{The cylinder chiral algebra}\n\\label{subsec:cylinder}\n\nThe chiral algebra associated to a cylinder should be obtained by performing a complete qDS reduction on one puncture of the trinion chiral algebra $\\protect\\raisebox{1pt}{$\\chi$}[T_n]$. In the generalized TQFT, the cylinder chiral algebra plays the role of the identity morphism for a single copy of the affine Lie algebra, ${\\rm Id}:\\wh{\\mf{su}(n)}_{-n}\\mapsto\\wh{\\mf{su}(n)}_{-n}$. The essential property associated with an identity morphism is illustrated in Figure \\ref{fig:tqft_identity}.\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[scale=.5]{.\/figures\/identity_morphism.pdf}\n\\caption{Characteristic property of the identity morphism.}\n\\label{fig:tqft_identity}\n\\end{figure}\nAs a statement about chiral algebras, the identity property is quite interesting. 
It means that the chiral algebra should have the property that when tensored with another class $\\SS$ chiral algebra $\\protect\\raisebox{1pt}{$\\chi$}[\\TT]$ along with the usual $(b,c)$ ghosts, restriction to the appropriate BRST cohomology produces a chiral algebra that is isomorphic to the original class $\\SS$ chiral algebra,\n\\begin{equation}\nH^*_{\\rm BRST}\\left(\\psi\\in\\protect\\raisebox{1pt}{$\\chi$}_{\\rm Id}\\otimes\\protect\\raisebox{1pt}{$\\chi$}[\\TT]\\otimes\\protect\\raisebox{1pt}{$\\chi$}_{bc}\\,\\restr{}{}\\,b_0\\psi=0\\right)\\cong\\protect\\raisebox{1pt}{$\\chi$}[\\TT]~.\n\\end{equation}\n\nAs stated above, the qDS reduction problem in this case is substantially complicated by the fact that amongst the list of naive generators of the reduced chiral algebra, there will always be dimension zero currents. Consequently, a systematic solution of the BRST problem that removes redundancies from the list of generators is difficult even in the case of the $\\protect\\raisebox{1pt}{$\\chi$}[T_2]$ and $\\protect\\raisebox{1pt}{$\\chi$}[T_3]$ theories, for which the starting point of the reduction is known. A somewhat detailed analysis of the $\\mf{su}(3)$ case can be found in Appendix \\ref{app:cylinders_and_caps}.\n\nAlthough we don't have a general first-principles solution, the general structure of the reduction and our intuition gained from other examples suggest a simple characterization of the cylinder chiral algebra. We state this here as a conjecture.\n\\begin{conj}[Cylinder chiral algebra]\\label{conj_cylinder}\nThe chiral algebra associated to a cylinder of type $\\mf{su}(n)$ is finitely generated by an $\\widehat{\\mf{su}(n)}_{-n}$ affine current algebra $\\{(\\JJ_{L})_A(z)$, $A=1,\\ldots,n^2-1\\}$, along with dimension zero currents $\\{g_{ab}(z),~a,b=1,\\ldots,n\\}$ that are acted upon on the left by the affine currents. 
These dimension zero currents further obey a determinant condition $\\det g=1$, \\ie, they form a matrix that belongs to $SL(n,\\Cb)$.\n\\end{conj}\nThis turns out to be a surprisingly interesting chiral algebra. Let us mention a few of its properties.\n\nThe first key property -- one which is not completely obvious from the description -- is that this chiral algebra actually has two commuting $\\widehat{\\mf{su}(n)}_{-n}$ current algebras. The second set of affine currents are defined as follows\n\\begin{equation}\\label{eqn: Rightcurrent_cylinder}\n\\left(\\JJ_{R}\\right)_c^{\\ c^\\prime}(z)\\colonequals \\left(\\JJ_{L}\\right)_{b^\\prime}^{\\ b}\\ g_{b c}\\ g^{b^\\prime c^\\prime} + n\\left(g_{bc}\\ \\partial g^{bc^\\prime} - \\frac{1}{n} \\delta_c^{c^\\prime} g_{bd}\\ \\partial g^{bd} \\right)~,\n\\end{equation}\nwhere we have traded the adjoint index for a fundamental and antifundamental index satisfying a tracelessness condition, and we've also introduced the shorthand\n\\begin{equation}\\label{eqn: determinantconstraint}\ng^{ab}(z) = \\frac{1}{n!}\\epsilon^{a a_2\\ldots a_n}\\epsilon^{b b_2\\ldots b_n} \\left(g_{a_2b_2}\\ldots g_{a_nb_n}\\right)(z)~.\n\\end{equation}\nBecause of the determinant condition, this can be thought of as the inverse of $g_{ab}(z)$. The currents $(\\JJ_R)_A(z)$ act on the dimension zero currents on the right. The $\\JJ_R$ currents and the $\\JJ_L$ currents have nonsingular OPE with one another, so they generate commuting affine symmetries. These are the symmetries associated with the two full punctures of the cylinder.\n\nThe key feature of this chiral algebra should be its behavior as the identity under gluing to other class $\\SS$ chiral algebras. Let us thus consider a chiral algebra associated to a UV curve $\\CC_{g,s\\geq1}$ with at least one maximal puncture. 
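As a purely classical cross-check of the inverse property above (treating $g_{ab}$ as a commuting $SL(3,\\mathbb{R})$ matrix rather than a vertex operator, and using the classical adjugate prefactor $1/(n-1)!$ in place of the normal-ordered normalization of \\eqref{eqn: determinantconstraint}), one can verify numerically that the epsilon-tensor combination inverts $g$ when $\\det g = 1$:

```python
import itertools
import math
import numpy as np

def levi_civita(n):
    """Totally antisymmetric epsilon tensor with n indices."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        eps[perm] = sign
    return eps

n = 3
rng = np.random.default_rng(0)
# random real matrix rescaled to unit determinant (sign trick valid for odd n)
g = rng.standard_normal((n, n))
d = np.linalg.det(g)
g = g / np.sign(d) / abs(d) ** (1.0 / n)

eps = levi_civita(n)
# classical analogue of g^{ab}: the adjugate built from two epsilon tensors
g_up = np.einsum('axy,buv,xu,yv->ab', eps, eps, g, g) / math.factorial(n - 1)

# unit determinant implies g_{ab} g^{cb} = delta_a^c
assert np.allclose(np.einsum('ab,cb->ac', g, g_up), np.eye(n), atol=1e-6)
```

This is only the commutative shadow of the statement; in the chiral algebra the product on the right-hand side of \\eqref{eqn: determinantconstraint} is normal ordered.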
Let us consider a general operator in this theory, which will take the form $X_{a_1 a_2\\ldots a_p}^{b_1b_2\\ldots b_q}$, with $p$ fundamental indices and $q$ antifundamental indices (possibly subject to (anti)symmetrizations and tracelessness conditions) of the flavor symmetry associated to the maximal puncture and with its transformation properties under other flavor symmetries suppressed. Then our expectation is that after gluing in the cylinder, there will be a new operator of the same dimension and of the same form, but where its transformation under the symmetry of the original maximal puncture has been replaced with a transformation under the symmetry at the unglued end of the cylinder.\n\nWe can see how this might come about. Gluing a cylinder to the maximal puncture means tensoring the original chiral algebra with the chiral algebra of Conjecture \\ref{conj_cylinder} in addition to the usual adjoint $(b,c)$ system of dimensions $(1,0)$. We then restrict ourselves to the BRST cohomology (relative to the $b$-ghost zero modes) of the nilpotent operator\n\\begin{equation}\\label{eqn: BRSTcurrentcylinder}\nQ_{\\rm BRST} = \\oint dz\\,c^{A} ((\\JJ_L)_{A} + J^{\\TT}_{A} + \\frac12 J^{\\text{gh}}_{A})~,\n\\end{equation}\nwhere $J^{\\TT}_{A}$ is the current for the symmetry associated to the puncture on $\\CC_{g,s\\geq1}$ that is being glued. Our original operator, which is charged under the $\\mf{su}(n)$ that is being gauged and therefore does not survive the passage to BRST cohomology, has a related \\emph{transferred operator} of the following form\n\\begin{equation}\n\\hat X^{c_1 c_2\\ldots c_p}_{d_1d_2\\ldots d_q} = X_{a_1 a_2\\ldots a_p}^{b_1b_2\\ldots b_q}\\ g^{a_1c_1}\\ g^{a_2c_2}\\ldots g^{a_pc_p}\\ g_{b_1d_1}\\ g_{b_2d_2}\\ldots g_{b_qd_q}~.\n\\end{equation}\nThis operator \\emph{is} gauge invariant, since the gauged symmetry acts on $g_{ab}, g^{ab}$ on the left. 
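This invariance can again be illustrated classically (treating $g$ as an ordinary unit-determinant matrix and ignoring normal ordering): simultaneously rotating an operator with one fundamental index and the left index of $g$ leaves the transferred operator untouched. A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def unit_det(m):
    """Rescale a matrix to unit determinant (sign trick valid for odd n)."""
    d = np.linalg.det(m)
    return m / np.sign(d) / abs(d) ** (1.0 / len(m))

g = unit_det(rng.standard_normal((n, n)))   # bifundamental g_{ab}
h = unit_det(rng.standard_normal((n, n)))   # left gauge transformation
X = rng.standard_normal(n)                  # operator with one fundamental index

g_up = np.linalg.inv(g).T                   # classical g^{ab}: g_{ab} g^{cb} = delta_a^c

# transferred operator: hat X^c = X_a g^{ac}
hatX = X @ g_up

# gauge transform the left indices: X_a -> h_a^{a'} X_{a'}, g_{ab} -> h_a^{a'} g_{a'b}
X2 = h @ X
g2_up = np.linalg.inv(h @ g).T
hatX2 = X2 @ g2_up

assert np.allclose(hatX, hatX2, atol=1e-8)  # transferred operator is invariant
```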
In this sense the $g_{ab}$ fields effectively transfer and conjugate the symmetry from one end of the cylinder to the other. Notice that the transferred operators have the same dimension as before, because the $g_{ab}(z)$ have dimension zero. What's more, by virtue of the unit determinant condition on $g_{ab}$, we see that the OPE of the transferred fields ends up being exactly the conjugate of the OPE of the original fields. It therefore seems likely that we recover precisely the same chiral algebra on the other end of the cylinder (up to conjugation of $\\mf{su}(n)$ representations). Of course, for this construction to work we have to assume that the spectrum of physical operators will consist only of the transferred operators. It would be interesting to prove this conjecture.\n\nFinally, one can't help but notice the similarities between this description of the cylinder chiral algebra and the discussions of \\cite{Moore:2011ee} regarding the holomorphic symplectic manifold associated with the cylinder in the Higgs branch TQFT. In that work, the hyperk\\\"ahler manifold $T^* G_{\\Cb}$ was associated to the cylinder. It is interesting to note that the chiral algebra we have described in Conjecture \\ref{conj_cylinder} seems to be precisely what one obtains from studying the half-twisted $(0,2)$ supersymmetric sigma model on $G_{\\Cb}$ \\cite{Witten:2005px,Kapustin:2005pt}. Alternatively, it describes the global sections of the sheaf of chiral differential operators on $G_{\\Cb}$ as defined in \\cite{Malikov:cdr1,Malikov:cdr2,Malikov:gerbes1,Malikov:gerbes2,Malikov:gerbes3}. 
This connection is exciting, but remains mostly mysterious to the authors at present.\n\n\\section{Introduction}\n\nScene understanding from sensory measurements is an essential task for intelligent agents, and has been widely investigated in recent years. The advances of deep learning have brought scene understanding into a new era, especially for semantic image understanding tasks ranging from object detection \\cite{girshick_rich_2014} to panoptic segmentation \\cite{kirillov_panoptic_2018}. It can be argued that the aforementioned tasks provide scene representations mainly at the pixel level, e.g. bounding boxes, per-pixel semantic labels, etc., which are not ideal data structures for high level reasoning and decision making by intelligent agents. An alternative option for representing geometric and semantic image content is a graph. Compared to pixel-based representations, a graph, e.g. OpenStreetMap \\cite{haklay_openstreetmap:_2008} or a scene graph as in \\cite{johnson_image_2015}, is a much better data structure to store scene information for high level reasoning and decision making, while at the same time being an efficient representation in terms of memory and compute requirements. While the translation from image content (like bounding boxes, etc.) into a graph can be performed with hand-designed logic, such approaches are highly task-dependent and thus not scalable. 
Therefore, similar to our work, significant contemporary research is devoted to methods that can learn to translate image content into a (scene) graph.\n\nPractically all current state-of-the-art methods for this image-to-graph task, like \\cite{xu_scene_2017, herzig_mapping_2018, zellers_neural_2018, yang_graph_2018, li_factorizable_2018, qi_attentive_2019} and as discussed in Section \\ref{sect_related_work}, build on top of an object detection framework and rely on meticulously annotated datasets that contain the relationships between objects as the ground truth for fully-supervised training. Although large scale datasets for this task are available \\cite{krishna_visual_2017}, the time required for the needed annotations is so significant that these fully-supervised learning-based approaches do not scale well to tasks for which the required ground truth is not yet available. Therefore, we set the basic research goal to be able to learn the image-to-graph translation task in a self-supervised manner by extending the methodology of auto-encoding. While the current capability of our approach is limited to simple line drawings, we believe that it holds the key to developing image-to-graph methods that scale over many different scene understanding tasks and thus do not rely on hand-crafted logic or meticulously annotated datasets.\n\nThe details of our graph auto-encoding approach are provided in Section \\ref{sect_method}. At its core, it learns to translate image content into a graph that is represented by node and adjacency matrices. This learning is self-supervised using a fully-differentiable auto-encoder in which the encoder consists of several carefully designed neural networks and the decoder uses techniques from differentiable image drawing. The image-to-graph-to-image task is then learned in an auto-encoder fashion by minimizing the pixel loss between the input image and the image generated by the decoder. 
During inference, only the encoder is used, although the learning procedure can, in theory, also be used on-line to fine-tune the obtained graph. \n\nTo demonstrate our approach, we use a synthetic dataset that contains several line drawings of simple shapes, each of which represents a graph with nodes and edges. The details of the dataset and its corresponding experimental results are presented in Section \\ref{sect_method} and Section \\ref{sect_exp}. Although the demonstrated experiments are still at an early stage and further work must be carried out to generalize to large scale real datasets like Visual Genome \\cite{krishna_visual_2017}, we believe our findings are worth sharing with the community. Our contributions include:\n\\begin{itemize}\n\t\\item the, to the best of our knowledge, first neural network that can learn an image-to-graph translation task by only using self-supervision; \n\t\\item a comparison of our method to a fully-supervised equivalent baseline, which shows our approach exhibits comparable performance in terms of F1-score of triple matching;\n\t\\item several ablation studies that examine and reveal key properties of our method.\n\\end{itemize}\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{fig_overview}\n\t\\caption{The overview of the proposed approach. The auto-encoder is able to extract the canonical graph information from an input image at the bottleneck, with only the image-level self-reconstruction loss.}\n\\end{figure}\n\n\n\\section{Related work}\n\\label{sect_related_work}\n\n\\paragraph{Scene graph generation}\nThe task of scene graph generation, as pioneered by \\citet{johnson_image_2015}, is to estimate a graph structure representing the relevant information of the scene from sensory measurements. In the graph, the objects are nodes and their relationships are modeled as edges. 
Various approaches have been proposed in recent years, enabled by the release of the Visual Genome dataset \\cite{krishna_visual_2017} with massive manual annotations. Most of the state-of-the-art approaches \\cite{xu_scene_2017, herzig_mapping_2018, zellers_neural_2018, yang_graph_2018, li_factorizable_2018, qi_attentive_2019} are formulated on top of bounding box-based detection frameworks, while using advances in graph convolutional networks (GCN) \\cite{yang_graph_2018} and graph attention mechanisms \\cite{qi_attentive_2019} to construct the scene graph. In \\citet{newell_pixels_2018} it is proposed to generate heat-maps for both objects (nodes) and relations (edges) simultaneously for further graph construction using associative embeddings. Recently, scene graph generation has also been extended into 3D \\cite{armeni_3d_2019} and applied to input data other than images, such as point clouds \\cite{wald_learning_2020}. \n\nIt is important to note that these mentioned state-of-the-art methods are, unlike our method, highly dependent on massive annotated ground truth for training. While annotating for object detection is rather straightforward, i.e., an object is present or not, annotating the relationships between objects is much more involved. Which relationships are relevant to annotate depends not only on the image context but also on the task, and it is practically impossible to annotate all possible relationships between objects for all possible tasks. This is also why the main performance criterion for image-to-graph translation is currently based on recall but not on precision, i.e., methods typically estimate many redundant relationships that are not present in the ground truth annotations but that, in certain cases, can be considered as being correct. 
While the Visual Genome dataset \\cite{krishna_visual_2017} is extensive, it surely does not cover the need for ground truth annotations for all possible tasks in computer vision that can benefit from obtaining graph representations from images. As such, to make available image-to-graph translation methods that can effortlessly scale to new tasks, we set the basic research goal to investigate and develop methods that require as little ground truth annotation as possible, and preferably none.\n\n\\paragraph{Unsupervised scene representation learning}\nIn general, to reduce the dependency on expensive manual annotations, unsupervised approaches have been widely investigated. The learned representations include individual objects and their information, physical factors, etc. \n\nIn \\citet{burgess_monet_2019} a variational auto-encoder is used and trained together with a recurrent attention network to decompose the objects in an image. This model is fully differentiable and can be trained with a self-reconstruction loss in an end-to-end manner. More recently, the work in \\citet{yang_learning_2020} went one step further to decompose the image into objects and enable object manipulation without requiring object-level annotations. Also, in \\citet{wu_unsupervised_2020} the proposed encoder-decoder framework factors each input image into depth, albedo, viewpoint, and illumination, using only an unsupervised reconstruction loss. This is achieved by incorporating prior knowledge of symmetric structure into the design of the network's information flow.\n\nThe mentioned approaches achieve unsupervised learning of scene representations mainly by carefully designing the framework using human prior knowledge and training the models in an auto-encoding fashion with reconstruction supervision. Our approach has the same philosophy, but is more aggressive in the representation-shifting that is performed. 
In our approach, the self-learned representation (graph) is further away from the input representation (image) in terms of the format and the level of semantic information than in other works, where the self-learned representations are image-level objects.\n\n\\paragraph{Differentiable image synthesis}\nAn important part of our approach is the ability to decode a graph into an image. Conventional computer graphics (CG) methods generate realistic images by approximating the physical processes of light, which are typically hard-coded and non-differentiable. As such, they are not designed to be integrated with deep learning methods. To overcome this, differentiable renderers \\cite{loper_opendr_2014, kato_neural_2018} are proposed to mimic the classical CG process with differentiable operations, which enables various rendering tasks in combination with deep neural networks. These renderers are mainly conventional geometry-based approaches with novel differentiable extensions. Thus, they lack an abstract semantic understanding of the scene.\n\nSignificant research has been carried out on image synthesis using CNN-based generative networks, such as image-to-image translation \\cite{isola_image--image_2017, zhu_unpaired_2017}. CNN-based approaches are able to capture the complicated high level information of the scene to be generated, e.g. various background layouts, object classes and their attributes, etc. However, these approaches are mostly data-driven and lack the explainability of the representations inside the CNNs. \n\nIn contrast, scene graphs are better able to provide high level explainable representations than conventional tensors or feature maps.\nCloser to our research, scene graphs are also used as input in image synthesis tasks \\cite{johnson_image_2018}. 
Furthermore, image manipulation is also realized by interactively modifying the scene graph extracted from the input image in the bottleneck of an encoder-decoder network \\cite{dhamo_semantic_2020}. However, note that in \\cite{dhamo_semantic_2020} the explainable scene graph representation is first massively annotated by humans and then further learned by the model, unlike our approach that employs self-supervised learning and does not require any human labeling effort.\n\n\n\\section{Methodology}\n\\label{sect_method}\n\n\\subsection{Problem definition}\n\nWe start with the conventional problem definition of the scene graph generation task and then discuss the similarities and differences in our setting. \n\nGiven an input image $I$, we assume that there exists a scene graph $G = (V, E)$ with $V$ being a set of nodes corresponding to localized object regions in image $I$, and $E$ being the edges representing the relationships between nodes $V$. Note that each element $v_i$ and $e_{ij}$ in $V$ and $E$ could have one of multiple semantic labels. Thus, scene graph generation can be formulated as the mapping $f: I \\mapsto G$. Most approaches \\cite{xu_scene_2017, herzig_mapping_2018, zellers_neural_2018, yang_graph_2018, li_factorizable_2018, qi_attentive_2019} factorize it mainly into two sequential sub-mappings, i.e. $f = f_E \\cdot f_V$, where\n\\begin{equation}\n\\begin{array}{l}\nf_V: I \\mapsto V \\\\\nf_E: (I, V) \\mapsto E.\n\\end{array}\n\\end{equation}\n\n$f_V:I \\mapsto V$ is often accomplished by object detection frameworks, and $f_E: (I, V) \\mapsto E$ is the relationship classification. Both of these mapping processes are usually performed by learnable neural networks. 
In the conventional fully-supervised setting, for each training sample the ground truth nodes $\\bar{V}$ and edges $\\bar{E}$ are provided such that the neural networks performing the mapping $f: I \\mapsto G$ can be learned.\n\nIf the ground truth $\\bar{V}$ and $\\bar{E}$ are not available, the previous approaches cannot be applied, as they are discriminative models trained in a supervised manner. On the contrary, we opt to extend the task of scene graph generation to image-graph-image translation by using the concept of auto-encoding. The encoder $f: I \\mapsto G$ is expected to perform the same task as scene graph generation \\cite{johnson_image_2015}, only in our case, the supervision is not provided. It is also composed of two sub-modules, i.e., $f_V: I \\mapsto V$ for node prediction and $f_E: (I, V) \\mapsto E$ for edge prediction. The decoder $g: G \\mapsto I$ takes the intermediate graph information as input, and re-generates the input image in a fully differentiable manner. By placing the encoder and decoder back to back, techniques from auto-encoding can be used to learn the graph information.\n\nFormally, our goal is to have two mapping functions $f$ and $g$,\n\\begin{equation}\n\\begin{array}{l}\n\\tilde{G} = f(I) = f_E \\cdot f_V (I) \\\\\n\\tilde{I} = g(\\tilde{G}), \\\\\n\\end{array}\n\\end{equation}\nsuch that\n\\begin{equation}\n\\begin{array}{l}\n\\tilde{G} \\in \\mathcal{G}, \\\\\n\\tilde{I} \\in \\mathcal{I}, \\\\\nf,g = \\underset{f,g}{\\arg\\min}\\ \\mathcal{L}(I, \\tilde{I}), \\\\\n\\end{array}\n\\end{equation}\nwhere $\\mathcal{G}$ and $\\mathcal{I}$ are the collections of all possible graphs and images, respectively, and $\\mathcal{L}(\\cdot, \\cdot)$ is the similarity measure of two images. \n\nIn the following sub-sections, we will detail the design and training protocol of the proposed approach. We take a simple toy task with a synthetic dataset as an example to illustrate the idea. 
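Schematically, one training step of this objective can be sketched as follows, where all names (`f_V`, `f_E`, `g`, `loss`) are placeholders for the learned modules rather than our actual implementation:

```python
def train_step(I, f_V, f_E, g, loss):
    """One step of the image -> graph -> image objective (hypothetical sketch)."""
    V = f_V(I)               # node prediction:         I -> V
    E = f_E(I, V)            # edge prediction:     (I, V) -> E
    I_tilde = g((V, E))      # differentiable decoding: G -> I~
    return loss(I, I_tilde)  # self-reconstruction similarity to minimize

# toy usage with dummy stand-ins for the learned mappings
step = train_step(
    [1.0, 2.0],
    f_V=lambda I: I,                  # pretend "nodes" are the raw input
    f_E=lambda I, V: [(0, 1)],        # pretend a single edge is predicted
    g=lambda G: G[0],                 # pretend decoder reproduces the input
    loss=lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)),
)
assert step == 0.0  # perfect reconstruction gives zero loss
```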
The dataset contains a set of undirected graphs, each represented as an image. The nodes in the graphs are the end-points of the visible edges and the edges are defined as the binary connectivity between two nodes. Note that ideally, the task and dataset can be extended with more complicated definitions; however, as a first step, we opt to simplify the task and focus on the basic methodology. The details of the dataset will be discussed in Section~\\ref{sect_exp}.\n\n\\subsection{Network}\n\\label{sect_method_net}\n\n\\begin{figure}\n\t\\centering\t\\includegraphics[width=\\linewidth]{fig_network}\n\t\\caption{The overall framework of the proposed approach, with the upper part being the detailed pipeline and the lower part being the differentiable drawing module. The entire framework is trained end-to-end using only self-reconstruction supervision. Please see Section \\ref{sect_method_net} for more details.}\n\t\\label{fig_network}\n\\end{figure}\n\nThe overall design of the auto-encoder for the image-graph-image translation task is visualized in Figure~\\ref{fig_network}. It mainly consists of three differentiable modules, i.e. node prediction, edge classification and differentiable image synthesis. The three modules are sequentially connected and trained in an end-to-end manner using an image self-reconstruction loss. \n\n\n\\subsubsection{Encoder: from image to graph}\n\nAs discussed previously, the task of the encoder is to predict the graph information from the input image, which is formalized as the process $f = f_E \\cdot f_V :I \\mapsto G$. The encoder is composed of three sequential parts, namely node attention, coordinates transformation, and relation classification. The output of the encoder, i.e. the bottleneck of the auto-encoder, is the graph represented by two matrices, i.e. 
the node position matrix and the adjacency matrix.\n\n\\paragraph{Node attention} \n\nAt the early stage, the convolutional layers from ResNet-50 \\cite{he_deep_2016} are used as the feature extractor. Since our task is much simpler, we take features from the second residual block and reduce the output channels of the two blocks to 128 and 64, respectively. To reduce the size of the extracted features, two max-pooling layers with kernel size and stride being 2 are also applied before each block. On top of the residual blocks we have two extra convolutional layers to predict the node attention maps $M_{att}$, with the spatial size reduced by a factor of 4 due to the previous pooling operations, and with the number of channels being the pre-defined maximum number of nodes $N_{max}$. Each channel in the node attention maps $M_{att}$ is passed through a 2-D softmax layer, such that the summation of all the elements strictly equals 1, which is essential for the later computation of node coordinates. Note that here we expect the attention channels to provide channel-wise heat maps indicating the detected nodes. However, no supervision or extra information is supplied to the network, and the network is able to learn from the final reconstruction loss by gradient back-propagation. \n\n\n\n\\paragraph{Coordinates transformation} \n\nUnlike other detection frameworks \\cite{girshick_rich_2014, girshick_fast_2015, ren_faster_2017}, in our setting the coordinates of nodes are not provided as ground truth. Thus differentiability is crucial at this stage, since without it gradient back-propagation cannot be applied and the node attention module cannot learn anything. However, it is non-trivial to transform the positional information in the heat maps to numerical coordinates in a differentiable manner. Here we use the differentiable spatial to numerical transform (DSNT) \\cite{nibali_numerical_2018} to perform the transformation. 
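The core of the DSNT operation can be sketched in a few lines. Below is a minimal numpy version for a single channel, using the normalized coordinate convention $(2i+1)/W - 1$ of \\cite{nibali_numerical_2018}; the in-network version operates on batched tensors:

```python
import numpy as np

def dsnt(heatmap):
    """Differentiable spatial-to-numerical transform (minimal sketch).

    heatmap: (H, W) array, non-negative and summing to 1
             (the output of one 2-D softmax channel).
    Returns the expected (x, y) position in [-1, 1] coordinates.
    """
    H, W = heatmap.shape
    xs = (2.0 * np.arange(W) + 1.0) / W - 1.0   # fixed horizontal template map
    ys = (2.0 * np.arange(H) + 1.0) / H - 1.0   # fixed vertical template map
    x = float((heatmap * xs[None, :]).sum())    # element-wise product + sum
    y = float((heatmap * ys[:, None]).sum())
    return x, y

# a delta heatmap at (row=1, col=2) on a 4x4 grid maps to (0.25, -0.25)
hm = np.zeros((4, 4)); hm[1, 2] = 1.0
assert dsnt(hm) == (0.25, -0.25)
```

Since the output is just a weighted sum of fixed templates, gradients flow back into every heat-map element, which is exactly what the node attention module needs.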
Due to the usage of the previously mentioned 2-D softmax layer, for each channel, the heat map can be seen as a 2-D probabilistic distribution of a certain node. Thus, one can create two fixed template maps containing the numerical coordinate value for each pixel, in the horizontal and vertical directions, respectively. With element-wise multiplication and summation, the numerical coordinates for each node can be computed. At this stage, there exist no trainable parameters, and the computation, which is detailed in \\cite{nibali_numerical_2018}, is fully deterministic and differentiable.\n\n\\paragraph{Edge prediction}\n\nOnce the coordinates of the nodes are computed, a set of regions of interest (ROIs) for edges can be constructed by combining any two node coordinates into a bounding box. Then ROI pooling \\cite{ren_faster_2017} is applied on the input image and creates a batch of local image patches of size $N_{max}^2 \\times 1 \\times 16 \\times 16$ for further edge prediction by an edge classifier. The classifier consists of two convolution layers (kernel size 3 and output channels 32 and 16, respectively) with Batch Normalization, ReLU activation and max-pooling. Then, the features are flattened and processed by two fully-connected layers with ReLU and Sigmoid activations, respectively.\n\nPlease note that in our toy task the edge represents connectivity for the sake of simplicity, while it can be extended with semantic information with little extra effort. The predicted edge connectivities, together with the previously computed nodes, make up the output of the encoder, i.e. a graph with a node position matrix and an adjacency matrix.\n\n\\subsubsection{Decoder: from graph to image}\n\nThe task of the decoder $g: G \\mapsto I$ is to reconstruct the input image given the graph information provided by the encoder. 
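For intuition, the coarse rasterization that the decoder must realize can be illustrated with a naive numpy sketch; note that, unlike the spatial-transformer version described below, this illustration is not differentiable:

```python
import numpy as np

def draw_coarse(nodes, adj, size=128):
    """Rasterize a line segment for every edge in the adjacency matrix.

    nodes: list of (x, y) pixel coordinates; adj: (N, N) 0/1 connectivity.
    Plain illustration only; the actual decoder places edge templates with
    a spatial transformer network to keep this step differentiable.
    """
    canvas = np.zeros((size, size))
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if adj[i, j] > 0.5:
                (x0, y0), (x1, y1) = nodes[i], nodes[j]
                # dense parameter sweep so every pixel on the segment is hit
                for t in np.linspace(0.0, 1.0, 2 * size):
                    x = int(round(x0 + t * (x1 - x0)))
                    y = int(round(y0 + t * (y1 - y0)))
                    if 0 <= x < size and 0 <= y < size:
                        canvas[y, x] = 1.0
    return canvas

# a single edge between two nodes produces a horizontal line at y = 10
canvas = draw_coarse([(10, 10), (20, 10)], np.array([[0, 1], [1, 0]]))
assert canvas[10, 15] == 1.0 and canvas[50, 50] == 0.0
```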
The key is to ensure the reconstruction flow is fully differentiable such that the whole pipeline can be trained together end-to-end. Inspired by \\cite{johnson_image_2018}, we consider a template-based image synthesis approach, which is referred to as \\textit{differentiable drawing}. \n\nWe create a template set that contains all the pre-defined relationships that could exist in the dataset. For each forward-pass of the network, the differentiable drawing module walks through all the edges in the predicted adjacency matrix, and for the edges that need to be visualized in the image, a spatial transformer network \\cite{jaderberg_spatial_2015} is used to copy and draw the corresponding edge template onto the canvas. The differentiable drawing module is able to create a coarse reconstructed image $\\tilde{I}_{\\text{coarse}}$ on a blank canvas. This process can be written as $g_{\\text{coarse}}: G \\mapsto \\tilde{I}_{\\text{coarse}}$. On top of the coarse image $\\tilde{I}_{\\text{coarse}}$, a refinement network $g_{\\text{refine}}: \\tilde{I}_{\\text{coarse}} \\mapsto \\tilde{I}_{\\text{refine}}$ is applied for further post-processing, which does not change the structural information but adjusts and eliminates the texture and style differences between the generated coarse image and the real input image. In our implementation, the refinement network contains three convolution layers with kernel size 3, intermediate channel size 16, and PReLU activations between every two layers.\n\n\n\\subsection{Training}\n\nThe entire framework is trained end-to-end without supervision that requires manual annotations. 
To achieve this goal, we apply two losses during training: the main image reconstruction loss for auto-encoding and the auxiliary node attention loss.\n\nAs the main image reconstruction loss, the structural similarity index measure (SSIM) \\cite{wang_image_2004} is used, which encourages the decoder to reconstruct the input image and thus encourages the encoder to understand the image in terms of graphs in an implicit manner. We apply SSIM losses with a multi-scale setting on the refined images in the experiments, which is formulated as \n\\begin{equation}\n\t\\mathcal{L}_{\\text{main}} = \\text{MS-SSIM}(\\tilde{I}_{\\text{refine}}, I)\n\\end{equation} \nwhere $\\tilde{I}_{\\text{refine}}$ and $I$ are the reconstructed refined image and the input image, respectively.\n\nThe node attention module is expected to predict the node positional attention maps from the input image solely relying on the gradient information from the reconstruction loss. Due to the lack of explicit supervision, the attention module occasionally tends to predict some nodes multiple times while ignoring other nodes. To overcome this behavior and stabilize the node attention training, we apply the \\textit{auxiliary loss} $\\mathcal{L}_{\\text{aux}}$. The idea is to penalize the overlap between the predicted node attentions and encourage the attention module to discover as many different nodes as possible. On the other hand, since the pre-defined maximum number of nodes (attention channels) is fixed and is always larger than the real value, some overlap is expected to happen. We propose a penalty loss conditioned on the similarity measure for each sample, i.e., the penalty is applied to a sample only if its reconstruction is not good enough. 
The auxiliary loss can be written as \n\\begin{equation}\n\\mathcal{L}_{\\text{aux}} = \\left\\{\n\t\\begin{array}{ll}\n\t\\text{mean}([\\text{ReLU}(\\sum_{i}^{N_{\\text{max}}}M_{att}[i] - 1)]^2),& \\text{if MS\\_SSIM < mean MS\\_SSIM} \\\\ \n\t0, & \\text{otherwise}\n\t\\end{array}\\right.\n\\end{equation}\nwhere $M_{att}$ is the set of normalized predicted node attention maps, with the maximum cell of each channel being 1. MS\\_SSIM and mean MS\\_SSIM are the similarity measure of each sample and the mean measure over the entire batch during training, respectively. All the operations are element-wise, and we omit the height and width indices of the attention maps for the sake of simplicity.\n\n\nFormally, the overall loss can be written as\n\\begin{equation}\n\\mathcal{L} = \\mathcal{L}_{\\text{main}} + \\lambda \\cdot \\mathcal{L}_{\\text{aux}}\n\\end{equation}\nwhere $\\lambda$ is the weight for loss balancing and is empirically set to 1 in our experiments.\n\n\\section{Experiments}\n\\label{sect_exp}\nWe perform the following experiments to demonstrate and verify the proposed method: 1) we compare the results of our unsupervised approach and a supervised baseline, qualitatively and quantitatively; 2) we perform an ablation study of our approach using different maximum numbers of nodes; and 3) we study the effect of different image reconstruction losses.\n\n\\paragraph{Datasets:}\nWe use a synthetic dataset, called the Simple Shape dataset, to demonstrate the proposed idea. The dataset contains 50k images, each showing one of three simple shapes: a line, a triangle, or a rectangle. Each shape is processed with a random affine transformation (including scale, rotation, shear, and translation) and is drawn on an empty black canvas of size $128\\times128$ pixels. See Fig.~\\ref{fig_qualitative} for some visualized samples. 
Thus, for each image, the number of nodes varies from 2 to 4, which is why the maximum number of nodes is set as 4 in the main experiments.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{imgs\/imgs\/46500}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/imgs\/46506}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/imgs\/46509}\\\\\n\t\\caption{\\centering input image}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46500_gt-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46506_gt-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46509_gt-cropped}\\\\\n\t\\caption{\\centering ground truth graph}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_c_imgs\/46500_simple_shape_nodes4of4_ours} \\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_c_imgs\/46506_simple_shape_nodes4of4_ours} \\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_c_imgs\/46509_simple_shape_nodes4of4_ours} \\\\\n\t\\caption{\\centering coarse image}\n\t\\end{subfigure} \n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs\/46500_simple_shape_nodes4of4_ours} \\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs\/46506_simple_shape_nodes4of4_ours} \\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs\/46509_simple_shape_nodes4of4_ours} \\\\\n\t\\caption{\\centering refined 
image}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46500_simple_shape_nodes4of4_ours}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46506_simple_shape_nodes4of4_ours}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46509_simple_shape_nodes4of4_ours}\\\\\n\t\\caption{\\centering our prediction (image)}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46500_ours-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46506_ours-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46509_ours-cropped}\\\\\n\t\\caption{\\centering our prediction (graph)}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46500_simple_shape_nodes4of4_baseline}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46506_simple_shape_nodes4of4_baseline}\\\\\n\t\\includegraphics[width=\\linewidth]{imgs\/pred_imgs_from_g\/46509_simple_shape_nodes4of4_baseline}\\\\\n\t\\caption{\\centering baseline prediction (image)}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.11\\textwidth}\n\t\\centering\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46500_baseline-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46506_baseline-cropped}\\\\\n\t\\includegraphics[height=\\linewidth]{imgs\/graphs\/46509_baseline-cropped}\\\\\n\t\\caption{\\centering baseline prediction (graph)}\n\t\\end{subfigure}\n\t\\caption{The qualitative results of our approach. (a)-(b) are the input images and the corresponding graphs. (c)-(f) are the predictions from our approach, in which (c)-(d) are the output images from the network and (e)-(f) are the graph predictions in image and matrix formats. 
(g)-(h) are the graph predictions from the baseline, in image and matrix formats, respectively. }\n\t\\label{fig_qualitative}\n\\end{figure}\n\n\\paragraph{Baseline:} \n\nSince there is no existing method that performs the same task, i.e., generating graphs from images without any external supervision, we provide a baseline that shares a similar network design but has access to the ground truth graph information. To have a fair comparison, the capacity of the baseline network is aligned with the design of the proposed network as much as possible, e.g., the overall structure, the number of layers, and the channel sizes are set as similar as possible. Since the ground truth is provided, the decoder is no longer needed for the baseline, and we directly apply two supervisions on the graph outputs of the encoder. First, the supervision for the node attention channels is provided by creating a ground truth heat map with Gaussian kernels at the ground truth node locations. As the node attentions can have random orders, we perform summation across the channels and compute the mean square pixel error between the single-channel attention and the ground truth heat map. Second, at each step, we perform node matching between the prediction and the ground truth to re-generate the temporary ground truth adjacency matrix aligned with the correct temporary node order. This allows a cross-entropy classification loss to be computed. With the aforementioned two supervisions, i.e., the pixel-wise heat map error and the cross-entropy classification error, the graph can be learned in a fully-supervised manner.\n\nNote that the baseline is intended to provide a relatively fair comparison with our approach. Thus, it is preferred that the two models have similar capacities, which is why we apply minimal modifications to our model. 
One could carefully re-design the node detection and relation classification mechanisms for the fully-supervised training scenario and potentially achieve better performance. However, to investigate the fundamental difference between supervised and self-supervised training strategies, it is best to keep the network architectures as similar as possible.\n\n\\paragraph{Metric:}\n\nTo evaluate the quality of a scene graph, the image-wise SGGen metric is often used \\cite{lu_visual_2016, xu_scene_2017, yang_graph_2018}. The idea is to organize the prediction and ground truth graphs as two sets of triplets with the format \\textit{$\\langle$subject, predicate, object$\\rangle$}. Then triplet matching is performed to compute the conventional Recall@k metric. The main reason that previous work uses only recall, and not also precision, is that the manual annotations are sparse and it is not possible to annotate all existing relations between objects in images. However, in our dataset, since the images are created from a given graph, we are able to capture the complete graph information. Thus, the precision metric is also applied. In our evaluation protocol, like previous work \\cite{lu_visual_2016, xu_scene_2017, yang_graph_2018}, we also treat the task as the detection of triplets, and use the F1-score with precision and recall as the underlying metrics. \n\nSince our model might produce multiple predictions for the same triplet, we pre-process the raw predictions and delete the redundant triplets before evaluation. The recall is computed without sorting by confidence, and all the predicted triplets are used for the computation of precision. 
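The deduplication and triplet-matching protocol described above can be sketched as follows. This is a minimal illustration of our own with exact triplet matching; `triplet_f1` is a hypothetical helper, not code released with the paper:

```python
def triplet_f1(pred_triplets, gt_triplets):
    """Precision, recall and F1 over exact triplet matches.

    Triplets are (node_a, relation, node_b) tuples; redundant predicted
    triplets are deleted before matching, mirroring the pre-processing
    step described in the text.
    """
    pred = set(pred_triplets)   # remove redundant predictions
    gt = set(gt_triplets)
    hits = len(pred & gt)
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(gt) if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Example: one duplicated correct triplet, one spurious triplet.
p, r, f = triplet_f1(
    [(0, "edge", 1), (0, "edge", 1), (1, "edge", 2), (2, "edge", 3)],
    [(0, "edge", 1), (1, "edge", 2)])
# p = 2/3 (one of three unique predictions is wrong), r = 1.0, f = 0.8
```

Note that, unlike SGGen on natural images, no localization overlap test is needed here, since nodes are identified exactly in the synthetic setting.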
In this sense, our F1-score metric, combining precision and recall, covers the previously used SGGen metric, which is essentially recall, and is additionally challenging in terms of triplet redundancy, which is captured by precision.\n\n\\paragraph{Implementation details:} \n\nWe train our approach and the baseline using PyTorch \\cite{paszke_automatic_2017}, and we use the Adam optimizer \\cite{kingma_adam:_2014} with batch size $128$, $\\beta_1 = 0.6$, and $\\beta_2 = 0.9$ for the training of both models. We also train the network in each setting 10 times with different random seeds, and report the mean and standard deviation for each metric. For the \\textit{proposed} approach, we train with an initial learning rate of 0.0005 for 30 epochs and reduce the learning rate by a factor of 10 for the last 10 epochs. As for the \\textit{baseline} approach, we notice that providing the full ground truth significantly simplifies the task, as expected. Thus, we train the baseline model with an initial learning rate of 0.0003 for 15 epochs and reduce the learning rate by a factor of 10 for the last 5 epochs. We find that the sub-task of node attention prediction is particularly easy to train and overfits quickly. Also, the supervision quality for the adjacency matrix depends on the quality of the predicted nodes, which are used for generating the temporary ground truth on the fly. To address these issues for the fully-supervised baseline, we train the node attention module for the first two epochs, then fix it and train only the relation classification module.\n\n\\subsection{Results}\\label{sect_results}\n\n\\paragraph{Self-reconstruction \\vs full supervision:}\nWe compare the performance of the proposed self-supervised approach and the fully-supervised baseline. The results are presented in Table~\\ref{tab_main_qiantitative} and Figure~\\ref{fig_qualitative}. 
\n\nOur unsupervised approach achieves performance comparable to the baseline, i.e., 67.9 and 61.3 F1-score, respectively. This shows that, even without any external explicit ground truth as a training target, self-reconstruction is sufficient to provide supervision for the task of graph prediction. Unlike many other encoder-decoder approaches that process data in the same format, e.g., image-feature-image translation, in our case the information in the bottleneck is a conventional graph with a random node order and the corresponding adjacency matrix. By creating differentiable transformation modules, as introduced in Section~\\ref{sect_method}, an image-graph-image auto-encoding framework is able to regress the graph information in canonical form automatically. As discussed before, note that we do not claim that our approach is strictly better than the baseline in terms of performance. One could optimize the design of the baseline in many aspects and achieve performance improvements. However, we opt to keep the network design the same to gain more meaningful insights into our self-supervised approach.\n\nIt must be clarified that the unsupervised approach has several limitations that its supervised counterpart does not share. First, even though we use a simple dataset, learning is less efficient in the self-supervised setting, as can be noticed from the number of training epochs used by the two approaches. Second, if the task were more complicated and challenging, self-supervised learning would be less effective, or could even fail, compared to the fully-supervised approach. Third, during the experiments we notice that randomness has a larger impact on our self-supervised approach: although the F1-scores have similar standard deviations, the recalls show noticeably larger variance. 
With a very small probability, the network fails to reconstruct the full image, which is unlikely to happen in a fully-supervised training scenario. \n\n\\begin{table}\n\t\\caption{Quantitative results of our approach and the supervised baseline}\n\t\\label{tab_main_qiantitative}\n\t\\centering\n\t\\begin{tabular}{lllll}\n\t\t\\toprule\n\t\tMethod & Supervision & Precision & Recall & F1-score \\\\\n\t\t\\midrule\n\t\tOurs & Image reconstruction & \\textbf{57.7}$\\pm$2.7 & \\textbf{84.1}$\\pm$12.6 & \\textbf{67.9}$\\pm$5.4\\\\\n\t\tBaseline & Full (nodes \\& edges) & 54.0$\\pm$5.9 & 70.9$\\pm$5.9 & 61.3$\\pm$5.9 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\n\\paragraph{Effect of the maximum number of nodes:}\n\nIn the previous main experiment we set the pre-defined maximum number of nodes to 4, according to the properties of the dataset. In this experiment, we study how the performance changes when the defined maximum number of nodes is redundant even for the most complex shape (the rectangle), which is the typical setting for real tasks. Table~\\ref{tab_num_of_nodes} presents the quantitative results for different values of the maximum number of nodes. \n\nFrom the table, it can be concluded that redundancy in the maximum number of nodes results in a performance drop in triplet matching: 54.6, 49.9, and 41.7 F1-score when the number of nodes is 5, 6, and 8, respectively. This is mainly because, when the network has extra chances to reconstruct the image, the adjacency prediction tends to be conservative. For example, to reconstruct a triplet in an image, if there are two extra triplets available to the network, the network will tend to generate three triplets with the confidence of connectivity being 1\/3. This results in a reconstructed image that aligns with the input image due to the triplets overlapping, but none of these triplets is correctly classified as connected. 
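The confidence-dilution effect just described can be illustrated with a small numerical toy example of our own (the canvas, edge template, and 0.5 decision threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

# One true edge can be spread over three overlapping candidate edges with
# connectivity 1/3 each: the summed "drawn" intensity matches a single
# confident edge, so the reconstruction loss is satisfied, yet no single
# edge survives a 0.5 connectivity threshold and the triplet is missed.
edge_template = np.zeros((8, 8))
edge_template[4, 2:6] = 1.0                 # template of one edge on the canvas

confident = 1.0 * edge_template             # one edge, connectivity 1
weights = (1 / 3, 1 / 3, 1 / 3)             # three redundant candidates
diluted = sum(w * edge_template for w in weights)

assert np.allclose(confident, diluted)      # reconstructions look identical...
assert not any(w >= 0.5 for w in weights)   # ...but no edge counts as connected
```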
\n\nHowever, note that when the redundancy is limited, e.g., 4 and 5, the performance can remain at an acceptable level. Thus, in the realistic case where the graph size is unknown and the dataset has no graph labels, as long as the maximum number of nodes is not extremely large, e.g., 100\\% extra nodes or more, one can still expect decent graph quality.\n\n\\begin{table}\n\t\\caption{Quantitative results of our approach with different defined maximum numbers of nodes}\n\t\\label{tab_num_of_nodes}\n\t\\centering\n\t\\begin{tabular}{clll}\n\t\t\\toprule\n\t\tNumber of nodes & Precision & Recall & F1-score \\\\\n\t\t\\midrule\t\n\t\t4 & \\textbf{57.7}$\\pm$2.7 & \\textbf{84.1}$\\pm$12.6 & \\textbf{67.9}$\\pm$5.4\\\\\n\t\t5 & 46.1$\\pm$3.5 & 68.9$\\pm$14.9 & 54.6$\\pm$6.9 \\\\\n\t\t6 & 43.2$\\pm$6.7 & 62.0$\\pm$12.9 & 49.9$\\pm$5.9 \\\\\n\t\t8 & 31.4$\\pm$2.9 & 62.3$\\pm$5.5 & 41.7$\\pm$2.8 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\paragraph{Loss ablations:}\n\\label{sect_result_loss_ablations}\n\n\\begin{table}\n\t\\caption{The results of our approach trained with different losses.}\n\t\\label{tab_loss_ablation}\n\t\\centering\n\t\\begin{tabular}{ccc|l}\n\t\t\\toprule\n\t\t& \\multicolumn{2}{c|}{$\\mathcal{L}_{\\text{main}}$ on } & \\\\\n\t\t$\\mathcal{L}_{\\text{aux}}$ & Refined image & Coarse image & F1-score \\\\\n\t\t\\midrule\n\t\tYes & MS-SSIM & - &\\textbf{67.9}$\\pm$5.4 \\\\\n\t\tNo & MS-SSIM & - &63.2$\\pm$8.2 \\\\\n\t\tYes & SSIM & - & 50.3$\\pm$17.7 \\\\\n\t\tYes & - & MS-SSIM & 51.7$\\pm$7.1 \\\\\n\t\tYes & - & SSIM & 12.1$\\pm$9.7 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\nWe also perform an ablation study of different loss settings to verify the design choices of the training loss. Five loss settings are tested, with their performances listed in Table~\\ref{tab_loss_ablation}. 
For the sake of simplicity, we only show the F1-score in this experiment, since precision and recall are highly correlated with it.\n\nOur default setting (first row in Table~\\ref{tab_loss_ablation}) exhibits the best performance, i.e., 67.9 F1-score, and significantly outperforms the alternatives. If the auxiliary loss is disabled, one can observe a performance drop of 4.7 F1-score and an increase of the standard deviation by 2.8. This validates the contribution of the auxiliary loss: it stabilizes the training procedure and improves the node attention quality without requiring human annotations.\n\nWhen replacing the multi-scale structural similarity index measure (MS-SSIM) with the regular SSIM loss (third row), a performance drop with a margin larger than 17 F1-score can be observed. This is mainly because the structural similarity at multiple scales can better measure the graph difference represented by the images while ignoring pixel-level image differences at higher resolutions. Instead of applying the losses after the refinement sub-network, we also apply them directly on the coarse reconstructed image provided by the differentiable drawing module (fourth and fifth rows). The results are 51.7 and 12.1 F1-score when using MS-SSIM and SSIM, respectively. The performance degradation is mainly due to the domain gap between the training images and the images generated online by the differentiable drawing module. This domain gap can also be observed in Figure \\ref{fig_qualitative}. In this case, the supervision is not passed through the refinement network, and thus the domain transformation is not performed. Since the (MS-)SSIM measures pixel-level similarity between two images, the domain gap results in additional noise during the loss computation and thus hinders the image reconstruction task, which should be domain-invariant and focus only on the graph structure itself. 
This experiment also shows the importance of the refinement network and verifies that it can perform the domain adaptation task via self-supervised training.\n\n\\section{Conclusion}\n\nIn this work, we propose a novel neural network that can learn to estimate graph information, i.e., node and adjacency matrices, from image content without the need for manually annotated ground truth. At its core, the node and adjacency matrices are self-learned by properly designing the network architecture of the encoder, aligning it with a decoder based on differentiable image drawing techniques, and training this end-to-end differentiable system on the basis of an image reconstruction loss. In terms of the commonly used triplet matching metric, our approach achieves performance comparable to the fully-supervised baseline. Although the current unsupervised approach is limited to line drawings of simple shapes and has certain limitations related to task complexity and training stability, we believe this approach is an important stepping stone towards self-supervised learning of the image-to-graph translation task for more complex imagery.\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nPhysics and applications of whispering gallery microresonators have been attracting a lot of interest over the past decade.\nIn particular, frequency comb generation in ring microresonators \nhas revealed a plethora of complex nonlinear effects that include \nthe formation of the so-called {\\em soliton combs}, see, e.g., \\cite{soliton1}. Microresonator combs and solitons are gradually finding their applications in high precision spectroscopy \\cite{spectr} and information processing \\cite{koos}. 
The soliton combs in microring resonators are dissipative soliton pulses relying primarily on the balance between the Kerr nonlinearity and dispersion, and they can be modeled using the Lugiato-Lefever (LL) equation \\cite{soliton1}. The LL model was originally introduced to explain spatial patterns and localized states in wide-aperture nonlinear resonators \\cite{lug}. Apart from the microring resonators, the bottle \\cite{bot,pol1,snap,prl,dvo}, microbubble \\cite{Farnesi2015,Lu2016}, spheroidal \\cite{mat} and microsphere \\cite{coen,suh} resonators have also been developed for sensing and frequency conversion applications. Four-wave mixing, nonlinear switching, Brillouin and Raman effects have been observed in the spheroidal \\cite{mat}, bottle \\cite{pol1,yang,Asano2016}, microbubble \\cite{Farnesi2015,Lu2016} and microsphere \\cite{coen,suh} resonators. \n\nIn this work, we propose a generalization of the LL model to study nonlinear effects and frequency comb generation in bottle microresonators. These resonators are made of silica or semiconductor strands\/fibers (radius $r$) and operate close to the cut-off frequency of a high-order whispering-gallery-type mode. The side surface of the fiber is curved (bubbled) with the radius of curvature $R$, $R\\gg r$, see Fig. 1(c). This curvature shapes the 'bottle' and provides the axial confinement that transforms the slowly propagating waveguide mode into a discrete set of resonator modes \\cite{bot,snap,prl,dvo}. Dispersion near the waveguide cut-off is typically anomalous \\cite{yulin,josab1,josab}, which, together with the positive Kerr nonlinearity, creates the conditions for phase-matched cascaded four-wave mixing and soliton formation. Light can be coupled into the resonator using the evanescent tails of a tapered fiber mode aligned perpendicular to the bottle axis. 
\n\nAn advantage of the bottle resonators is that they can be integral parts of surface nano-scale photonic circuits \\cite{snap}. If used as Kerr comb generators, the bottle resonators can cover a very wide range of comb repetition rates, equal to the cavity free spectral range (FSR), from THz to MHz \\cite{dvo}. This flexibility of the FSR design in microresonators was first reported for microspheroidal resonators \\cite{mat}, which have a geometry, and hence modal properties, similar to the bottle ones. Importantly, bottle resonators provide the GHz to MHz FSR values simultaneously with the on-chip integration option \\cite{dvo}, while ring resonators with comparable FSRs become too large for on-chip integration. Studying nonlinear effects in low-FSR devices is particularly interesting since the nonlinear shifts of the resonances start to compete with and can exceed the FSR. Studies of these regimes have started to emerge only recently in the context of fiber loop resonators \\cite{wab}, where hundreds of meters of fiber are required to achieve few-MHz FSRs.\nBelow, we introduce the bottle resonator LL model and demonstrate that the modes of the nonlinear bottle resonators can form multiple coexisting nonlinear resonances and that their instabilities can lead to the generation of low repetition rate frequency combs.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.32\\textwidth]{f1a}\n\\includegraphics[width=0.32\\textwidth]{f1b}\n\\includegraphics[width=0.32\\textwidth]{f1c}\n\\caption{(a) Density plot showing the spatial profiles vs the eigen-frequency $\\epsilon$ of the $50$ modes in the linear bottle resonator.\n(b) Spatial profiles, $|\\tilde\\psi(z)|$, of the upper branches of the nonlinear modes numerically computed using Eq. (\\ref{eq3}) with $\\partial_t=0$: $z_p=0$, $P_0=1$, $\\kappa=10^{-3}$; $\\delta=0$ ($n=0,2$), $\\delta=-10$ ($n=10,12$). 
(c) Geometry of a bottle resonator.}\n\\label{f1}\n\\end{figure*}\n\\section{Lugiato-Lefever equation for bottle resonators}\nThe modal structure of bottle resonators has been studied theoretically and experimentally \\cite{snap,prl,dvo}. It was demonstrated that the linear Schr\\\"odinger equation with a parabolic potential and a positive effective mass, corresponding to the anomalous group velocity dispersion, describes the axial modal family belonging to one whispering gallery (azimuthal) resonance \\cite{snap,prl}. The same equation, but including nonlinearity, has been derived from the Maxwell equations in the context of the slow-light mode in optical fibers with longitudinal index variations \\cite{josab1,josab}, which is exactly the technology used to fabricate bottle resonators \\cite{bot,pol1,snap,prl}. For our purposes, we also need to account for the pump and loss terms, which are well established in the optical resonator context, from the classical LL paper \\cite{lug} all the way to the recent publications on the microresonator frequency combs, see, e.g., \\cite{soliton1,spectr}. Thus the generalized LL equation for bottle microresonators is\n\\begin{equation}\ni\\partial_T\\Psi = - \\frac{1}{2}d \\partial_Z^2\\Psi \n + \\left(\\omega_0+\\frac{f^2}{2d} U(Z)\\right)\\Psi-i\\kappa_0\\Psi-f|\\Psi|^2\\Psi-fP(Z)e^{-i\\omega_p T}.\\label{eq1}\n\\end{equation}\nHere $T$ is the physical time and $Z$ is the coordinate along the resonator axis. It is appropriate to term Eq. (\\ref{eq1}) the {\\em generalized LL} equation, because it differs from the classical one \\cite{soliton1,spectr,lug} by the potential term $U(Z)$ and the spatially varying pump $P(Z)=P_0e^{-(Z-Z_p)^2\/w^2}$. 
Here $P_0$ is the dimensionless pump amplitude, $Z_p$ is the position of the input nano-fiber running across the bottle and $w$ is the characteristic width of the evanescently coupled light \\cite{snap,prl}.\n$\\omega_0$ is the reference frequency chosen close to the eigenfrequency of the ground state of the potential $U(Z)$, see below, and $\\omega_p$ is the pump frequency. $\\kappa_0$ is the photon decay rate. $d>0$ is the group velocity dispersion coefficient \\cite{yulin,josab1,josab}. $f$ is the FSR parameter, which enters Eq. (\\ref{eq1}) through an appropriate scaling of the dimensionless electric field envelope $\\Psi$ and of the pump and potential terms. \n\nThe linear spectrum of the pump- and loss-free resonator is found by assuming\n$\\Psi(Z,T)=\\phi_n(Z)e^{-i\\omega_0T+i\\varepsilon_nT}$, which results in the eigenvalue problem\n$-\\varepsilon_n\\phi_n=-\\frac{d}{2}\\partial_Z^2\\phi_n+\\frac{f^2}{2d}U(Z)\\phi_n$.\nAssuming $U(Z)=Z^2$, we recover the spectrum of the quantum harmonic oscillator: $\\varepsilon_n=-f (n+ 1\/2)$. Thus $f$ is indeed the FSR, $\\omega_0+f\/2$ is the physical frequency of the fundamental cavity mode and $q=\\sqrt{d\/f}$ is the width of this mode. One can assume as an estimate $f=60$~MHz, which physically corresponds, e.g., to a silica cylinder with radius $r=300\\mu$m and bottle curvature $R=1$km: $f\\simeq c\/(3\\pi)\/\\sqrt{rR}$, where $c=3\\cdot 10^8$m\/s~\\cite{dvo}. $R\\simeq 1$km has been demonstrated in \\cite{prl}. The number $N$ of modes in a bottle resonator of length $L$ is estimated from $f^2 L^2\/(2d)=f N$. Thus, $d=0.05~$Hz$\\,$m$^2$ and $L=0.3$mm give the fundamental mode width $q\\simeq 30\\mu$m and $N\\simeq 50$. The meaning of all geometrical parameters introduced above is clarified in Fig. 1(c). In what follows we restrict our simulations to the $N=50$ case; however, one can increase $N$ to thousands by increasing $L$ to just a few mm, since $N\\sim L^2$. 
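The harmonic-oscillator spectrum quoted above is easy to check numerically. The following sketch (our own, with an illustrative grid, not part of the paper) diagonalizes the finite-difference Hamiltonian of the dimensionless eigenvalue problem, $-2\epsilon\,\phi = -\partial_z^2\phi + z^2\phi$, and recovers $\epsilon_n \simeq -(n+1/2)$:

```python
import numpy as np

# Finite-difference check of the bottle-resonator mode spectrum: for
# U(z) = z^2 the eigenvalues of H = -(1/2) d^2/dz^2 + z^2/2 are n + 1/2,
# so the eigen-frequencies are eps_n = -(n + 1/2).
z = np.linspace(-8.0, 8.0, 1500)       # grid wide enough for low-n modes
h = z[1] - z[0]
# second-order central differences for the kinetic term
main = 1.0 / h**2 + z**2 / 2.0         # diagonal of H
off = -0.5 / h**2 * np.ones(len(z) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.sort(np.linalg.eigvalsh(H))     # lowest eigenvalues ~ n + 1/2
eps = -E[:4]                           # eps_n = -(n + 1/2)
assert np.allclose(eps, [-(n + 0.5) for n in range(4)], atol=1e-3)
```

The same discretization, restricted to a finite box, also reproduces the estimate $N \simeq l^2/2$ for the number of confined modes used in the text.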
$f|\\Psi|^2$ represents the nonlinear shift of the resonance frequencies. We are primarily interested here in the case when nonlinear effects are relatively large, so that $f|\\Psi|^2$ is comparable to the FSR, $f$.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.33\\textwidth]{f2a}\n \\includegraphics[width=0.33\\textwidth]{f2b}\n \\includegraphics[width=0.33\\textwidth]{f2c}\n \\includegraphics[width=0.33\\textwidth]{f2d}\n \\caption{(a-c) Structure of the multiple nonlinear resonances for the different pump positions and the resonator losses. Plots show $I=\\int_{-\\infty}^{\\infty}|\\tilde\\psi(z)|^2dz$ vs $\\delta$. (a) $z_p=0$ (maximum of the ground state mode), $\\kappa=10^{-3}$, (b) $z_p=0.79$ (maximum of the $n=1$ mode), $\\kappa=10^{-3}$, (c) $\\kappa=10^{-1}$, $z_p=0$. The bold points show the onsets of the instabilities of the upper branches of the nonlinear resonances, which are stable below these points. The encircled numbers correspond to the modal index $n$. (d) The instability growth rates $\\mathrm{Re}\\, \\lambda$ vs $\\delta$ of the upper branch of the ground state resonance. Parameters as in (a) - solid line, and (c) - dotted line. The inset shows $\\mathrm{Re}\\, u(z)$ of the unstable perturbation.}\n \\label{f2}\n\\end{figure*}\n\nFor our numerical studies we have nondimensionalized Eq. (\\ref{eq1}) to the form\n\\begin{equation}\ni\\partial_t\\psi=-(1\/2)\\partial_z^2\\psi+\\left(\\delta+U(z)\/2\\right) \\psi-i\\kappa\\psi-|\\psi|^2\\psi-P(z),\\label{eq3}\n\\end{equation} \nwhere $\\psi=\\Psi e^{i\\omega_pT}$, the scaled time is $t=fT$, the distance is $z=Z\/q$, $\\kappa=\\kappa_0\/f$ is the normalized loss rate and $\\delta=(\\omega_0-\\omega_p)\/f$ is the normalized detuning.\n$U(z)=z^2$ for $z<l$ and $U(z)=l^2$ for $z>l$, where $l=L\/q$ is the scaled resonator length, $l^2\/2=N$ and $N=50$.\nThe dimensionless pump width $w\/q$ is $0.2$ in the numerical examples shown. The density plot in Fig. 
\\ref{f1}(a) shows $|\\psi(z)|$ for the linear modes, calculated from $-2\\epsilon~\\psi=-\\partial_z^2\\psi+U(z) \\psi$, vs their eigen-frequencies, $\\epsilon\\simeq -(n+1\/2)$.\n\\section{Multistability, instabilities and comb generation}\nFor $\\kappa=P=0$, the nonlinear modes split from the linear spectrum at the points $\\delta=-(n+1\/2)$ and their amplitudes increase with $\\delta$ through the positive Kerr nonlinearity, see the dashed lines in Figs. \\ref{f2}(a)-\\ref{f2}(c).\nAs soon as the pump and loss are introduced, the dashed lines split into pairs. If $P(z)=P(-z)$, as for $z_p=0$, then the odd modes are not excited, see Figs. \\ref{f2}(a) and \\ref{f2}(c). Otherwise, the even and odd modes show up together, see Fig. \\ref{f2}(b). In the $\\kappa=0$ limit, the nonlinear modes extend to $\\delta\\to+\\infty$, so that for any $\\delta>0$ we have $N$ co-existing solutions: {\\em multistability}. As $\\kappa$ increases, the intervals of $\\delta$ where the tilted nonlinear resonances exist become narrower and eventually shrink below the FSR, at which point the {\\em multistability} is replaced by the more usual {\\em bistability}, cf. Figs. \\ref{f2}(a,b) and \\ref{f2}(c). Data for Figs. \\ref{f2}(a)-\\ref{f2}(c) were obtained by numerically solving Eq. (\\ref{eq3}) assuming $\\psi(z,t)=\\tilde\\psi(z)$. Some examples of the nonlinear mode profiles $\\tilde\\psi$ are shown in Fig. \\ref{f1}(b).\n\nIn order to study the stability of the nonlinear modes $\\tilde\\psi$ with respect to noise, we have linearized Eq. 
(\\ref{eq3}) using the substitution $\\psi(z,t)=\\tilde\\psi(z)+u(z)e^{\\lambda t}+v^*(z)e^{\\lambda^* t}$ with $|\\tilde\\psi|\\gg |u|,|v|$ and solved the resulting eigenvalue problem,\n\\begin{eqnarray}\n&& \\lambda u=-i\\left(\\delta+U(z)\/2\\right) u+(i\/2)\\partial^2_zu-\\kappa u+2i|\\tilde\\psi|^2u+i\\tilde\\psi^2v, \\label{eq4}\\\\ \\nonumber &&\n\\lambda v=i\\left(\\delta+U(z)\/2\\right) v-(i\/2)\\partial^2_zv-\\kappa v-2i|\\tilde\\psi|^2v-i\\tilde\\psi^{*2}u,\n\\end{eqnarray}\nnumerically.\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{f3a}\n\\includegraphics[width=0.3\\textwidth]{f3b}\n\\includegraphics[width=0.3\\textwidth]{f3c}\n\\includegraphics[width=0.3\\textwidth]{f3d}\n\\caption{(a) Instability development of the ground state mode. Parameters as in Fig. \\ref{f1}(a), $\\delta=3$. Spectra of the field across the resonator at the initial (b) and the advanced (c) stages of the comb generation calculated as $S(\\varepsilon,z)=|\\int_{t_0}^{t_0+\\tau}\\psi(z,t)e^{-i\\epsilon t}dt|$: $t_0=0$ in (b),\n $t_0=1450$ in (c) and $\\tau=50$. (d) The spectra at $z=0$ and $\\tau=50$: $t_0=0$ (blue); $t_0=400$ (orange); $t_0=900$ (green); $t_0=1400$ (red). $\\epsilon$ in (b)-(d) corresponds to the physical frequency $\\omega_p-f\\varepsilon$.}\n \\label{f3}\n\\end{figure*}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{f4a}\n\\includegraphics[width=0.3\\textwidth]{f4b}\n\\includegraphics[width=0.3\\textwidth]{f4c} \n\\caption{\n(a) Instability development of the $n=8$ mode. Parameters as in Fig. \\ref{f1}(a), $\\delta=-5$. (b) Spectrum $S(\\varepsilon,z)$, see Fig. \\ref{f3}, at the advanced stage of the comb generation: $t_0=1450$, $\\tau=50$. (c) The spectra at $z=0$. The colors and the time intervals same as in Fig. 
\\ref{f3}(d).}\n\\label{f4}\n\\end{figure*}\nIn the LL equation without the potential term and with the homogeneous pump, the upper branch of the homogeneous solution is always unstable in the case of focusing nonlinearity \\cite{lug}. The harmonic potential spatially confines the modes, and therefore the instability range shrinks, which is particularly pronounced for the modes with low to moderate $n$'s, see Fig. \\ref{f2}. In particular, we have found the coexistence of the stable upper branches of the $n=0$ and $n=2$ resonances, Fig. \\ref{f2}(a), and of the \n$n=0,1$ and $3$ ones, see Fig. \\ref{f2}(b). Of course, the lowest amplitude solution is also stable at the same time. Thus true multistability is realized in our system, which is a relatively rare situation in nonlinear optical devices. Increasing the losses leads to broadening of the resonances and either reduction or complete suppression of all the instabilities, Fig. \\ref{f2}(c).\n\nThe inset in Fig. \\ref{f2}(d) shows the eigensolution of Eqs. (\\ref{eq4}) driving the instability of the ground state $n=0$, and the figure itself shows the corresponding growth rate as a function of $\\delta$. Thus the ground state becomes unstable above some critical detuning with respect to perturbations shaped like the $n=1$ mode. The noise-driven excitation of this instability leads to oscillations of the localized wavepacket in the harmonic potential that are periodic in time and space, see Fig. \\ref{f3}(a). These oscillations lose their regularity with time, while more of the higher-order modes are excited, so that the spectral content of the field broadens and the frequency comb is generated. Figs. \\ref{f3}(b) and \\ref{f3}(c) show the spectra of the intracavity field calculated at every point inside the resonator at the initial stage of the evolution and when the frequency comb has already fully developed. Fig. 
\\ref{f3}(d) shows how the spectrum at $z=0$ changes with time and acquires the comb structure. Since, in practice, the signal is collected at a single point, this is the type of spectrum expected to be seen in experiments. By looking at the space-time evolution of the field in Fig. \\ref{f3}(a), one can say that this comb corresponds to the quasi-soliton pulse oscillating in the harmonic potential and immersed in the sea of the weakly nonlinear modes represented by the discrete part of the spectrum for $\\epsilon<-1\/2$.\nThe dynamics generally gets more complex as the pump frequency is tuned into a resonance with the higher-order modes. Fig. \\ref{f4} shows the space-time and spectral evolution observed when we initialized Eq. (\\ref{eq3}) with the unstable $n=8$ mode. In this case, several localized wavepackets emerge and interact in the potential with one another and with the extended nonlinear modes. This produces the combs, see Figs. \n\\ref{f4}(b) and \\ref{f4}(c), which are both broader and more intense than the ones in Fig. \\ref{f3}. \n\\section{Summary}\nWe have introduced a generalization of the LL model applicable to bottle resonators and demonstrated multistability and generation of low-repetition-rate combs in these devices. 
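The linear stability analysis of Eqs. (\\ref{eq4}) can be sketched numerically in a few lines. The snippet below is a minimal finite-difference illustration, not the code used for the figures: the harmonic potential $U(z)=z^2$, the Gaussian stand-in for the stationary profile $\\tilde\\psi$, and the values of $\\delta$ and $\\kappa$ are placeholder assumptions, so the resulting growth rate is only indicative.

```python
import numpy as np

# Minimal sketch (not the paper's code) of the stability eigenproblem, Eqs. (4):
#   lambda u = -i(delta + U/2) u + (i/2) u'' - kappa u + 2i|psi|^2 u + i psi^2 v
#   lambda v = +i(delta + U/2) v - (i/2) v'' - kappa v - 2i|psi|^2 v - i psi*^2 u
# Assumed inputs: U(z) = z^2, a Gaussian stand-in for the stationary profile,
# and illustrative values of delta and kappa.
N, L = 300, 15.0
z = np.linspace(-L, L, N)
h = z[1] - z[0]
delta, kappa = 3.0, 0.3
U = z**2
psi0 = np.exp(-z**2 / 2)              # placeholder, not a true stationary mode

# Dirichlet finite-difference second derivative
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
I = np.eye(N)

A = -1j * np.diag(delta + U / 2) + 0.5j * D2 - kappa * I \
    + 2j * np.diag(np.abs(psi0)**2)
B = 1j * np.diag(psi0**2)
# For real psi0 the lower blocks are the conjugates of the upper ones.
M = np.block([[A, B], [np.conj(B), np.conj(A)]])

eigs = np.linalg.eigvals(M)
growth = eigs.real.max()              # > 0 signals instability of psi0
print("largest Re(lambda):", growth)
```

A positive largest real part flags the linear instability whose development is followed in Fig. \\ref{f3}(a); for quantitative rates one would insert the actual stationary profile $\\tilde\\psi$ in place of the Gaussian.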
\n\\section*{Funding}\nThe Leverhulme Trust (RPG-2015-456); H2020 (691011, Soliring); ITMO University (Grant 074-U01); RFBR (17-02-00081); RSCF (17-12-01413).\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBosonic atoms in optical lattices,\n described by the Bose-Hubbard model~\\cite{jaksch:olatt,fisher:bhubb},\n display a non-trivial quantum phase transition between a superfluid and Mott insulator.\n The latter is an incompressible state with an integer number of atoms per site.\n In a\n trap the phase diagram is revealed by the spatial structure of the gas: one has concentric superfluid and insulating shells.\n This structure has been elegantly explored by\n radio frequency (RF) spectroscopy~\\cite{campbell:ketterle-clock-shift}, a technique which has also given insight into strongly interacting Fermi gases across the BEC-BCS crossover~\\cite{bloch:many-body-cold-atoms-review}.\nHere we use a Random Phase Approximation (RPA) that treats fluctuations around the strong coupling Gutzwiller mean field theory to explore the radio-frequency spectrum of lattice bosons.\n\nWe find two key results: (1) Our previous sum-rule based analysis \\cite{hazzard:rf-spectra-sum-rule} of experiments at MIT~\\cite{campbell:ketterle-clock-shift} stands up to more rigorous analysis: in the limit of small spectral shifts, the RPA calculation reduces to that simpler theory. (2) In a gas with more disparate initial and final state interactions (such as Cesium), the spectrum becomes more complex, with a bimodal spectrum appearing even in a homogeneous gas. The bimodality reveals key features of the many-body state. For example, in the limit considered by Sun, Lannert, and Vishveshwara~\\cite{sun:rf-spectra-condensate-probe}, the spectral features are related to the nearest-neighbor phase coherence. In the Gutzwiller approximation, the phase coherence directly maps onto the condensate density. 
In this paper we provide a physical picture of this result and explain how this bimodality can be observed in a spatially resolved experiment.\n\n\\subsection{RF Spectroscopy}\nIn RF spectroscopy, a radio wave is used to flip the hyperfine spin of an atom from $\\ket{a}$ to $\\ket{b}$. The rate of excitation reveals details about the many-body state because the $\\ket{a}$ and $\\ket{b}$ atoms have slightly different interactions. Generically the interaction Hamiltonian is $H_{\\rm int}=\\sum_{j} U_{aa} n_a (n_a-1)\/2+U_{bb} n_b (n_b-1)\/2+U_{ab}n_a n_b$, with $U_{aa}\\neq U_{ab}\\neq U_{bb}$, where $n_{\\sigma}$ is the number of $\\sigma$-state atoms on site $j$. In the simplest mean-field picture, the energy needed to flip an atom on site $j$ from state $a$ to state $b$ is shifted by an energy $\\delta\\omega= U_{bb} n_b+(U_{ab}-U_{aa}) n_a$. Applying this picture to an inhomogeneous gas suggests that the absorption spectrum reveals a histogram of the atomic density. Such a density probe is quite valuable: in addition to the aforementioned examples, it was the primary means of identifying Bose-Einstein condensation in atomic hydrogen \\cite{fried:h}.\n\n\n\\begin{figure}[hbtp]\n\\setlength{\\unitlength}{1.0in}\n\\begin{picture}(3.10,2.8)\n\\put(0.,-0.05){\n\\includegraphics[width=1.5in,angle=0]{TwoCorrelnCartoonMottExc}\n}\n\\put(1.55,-0.05){\n\\includegraphics[width=1.5in,angle=0]{TwoCorrelnCartoonSFExc}\n}\n\\put(0.72,1.2){\n\\includegraphics[width=1.6in,angle=0]{TwoCorrelnCartoonNoExc}\n}\n\\put(1.48,1.29){(a)}\n\\put(0.71,0.0){(b)}\n\\put(2.26,0.){(c)}\n\\end{picture}\n\\caption{(Color online) Illustration of two types of RF-active excitations of the lattice superfluid near the Mott transition. 
Open (blue) circles are atoms in the $\\ket{a}$ state, filled (red) circles are atoms in the $\\ket{b}$ state, and the arrows indicate a delocalized particle while other particles are localized.\n(a) Illustrates the initial superfluid state, consisting of a dilute gas of atoms moving in a Mott background. The final states in (b) and (c) show the excitation of a core or delocalized atom.\n}\n\\label{fig:two-correlations}\n\\end{figure}\n\nRecently Sun, Lannert, and Vishveshwara~\\cite{sun:rf-spectra-condensate-probe} found a bimodal spectrum in a special limit of this problem, as did Ohashi, Kitaura, and Matsumoto~\\cite{ohashi:rf-spectra-dual-character} in a separate limit, calling into question this simple picture. We give a simple physical interpretation of the bimodality.\nAs illustrated in Fig.~\\ref{fig:two-correlations}, the superfluid state near the Mott insulator can be caricatured as a dilute gas of atoms\/holes moving in a Mott background. An RF photon can either flip the spin of one of the core atoms, or flip the spin of one of the mobile atoms. The energies of these two excitations will be very different, implying that the RF spectrum should be bimodal. Through our RPA calculation, we verify this feature, calculating the frequencies of the two peaks and their spectral weights. Interestingly, this calculation reveals that the two excitations in our cartoon model are strongly hybridized.\n\nWe find that for parameters relevant to experiments on $^{87}$Rb, the degree of bimodality is vanishingly small and our previous sum rule arguments \\cite{hazzard:rf-spectra-sum-rule} accurately describe such experiments. On the other hand, there are opportunities to study other atoms (for example, Na, Cs, Yb) for which the bimodality may be more pronounced. 
Moreover, if the interactions or tunneling rates can be tuned via a spin-dependent lattice or a Feshbach resonance then this spectral feature will appear in a dramatic fashion.\n\nThis bimodal spectrum, with one peak produced by the ``Mott\" component and another by the ``superfluid\" component, is reminiscent of the spectrum of a finite temperature Bose gas in the absence of a lattice. As described by Oktel and Levitov~\\cite{oktel:cs-ref}, in that situation one sees one peak from the condensate, and one from the incoherent thermal atoms. We would expect that at finite temperature our ``Mott\" peak continuously evolves into their ``thermal\" peak.\n\n\n\n\\section{Bose-Hubbard Model}\n\\subsection{Model and RF spectra}\nIn the rf spectra experiments we consider, initially all atoms are in the $a$-internal state and the rf pulse drives them to the $b$-state. Consequently,\nwe consider two-component bosons trapped in the periodic potential formed by interfering laser beams, described by a Bose-Hubbard model~\\cite{jaksch:olatt},\n\\begin{eqnarray}\nH &=& -\\!\\!\\!\\!\\!\\!\\sum_{\\begin{array}{c}{\\scriptstyle \\langle i,j\\rangle}\\\\{\\scriptstyle \\sigma=\\{a,b\\}}\\end{array}} t_\\sigma c^\\dagger_{i,\\sigma} c_{j,\\sigma}\n+ \\sum_{\\sigma,j} (V_{j,\\sigma}-\\mu_\\sigma) c^\\dagger_{j,\\sigma} c_{j,\\sigma}\n\\nonumber \\\\\n&&{}+\\sum_{j} \\left ( \\sum_{\\alpha,\\beta}\\frac{U_{\\alpha\\beta}}{2} c^\\dagger_{j,\\alpha} c^\\dagger_{j,\\beta} c_{j,\\beta} c_{j,\\alpha}\\right ) ,\\label{eq:bh-ham-defn}\n\\end{eqnarray}\nwhere $c_\\sigma$ and $c^\\dagger_\\sigma$ are the annihilation and creation operators for states in the internal state $\\sigma$,\n$\\mu_\\sigma$ is the chemical potential, $V_{j,\\sigma}$ is the external potential with $\\delta$, the vacuum $a$-$b$ splitting, absorbed into it, $U_{\\alpha\\beta}$ is the $\\alpha$ state-$\\beta$ state on-site interaction strength, and $t_\\sigma$ is the hopping matrix element.\nThe interactions are tunable via 
Feshbach resonances and spin-dependent lattices are also available~\\cite{deutsch}. For this latter setup, the hopping matrix elements may be tuned by the intensity of the lattices, and introducing small displacements of the lattice will reduce the overlap between the Wannier states of $a$ and $b$ atoms, and therefore may also be an efficient way to control the relative size of $U_{aa}$ and $U_{ab}$.\nThe interaction $U_{bb}$ will be irrelevant: we will only consider the case where there is a vanishingly small concentration of $b$-state particles.\nIn calculating the response to RF photons we will take $V_j=\\text{constant}$. Trap effects will later be included through a local density approximation~\\cite{hazzard:rf-spectra-sum-rule} which is valid for slowly varying traps~\\cite{pollet:mi,bergkvist:mi,wessel:mi,batrouni:mi,demarco:stability,dupuis:mi-sf-review,sengupta:bhubb-rpa,konabe:out-coupling-single-ptcl-spec,\nmenotti:trivedi-single-ptcl-spectral-weight,ohashi:rf-spectra-dual-character}.\n\nExperimentally the RF spectrum is measured by counting the number of atoms transferred from state $a$ to $b$ when the system is illuminated by a RF pulse. These dynamics are driven by a perturbation\n\\begin{eqnarray}\nH_{\\text{rf}} &=& \\sum_j \\gamma(t) c^\\dagger_{j,b} c_{j,a} + \\text{H.c.}.\\label{eq:rf-pert}\n\\end{eqnarray}\nwhere $\\gamma(t)$ is proportional to the time-dependent amplitude of the applied RF field multiplied by the dipole matrix element between states $a$ and $b$: typically $\\gamma$ is a sinusoidal pulse with frequency $\\omega$ with a slowly varying envelope ensuring a small bandwidth. 
Due to the small wave-number of RF photons, recoil can be neglected.\n\nFor a purely sinusoidal drive, the number of atoms transferred per unit time for short times is\n\\begin{eqnarray}\n\\Gamma(\\omega) &=& \\frac{2\\pi}{\\hbar} \\sum_{i,f}p_i \\delta(\\omega-(E_f-E_i)) \\left|\\opsandwich{f}{H_{\\text{rf}}}{i}\\right|^2\\label{eq:rf-spectra-fgr-1}\n\\end{eqnarray}\nwhere the sum is over the initial states (occupied with probability $p_i=e^{-\\beta E_i}$) and the final states, all of which are eigenstates of $H$ with energies $E_i$ and $E_f$. We will restrict ourselves to $T=0$ and the physically relevant case where the initial states contain no $b$-atoms.\n\n\\subsection{Sum Rules}\nTaking moments of Eq.~\\eqref{eq:rf-spectra-fgr-1}~\\cite{oktel:cs-ref,oktel:cs-ref2,oktel:rf-spectra},\nthe mean absorbed photon frequency is\n\\begin{eqnarray}\n\\expec{\\omega} &=&\n\\frac{\\int \\! d\\omega \\,\\omega \\Gamma(\\omega)}{\\int \\! d\\omega \\, \\Gamma(\\omega)}=\n\\frac{\\expec{[H_{\\text{rf}},H]H_{\\text{rf}}}}{\\expec{H_{\\text{rf}}^2}} \\label{eq:average-shift-given-commutators}\\\\\n&=& \\delta -z(t_b-t_a) f_c\n+ \\left ( U_{ab}-U_{aa}\\right ) g_2 \\expec{n}.\\label{eq:sum-rule-general}\n\\end{eqnarray}\nWe defined $\\delta$ to be the vacuum $a$-$b$ splitting, the local phase coherence factor is\n\\begin{eqnarray}\nf_c &=&\\frac{\\expec{c^\\dagger_{i,a} c_{j,a} }}{\\expec{n}},\\label{eq:cond-dens}\n\\end{eqnarray}\nwith $i$ and $j$ nearest neighbors,\nthe site filling is $n\\equiv c^\\dagger_a c_a$,\nand the lattice coordination is $z$. The zero-distance density-density correlation function is\n\\begin{eqnarray}\ng_2 &=& \\frac{\\expec{c_a^\\dagger c_a^\\dagger c_a c_a}}{\\expec{n}^2}.\n\\end{eqnarray}\n The second term in Eq.~\\eqref{eq:sum-rule-general} may be interpreted as the mean shift in the kinetic energy when the spin of an atom is flipped. 
In particular, within a strong-coupling mean-field picture $\\langle c^\\dagger_{i,a} c_{j,a} \\rangle=\\langle c^\\dagger_{i,a}\\rangle\\langle c_{j,a} \\rangle$ is the condensate density, which can therefore be measured with this technique. The second term in Eq.~\\eqref{eq:sum-rule-general} is the shift in the interaction energy.\n\nOur subsequent approximations will satisfy this sum rule.\nThis is non-trivial: for example, even in simultaneous limits of $t_b=0$, $U_{ab}=U_{aa}$, and $t_a\\rightarrow 0$ considered in Ref.~\\cite{sun:rf-spectra-condensate-probe}, their results violate this sum rule by a factor of $\\sim 3$.\n\nSince it plays no role in the remainder of the discussion, we will set to zero the vacuum level splitting: $\\delta=0$. This amounts to working in a ``rotating frame\".\n\n\n\\section{Random phase approximation\\label{sec:rpa}}\n\n\\subsection{General setup and solution}\n\n\nTo calculate the RF spectrum we employ a time-dependent strong-coupling mean-field theory which\n includes $k=0$ fluctuations around the static strong-coupling Gutzwiller mean field theory~\\cite{fisher:bhubb}.\nThis mean field theory is exact in the deep Mott limit and in the deep superfluid when $t_a=t_b$, and it\nyields fairly accurate ground states in the intermediate regime~\\cite{pollet:mi,bergkvist:mi,wessel:mi,batrouni:mi,demarco:stability}.\nRefs.~\\cite{menotti:trivedi-single-ptcl-spectral-weight,ohashi:rf-spectra-dual-character} previously used analogous RPA's to calculate the Bose-Hubbard model's quasiparticle spectra and RF spectra with $U_{ab}=0$, which reduces to the $k=0$ single particle 
spectra.\n\n\n\\begin{figure}[hbtp]\n\\setlength{\\unitlength}{1.0in}\n\\subfigure[]{\n\\hspace{-0.1in}\\includegraphics[width=1.625in,angle=0]{HomSystemLargeUSametMu198}\n}\n\\subfigure[]{\n\\hspace{-0.05in}\\includegraphics[width=1.625in,angle=0]{HomSystemLargeUSametMu202}\n}\n\\subfigure[]{\n\\hspace{-0.1in}\\includegraphics[width=1.675in,angle=0]{HomSystemRbSametMu2000}\n}\n\\subfigure[]{\n\\hspace{-0.15in}\n\\includegraphics[width=1.675in,angle=0]{HomSystemRbSametMu2004}\n}\n\\caption{(Color online) Homogeneous system's spectral density as a function of $\\omega\/U_{aa}$ and $t_a\/U_{aa}$ (whiter indicates larger spectral density) compared with sum rule prediction (red, single line). Delta functions are broadened to Lorentzians for visualization purposes.\n(a,b) We take $U_{ba}=1.2 U_{aa}$ and $t_b=t_a$, with (a) $\\mu = 1.98$ and (b) $\\mu=2.02$. (c,d) We take parameters corresponding to typical $^{87}$Rb experiments: $U_{ba}=1.025 U_{aa}$ and $t_b=t_a$, and take (c) $\\mu = 1.999$ and (d) $\\mu=2.004$. In both cases, a double peak structure is visible, but the region of the phase diagram in which it is important is much smaller for $^{87}$Rb parameters than for Fig~(a,b)'s parameters.\n\\label{fig:homog-rf-spec}\n}\n\\end{figure}\n\n\nWe use the homogeneous time-dependent Gutzwiller variational ansatz\n\\begin{eqnarray}\n\\hspace{-0.1in}\\ket{\\psi(t)} \\!&=& \\!\\bigotimes_i\\! \\left [ \\sum_n \\left ( f_n(t) \\ket{n,0}_i + g_n(t) \\ket{n-1,1}_i\\right ) \\right ] \\label{eq:gutz-variational}\n\\end{eqnarray}\nwhere $\\ket{n_a,n_b}_i$ is the state at site $i$ with $n_a$ particles in the $a$ state and $n_b$ in the $b$ state. 
The equations of motion for $f_n(t)$ and $g_n(t)$ are derived by minimizing the action $S=\\int dt {\\cal L}$, with Lagrangian\n\\begin{eqnarray}\n{\\cal L}= \\langle \\psi |i\\partial_t |\\psi\\rangle - \\langle\\psi| H|\\psi \\rangle-\\lambda \\langle\\psi|\\psi\\rangle,\n\\end{eqnarray}\nwhere $\\lambda$ is a Lagrange multiplier which enforces conservation of probability. At time $t=-\\infty$, where $\\gamma(t)=0,$ we take $g_n=0$, and choose $f_n$ to minimize $ \\langle\\psi| H|\\psi \\rangle$,\n\\begin{eqnarray}\n\\lambda f_n &=& -t_a z \\left ( \\sqrt{n} \\alpha^* f_{n-1} + \\sqrt{n+1}\\alpha f_{n+1}\\right ) \\nonumber \\\\\n &&{}+\\left ( \\frac{U_{aa}}{2} n(n-1)-\\mu n\\right ) f_n,\\label{eq:gutz-fns}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\alpha &=& \\sum_n \\sqrt{n} f_n^* f_{n-1}.\\label{eq:mean-field}\n\\end{eqnarray}\n Solving the subsequent dynamics to quadratic order in $\\gamma$, one finds\n\\begin{eqnarray}\n\\Gamma(t) &=& N_s \\int \\! dt' \\,\\gamma(t)\\gamma(t')\\chi^{(R)}(t-t'),\n\\end{eqnarray}\nwhere the retarded response function is\n\\begin{eqnarray}\n\\chi^{(R)}(t) &=& \\frac{1}{i} \\sum_n \\sqrt{n} \\left ( G_n^*(t) f_n - G_n(t) f_n^* \\right ) .\\label{eq:rf-spectra-in-terms-of-Gns}\n\\end{eqnarray}\nThe Green's functions $G_n(t)$ satisfy the equations of motion for the $g_n$'s in the absence of an RF field, but in the presence of a delta-function source, with the boundary condition $G_n(t)=0$ for $t<0$.\nThe relevant equations are simplest in Fourier space, where\n$G_n(\\omega) = \\int \\! dt\\, e^{i\\omega t} G_n(t)$ obeys\n\\begin{eqnarray}\n\\sqrt{n} f_n &=&-\\omega G_n+\\sum_m \\Lambda_{nm} G_m\\label{eq:Gn-eqn}\n\\end{eqnarray}\nwhere $\\Lambda=\\bar \\Lambda+\\Theta$ is a Hermitian matrix. 
The tridiagonal part $\\bar\\Lambda$ is\n\\begin{eqnarray}\n\\bar\\Lambda_{n,n+1}&=&-zt_a\\alpha \\sqrt{n}\\\\\n\\bar\\Lambda_{n,n-1}&=&-zt_a\\alpha^* \\sqrt{n-1}\\\\\n \\bar\\Lambda_{nn}&=& -\\mu n -\\lambda +\\frac{U_{aa}}{2}(n-1)(n-2) \\\\\\nonumber&&+ U_{ab}(n-1).\n\\end{eqnarray}\nThe remaining contribution, $\\Theta$, is\n\\begin{eqnarray}\n \\Theta_{nm}= -z t_b f_{n-1} f^*_{m-1}.\n\\end{eqnarray}\nSpecializing to the case of a monochromatic drive $\\gamma(t)=\\gamma e^{i\\omega t}$, the response is given\nin terms of normalized eigenvectors $v_m$, with $\\sum_{m} \\Lambda_{nm} v^{(j)}_m=\\epsilon_j v_n^{(j)}.$ It takes the form of\na sum of delta-functions,\n\\begin{eqnarray}\nI(\\omega) &=& \\sum_j \\left(\\sum_m \\sqrt{m} f_m v_m^{(j)}\\right)^2 \\delta(\\omega-\\epsilon_j)\\label{eq:spectra-general}.\n\\end{eqnarray}\n\nThe $f_n$'s are found at each point in the phase diagram by starting with a trial $\\alpha$, solving Eq.~\\eqref{eq:gutz-fns}, then updating $\\alpha$ via Eq.~\\eqref{eq:mean-field} and iterating. We find that almost all spectral weight typically lies in only one or two peaks. Fig.~\\ref{fig:homog-rf-spec} shows sample spectra.\n The superfluid near the Mott state displays a multi-modal spectrum, but in the weakly interacting limit only a single peak is seen. An avoided crossing is clearly visible in these plots.\n Fig.~\\ref{fig:double-branch-3d} shows the manifold of spectral peaks in the $t_a\/U_{aa}$ and $\\mu\/U_{aa}$ plane, using height to denote frequency and opacity to denote spectral weight. 
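The self-consistency loop and the diagonalization of $\\Lambda$ described above can be sketched in a few lines. The following is an illustrative implementation, not the code used for the figures; the coordination $z$, the hoppings, $\\mu$, and the occupation cutoff are assumed values in units of $U_{aa}$.

```python
import numpy as np

# Illustrative sketch of the Gutzwiller + RPA recipe described in the text
# (units U_aa = 1; z, t_a, t_b, mu and the cutoff nmax are assumed values).
z, ta, tb = 6, 0.05, 0.05
Uaa, Uab, mu = 1.0, 1.2, 0.5
nmax = 10
n = np.arange(nmax + 1)

# Self-consistent ground state: solve Eq. (8) for f_n, update alpha via Eq. (9).
alpha = 0.5
for _ in range(500):
    H = np.diag(0.5 * Uaa * n * (n - 1) - mu * n)
    off = -z * ta * alpha * np.sqrt(n[1:])     # couples f_{n-1} and f_n
    H = H + np.diag(off, 1) + np.diag(off, -1)
    lam, vecs = np.linalg.eigh(H)
    f = vecs[:, 0]
    if f.sum() < 0:                            # fix the sign convention
        f = -f
    alpha = 0.5 * alpha + 0.5 * np.sum(np.sqrt(n[1:]) * f[1:] * f[:-1])
lam0 = lam[0]                                  # Lagrange multiplier lambda

# RPA matrix Lambda = Lambdabar + Theta on the g_n sector, n = 1..nmax.
m = np.arange(1, nmax + 1)
Lbar = np.diag(-mu * m - lam0 + 0.5 * Uaa * (m - 1) * (m - 2) + Uab * (m - 1))
offL = -z * ta * alpha * np.sqrt(m[:-1])       # Lambdabar_{n,n+1}
Lbar = Lbar + np.diag(offL, 1) + np.diag(offL, -1)
Theta = -z * tb * np.outer(f[:-1], f[:-1])     # Theta_{nm} = -z t_b f_{n-1} f_{m-1}
eps, v = np.linalg.eigh(Lbar + Theta)

# Spectrum, Eq. (15): delta peaks at eps_j with weights A_j.
w = np.sqrt(m) * f[1:]
weights = (v.T @ w)**2
n_mean = np.sum(n * f**2)
top = np.argsort(weights)[::-1][:2]
print("two dominant peaks (freq, weight):", list(zip(eps[top], weights[top])))
```

A built-in consistency check: since the $v^{(j)}$ form an orthonormal basis, the total spectral weight $\\sum_j A_j=\\sum_m m f_m^2$ must equal the mean filling $\\langle n \\rangle$.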
Taking moments of $\\chi^R(\\omega)$, we see that Eq.~\\eqref{eq:sum-rule-general} is satisfied.\n\n \\begin{figure}[hbtp]\n\\setlength{\\unitlength}{1.0in}\n\\begin{picture}(3.50,3.70)\n\\put(-0.4,0){\n\\includegraphics[width=4.in,angle=0]{spectrum-3d-white-background}\n}\n\\put(0.25,1.25){$\\mu\/U_{aa}$}\n\\put(3.0,1.0){$t\/U_{aa}$}\n\\put(1.99,1.45)\n{\n$\\omega\/U_{aa}$\n}\n\\end{picture}\n\\vspace{-0.6in}\n\\caption{(Color online) Three-dimensional plot of RF spectral frequencies versus rescaled hopping $t_{a}\/U_{aa}$ and rescaled chemical potential $\\mu\/U_{aa}$ for $U_{ab}\/U_{aa}=1.2$.\nLarger opacity indicates larger spectral weight.\nWhite lines represent contours of fixed $\\mu$, $U$ and $\\omega$. The main branch is colored so that the progression from green to red to blue corresponds to increasing $\\omega$.\nThe double peaked spectrum is apparent from the ``double-valuedness\" of the surface.\n To avoid clutter, numerical values are omitted from the axes: the Mott plateaus occur at frequencies $\\omega=0,0.2U_{aa}$ and $0.4U_{aa}$, are each $U_{aa}$ wide and the first lobe's critical $t$ is around $0.029U_{aa}$ in 3D.\n}\n\\label{fig:double-branch-3d}\n\\end{figure}\n\n\n\n\n\n\\subsection{Limiting Cases}\nAlthough finding the spectrum in Eq.~\\eqref{eq:spectra-general} is a trivial numerical task, one can gain further insight by considering limiting cases. First, when $U_{ab}=U_{aa}$ and $t_a=t_b$ the system possesses an $SU(2)$ symmetry. In this limit we find that $G_n(t)=-i\\sqrt{n} f_n \\theta(t)$ is constant for $t>0$. Thus our approximation gives a spectrum $I(\\omega)$ which is proportional to $\\delta(\\omega)$. This result coincides with the exact behavior of the system: the operator $X=\\sum_j b_j^\\dagger a_j$ is a ladder operator, $[H,X]=\\delta X$, and can only generate excitations with energy $\\delta$ (set equal to zero in our calculation). 
The fact that our approximations correctly capture this behavior is nontrivial: in a field theoretic language one would say that our equation of motion approach includes the vertex corrections necessary for satisfying the relevant ``Ward identities\"~\\cite{pethick:pseudopot-breakdown,baym:self-consistent-approx,zinn-justin:qft}.\n\nThe current $^{87}$Rb experiments are slightly perturbed from this limit, with $\\eta\\equiv (U_{ab}-U_{aa})\/U_{aa}\\approx-0.025$ and $t_b=t_a$. We find that the $\\delta$-function is shifted by a frequency proportional to $\\eta$, but that the total spectral weight remains concentrated on that one frequency: the sum of the spectral weights at all other frequencies scales as $\\eta^2$. Consequently it is an excellent approximation to treat the spectrum as a delta-function, and our RPA calculation reduces to the results in \\cite{hazzard:rf-spectra-sum-rule}. We emphasize, however, that other atoms, such as cesium, can be in a regime where $\\eta$ is large.\n\nWe gain further insight by considering the superfluid near the Mott phase with $t_a\/U_{aa}\\ll1$. Here one can truncate the basis to two states with total particle number $n$ and $n+1$ on each site. Then the $f_n$'s and $G_n$'s can be found analytically: one only needs to solve $2\\times2$ linear algebra problems. 
In the $t_b=0$, $U_{ab}=U_{aa}$ limit, this is similar to Ref.~\\cite{sun:rf-spectra-condensate-probe}'s approach, but includes the hopping self-consistently, allowing us to satisfy the sum rule Eq.~\\eqref{eq:sum-rule-general}.\nThis truncation is exact in the small $t_a$ limit, and yields\n\\begin{eqnarray}\n\\chi^{(R)}(\\omega) &=& A_+ \\delta(\\omega-\\omega_+) + A_- \\delta(\\omega-\\omega_-)\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\omega_{\\pm} &=& \\frac{\\epsilon_1 + \\epsilon_2}{2} \\pm \\sqrt{\\Delta^2 + \\left ( \\frac{\\epsilon_1-\\epsilon_2}{2}\\right ) ^2}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\epsilon_1 &\\equiv& (U_{ab}-U_{aa}) (n-1) + z t_a f_{n+1}^2 (n+1) \\nonumber\\\\\n\\epsilon_2 &\\equiv& (U_{ab}-U_{aa})n + z\\left [ t_a (n+1) - t_b\\right ] f_n^2\n\\nonumber\\\\\n\\Delta &\\equiv& - \\sqrt{n(n+1)} t_a z f_n f_{n+1}\n\\end{eqnarray}\nif $n\\ge 1$, and\n\\begin{eqnarray}\n\\epsilon_1 &\\equiv& z t_a f_1^2\\nonumber\\\\\n\\epsilon_2 &\\equiv& z (t_a-t_b) f_0^2\n\\nonumber\\\\\n\\Delta &\\equiv& 0\n\\end{eqnarray}\nif $n=0$ (here, only the $\\epsilon_2$ peak has non-zero spectral weight).\nWe omit the cumbersome analytic expressions for the spectral weights $A_\\pm$. The spectrum consists of two peaks -- hybridized versions of the excitations caricatured in Fig.~\\ref{fig:two-correlations}. One can identify $\\epsilon_1$ and $\\epsilon_2$ as the energies of those caricature processes, recognizing that the hybridization term, $\\Delta$, grows with $t_a$. The avoided crossing between these modes is evident in Fig.~\\ref{fig:homog-rf-spec}.\n\n\n\n\n\\subsection{Inhomogeneous spectrum}\n\nWe model the trapped spectrum through a local density approximation. We assume that a given point in the trap has the properties of a homogeneous gas with chemical potential $\\mu(r)=\\mu_0-V(r)$. In Fig.~\\ref{fig:trap-rf-spec} we show the density profile and the spectrum corresponding to each point in space. 
Also shown is the trap averaged spectrum.\nThe bimodality of the homogeneous spectrum is quite effectively washed out by the inhomogeneous broadening of the trap. On the other hand, if one images the atoms flipped into the $b$ state as in Ref.~\\cite{campbell:ketterle-clock-shift}, there is a clear qualitative signature of the bimodality. If one excites the system with an RF pulse whose frequency lies between the resonant frequencies of two Mott plateaus, one will excite two ``shells\" of atoms. These shells should be clearly visible, even in column integrated data.\n\n\n\\begin{figure}[hbtp]\n\\setlength{\\unitlength}{1.0in}\n\\begin{picture}(3.5, 7.55)\n\\put(0.18,0.)\n{\\includegraphics[width=4.4in,angle=0]{trap-ave-compare-2}}\n\\end{picture}\n\\vspace{-0.68in}\n\\caption{\n(Color online) (a) Density $n$ as a function of distance to trap center rescaled by the lattice spacing, $r\/d$, in a local density approximation. For all subfigures, we take $t_a\/U_{aa}=0.004$, which is moderately smaller than the tip of the first Mott lobe.\n(b-e) Left: spectrum of a homogeneous gas with density $n(r)$, representing the spatially resolved spectrum observed in an experiment on a trapped gas. Horizontal axis is position, vertical is frequency, color from dark to light represents increasing spectral density. Continuous (red) curve denotes sum rule result for $\\langle \\omega \\rangle$.\nWe round the $\\delta$-functions to Lorentzians for visualization. Right: trap-averaged spectrum for a 3D trap within our RPA (black, solid line) compared with sum rule (red, dashed line). 
(b) $U_{ab}=1.2 U_{aa}, t_b=t_a$ (c) $U_{ab}= U_{aa}, t_b=t_a+0.1U_{aa}$ (d) $U_{ab}=1.2 U_{aa}, t_b=t_a+0.1U_{aa}$ (e) $^{87}$Rb parameters: $U_{ab}=1.025 U_{aa}, t_b=t_a$.\n}\n\\label{fig:trap-rf-spec}\n\\end{figure}\n\n\\section{Conclusions and discussion\\label{sec:discussion}}\nIn this paper we have shown that the RF spectra of a homogeneous Bose gas in an optical lattice will have two (or more) peaks in the superfluid state when the parameters are tuned close to the superfluid-Mott insulator phase transition. Physically, this bimodality is a result of the strong correlations in the system. These correlations result in two distinct forms of excitations (which are strongly hybridized): those involving ``core\" atoms, and those involving delocalized atoms. When $\\eta=(U_{ab}-U_{aa})\/U_{aa}$ is small, such as in the experiments on $^{87}$Rb, this bimodality is absent.\n\nOur approach, based upon applying linear response to a time-dependent Gutzwiller mean field theory, is both simple and quite general. It allows arbitrary interactions between both spin states, and it allows arbitrary spin-dependent hopping rates. The major weakness of the theory is that it fails to fully account for short-range correlations: the atoms are in a quantum superposition of being completely delocalized and being confined to a single site. The physical significance of this approximation is most clearly seen when one considers the case where the final-state atoms have no interactions, $U_{ab}=0$, and see no trap or lattice. Imaging the $b$-atoms after a time-of-flight is analogous to momentum-resolved photoemission \\cite{jin:photoemission}, and would reveal the dispersion relation of the single-particle excitations. The fact that the spectrum consists of two sharp peaks means that all of the non-condensed atoms are approximated to have the same energy. One will also see that their momentum is uniformly distributed throughout the first Brillouin zone. 
In the strong lattice limit, where the bandwidth is small, this approximation is not severe.\n\n\\section{Acknowledgements}\nWe thank Sourish Basu, Stefan Baur, Stefan Natu, Kuei Sun, Smitha Vishveshwara, Henk Stoof, Ian Spielman, and Mukund Vengalattore for useful discussions. This material is based upon work supported by the National Science Foundation through grant No. PHY-0758104, and partially performed at the Aspen Center for Physics.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\n\n\n\n\n\nIn this article, we study the asymptotic\nerror rates for integration by quasi-Monte Carlo (QMC) as $n\\to\\infty$\nwhile $f$ is fixed.\nMost of the error upper bounds in QMC are based on fooling functions\n$f_n$ that, given $n$ integration points, are poorly integrated.\nBy contrast, most of the\npublished empirical results follow the integration\nerror for a single integrand $f$ as $n$ increases.\nThe upper bounds have us play against an adaptive adversary choosing\nan unfavorable $f_n$ at each sample size $n$\ninstead of keeping $f$ fixed as $n\\to\\infty$.\nThe error bounds, that we describe in more detail below,\nare typically $O(\\log(n)^r\/n)$ where $r$ can be as large as the\ndimension of the integrand's domain.\nThese bounds can be enormous and, to our knowledge, there has never\nbeen an integrand exhibited where a standard QMC point set\nis shown to need $r>1$.\nThat raises the question of whether $r>1$ is simply\na consequence of the adversarial formulation.\nThe alternative is that some function $f$ is a `persistent fooling function'\ncausing large errors for infinitely many $n$.\nIn an earlier version of this article we posed a question about whether\n{\\sl any} integrand of bounded variation in the sense of Hardy and Krause (BVHK)\non $[0,1]^d$ for any $d\\geqslant1$\nhas an integration error above $c\\log(n)^r\/n$ for infinitely many $n$\nwith $r>1$ and $c>0$ using a digital sequence such as Sobol's for the 
integration points.\nFor background on bounded variation in the sense of Hardy and Krause\nor in the sense of Vitali, we refer the reader to \\cite{variation}.\n\nWe owe a great debt to Erich Novak who pointed us to some unpublished\nwork of Trojan described in detail in Chapter 10 of\nthe information-based complexity monograph of Traub, Wasilkowski\nand Wo\\'zniakowski \\cite{trau:wasi:wozn:1988}.\nTrojan's work is about very general problems of computing\nlinear operators on Banach spaces based on the values of $n\\to\\infty$ linear functionals.\nHe shows that the adversarial worst-case convergence rate is also very nearly the attained\nrate for some specific problem instances.\nIn the QMC context, that work pertains to a single integrand $f$\nas the number $n$ of evaluation points diverges to infinity.\nA consequence of that work is that for any infinite\nsequence of integration points, there are indeed integrands\nin BVHK$[0,1]^d$ with an absolute error larger than $c\\log(n)^{(d-1)\/2}\/(n\\log\\log(n))$\ninfinitely often, for any $c>0$.\nFurthermore, those integrands are present within a reproducing kernel Hilbert space (RKHS)\non a certain unanchored space. They are dense in that space, though\nthis does not mean that the usual Gaussian processes on such spaces\ngive them positive measure.\nWe only get $r=(d-1)\/2$ logarithmic factors instead of $d$ or $d-1$ 
The explanation is that we use an $L^2$ bound\njust like Roth \\cite{roth:1954} used in getting a lower bound on star discrepancy.\nA different analysis might yield larger $r$.\nThe $\\log\\log(n)$ factor in the denominator\ncan be replaced by a sequence that diverges more slowly.\n\nWe have not been able to construct a function in BVHK$[0,1]^d$\nthat provably needs $r>1$ powers of $\\log(n)$\nfor a Sobol' sequence \\cite{sobo:1967:tran}\nor the Halton \\cite{halt:1960}\nsequence, even when exploiting known weaknesses of commonly used QMC sequences.\nSo, we are left to wonder: where are the logs?\n\nAn outline of this paper is as follows.\nSection~\\ref{sec:back} presents some results from the QMC literature\nand introduces notation on some QMC sequences.\nSection~\\ref{sec:proofofbound} proves our main result\ndescribed above on existence of persistent fooling functions.\nSection~\\ref{sec:d=1} looks at the case $d=1$\nto exhibit some example functions requiring $r=1$\nfor the van der Corput sequence: $f(x)=1\\{x<2\/3\\}$\nand $f(x)=x$.\nSection~\\ref{sec:d=2} computes error for some $d=2$\ndimensional problems. 
The Halton and Sobol'\npoints there are closely related to van der Corput\npoints, yet two dimensional generalizations of the\nproblematic integrands from Section~\ref{sec:d=1}\nfail to show a need for $r>1$.\nIn fact some of the empirical results are more consistent\nwith an $O(1\/n)$ error.\nSection~\ref{sec:bigm} computes a local discrepancy\n$\delta({\boldsymbol{z}})$\nfor Sobol' nets with $d=2,3$ and $1\leqslant m\leqslant100$ where all components of ${\boldsymbol{z}}$ equal $2\/3$,\nchosen because $2\/3$ is difficult to approximate by dyadic rationals.\nIt also includes $d=4$ for $1\leqslant m\leqslant 50$.\nThe cases with $d>2$ are the closest we have found to needing $r>1$\nbut are inconclusive.\nSection~\ref{sec:discussion} discusses these results.\n\n\section{Background}\label{sec:back}\n\nFrom the Koksma-Hlawka inequality \cite{hick:2014}\ncombined with convergence rates for the star discrepancy \cite{nied:1992},\nwe get the widely quoted convergence rates\nfor the error in quasi-Monte Carlo integration\nof a function $f:[0,1]^d\to{\mathbb{R}}$.\nAn integrand $f$ of bounded variation in the\nsense of Hardy and Krause, written $f\in\mathrm{BVHK}[0,1]^d$,\ncan be integrated with error $O(n^{-1}(\log n)^{d-1})$\nusing $n$ function evaluations.
If we must use the first $n$ points\nof an infinite sequence, then the rate\n$O(n^{-1}(\log n)^{d})$ is attainable.\nThis article is mostly about the infinite sequence version.\nBoth of these rates are often written $O(n^{-1+\epsilon})$\nwhere $\epsilon$ can be any positive constant,\nbut $\log(n)^d\gg n^\epsilon$ for many use cases of interest.\n\nFor high dimensional problems, such powers of\n$\log(n)$ are enormous, and there is genuine uncertainty about whether\n$O(n^{-1}(\log n)^{d})$ is better than the\nroot mean squared error (RMSE) of $O(n^{-1\/2})$ from plain\nMonte Carlo (MC) at practically relevant $n$.\nThese rates omit three implied constants: one in\nthe star discrepancy (see \cite{faur:lemi:2014} for information),\none in the total variation of $f$,\nand the third in the standard deviation of $f$.\nThese unknown constants contribute to\nuncertainty about the $n$ at which QMC would\noutperform MC.\nA further complication is that the Koksma-Hlawka\nbound is for a worst case integrand.\nThe situation is quite different in Monte Carlo,\nwhere the rate $\sigma n^{-1\/2}$ holds for all finite $n$, making\nit simultaneously a guide to how accuracy progresses\nfor a single integrand of variance $\sigma^2$ and\nthe RMSE formula (upper and lower bound)\nfor all integrands of variance~$\sigma^2$.\n\n\nThat observed errors for realistic $n$ and large $d$\ndo not follow a trend like $\log(n)^d\/n$\nwas reported by Schlier \cite{schl:2004} among others.\nThat work also found that the variance of $f({\boldsymbol{x}})$\nwas more useful than its total variation in explaining the empirical accuracy\nof QMC integration on test functions, despite the fact\nthat proved theoretical bounds for QMC error use total\nvariation, while variance requires none of the smoothness that\nQMC relies on.\nMany papers include empirically estimated convergence\nrates for individual $f$ found by fitting a regression model\nfor log error versus $\log(n)$.
See for instance L'Ecuyer \cite{lecu:2018}.\nWe do not see results that look like a large power of $\log(n)$\nis present.\n\nThis mismatch between empirical results and theoretical ones\nis troubling. Empirical results alone don't give enough confidence\nthat they will apply to future problems. Similarly, bounds that are\nfavorable (but asymptotic) or unfavorable (but worst case) could\nalso fail to provide a reliable guide to attained accuracy.\nThis mismatch has brought practical difficulties.\nFor instance, the logarithmic powers\nin the Koksma-Hlawka bound led Bratley, Fox and Niederreiter \cite{brat:fox:nied:1992}\nto limit their software to $d\leqslant 12$.\n\nFor some randomizations of digital nets\nthe RMSE is $O(n^{-1\/2})$ whenever $f\in L^2[0,1]^d$ \cite{snxs}\nand is also $O(\log(n)^{(d-1)\/2}\/n^{3\/2})$ under further smoothness\nconditions \cite{smoovar,localanti,yue:mao:1999}.\nIn such cases the large powers of $\log(n)$ are\nsubject to a simultaneous $O(n^{-1\/2})$ bound that\nlimits how much worse randomized QMC can be compared to MC\nfor finite $n$.\nIt would be interesting to know whether something like that also\nholds for plain QMC.
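To get a feel for the sizes involved, the worst-case bound and the MC rate are easy to tabulate. The sketch below is our own illustrative arithmetic, not from the paper: the helper names `qmc_bound` and `mc_rate` are ours and all implied constants are set to one.

```python
import math

# Compare the worst-case QMC bound log(n)^(d-1) / n for the first n points
# of a net with the Monte Carlo RMSE rate n^(-1/2), all constants set to 1.
def qmc_bound(n: int, d: int) -> float:
    return math.log(n) ** (d - 1) / n

def mc_rate(n: int) -> float:
    return n ** -0.5

n = 2 ** 20  # about a million points
for d in (1, 6, 11):
    print(d, qmc_bound(n, d), mc_rate(n))
```

At $n=2^{20}$ the bound is far below the MC rate for $d=1$, yet several orders of magnitude above it for $d=11$, which is the uncertainty described above.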
Perhaps the coefficient of $\log(n)^r\/n$ is ordinarily\nvery small, or the effect is only relevant for impractically large $n$ or\nperhaps not even present for most commonly investigated integrands.\nFor a survey of randomized QMC see L'Ecuyer and Lemieux \cite{lecu:lemi:2002}.\n\nWe conclude this section by describing $(t,m,d)$-nets and $(t,d)$-sequences\nusing the formulation from Niederreiter \cite{nied:1987}.\nLet $b\geqslant2$ be an integer.\nFor ${\boldsymbol{k}} = (k_1,\dots,k_d)\in{\mathbb{N}}_0^d$ and\n${\boldsymbol{c}} = (c_1,\dots,c_d)\in{\mathbb{N}}_0^d$\nwith $0\leqslant c_j<b^{k_j}$, the set\n\begin{equation}\label{eq:elemint}\nE({\boldsymbol{k}},{\boldsymbol{c}})=\prod_{j=1}^d\Bigl[\n\frac{c_j}{b^{k_j}},\n\frac{c_j+1}{b^{k_j}}\n\Bigr)\n\end{equation}\nis called an elementary interval in base $b$.\nA $(t,m,d)$-net in base $b$ is a point set ${\boldsymbol{x}}_0,\dots,{\boldsymbol{x}}_{b^m-1}\in[0,1)^d$ such that every elementary interval of volume $b^{t-m}$ contains exactly $b^t$ of the points.\nA $(t,d)$-sequence in base $b$ is an infinite sequence $({\boldsymbol{x}}_i)_{i\geqslant0}$ such that ${\boldsymbol{x}}_{rb^m},\dots,{\boldsymbol{x}}_{(r+1)b^m-1}$ form a $(t,m,d)$-net in base $b$ for all integers $m\geqslant t$ and $r\geqslant0$.\n\n\section{Persistent fooling functions}\label{sec:proofofbound}\n\nLet $K$ be the reproducing kernel of an unanchored Sobolev space RKHS on $[0,1]^d$ in which the constant function $\bsone$ has unit norm.\nThe key step is a lower bound of the type that Roth \cite{roth:1954} proved for the star discrepancy.\n\begin{theorem}\label{thm:Roth}\nThere is an $A_d>0$ such that for any ${\boldsymbol{x}}_0,\dots,{\boldsymbol{x}}_{n-1}\in[0,1]^d$,\n$$\nr_n\equiv\min_{a_0,\dots,a_{n-1}\in{\mathbb{R}}}\Bigl\Vert\bsone-\sum_{i=0}^{n-1} a_i K({\boldsymbol{x}}_i,\cdot)\Bigr\Vert\geqslant A_d\frac{(\log n)^{(d-1)\/2}}{n}.\n$$\n\end{theorem}\n\begin{proof}\nLet $a_0,\dots,a_{n-1}\in{\mathbb{R}}$ be arbitrary.\nBecause $\bsone$ has unit norm and $\langle \bsone,K({\boldsymbol{x}}_i,\cdot)\rangle=1$ by the reproducing property, projecting onto $\bsone$ gives\n\begin{equation}\label{eqn:bound1}\n\Bigl\Vert\bsone-\sum_{i=0}^{n-1} a_i K({\boldsymbol{x}}_i,\cdot)\Bigr\Vert\geqslant\Bigl|1-\sum_{i=0}^{n-1} a_i\Bigr|.\n\end{equation}\nProjecting instead onto the subspace of fully $d$-dimensional interactions gives\n\begin{align}\label{eqn:ainorm}\n\Bigl\Vert\bsone-\sum_{i=0}^{n-1} a_i K({\boldsymbol{x}}_i,\cdot)\Bigr\Vert^2\geqslant\int_{[0,1]^d} \bigg(\sum_{i=0}^{n-1} a_i\prod_{j=1}^d (y_j-1\{y_j>x_{ij}\})\bigg)^2 \,{\mathrm{d}}{\boldsymbol{y}}\n\end{align}\nwhere $x_{ij}$ is the $j$th component of ${\boldsymbol{x}}_i$.\n\nThe left hand side of~\eqref{eqn:ainorm} is a squared norm in the RKHS\nwhile the right hand side is a plain $L^2$ squared norm.\nTo provide a lower bound, we construct a function $h$ satisfying\n\begin{align*}\n&\int_{[0,1]^d} h({\boldsymbol{y}})^2 \,{\mathrm{d}}{\boldsymbol{y}}=O((\log n)^{d-1}),\quad\text{and}\\\n&\int_{[0,1]^d} h({\boldsymbol{y}})\bigg(\sum_{i=0}^{n-1} a_i\prod_{j=1}^d (y_j-1\{y_j>x_{ij}\})\bigg) \,{\mathrm{d}}{\boldsymbol{y}}=\Omega\Bigl(\frac{(\log n)^{d-1}}{n}\Bigr),\n\end{align*}\nneither of which involves the RKHS inner product, so the function $h$ does not have to be in the RKHS.\n\nDefine $E({\boldsymbol{k}},{\boldsymbol{c}})$ to be the $d$-dimensional interval\nfrom~\eqref{eq:elemint} with $b=2$, that is\n$$\nE({\boldsymbol{k}},{\boldsymbol{c}})=\prod_{j=1}^d\Bigl[\n\frac{c_j}{2^{k_j}},\n\frac{c_j+1}{2^{k_j}}\n\Bigr)\n$$\nwhere ${\boldsymbol{k}} = (k_1,\dots,k_d)\in{\mathbb{Z}}^d$\nand ${\boldsymbol{c}} = (c_1,\dots,c_d)\in{\mathbb{Z}}^d$\nsatisfy $k_j\geqslant0$ and $0\leqslant c_j <2^{k_j}$.
Given ${\\boldsymbol{k}}$, we define $|{\\boldsymbol{k}}|=\\sum_{j=1}^dk_j$.\nFor a given vector ${\\boldsymbol{k}}$,\nthe $2^{|{\\boldsymbol{k}}|}$ elementary intervals $E({\\boldsymbol{k}},{\\boldsymbol{c}})$\npartition $[0,1)^d$ into congruent sub-intervals.\n\nFor each $E({\\boldsymbol{k}},{\\boldsymbol{c}})$, define $U_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}})$ by\n$$\nU_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}})=\n\\begin{cases}\n(-1)^{\\sum_{j=1}^d 1\\{2^{k_j}y_j-c_j<1\/2\\}}, & {\\boldsymbol{y}}\\in E({\\boldsymbol{k}},{\\boldsymbol{c}})\\\\\n0, &\\text{else}.\n\\end{cases}\n$$\nIf we divide $E({\\boldsymbol{k}},{\\boldsymbol{c}})$ into $2^d$ sub-intervals, the value of $U_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}})$ is constant on each sub-interval and it alternates between $1$ and $-1$.\n\nIt is straightforward to verify that $\\int_{[0,1]^d}U_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}})U_{{\\boldsymbol{k}}',{\\boldsymbol{c}}'}({\\boldsymbol{y}})\\,{\\mathrm{d}}{\\boldsymbol{y}}=0$ if ${\\boldsymbol{k}}\\neq {\\boldsymbol{k}}'$ or ${\\boldsymbol{c}}\\neq {\\boldsymbol{c}}'$. It is trivially true if $E({\\boldsymbol{k}},{\\boldsymbol{c}})\\cap E({\\boldsymbol{k}}',{\\boldsymbol{c}}')=\\varnothing$.\nIf instead $E({\\boldsymbol{k}},{\\boldsymbol{c}})\\cap E({\\boldsymbol{k}}',{\\boldsymbol{c}}')\\neq \\varnothing$, then there must be some $j$ with $k_jx_{ij}\\}) \\,{\\mathrm{d}}{\\boldsymbol{y}}=\\frac{1}{4^{m+d}}.$$\nBecause there are $2^m$ intervals associated with ${\\boldsymbol{k}}$, the cardinality of $P_{{\\boldsymbol{k}}}$ is at least $2^m-n\\geqslant n$. 
Hence\n\\begin{equation}\\label{eqn:Uprod}\n \\int_{[0,1]^d} \\bigg( \\sum_{{\\boldsymbol{c}}\\in P_{{\\boldsymbol{k}}}} U_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}})\\bigg)\\prod_{j=1}^d (y_j-1\\{y_j>x_{ij}\\})\\,{\\mathrm{d}}{\\boldsymbol{y}}\\geqslant \\frac{n}{4^{m+d}}.\n\\end{equation}\nNow we define\n$$h({\\boldsymbol{y}})=\\sum_{{\\boldsymbol{k}}:|{\\boldsymbol{k}}|=m} \\sum_{{\\boldsymbol{c}}\\in P_{{\\boldsymbol{k}}}} U_{{\\boldsymbol{k}},{\\boldsymbol{c}}}({\\boldsymbol{y}}).$$\nThe number of ${\\boldsymbol{k}}$ with $|{\\boldsymbol{k}}|=m$ is the number of ways to partition $m$ into $d$ nonnegative ordered integers,\nwhich equals ${m+d-1\\choose d-1}$. Equation~\\eqref{eqn:Unorm} and $2n\\leqslant 2^m<4n$ imply that\n$$\\int_{[0,1]^d} h({\\boldsymbol{y}})^2 \\,{\\mathrm{d}}{\\boldsymbol{y}}=\\sum_{{\\boldsymbol{k}}:|{\\boldsymbol{k}}|=m} \\sum_{{\\boldsymbol{c}}\\in P_{{\\boldsymbol{k}}}} 2^{-m}\\leqslant \\sum_{{\\boldsymbol{k}}:|{\\boldsymbol{k}}|=m}1={m+d-1\\choose d-1}\\leqslant C_d (\\log n)^{d-1}$$\nfor some positive number $C_d$ independent of $n$. 
On the other hand, equation~\\eqref{eqn:Uprod} implies that\n$$ \\int_{[0,1]^d} h({\\boldsymbol{y}})\\prod_{j=1}^d (y_j-1\\{y_j>x_{ij}\\})\\,{\\mathrm{d}}{\\boldsymbol{y}}\\geqslant {m+d-1\\choose d-1}\\frac{n}{4^{m+d}}\\geqslant \\frac{c_d(\\log n)^{d-1}}{n}$$\nfor another positive number $c_d$ independent of $n$.\n\nBy the Cauchy-Schwarz inequality and equation~\\eqref{eqn:ainorm}\n\\begin{align*}\n &\\int_{[0,1]^d} h({\\boldsymbol{y}})\\bigg(\\sum_{i=0}^{n-1} a_i\\prod_{j=1}^d (y_j-1\\{y_j>x_{ij}\\})\\bigg) \\,{\\mathrm{d}}{\\boldsymbol{y}}\\\\\n &\\leqslant \\bigg(\\int_{[0,1]^d} h({\\boldsymbol{y}})^2 \\,{\\mathrm{d}}{\\boldsymbol{y}}\\bigg)^{\\frac{1}{2}}\\bigg(\\int_{[0,1]^d} \\bigg(\\sum_{i=0}^{n-1} a_i\\prod_{j=1}^d (y_j-1\\{y_j>x_{ij}\\})\\bigg)^2 \\,{\\mathrm{d}}{\\boldsymbol{y}}\\bigg)^{\\frac{1}{2}}\\\\\n &\\leqslant \\bigg(\\int_{[0,1]^d} h({\\boldsymbol{y}})^2 \\,{\\mathrm{d}}{\\boldsymbol{y}}\\bigg)^{\\frac{1}{2}}\\Bigl\\Vert\\bsone-\\sum_{i=0}^{n-1} a_i K({\\boldsymbol{x}}_i,\\cdot)\\Bigr\\Vert\n\\end{align*}\nwhich provides the lower bound\n\\begin{align*}\n \\Bigl\\Vert1-\\sum_{i=0}^{n-1} a_i K({\\boldsymbol{x}}_i,\\cdot)\\Bigr\\Vert\n&\\geqslant (C_d (\\log n)^{d-1})^{-\\frac{1}{2}} \\frac{c_d(\\log n)^{d-1}}{n} \\sum_{i=0}^{n-1} a_i=\\bigg(\\frac{c_d}{C_d^{1\/2}}\\sum_{i=0}^{n-1} a_i\\bigg)\\frac{(\\log n)^{(d-1)\/2}}{n} .\n\\end{align*}\n\nCombining the above lower bound with equation~\\eqref{eqn:bound1} we get\n$$\\Bigl\\Vert\\bsone-\\sum_{i=0}^{n-1} a_i K({\\boldsymbol{x}}_i,\\cdot)\\Bigr\\Vert\n\\geqslant \\max\\Biggl(\\Bigl|1-\\sum_{i=0}^{n-1} a_i\\Bigr|,\\bigg(\\frac{c_d}{C_d^{1\/2}}\\sum_{i=0}^{n-1} a_i\\bigg)\\frac{(\\log n)^{(d-1)\/2}}{n} \\Biggr).$$\nFor $\\lambda>0$,\n$$\n\\min_{a\\in{\\mathbb{R}}}\\max( |1-a|,\\lambda a) = \\frac{\\lambda}{\\lambda+1}\n$$\nso that\n\\begin{align*}\n \\Bigl\\Vert1-\\sum_{i=0}^{n-1} a_i K({\\boldsymbol{x}}_i,\\cdot)\\Bigr\\Vert\n&\\geqslant\n\\frac{{c_d(\\log 
n)^{(d-1)\/2}}\/{(nC_d^{1\/2})}}{1+{c_d(\log n)^{(d-1)\/2}}\/{(nC_d^{1\/2})}}\n\end{align*}\nand we let $A_d =(c_d\/C_d^{1\/2})\/(1+c_dM_d\/C_d^{1\/2})$\nfor $M_d = \sup_{n\in{\mathbb{N}}}\log(n)^{(d-1)\/2}\/n$.\n\end{proof}\n\begin{corollary}\nFor any sequence of points $({\boldsymbol{x}}_i)_{i\geqslant0}$ in $[0,1]^d$, there exists a function $f$ in the RKHS with kernel $K$ such that\n$$\limsup_{n\to \infty}\frac{\big|\int_{[0,1]^{d}}f({\boldsymbol{x}}) \,{\mathrm{d}}{\boldsymbol{x}}-\frac{1}{n}\sum_{i=0}^{n-1} f({\boldsymbol{x}}_i)\big|}{(n\log\log n)^{-1}(\log n)^{(d-1)\/2}}=+\infty.$$\n\end{corollary}\n\begin{proof}\nApply Theorem~\ref{thm:asymptoticrate} with the lower bound on $r_n$ from Theorem~\ref{thm:Roth}.\n\end{proof}\n\n\section{Discrepancy and the case of $d=1$}\label{sec:d=1}\n\nLet $x_0,x_1,\dots,x_{n-1}\in[0,1]$.\nThe local discrepancy of these points at $\alpha\in[0,1]$ is\n$$\n\delta_n(\alpha) = \frac1n\sum_{i=0}^{n-1}1\{x_i<\alpha\}-\alpha\n$$\nand the star discrepancy is $D_n^*=\sup_{0\leqslant\alpha\leqslant1}|\delta_n(\alpha)|$.\nNo infinite sequence $x_i$ can have $D_n^*=o(\log(n)\/n)$.\nUsing results from discrepancy theory we see below that there are specific\nvalues of $\alpha$ for which $|\delta_n(\alpha)|=\Omega(\log(n)\/n)$.\nFor those values $1\{x<\alpha\}-\alpha$ is a persistent\nfooling function. We show below that $f(x)=x$ is also a persistent\nfooling function for the van der Corput sequence.\n\nThe set of $\alpha\in[0,1]$\nwith $|\delta_n(\alpha)|=o(\log(n)\/n)$ has Hausdorff dimension 0\nfor any sequence $(x_i)_{i\geqslant0}\subset[0,1]$.
See \\cite{hala:1981}.\nSo $r=1$ is not just available for $d=1$ it is the usual\nrate for functions of the form $f(x) = 1\\{x<\\alpha\\}$.\n\nFor $x_i$ taken from the van der Corput sequence,\nand $\\alpha = \\sum_{k=1}^\\infty a_k\/2^k$ for bits $a_k\\in\\{0,1\\}$,\nDrmota, Larcher and Pillichshammer \\cite{drmo:larc:pill:2005} note\nthat $n|\\delta_n(\\alpha)|$\nis bounded as $n\\to\\infty$, if and only if $\\alpha$ has a representation\nwith only finitely many nonzero $a_k$. Further, letting\n$$h_\\alpha(m) = \\#\\{ k (1-\\epsilon) h_\\alpha(m)\n\\bigr\\}=1\n\\end{align}\nfor any $\\epsilon >0$.\nThe base $2$ representation of $2\/3$ is $0.10101\\cdots$\nand so $h_{2\/3}(m)=m$.\nIt follows that $f(x) = 1\\{x<2\/3\\}$ has\n$|\\hat\\mu_n-\\mu|>c\\log(n)\/n$ infinitely often for some $c>0$.\nEven more, the fraction of such $n$ among the first $N=2^m$\nsample sizes becomes ever closer to $1$ as $m\\to\\infty$.\n\nIf we average the local discrepancy over $\\alpha$ we get\n$$\n\\int_0^1\\delta_n(\\alpha)\\,{\\mathrm{d}}\\alpha = \\frac1n\\sum_{i=0}^nx_i-\\frac12\n$$\nwhich is the integration error for the function $f(x)=x$\nthat we study next.\nIn our study of $f(x)=x$, we use sample sizes $n$\nwith base $2$ expansion $10101\\cdots101$.\nThat is, for some $L\\geqslant1$\n$$n=n_L = \\sum_{\\ell=0}^L4^{\\ell}.$$\nThe first few values of $n_L$ are\n$1$, $5$, $21$, $85$, $341$, $1365$, and $5461$.\n\n\\begin{proposition}\\label{prop:sumx}\nFor integers $0\\leqslant ic\n$$\nif $0 \\frac18\\frac{2L-1}{(L+1)\\log(4)}\\to\\frac1{4\\log(4)},\n\\end{align*}\ncompleting the proof. 
\n\\end{proof}\n\n\nWe can see why the sample sizes $n_L$ give unusually innaccurate estimates\nof the integral of $x$ in the van der Corput sequence.\nThose values of $n$ consistently miss getting into the `ones block'\nfor digit $k$.\n\n\n\nFigure~\\ref{fig:vdcempiricals} shows some empirical\nbehavior of the scaled errors for the two integrands\nwe considered in this section.\nThe scaled error there is essentially the number of observations\nby which the count of points in $[0,2\/3)$ differs from\n$2n\/3$. It is $n|\\delta_n(2\/3)|$ which is the customary\nscaling in the discrepancy literature.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.9\\hsize]{figvdcempiricals}\n\\caption{\\label{fig:vdcempiricals}\nBoth panels show the scaled error $n|\\hat\\mu_n-\\mu|$ versus $n$\nfor the first $2^{14}$ points of the van der Corput sequence.\nThe top panel has the integrand $f(x)=x$ and the values\nfor $n=n_L$ are indicated with open circles. The bottom panel has the integrand $f(x) = 1\\{x<2\/3\\}$\nand the values $n=\\tilde n_L$ are indicated.\n}\n\\end{figure}\n\nThe integrands $x$ and $1\\{x<2\/3\\}$ both have\ntotal variation one. It would be interesting to know\nwhat integrand of total variation one has the largest\nvalue for $\\limsup_{n\\to\\infty} n|\\hat\\mu_n-\\mu|\/\\log(n)$\nin the van der Corput sequence.\n\nFor integration, it is advisable to use $n=b^m$ for $m\\geqslant0$\nin the van der Corput sequence. 
Then the Koksma-Hlawka inequality\ngives us $|\hat\mu_{b^m}-\mu|=O(1\/b^m)$ as $m\to\infty$\nfor any $f$ of bounded variation on $[0,1]$ because\nthe van der Corput sequence has $D^*_{b^m} = b^{-m}=1\/n$.\nAny $(t,d)$-sequence for $d=1$ has $D^*_{b^m}=O(1\/n)$.\nFor $d=1$, bounded variation in the sense of Hardy\nand Krause reduces to the familiar one dimensional notion\nof bounded variation.\nAs a result a log power $r>0$ can apply to\nthe limit as $n\to\infty$ through positive\nintegers but not as $n=b^m$ for $m\to\infty$.\n\n\section{Empirical investigations for $d=2$}\n\label{sec:d=2}\n\nThis section reports on an empirical search for\nan integrand with errors that are $\Omega( \log(n)^r\/n)$\nfor some $r>1$ when using a sequence of points\nwith $D_n^*=O(\log(n)^2\/n)$. We know from Section~\ref{sec:proofofbound}\nthat this is attainable when $1<(d-1)\/2$, that is for $d\geqslant4$, but for $d=2$ it may be that $r>1$ is not needed.\nWe look at two generalizations of the van der Corput\nsequence. The first is the Halton sequence for $d=2$.\nIn that sequence, ${\boldsymbol{x}}_{i1}$ is the van der Corput sequence\nin base $2$ and ${\boldsymbol{x}}_{i2}$ is the van der Corput sequence\nin base $3$.\nThe second sequence is the Sobol' sequence in base $2$.\nIt has $t=0$ and like the Halton points, the first\ncomponent ${\boldsymbol{x}}_{i1}$ is the van der Corput sequence\nin base $2$. For $n=2^m$, the second component\n${\boldsymbol{x}}_{i2}$ of the Sobol' sequence is a permutation\nof the first $n$ elements of the van der Corput sequence.\n\nWe look first at the scaled error $n|\hat\mu_n-\mu|$\nfor $f({\boldsymbol{x}}) = (x_{1}-1\/2) (x_{2}-1\/2)$.\nThis integrand has integral zero and two dimensional Vitali variation of one.\nIt integrates to zero over each component of ${\boldsymbol{x}}$ when the other\ncomponent is fixed at any value. It has no nonzero ANOVA component\nof dimension smaller than $2$.
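The Halton half of this computation is easy to reproduce with plain radical inverses. The sketch below uses our own helper names and omits the Sobol' half, which requires tabulated direction numbers.

```python
def radical_inverse(i: int, b: int) -> float:
    """Reflect the base-b digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= b
        x += (i % b) / denom
        i //= b
    return x

# Scaled errors n * |mu_hat - mu| for f(x) = (x1 - 1/2)(x2 - 1/2), mu = 0,
# over the first n two-dimensional Halton points (bases 2 and 3), n = 1..4096.
errs = []
s = 0.0
for i in range(1 << 12):
    s += (radical_inverse(i, 2) - 0.5) * (radical_inverse(i, 3) - 0.5)
    errs.append(abs(s))
```

The first value is $1/4$, coming from the point $(0,0)$, and later partial sums do not exceed it, consistent with Figure~\ref{fig:d2prodempirical}.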
Each of the factors is a persistent\nfooling function for the van der Corput points.\n\nFor the Halton sequence, the scaled error equals $1\/4$ when $n=1$\nand it remains below $1\/4$ for $2\\leqslant n\\leqslant 2^{20}$. It repeatedly approaches $1\/4$\nfrom below.\nFigure~\\ref{fig:d2prodempirical} shows the first $2^{19}$ scaled errors\nfor this product integrand for both Halton and Sobol' points.\nFor both constructions we see no evidence that the error is\n$\\Omega(\\log(n)^r\/n)$ for $r>1$. In fact, what is quite surprising\nthere is the apparent $O(1\/n)$ error rate, which is then\n{\\sl better} than what the van der Corput sequence attains for $f(x)=x$.\n\nBy plotting only $2^{19}$ points\ninstead of all $2^{20}$ it becomes possible to see something of\nthe structure within the triangular peaks for the Sobol' points.\nThe linear scale for $n$ (versus a logarithmic one) shows\na clear repeating pattern consistent with an $O(1\/n)$ error rate.\nWhether or not the errors are $O(1\/n)$, this integrand is\nnot a promising one to investigate further in the\nsearch for an integrand needing $r>1$.\n\n\\begin{figure}[t]\\centering\n\\includegraphics[width=.9\\hsize]{figd2prodempirical.png}\n\\caption{\\label{fig:d2prodempirical}\nThe panels show scaled errors $n|\\hat\\mu_n-\\mu|$ versus $n$\nfor the integrand $f({\\boldsymbol{x}})=(x_1-1\/2) (x_2-1\/2)$ on $[0,1]^2$.\nThe top panel is for Halton points and the bottom one is\nfor Sobol' points.\n}\n\\end{figure}\n\n\nAnother challenging integrand for van der Corput\npoints was $1\\{x<2\/3\\}$.\nFor the Sobol' points we then take\n$f({\\boldsymbol{x}})=\\prod_{j=1}^2( 1\\{x_j<2\/3\\}-2\/3)$\nfor $d=2$. 
Once again we have removed\nthe additive contribution to the integrand\nthat we know is integrated at the $\\log(n)\/n$\nrate making it easier to discern the effect\nof the bivariate component.\nFor Halton points, we do not use $2\/3$\nas that has a terminating expansion in base $3$\nthat defines the second component of the ${\\boldsymbol{x}}_i$.\nWe use\n$f({\\boldsymbol{x}})=(1\\{x_1<2\/3\\}-2\/3) (1\\{x_2<3\/5\\}-3\/5)$\nfor Halton points because $3\/5$ does not have a terminating\nexpansion in base $3$ that could make the problem\nartificially easy. Both of these functions have\ntwo dimensional Vitali variation equal to one, just\nlike $\\prod_{j=1}^2(x_j-1\/2)$.\nFigure~\\ref{fig:d2rectempirical} shows the scaled errors\n$n|\\hat\\mu-\\mu|\/\\log(n)$. The logarithmic scaling\nin the denominator is there so that\na value of $r>1$ would give an infinite series of new records.\nWe don't see many such new records for $n\\leqslant 2^{22}$\nfor either of the two test cases. These integrands were designed\nto exploit weaknesses in the Sobol' and Halton sequences\nand they did not produce the appearance of errors growing\nfaster than $\\log(n)\/n$. 
It remains possible that errors\ngrow like $\\log(n)^r\/n$ for $r>1$ but with the records being set very sparsely.\nThe situation is quite different from Figure~\\ref{fig:vdcempiricals}\nin the one dimensional case where we see a steady sequence of\nrecord values with a fractal pattern in the empirical errors.\n\n\n\\begin{figure}[t]\\centering\n\\includegraphics[width=.9\\hsize]{figd2rectempirical.png}\n\\caption{\\label{fig:d2rectempirical}\nThe panels show scaled errors $n|\\hat\\mu_n-\\mu|\/\\log(n)$ versus $n>1$.\nThe top panel has $f({\\boldsymbol{x}})=(1\\{x_1<2\/3\\}-2\/3)\\times(1\\{x_2<2\/3\\}-2\/3)$\nfor Sobol' points.\nThe bottom panel has\n$f({\\boldsymbol{x}})=(1\\{x_1<2\/3\\}-2\/3)\\times(1\\{x_2<3\/5\\}-3\/5)$\nfor Halton points.\n}\n\\end{figure}\n\nThere are some other integrands that could be difficult for\nQMC. A badly approximable irrational number $\\theta$\nis one where the distance between $n\\theta$\nand $\\Z$ is above $c\/n$ for some $c>0$ and all integers $n\\geqslant1$.\nFor instance, $\\theta=\\sqrt{2}-1$ is badly approximable.\nAn integrand like $\\prod_{j=1}^d (1_{x_j<\\theta}-\\theta)$\ncould pose a challenge to QMC methods, but some exploration\nwith Halton and Sobol' points was inconclusive: there was\nno empirical indication that $r>1$ would be required.\nAnother potentially difficult integrand is $\\prod_{j=1}^d (x_j^\\theta-1\/(1+\\theta))$\nfor $0<\\theta<1$. Partial derivatives of this integrand with respect to a subset\nof components $x_j$ are unbounded. That would make them fail the sufficient\nconditions for scrambled nets to attain $O(n^{-3\/2+\\epsilon})$ RMSE\nin \\cite{smoovar,localanti,yue:mao:1999}.\nThese integrands did not show a need for $r>1$.\n\nNext, we consider the function $f({\\boldsymbol{x}}) = 1\\{x_1+x_2<1\\}$.\nThis integrand has infinite Vitali variation\nover $[0,1]^2$. 
Therefore it also has infinite Hardy-Krause\nvariation and so the Koksma-Hlawka bound degenerates to $+\infty$.\nBecause $f$ is Riemann integrable, we know that $\hat\mu_n\to\mu$\nif $D_n^*\to0$.\nFinding that $r>1$ for this $f$ would not provide\nan example of a BVHK function needing $r>1$.\nIt remains interesting to see what happens because the case is\nnot covered by QMC theory without randomization.\n\nFigure~\ref{fig:d2notbvempirical} shows scaled errors\nfor this integrand, using an exponent of $r=1$ for $\log(n)$.\nThere is a striking difference between the results for Sobol'\npoints versus Halton points. We see a few approximate doublings of the\nscaled error for Sobol' points even when using $r=1$.\nThis does not prove that\nwe need $r>1$ here; perhaps we saw the last doubling\nor perhaps similar jumps for larger $n$ arise at extremely sparse intervals.\nIt does serve to raise some additional\nquestions. For instance, why are Halton points so much more\neffective on this integrand, and to which other integrands\nof unbounded variation might that apply?\nFor smooth integrands, Sobol' points commonly perform\nmuch better when $n$ is a power of two than otherwise.\nHere, those sample sizes did not bring much better outcomes.\n\n\n\begin{figure}[t]\centering\n\includegraphics[width=.9\hsize]{figd2notbvempirical.png}\n\caption{\label{fig:d2notbvempirical}\nThis shows\nscaled errors $n|\hat\mu_n-\mu|\/\log(n)$ versus $n>1$.\nThe integrand is $1\{x_1+x_2<1\}$ which has infinite\nVitali variation on $[0,1]^2$. The upper curve is for Sobol' points.\nThe lower curve is for Halton points.
There is a reference\nline at scaled error of zero.\n}\n\end{figure}\n\n\section{Very large $m$ for Sobol' nets}\label{sec:bigm}\n\nIf the need for $r>1$ is only evident for very large $n$\nthen we might fail to detect it by computing $\hat\mu_n$.\nIf we restrict to sample sizes $n=2^m$ for $m\geqslant0$ then\nwe can use properties of the generating matrices of Sobol'\nsequences to compute the scaled error $n|\hat\mu-\mu|$\nfor very large $n$ when $f$ is the indicator function\nof a set $[\bszero,{\boldsymbol{a}})$.\nThe first $n=2^m$ Sobol' points form a $(t,m,d)$-net in\nbase $2$ for which the Koksma-Hlawka bound gives\na rate of $O(n^{-1}\log(n)^{d-1})$. As a result we must\nlook to $d=3$ or larger for a problem that needs $r>1$\nfor these values of $n$.\nWe choose ${\boldsymbol{a}} = (2\/3,2\/3,\dots,2\/3)$ of length $d$\nas $2\/3$ is difficult to approximate in base $2$.\n\nWe can partition $[0,2\/3)^d$ into a countable number\nof elementary intervals in base $2$.\nThe number of points of the digital net\n${\boldsymbol{x}}_0,\dots,{\boldsymbol{x}}_{b^m-1}$\nthat are in $E({\boldsymbol{k}},{\boldsymbol{c}})$ equals the number of solutions\n$\vec{i}\in \{0,1\}^m$ to a linear equation\n$C\vec{i}=\vec{a} \mathrm{\,mod\,} 2$\nfor a matrix $C\in\{0,1\}^{|{\boldsymbol{k}}|\times m}$\nthat takes the first $k_j$ rows from the $j$'th\ngenerating matrix of the digital net, for $j=1,\dots,d$\nand some $\vec{a}\in\{0,1\}^{|{\boldsymbol{k}}|}$.\nSee \cite{pan:owen:2021:tr} for a description and\na discussion of how to find the number of such points.\nFor our Sobol' points we use the direction\nnumbers from Joe and Kuo \cite{joe:kuo:2008}.\n\nTo make the computation finite,\nwe replace $[0,2\/3)^d$ by $[0,a_m)^d$\nwhere $a_m=\lfloor 2^m (2\/3)\rfloor\/2^m$.\nFor $a\in(0,1)$, let $\mu(a) = a^d$\nand $\hat\mu(a) =(1\/n)\sum_{i=0}^{n-1}1_{{\boldsymbol{x}}_i\in[0,a)^d}$.\nThen\n\begin{align*}\n0&\leqslant \mu\Bigl(\frac23\Bigr)-\mu(a_m)\n= 
\\Bigl(\\frac23-a_m\\Bigr)\\sum_{j=0}^{d-1}\n\\Bigl(\\frac23\\Bigr)^ja_m^{d-j-1}\n\\leqslant \\Bigl(\\frac23\\Bigr)^{d-1}\\frac{d}n.\n\\end{align*}\nThe number of the first $n$ Sobol'\npoints that belong to $[0,2\/3)^d\\setminus[0,a_m)^d$\nis at most $d$. Therefore\n\\begin{align*}\n0&\\leqslant\n\\hat\\mu(2\/3)-\\hat\\mu(a_m)\\leqslant d\/n\n\\end{align*}\nNow\n\\begin{align*}\n&\\quad n(\\hat\\mu(2\/3)-\\mu(2\/3))\n-n(\\hat\\mu(a_m)-\\mu(a_m))\\\\\n&=\nn(\\hat\\mu(2\/3)-\\hat\\mu(a_m)\n-n(\\mu(2\/3)-\\mu(a_m))\\\\\n&\\in [-(2\/3)^{d-1}d,d]\\subseteq [-4\/3,d]\n\\end{align*}\nso the absolute error in using $n((\\hat\\mu(a_m)-\\mu(a_m))$\ninstead of $n((\\hat\\mu(2\/3)-\\mu(2\/3))$\nis at most $d$.\n\n\nFigure~\\ref{fig:errorvsmd} shows the results for $d=2,3,4$.\nFor $d=2$ we see strong linear trend\nlines in the scaled error consistent with\nneeding $r=1$. For $d=3,4$ the trend is not\nobviously linear but it does not have a compelling\nand repeating $r>1$ pattern even at $n=2^{50}$ for $d=4$\nor $n=2^{100}$ for $d=3$.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{0.34\\hsize}\n\\includegraphics[width=\\textwidth]{errorvsm,d=2}\n\\end{minipage}\n\\hspace*{-.35cm}\n\\begin{minipage}{0.34\\hsize}\n\\includegraphics[width=\\textwidth]{errorvsm,d=3}\n\\end{minipage}\n\\hspace*{-.35cm}\n\\begin{minipage}{0.34\\hsize}\n\\includegraphics[width=\\textwidth]{errorvsm,d=4}\n\\end{minipage}\n\\caption{\\label{fig:errorvsmd}\nThe panels show the signed scaled error\n$n(\\hat\\mu-\\mu)$ for a Sobol' sequence\nwhen $f=\\prod_{j=1}^d1_{x_j<2\/3}$.\nThe panels have $d=2,3,4$.\nThe sample sizes go to $2^{100}$ for\n$d=2,3$ and to $2^{50}$ for $d=4$.\nThe computed values differ from the true\nones by at most $d$ for all $m$.\n}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion}\\label{sec:discussion}\n\nFor any $\\epsilon>0$\nand any $d\\geqslant1$ and any infinite sequence of points ${\\boldsymbol{x}}_i\\in[0,1]^d$\nand any $c>0$ that there are 
functions in BVHK$[0,1]^d$ where the QMC error\nexceeds $c(\log n)^{r}\/n$ infinitely often when $r<(d-1)\/2$.\nFor $d=1$ we can easily construct such functions for $r=1$\nusing results from discrepancy theory. We don't know of\nany specific examples with $r>1$ and simple multidimensional\ngeneralizations of the functions needing $r=1$ for $d=1$\ndid not show an apparent need for $r>1$ when $d=2$.\nThe best candidates we saw for the Sobol' sequence\nare $f({\boldsymbol{x}}) =\prod_{j=1}^d1\{x_j<2\/3\}$ for $d=3$ or $4$\nbut we have no proof that they require $r>1$.\n\n\nOne surprise is that comparing\nFigures~\ref{fig:vdcempiricals} and~\ref{fig:d2prodempirical} we see an error for $f({\boldsymbol{x}})=(x_1-1\/2)(x_2-1\/2)$ that appears\nto be $O(1\/n)$ while the one dimensional\nfunction $f(x)=x$ has error $\Omega(\log(n)\/n)$ (theoretically\nand also empirically over small $n$).\nA second surprise is that for a two dimensional function\nof unbounded variation we see very different behavior\nfor Sobol' and Halton points in\nFigure~\ref{fig:d2notbvempirical}. The error for Sobol'\npoints appears to grow faster than $\log(n)\/n$\nwhile that for Halton points is far lower and might\neven grow at a different rate. Neither of these surprises\ncontradicts known theory but it is odd to see the two\ndimensional problem apparently easier to solve than\na corresponding one dimensional one and it is also odd\nto see Halton points appear to be so much more robust\nto unbounded variation than Sobol' points.\n\nSo, where could the logs be? It is possible that they are only\npractically important for enormous $n$, or what is almost\nthe same thing, that they are present with a very tiny implied constant.\nIt is also possible that the commonly investigated integrands\nhave error $O(\log(n)\/n)$ even for $d\geqslant2$.\n\n\section*{Acknowledgments}\nThis work was supported by the U.S.\ NSF under\ngrant IIS-1837931.
Thanks to Fred Hickernell and\nErich Novak for discussions related to this problem.\nWe are also grateful to Traub, Wasilkowski and Wo\\'zniakowski\nfor ensuring that Trojan's work was not lost.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum systems with classically chaotic counterparts possess\nunique characteristic features as summarized, e.g., in\n\\cite{rev,haake}. Following the semiclassical approach, one\noften relates quantum properties of a system to its classical\nmotion, using for example a direct comparison of\n phase space portraits of the classical dynamics and\n wave function quasiprobability representations in the phase\nspace (via Husimi or Wigner functions) \\cite{rev,rvj,holt}.\nEven in the case of globally\nchaotic dynamics, individual unstable classical trajectories\n can be retraced\nby stationary quantum eigenfunctions which are ``scarred\" by the\nclassical\nsolution \\cite{Hell84}. When the classical phase space is mixed -\npartially chaotic and partially regular - a similar separation\ninto regular and irregular wavefunctions is possible in the\nquantum world \\cite{Perc73}.\n Stable regions of phase space (tori) lend\nthemselves to semiclassical EBK\n (Einstein-Brillouin-Keller)\nquantization, yielding both the approximate eigenenergies and\nthe corresponding wavefunctions \\cite{MF65}. Similarly,\n there are ``irregular\" wavefunctions living in the region of\nchaotic classical motion. Some of them can be associated\n with residual structures\nof classically regular motion such as cantori while the other\nare practically structureless.\nIn low-dimensional systems, KAM\n(Kolmogorov-Arnold-Moser)\ntori provide impenetrable\nborders;\n the only way regular and\nirregular wave functions may communicate with each other is by\nquantum mechanical\ntunneling processes. 
In higher-dimensional systems, classical Arnold\ndiffusion provides another mechanism of transport, a process\nwhich is, however, typically quite slow \cite{LL81}. On the other\nhand, quantum mechanical tunneling through impenetrable borders\nof classical mechanics may be quite effective. Once the particle\ntunnels from, say, a stable island into the surrounding chaotic\nphase space, it can visit distant regions following the classically\nchaotic transport mechanism. In particular, it can tunnel into some other\nstable island thus providing the coupling between two\nwavefunctions localized on distinct and separated islands or\nit can wander very far away (possibly leading, e.g., to ionization\nas in atoms driven by external radiation).\n\nInterestingly, this \n``chaos assisted'' tunneling mechanism possesses\nunique features typically absent in the standard ``barrier''\ntunneling of quantum mechanics, such as a great sensitivity to\nthe variation of external parameters manifesting itself in\nfluctuations of observable quantities. Previous work\nconsidered mainly model one-dimensional time-dependent\nsystems \cite{LB90,GDJH91,PGL92} or model two-dimensional\nautonomous systems \cite{BTU93,BBEM93,TU94,LU96}. A similar\nproblem in the scattering case has also been discussed on a\nkicked model system \cite{GS94}. We shall consider here a\nrealistic, experimentally accessible (although simplified, see\nbelow) system - namely the hydrogen atom illuminated by\nmicrowave radiation. Instead\nof considering tunneling between two regions (tori) mediated by the\nchaotic transport between them, we shall rather consider the\nsingle tunneling process out of the stable island. Then the\nchaotic diffusion process will lead to ionization. 
While in the\nformer case, the probability may flow periodically between two\nregions linked by the tunneling coupling, in our problem the\nprocess is irreversible and constitutes the mechanism of the\ndecay.\n\nThe paper is organized as follows. Section II contains the description\nof the systems studied - the hydrogen atom in the field of microwave\nradiation of either circular or linear polarization - and a general\npresentation of the ionization via chaos assisted tunneling.\nSection III presents a simple\nmodel for the description of the fluctuations present in the\ndecay, catalyzed by chaos assisted tunneling.\nWe present there the distribution of resonance widths\nand consider also the distribution of energy shifts of the\nsingle, initially localized state due to the coupling to other\n``delocalized'' states, and, via these states, to the continuum.\nThis theory is confronted with\nthe numerical data obtained for the hydrogen atom in the field\nof microwave radiation of either circular or linear polarization\nin Section IV. Finally, we present conclusions in Section V\nwhile the Appendix contains the details of the derivation of\nformulae presented in Section III.\n\n\section{Nondispersive electronic wave packets and their ionization}\n\nIn order to obtain the simplest study of quantum transport\nthrough a chaotic sea, one should use an initial state as localized\nas possible in phase space as, for example, a minimum wave packet\nlocalized on a classical stable equilibrium point.\nUnfortunately, in atomic systems, no stable equilibrium point of the electron\noutside the nucleus exists.\n\nA simple alternative is to use a nonlinear resonance between the\ninternal motion of the electron and an external driving force.\nRecently, interesting new objects have been proposed in the studies of\nhydrogen atoms illuminated by microwave radiation of either linear \cite{ab1}\nor circular \cite{bb} polarization: the so-called non-dispersive\nwave packets. 
The corresponding classical dynamics picture\ncorresponds to the stable resonance island\nembedded in a chaotic sea. For the motion contained within\nthe principal 1:1 resonance between the Kepler frequency of the\nunperturbed Rydberg electron and the frequency of the driving field,\nthe frequency of the electronic motion is locked on the external\nmicrowave frequency. Semiclassically, a wave packet localized on such a\nregular island will be confined to it modulo \nthe exponentially\ndecaying\ntails of the wavefunction which may extend into the chaotic region.\nIn a quantum treatment, one finds\nwave packets which are really single eigenstates of the atom dressed\nby the microwave field, i.e. single eigenstates of the\ncorresponding Floquet \\cite{sfloq} Hamiltonian \\cite{ab1,dzb95}.\nThey are localized in all spatial dimensions and propagate along the classical\ntrajectory in the same way a classical particle would do.\nFor a generic case (e.g.\nlinear polarization microwaves, or more generally, any time-periodically\nperturbed system \\cite{holt}), it undergoes periodic deformations\nwhich faithfully follow the change of shape of the resonance island\nover one period, repeating the same shape every period.\nOnly in the case of circular microwave polarization,\nthe shape of the wave packet eigenstate does not change. This is\ndue to the fact that the time-dependence may be removed from the\nHamiltonian of the problem by a transformation to the frame\nrotating with the field \\cite{proh,groz,zgd96}.\n\nAs mentioned above, a finite\n$\\hbar$ value leads to quantum mechanical tunneling from\nthe island to the chaotic sea surrounding it. Then the electron\ngains energy \nfrom the driving field \nand eventually becomes ionized by a process\nclassically known as chaotic diffusive ionization. 
Since many different\npaths link the initial wave packet with the continuum, its\nionization time (or its reciprocal - the ionization rate or resonance width)\nfluctuates strongly with the parameters of the problem, the\nmicrowave frequency, $\omega$, or its amplitude, $F$ \n\cite{zdb95}.\nTherefore, these wave packets are ideally suited for a quantitative\nstudy of the ionization promoted by chaos assisted tunneling.\n\n\subsection{Circularly polarized microwaves}\n\nLet us first consider the conceptually simpler case of hydrogen atoms\nilluminated by a circularly polarized microwave (CPM) \cite{bb,dzb95,zdb95}.\nThe problem is fully three dimensional; however, as\nhas been shown elsewhere \cite{dzb95,zdb95}, one can consider\nthe quantum dynamics\nin the space restricted to the polarization plane of the microwave field.\nWhile\nthis excludes possible excitations in the direction\nperpendicular to the polarization plane, the dynamics of\nthe wave packets and their properties are qualitatively\nnot affected by the reduced dimensionality \cite{dzb95,zdb95}. In the\nfollowing, we shall present results obtained within\nsuch a reduced two-dimensional (2D) model.\n\nThe time-dependence of the\nHamiltonian describing the CPM ionization of H atoms is explicitly\nremoved by transforming to the coordinate frame which\ncorotates with the microwave field \cite{proh,groz},\nwhere it reads (in atomic units):\n\begin{equation}\nH=\frac{{\bf p}^2}{2}-\frac{1}{r}+Fx-\omega\ell_z,\n\label{hcirc}\n\end{equation}\nwith $\ell_z$ the angular momentum operator.\n\nAt the center of the \nprincipal resonance island between the Kepler\nand the microwave frequency, a periodic orbit exists whose period\nexactly matches\nthe period of the microwave. 
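The radius of this resonant orbit can also be checked numerically; the following is a minimal sketch (an illustration, not part of the original analysis), assuming the scaled force-balance condition derived below, which in scaled units reads $x_0^3 - F_0 x_0^2 - 1 = 0$, together with the standard CPM stability window $8/9 < q < 1$ for $q = 1/x_0^3$:

```python
import numpy as np

def resonant_orbit(F0):
    """Scaled radius x0 of the resonant circular orbit, from the cubic
    x0**3 - F0*x0**2 - 1 = 0 (scaled force balance between centrifugal,
    Coulomb and microwave forces), and the stability parameter
    q = 1/x0**3 (the equilibrium is stable for 8/9 < q < 1)."""
    roots = np.roots([1.0, -F0, 0.0, -1.0])
    # keep the (unique) real positive root
    x0 = max(r.real for r in roots if abs(r.imag) < 1e-10)
    return x0, 1.0 / x0**3

x0, q = resonant_orbit(0.04)  # a weak scaled microwave field
```

For $F_0 = 0$ this reduces to the unperturbed circular Kepler orbit $x_0 = 1$, $q = 1$; switching on the field pushes $q$ below 1 into the stable window.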
In the laboratory frame, this\nis a circular orbit with radius $x$ such that:\n\begin{equation}\n\frac{1}{\omega^2x^2}+\frac{F}{\omega^2}=x.\n\label{pos}\n\end{equation}\nWe introduce the effective\nprincipal quantum number $n_0$ (not necessarily an integer) corresponding\nto this main resonance:\n\begin{equation}\nn_0 = \omega^{-1\/3}.\n\label{n0}\n\end{equation}\nDue to the classical scaling of the Coulomb problem \cite{del},\namong the two parameters\n$\omega$ and $F$\nonly one is necessary to \ntune the dynamics classically.\nThus, we define quantities (position and microwave electric field) scaled\nwith respect to $n_0$:\n\begin{equation}\n\left\{\n\begin{array}{l}\nx_0 = x n_0^{-2} = x \omega^{2\/3}\\\nF_0 = F n_0^{4} = F \omega^{-4\/3}\n\end{array}\n\right.\n\end{equation}\n$F_0$ represents the ratio of the external microwave\nfield to the Coulomb field of the nucleus on the\nunperturbed resonant circular orbit. The classical dynamics\ndepends only on this parameter.\nThe scaled radius $x_0$ of the resonant circular orbit is the solution\nof the scaled equation:\n\begin{equation}\n\frac{1}{x_0^2}+F_0=x_0\n\label{pos0}\n\end{equation}\n\nIn the corotating frame, the resonant orbit corresponds to an equilibrium point.\nThis point is stable if the\ndimensionless stability parameter\n\begin{equation}\nq=\frac{1}{\omega^2x^3}=\frac{1}{x_0^3}\n\label{stab}\n\end{equation}\nis chosen in the interval $8\/9<q<1$.\n\nFrom this figure, we immediately obtain the following qualitative conclusions:\n\begin{itemize}\n\item\nThe distribution of energy shifts $P(s)$ tends to a constant as\n$s \rightarrow 0.$\n\item Above some critical value, the distribution drops and decreases\nroughly algebraically with $s.$ The slope is close to $-2$, indicating\na $P(s) \propto 1\/s^2$ behaviour.\n\item Finally, $P(s)$ falls off abruptly above some value.\n\item The distribution of ionization widths $P(w)$ behaves algebraically\nwith $w$ at small width, with 
a slope close to $-1\/2.$\nHence, the small widths are the most probable ones.\n\\item Above some critical value, the distribution drops faster,\nroughly with a $1\/w^{3\/2}$ behaviour.\n\\item Finally, similarly to the shift distribution, there is a\nfinal sharp cutoff.\n\\end{itemize}\n\nThese conclusions are similar to the ones obtained in a slightly\ndifferent physical situation, when two symmetric regular islands are\ncoupled to a single chaotic sea (see Ref.~\\cite{TU94} for a complete\ndiscussion of this physical situation). There, states lying on the\nregular islands appear in doublets with even\/odd parities with respect\nto the discrete symmetry exchanging the two regular islands. The splitting\nof the doublet is a direct measure of the chaos assisted process where\nthe particle tunnels from one regular island, then diffuses in the chaotic sea\nand finally tunnels to the other regular island. This process is very similar\nto the one studied in the present paper where the particle tunnels and\nthen diffuses towards infinity.\nThe splitting distribution observed in \\cite{TU94,LU96} is indeed very\nsimilar to our shift distribution as explained below.\n\nIn the following section, we propose a simple model - mainly based on \nphysical ideas similar to those used in \\cite{TU94} - which allows\nus to understand\nthe properties of our shift and width distributions.\n\n\\section{Statistical Model}\n\n\nWe shall consider a simple statistical model amenable to an\nanalytical treatment. 
Despite its simplicity, we shall see\nthat it is capable of\ndescribing quantitatively the fluctuations of resonance widths and\nshifts observed in the numerical analysis of the microwave ionization of\nhydrogen atoms.\n\nThe model system is directly based on the understanding\nwe have of the origin of the fluctuations and follows the idea\nalready used in~\\cite{TU94}.\nThe idea is to consider the wave packet eigenstate as coupled\nrandomly to a set of chaotic states (described by Random Matrix Theory)\nwhich are \nthemselves randomly coupled to the atomic continuum.\nMore precisely, our model consists of a single state $|0\\rangle$ (representing a\nstate localized on the elliptic island) coupled (weakly) to\nthe space of $N$ ``chaotic'', irregular levels, which are close to \n$|0\\rangle$ in energy.\nThe latter\nstates are considered to be, strictly speaking, resonances\nrather than bound states due to the mechanism responsible\nfor decay (e.g., the ionization of an atom).\nWhile we consider\na single localized state in the model, there may be several other\nstates localized on the same island. 
They will have typically\nmuch different energies and their mutual coupling via the\nchaotic states is negligible.\n\nThe Hamiltonian\nmatrix may be represented as a $(N+1)\\times (N+1)$ matrix:\n\\begin{equation}\n{\\cal H}=\\left(\n\\begin{array}{cc}\n0 & \\sigma \\langle V| \\\\\n\\sigma |V\\rangle & H_0-i {\\cal W}\n\\end{array}\n\\right),\n\\label{mat1}\n\\end{equation}\nwhere we have assumed that the energy of the localized state sets\nthe zero of\nthe energy scale.\nThe first vector in the basis represents the localized state,\nthe following $N$ ones the chaotic states.\nIn the chaotic subspace, the statistical properties of the Hamiltonian\nare well represented by a $N\\times N$ random matrix $H_0$.\nWe shall deal with\nproblems with a preserved (generalized) time-reversal invariance --\n$H_0$ should belong, therefore, to\nthe Gaussian Orthogonal Ensemble (GOE) of real symmetric matrices~\\cite{haake,bohigas}.\nThe matrix elements of $H_0$ are independent random Gaussian variables:\n\\begin{equation}\nP((H_0)_{ij}) = \\frac{\\exp \\left(-\\frac{(H_0)_{ij}^2}{2\\sigma_{ij}^2} \\right)}\n{\\sqrt{2\\pi \\sigma_{ij}^2}},\n\\end{equation}\nwith\nthe variance satisfying\n\\begin{equation}\n\\sigma_{ij}^2 = (1+\\delta_{ij}) \\frac{\\pi^2 \\Delta^2}{N}.\n\\end{equation}\nHere, $\\Delta $ is the mean level spacing between consecutive\nchaotic states close to energy 0.\n\nFollowing a commonly accepted approach \\cite{HILSS}\nthe coupling of the chaotic state\nto the continuum is introduced by the decay matrix:\n\\begin{equation}\n{\\cal W} = \\frac{\\gamma}{2} |W\\rangle \\langle W|\n\\label{w}\n\\end{equation}\nwhere the $N$ component real vector $|W\\rangle $ describes the coupling of\nthe chaotic states to the continuum.\nAs in Ref.~\\cite{HILSS}, we take this\nvector to be composed of Gaussian\ndistributed random numbers with vanishing mean and unit variance.\nThe real coefficient $\\gamma$ measures the strength of the\ndecay to the continuum.\nSuch a form 
implies that there is only one significant\ndecay channel for chaotic states. This is far from obvious and,\nas discussed below, probably true only at relatively low field\nstrength.\nWhen there are several open ionization channels, a convenient form\nof the decay matrix is~\\cite{HILSS}:\n\\begin{equation}\n{\\cal W}=\\sum_{k=1}^{M} \\frac{\\gamma_k}{2} \\ |W^{[k]}\\rangle \\langle W^{[k]}|,\n\\label{Gam}\n\\end{equation}\nwhere $M$ denotes the number of open channels\n(degeneracy of the continuum) and the $N$ component\nreal\nvectors $|W^{[k]}\\rangle $ describe the coupling of the chaotic states\nto channels $k=1,..,M$. Again, we take these\nvectors to be composed of Gaussian\ndistributed random numbers with vanishing mean and unit variance.\nThe $\\gamma_k$ real coefficients measure the strength of the\ndecay to continuum $k$.\n\nIn a similar way, the (real) $N$-component vector $|V\\rangle $ in\nEq.~(\\ref{mat1}) describes the\ncoupling of the localized state $|0\\rangle$ to the chaotic states.\nEach component of $|V\\rangle $ is taken as a Gaussian distributed random\nnumber of zero mean and unit variance. 
The coefficient $\sigma$ in\nEq.~(\ref{mat1}) is a\nmeasure of the strength of the coupling between the localized state\nand the chaotic subspace.\n\nIf the coupling to the continuum is neglected $(\gamma=0)$, the model\ndescribes a single bound state randomly coupled to $N$ chaotic states.\nThis is exactly the model successfully used in~\cite{TU94} to describe the \nsplitting of doublets induced by chaos assisted tunneling.\n\nThe model has several free parameters: the mean level\nspacing between chaotic states, $\Delta$, the strength of the\ncoupling with the ionization channel, $\gamma$,\nand the strength of the coupling to the localized state, $\sigma$.\nThere is a trivial scaling law of the Hamiltonian ${\cal H}$, Eq.~(\ref{mat1}),\nwhich implies that, except for a global multiplicative factor, there\nare only two relevant dimensionless parameters,\n$\sigma\/\Delta$ and $\gamma\/\Delta$ in the model.\nFor several open channels, the relevant parameters are $\sigma\/\Delta$ and\n$\gamma_k\/\Delta,\ k=1..M.$\n\n\nDue to the interaction with ``chaotic\" resonances, the state $|0\rangle$\nis not an eigenstate of the full Hamiltonian ${\cal H}$, Eq.~(\ref{mat1}).\nHowever, in most cases, the coupling to chaotic resonances is weak\nand the true eigenstate does not differ very much from $|0\rangle :$ this\nis the perturbative regime that we shall consider in the rest of this paper.\nThis regime is obtained when the coupling is much smaller than the mean\nlevel spacing between chaotic states, i.e.:\n\begin{equation}\n\sigma \ll \Delta .\n\label{pert1}\n\end{equation}\n\nWe will see below that this condition is always satisfied for the\nreal physical system (hydrogen atom + microwave field) for the microwave\nfields studied in this paper. 
At very high field, when the nondispersive\nwave packet is destroyed, this perturbative approximation should break down.\nThe physical interpretation is clear: if Eq.~(\ref{pert1}) is not satisfied,\nthe localized state is spread over several eigenstates of ${\cal H}$ and\ncompletely loses its identity. It has been shown elsewhere \cite{zbd96} that,\nin the perturbative regime, the localized state can be interpreted as a soliton\nweakly interacting with the background of chaotic states, but\nessentially keeping its shape when a parameter (for example the microwave\nfield strength) is varied.\n\nIn the following, we will also assume that the ionization rates (widths)\nof the chaotic states are small compared to their mean level spacing,\ni.e. that the decay matrix ${\cal W}$, Eq.~(\ref{Gam}), can be considered as\na perturbation. This implies:\n\begin{equation}\n\gamma \ll \Delta .\n\label{pert2}\n\end{equation}\nSuch a condition is not {\em strictly} necessary, but it makes calculations\nsimpler. Physically, it means that the various chaotic\nresonances are isolated. This is a typical situation in our system -\nthe ionization of an atom in\nthe presence of microwave driving for not too strong microwave amplitudes.\n\nWith the above assumptions -- motivated by the physics of the process\nstudied --\nthe shift and width of the localized state may be obtained\nusing the lowest nonvanishing order of perturbation theory.\nSuch an approach is justified unless an accidental degeneracy\nbetween the localized state and one of the eigenstates of the matrix\n$H_0$ occurs. Neglecting such degeneracies (which only affect the tail of\nthe distribution, see Sec.~IV.A below) and performing\nan average over the random matrix ensemble defined by Eq.~(\ref{mat1})\nmakes it possible to extract\nthe analytic expressions both for the distribution of shifts\nand for the distribution of widths. 
The details\nof the derivation are given in the Appendix.\nWe give here only the important results.\n\nThe shift distribution is obtained\nin a manner similar to that of Leyvraz and Ullmo\n\cite{LU96} and takes the form of a Cauchy law\n\begin{equation}\nP(s)=\frac{1}{\pi}\frac{s_0}{s_0^2+s^2},\n\label{eqs}\n\end{equation}\nwith\n\begin{equation}\ns_0=\frac{\pi \sigma^2}{\Delta}.\n\label{s0ab}\n\end{equation}\nImportantly, this result is independent of the\ndegree of correlation among the eigenvalues of the $H_0$ matrix: the\nsame result is obtained for an uncorrelated spectrum (Poisson-like\ndistributed eigenvalues, physically corresponding to coupling\nof the localized state with a set of ``regular\" states, instead of\nchaotic ones as in the model described above),\na GOE or a picket fence (harmonic oscillator)\nspectrum \cite{LU96}. This Cauchy distribution is the same as the one\nobtained \cite{TU94,LU96} in the absence of ionization.\n\nThe situation is a bit more complicated for the width distribution.\nFor Poissonian distributed\neigenvalues of $H_0$ (uncorrelated spectrum), a similar Cauchy distribution\nis obtained for the {\em square root}\nof the width,\ncorresponding to the\nfollowing distribution of widths (see Appendix):\n\begin{equation}\nP_{\rm Poisson}(w)=\frac{1}{\pi}\frac{\sqrt{w_0}}{\sqrt{w}(w+w_0)},\n\label{eqx}\n\end{equation}\nwith\n\begin{equation}\nw_0=\frac{4\sigma^2\gamma}{\Delta^2}.\n\end{equation}\nFor a matrix $H_0$ belonging to the GOE, the distribution\nis slightly more complicated (see Appendix):\n\begin{equation}\nP_{\rm GOE}(w) = \frac{2}{\pi^2} \frac{{w'_0}^{3\/2} \ln \left(\sqrt{w\/w'_0}+\sqrt{1+w\/w'_0}\right)\n+ \sqrt{ww'_0(w+w'_0)}}{w(w+w'_0)^{3\/2}}\n\label{eqx2}\n\end{equation}\nwith\n\begin{equation}\nw'_0= \frac{\pi^2 \sigma^2 \gamma}{\Delta^2} = \frac{\pi^2}{4}\ w_0.\n\end{equation}\nAlthough it is distinct from Eq.~(\ref{eqx}), it has in fact quite a similar\nshape, see Fig.~\ref{fig2.2}.\nIn 
particular, if $P_{\rm GOE}$ is rescaled by a global multiplicative\nfactor of about 20\%, it is almost indistinguishable from $P_{\rm Poisson}.$\nQuite a large sample is necessary to determine which of\nthe two distributions is the correct one for a given data set (see\nalso next Section).\n\nWe can also obtain the distribution of widths for the chaotic states in\nthe perturbative regime. This is the well-known ``non-overlapping resonances\"\nregime~\cite{brody,bg:prl,dupret} where the widths are distributed according\nto a Porter-Thomas distribution:\n\begin{equation}\nP_{PT}(w)=\frac{1}{\sqrt{2\pi w\overline{w}}}\n\exp(-w\/2\overline{w}),\n\label{PT}\n\end{equation}\nwhere $\overline{w}$ denotes the average width.\nFor small $w$, $P_{PT}$ diverges as $w^{-1\/2}$ while\nfor large $w$ it decays exponentially.\n\n\n\n\section{Analysis of the data}\n\n\subsection{Quantitative analysis of the fluctuations with the statistical\nmodel}\n\nThe exact expressions, Eqs.(\ref{eqs}-\ref{eqx2}), obtained\nin the perturbative regime, reproduce qualitatively most of the\nstatistical distributions numerically observed for\nthe shift and width of the nondispersive wave packet\nof the hydrogen atom in a microwave field, see Fig.~\ref{fig2}.\nFor the shift distribution $P(s),$ the distribution is constant\nnear $0$ and decays like $1\/s^2$ at large $s$. The only difference\nis the absence of the sharp cut-off in the perturbative expression.\nThis can be easily understood: the large energy shifts correspond\nto quasi-degeneracies between the localized state and one specific\nchaotic state, i.e. 
to the immediate vicinity of an avoided crossing.\nThere, the simple perturbative scheme breaks down and the actual shift\nremains finite while the perturbative expression diverges as the inverse\nof the unperturbed spacing between the chaotic and localized states. Hence,\nthe actual distribution has to fall faster than the perturbative one\nat large shifts, as numerically observed.\n\nThe width (ionization rate) distribution behaves similarly.\nBoth the initial $1\/w^{1\/2}$ regime and the following $1\/w^{3\/2}$ regime\nobserved for the \nwave packet\nare well reproduced by the simple statistical model. Again, the difference\nis the absence of the cut-off for very large widths. \nThe reason is identical,\nnamely the breakdown of the perturbative approximation.\n\nNevertheless, we can go beyond the perturbative scheme, using\nthe statistical model described in the previous section, but calculating\nnumerically the shift and width distribution.\nThis has been done by numerical diagonalization of the complex Hamiltonian,\ni.e. of\nrandom matrices\ncorresponding to Eq.~(\ref{mat1}), generated according to the rules\ngiven above. A diagonalization of a matrix of size\n$N=80$ yields a single shift and width for chosen values of\n$\sigma\/\Delta$ and $\gamma\/\Delta$.\nUsing different \nrandom matrix realizations\nwe accumulate up to 50~000 data points for comparison. 
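One realization of this numerical experiment can be sketched as follows (a simplified illustration of the model of Eq.~(\ref{mat1}), not the original code; the shift is the real part of the perturbed eigenenergy and the width is read off as $-2\,{\rm Im}\,E$):

```python
import numpy as np

rng = np.random.default_rng(0)

def shift_and_width(N=80, sigma=0.05, gamma=0.05, Delta=1.0):
    """One realization of the (N+1)x(N+1) model: a localized state at
    energy 0, coupled by sigma*|V> to N chaotic GOE levels (mean spacing
    ~Delta near E=0) which decay to a single channel via -i(gamma/2)|W><W|.
    Returns the (shift, width) of the eigenstate closest to |0>."""
    A = rng.normal(size=(N, N)) * np.pi * Delta / np.sqrt(N)
    H0 = (A + A.T) / np.sqrt(2.0)   # GOE: var_ij = (1+delta_ij) pi^2 Delta^2 / N
    V = rng.normal(size=N)
    W = rng.normal(size=N)
    H = np.zeros((N + 1, N + 1), dtype=complex)
    H[0, 1:] = H[1:, 0] = sigma * V
    H[1:, 1:] = H0 - 0.5j * gamma * np.outer(W, W)
    evals, evecs = np.linalg.eig(H)
    k = np.argmax(np.abs(evecs[0, :]))   # eigenstate with largest overlap on |0>
    return evals[k].real, -2.0 * evals[k].imag
```

Accumulating many such realizations and histogramming the shifts and widths reproduces the distributions discussed in the text.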
We have verified\nthat the distributions obtained do not depend on the matrix size $N$.\n\n\nIn Fig.~\\ref{fig2.1}, we show the numerical results\nfor the shift distribution obtained for \nthe hydrogen atom in a circularly polarized\nmicrowave field with the\nperturbative analytical expression for our random matrix model\n(pure Cauchy distribution),\nEq.~(\\ref{eqs}), and with the\nfull non-perturbative result using our statistical model,\nboth on a linear scale \n-- well suited for small and moderate\nshifts -- and on a double logarithmic scale -- well suited for the\ntail of the distribution at large shifts.\nAs expected, the perturbative analytical expression reproduces\nthe numerically observed distribution, except for the exponentially\nsmall tail at large shift. The full non-perturbative distribution\nis found to be in excellent agreement with the numerical data\nfor the real system -- the hydrogen atom in a circularly polarized\nmicrowave field, which proves that our simple statistical model\nactually catches the physics of the chaos assisted tunneling\nphenomenon. A similar conclusion has been reached \\cite{TU94} for\ndoublet splittings induced by chaos assisted tunneling.\n\nIn Fig.~\\ref{fig2.2}, a similar analysis is done for the\ndistribution of widths, both on linear and\ndouble logarithmic scale. On the linear scale, instead\nof the width distribution,\nEq.~(\\ref{eqx2}), itself which diverges at zero\n(see Appendix), we plotted\nthe distribution for the square root of the width,\nwhich tends to a constant value at zero. As can be seen,\nthe agreement is again excellent over the full range,\nthe perturbative expression being inaccurate in the tail only,\nas expected. 
In addition to the\nperturbative analytical expression, Eq.(\ref{eqx2}),\nwe have also drawn the distribution\nexpected when the states in the chaotic sea have eigenenergies\ndescribed by a Poisson distribution rather than a GOE one, Eq.~(\ref{eqx}).\nBoth distributions are similar far from the origin and differ by\nabout 20\% at $w=0.$ At first glance, it seems that the Poisson\ncurve agrees slightly better than the GOE curve, which is somewhat\nsurprising and not understood\nas chaotic motion surrounding the stable island\nsuggests the choice of the GOE. However, the deviation is at the border\nof statistical significance.\nIn the double-logarithmic plot, we have also added the Porter-Thomas\ndistribution, Eq.~(\ref{PT}), which reproduces correctly the tail\nat large widths.\n\nIt is remarkable that both the shift and the width\ndistributions are so well reproduced by the random matrix\nmodel.\nThis proves that our simple statistical model carries the essential part\nof the physics. The data presented are, as far as we know, the first manifestation\nof the chaos assisted tunneling process in a realistic, experimentally\naccessible system.\n\nEven if the perturbative expression, Eq.~(\ref{eqx}),\nincorrectly describes the\nnon-perturbative large width tail,\nit is\nclear that the initial $w^{-1\/2}$ decrease, similar to\n$P_{PT}$, Eq.~(\ref{PT}), is followed by a regime of $w^{-3\/2}$ behaviour.\nThe length of this $w^{-3\/2}$ behaviour provides interesting\ninformation about the system. For strong coupling, $\sigma \ge \Delta,$\nthe localized state $|0\rangle$ is strongly mixed with the chaotic state.\nThus, its width\ndistribution would be the same as that of other resonance states, i.e.\na Porter-Thomas distribution. 
Hence, the $w^{-3\/2}$ part\nthen shrinks to zero and the power $-1\/2$ law is followed immediately by\nthe exponential tail.\nThe relative importance of the $w^{-3\/2}$ part in the\nwidth distribution indicates, therefore, the presence of the\nweak coupling, perturbative regime.\nCompared to the pure chaotic state where fluctuations\nof the widths are already known to be large, the effect of additional tunneling\nis to shift some part of the width distribution towards small\nwidths while keeping the exponential cut-off, that is\nto increase the fluctuations. In the perturbative limit, the fluctuations\nbecome so large that the average width \n{\em diverges}.\n\nThe success of the statistical model allows us to give a complete\nphysical interpretation of the observed data:\n\begin{itemize}\n\item The smallest shifts and widths, observed for\n\begin{eqnarray}\ns < s_0\\\nw < w_0\n\end{eqnarray}\nwith probabilities behaving, respectively, as $s^0$ \nand $w^{-1\/2}$,\ncorrespond to the localized state lying far from quasi-degeneracies\nwith one of the chaotic states. Then, the localized state is weakly\ncontaminated by the various surrounding chaotic levels. For example, its shift\nis the sum of the effect of level repulsion by the various chaotic states.\nChaotic states with higher (resp. lower) energy push the localized state down\n(resp. up) in energy, which globally results in a small shift with a random \nsign.\nThe width results from the interference between the elementary ionization \namplitudes\ncontributed by \nthe various chaotic states. \nAs there is only one open decay channel, the\namplitudes -- not the probabilities -- have to be added. 
\nAs they are essentially random\nuncorrelated variables, the interference is mainly destructive, producing\nsmall widths with a large probability.\n\n\item The intermediate shifts and widths, observed for\n\begin{eqnarray}\ns_0 < s \ll \sigma\\\nw_0 < w \ll \gamma\n\end{eqnarray}\nwith probabilities behaving respectively as $s^{-2}$ and $w^{-3\/2}$,\ncorrespond to one chaotic state being much closer in energy\nto the localized state than to the other chaotic states, but\nnevertheless sufficiently far to be \ncoupled only perturbatively. Then,\nthe single coupling to this particular chaotic state dominates both\nthe shift and the width of the localized state.\n\n\item The largest shifts and widths, observed for\n\begin{eqnarray}\ns \simeq \sigma \\\nw \simeq \gamma\n\end{eqnarray}\ncorrespond to the quasi-degeneracy between the localized state and one chaotic\nstate, with strong non-perturbative mixing between the two.\nThis is where the exponentially decreasing tails are observed.\n\end{itemize}\n\nIn the latter two cases, as one chaotic state is dominant, approximate\nexpressions for the distributions can be obtained by a simple\ntwo-level model. Although the expressions of the shift and the width are\nthen easy to obtain (diagonalization of a $2\times 2$ complex matrix),\nwe have not been able to perform analytically the averaging\nover the ensemble of random matrices.\n\n\subsection{Extraction of the relevant physical parameters from the statistical data}\n\n\nThe simplest possible measures of the fluctuations of a quantity are typically\nits average value and the variance. 
Inspection of\nthe perturbative distributions for both the shifts, Eq.~(\\ref{eqs}), and\nthe widths, Eq.~(\\ref{eqx}), suggests, however, some caution.\nIndeed, the average value of the width is not defined (the corresponding\nintegral diverges) and the same is true for the variance of the shift\n(the average value is zero due to the symmetry of the distribution).\nThis is because of the existence of long algebraic tails, $1\/s^2$ and\n$1\/w^{3\/2}$, in the distributions. The variance of the shift and the\naverage value of the width are infinite because of the diverging contributions\nof these tails. This is an example of unusual random processes such as Levy\nflights~\\cite{levy} where rare events play the dominant role.\nUltimately,\nit is because, in perturbation theory, the contamination of a state by its\nneighbors is proportional to the ratio of the coupling matrix element\nto the energy difference and consequently decreases slowly: even a far distant\nlevel can have a large effect.\n\nFor the full non-perturbative \ndistributions, the exponential cutoff\nat large values kills the divergence of the various integrals and\naverage values and variances -- as well as higher moments of the\ndistributions -- are well defined and could be calculated.\nTheir precise values, however, depend crucially on the position of the cutoffs.\nHence, distributions identical for small widths (shifts) and only\ndiffering at large widths (shifts) may have completely different\naverage values and variances.\nCalculating such quantities requires a very accurate knowledge\nof the tails of the distributions. 
The average values and the\nvariances are thus fragile and difficult to calculate on a real\nsystem like the hydrogen atom in a microwave field; they do not provide\nus with the most interesting physical information.\n\nRather than the average values, we prefer to define typical values.\nThe typical width $\\tilde{w}$ lies at the middle of the distribution, \nsuch that half of the widths are larger and half of them smaller, i.e.\n\\begin{equation}\n\\int_0^{\\tilde{w}} P(w)\\ dw = \\frac{1}{2}.\n\\end{equation}\nWith such a definition, the typical width is robust, only weakly sensitive\nto the tail of the distribution.\n\nA similar definition holds for the typical shift, slightly modified\nto take into account that $s$ can be either positive or negative:\n\\begin{equation}\n\\int_0^{\\tilde{s}} P(|s|)\\ d|s| = \\frac{1}{2}.\n\\end{equation}\n\nFor the perturbative distributions in the statistical random matrix model,\nthe typical widths and shifts can be calculated from\nEqs.~(\\ref{eqs}),(\\ref{eqx}),(\\ref{eqx2}):\n\\begin{equation}\n\\left\\{\n\\begin{array}{lll}\n\\displaystyle \\tilde{s}&=&\\displaystyle \\frac{\\pi \\sigma^2}{\\Delta}\\\\\n\\displaystyle \\tilde{w}&=& \\displaystyle \\frac{4 \\sigma^2 \\gamma}{\\Delta^2}\\ \\ \\ \\ {\\rm for\\ a\\ Poisson\n\\ spectrum}\\\\\n\\displaystyle \\tilde{w}&=& \\displaystyle \\frac{5.48 \\sigma^2 \\gamma}{\\Delta^2}\\ \\ \\ \\ {\\rm for\\ a\\ GOE\n\\ spectrum}.\n\\end{array}\n\\right.\n\\label{typical}\n\\end{equation}\nFor the full non-perturbative \nrandom model distributions -- as long as the basic hypotheses\nof small coupling, Eqs.~(\\ref{pert1}) and (\\ref{pert2}), hold --\nwe carefully checked that\nthe typical widths and shifts are not significantly \n(within a few tenths of a percent)\ndifferent from the previous analytic expressions.\n\nFor a real physical system like the wave packets \nin the\nhydrogen atom exposed to a microwave\nfield, we can reliably extract from the statistical data the typical\nwidth 
and shift. We have also compared the numerically obtained\ndistributions with non-perturbative distributions of our statistical\nmodel and performed best fits of the former by the latter. Again, we checked\nthat the typical width and shift for the best fit do not differ\nby more than \na few tenths of a percent from the direct measures. This implies\nthat the typical shift and width can be safely used to extract\nfrom the statistical distributions the relevant \nphysical parameters\n$\\sigma$ (coupling between the localized\nstate and the chaotic states) and $\\gamma$ (ionization widths of the\nchaotic states). Slightly different values are obtained\nif the Poisson or GOE expressions are used.\nIn the following, we have used the GOE expression.\n\nIn fact, only $\\sigma^2\/\\Delta$ and\nthe dimensionless parameter $\\gamma\/\\Delta$\n(from the ratio $\\tilde{w}\/\\tilde{s}$) can be easily extracted.\nObtaining the other dimensionless parameter $\\sigma\/\\Delta$ or\n$\\sigma$ and $\\gamma$ themselves requires knowledge of the mean spacing $\\Delta$ \nbetween\nchaotic resonances.\nSurprisingly, this is not straightforward.\nTo understand the problem, consider\nthe diagonalization of the Floquet Hamiltonian for the LPM case. The\nnumber of states present in a single Floquet zone depends on the\nnumber of photon blocks included in the diagonalization. When\nthis number is increased, new states appear in the vicinity of the wave packet state\ncorresponding to either low-lying atomic states (with very different\nenergy but shifted upwards by an integer times the photon frequency) or\nhighly excited states or resonances (shifted downwards).\nThese states should not \ncontribute to the determination\nof $\\Delta$ since they have a vanishing overlap with atomic states\nbuilding the wave packet.\nHence, the mean level spacing $\\Delta$ between \nchaotic states\nis a somewhat ambiguous quantity. 
However -- as will be seen \nat the beginning of the next section --\nall our results are obtained either in the perturbative regime\n$\\sigma ,\\gamma \\ll \\Delta$ or close to it, and the mean spacing\n$\\Delta$ is just a scaling parameter. A rough estimate of $\\Delta$\nis obtained assuming that only states with similar principal quantum number,\nlet us say differing by less than a factor 2, are efficiently coupled.\nExperimental results on hydrogen atoms in a linearly polarized microwave field\n\\cite{koch}\nsuggest that the physics of ionization is entirely dominated\nby states with principal quantum number less than $2n_0$\n(when the experimental cut-off is changed from infinity to $2n_0$,\nno important change is observed). At low microwave field,\nthe number of efficiently coupled states is smaller,\nbut this is not the regime of chaotic diffusion we are interested in.\nChaotic motion requires overlap between classical resonance islands,\ni.e. efficient coupling between states of largely different principal\nquantum numbers.\nThis gives the following approximate mean level spacing, used later in\nthis paper:\n\n\\begin{equation}\n\\Delta\\approx \\frac{1}{n_0^4}.\n\\end{equation}\n\n\n\\subsection{Tunneling and chaotic ionization rates}\n\\label{sectyp}\n\nFig.~\\ref{fig3} shows the typical width and shift for the non-spreading\nwave packet of the hydrogen atom in a circularly polarized microwave field\nat frequency $\\omega=1\/(40)^3,$\nas a function of the scaled electric field $F_0=F(40)^4.$\nEach point in this curve results from the numerical diagonalization\nof several hundred matrices, each of typical size several tens\nof thousands, for neighbouring values of the microwave\nfield strength and frequency.\nThe statistical model described above makes it possible to separate\nthe intrinsic huge fluctuations of the ionization rate\nand extract values of the various couplings.\nThis is very clear in Fig.~\\ref{fig3} where both the typical width and 
the\ntypical shift are relatively smooth functions of the field strength,\nwith short range fluctuations smaller than a factor 2, whereas the\nraw shift and width display fluctuations over \nat least 3 orders of magnitude,\ncompare Fig.~\\ref{fig3} with Fig.~\\ref{fig1}.\n\nIn Fig.~\\ref{fig3}, one can easily check that the typical width\nis always smaller than the typical shift by at least \none order\nof magnitude. As the ratio of the two is $\\gamma\/\\Delta$, see\nEq.~(\\ref{typical}), this implies that inequality (\\ref{pert2})\nis verified.\nAlso, the typical shift is smaller than the mean level spacing\n(represented by the dashed line) which, using \nEq.~(\\ref{typical}),\nshows that inequality (\\ref{pert1}) is also verified.\nAltogether, this proves that our data are effectively obtained\nin the perturbative regime.\n\nThe third observation in Fig.~\\ref{fig3} is that neither the typical\nwidth nor the typical shift is a monotonically increasing function of the\nmicrowave field strength; both display various bumps. These bumps\nare obviously strongly correlated, which indicates that they\nare due to variations of the tunneling rate $\\sigma$ rather than\nvariations of $\\gamma.$\nIndeed, in Fig.~\\ref{fig4},\nwe plot the dimensionless parameters $\\sigma\/\\Delta$ and $\\gamma\/\\Delta$\ndeduced from the typical shift and width.\nIt confirms that the tunneling rate $\\sigma\/\\Delta$ is a slowly increasing\nfunction of the\nfield strength with various structures. The bumps occur\nprecisely at values of the field strength where there is\na resonance between the\neigenfrequencies $\\omega_+$ and $\\omega_-$, see Eq.~(\\ref{modes}),\nof the motion in the vicinity of the stable fixed point supporting\nthe non-spreading wave packet. 
This has been analyzed in reference~\\cite{zbd96}\nwhere it is shown that the bump around $F_0=0.023$ corresponds to the\n1:4 resonance and the bump just below $F_0=0.04$ to the 1:3 resonance.\nIn the vicinity of such a resonance, the classical dynamics is strongly\nperturbed, some secondary resonant tori and islands appear. The bumps\nin Fig.~\\ref{fig4} are just quantum manifestations of an increased\ntransport rate induced by these classical resonances.\nNot surprisingly, $\\gamma\/\\Delta$, which represents the ionization rate\nof states surrounding the resonance island, is practically\nnot affected by these resonances (only small residual\noscillations are visible around $F_0=0.04$). On the other hand, it increases\nvery fast up to scaled field $F_0\\approx 0.04$\nwhere it saturates to a roughly constant value. This has a simple\nsemiclassical explanation. Below $F_0\\approx 0.04$, chaos is not established\naround the principal resonance island, there still exist some regular\ntori further in phase space which strongly slow down the classical\nchaotic diffusion. Above 0.04, only the principal resonance island survives,\nthe chaotic ionization rate is quite large ($\\gamma\/\\Delta$ is of the order\nof 0.1) and only slowly increases with the\nfield strength.\n\nStrictly similar observations can be made for the hydrogen atom exposed\nto a linearly polarized microwave field, which proves that they\nare not specific to one system under study, but rather general\nproperties of chaos assisted tunneling followed by chaotic\ndiffusion. Fig.~\\ref{fig5} displays the typical width and typical shift\nof the non-spreading wave packet for $n_0=40$, as a function\nof the scaled microwave field $F_0=Fn_0^4.$ Again, these data are\nobtained in the perturbative regime, see Eqs.~(\\ref{pert1})-(\\ref{pert2}),\nand display obviously correlated bumps. 
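Concretely, the dimensionless rates plotted in Figs.~\ref{fig4} and \ref{fig6} follow from the measured typical values by inverting the GOE typical-value formulas of Eq.~(\ref{typical}). A minimal Python sketch of this inversion (the function and the sample numbers are ours, for illustration only):

```python
import math

def rates_from_typical(s_typ, w_typ, delta, c=5.48):
    """Invert the GOE typical-value formulas
         s_typ = pi * sigma**2 / delta,
         w_typ = c * sigma**2 * gamma / delta**2,
       returning (sigma/delta, gamma/delta)."""
    sigma_over_delta = math.sqrt(s_typ / (math.pi * delta))
    gamma_over_delta = math.pi * w_typ / (c * s_typ)  # from the ratio w/s
    return sigma_over_delta, gamma_over_delta

# Illustrative numbers only: Delta ~ 1/n0**4 for n0 = 40, with the typical
# shift and width given as small fractions of the mean level spacing.
delta = 1.0 / 40**4
s_over_d, g_over_d = rates_from_typical(0.1 * delta, 0.01 * delta, delta)
```

Note that the ratio $\tilde{w}/\tilde{s}$ alone fixes $\gamma/\Delta$, while extracting $\sigma/\Delta$ requires the value of $\Delta$ itself, as discussed above.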
The dimensionless tunneling\nrate $\\sigma\/\\Delta$ and chaotic ionization rate $\\gamma\/\\Delta,$\nshown in Fig.~\\ref{fig6}, indicate that the bumps are due to\nsecondary resonances\ninside the primary resonance island between the external microwave frequency\nand the internal Kepler motion. Comparison between Fig.~\\ref{fig4}\nand Fig.~\\ref{fig6} shows that both the tunneling rate\n and chaotic ionization rate\nare of the same order of magnitude in linear and circular polarization,\nwith similar changes versus $F_0$, up to possibly a roughly\nconstant multiplicative factor.\nThis is a confirmation of the experimental observation that\nvery similar ionization threshold frequency dependences\nare observed in the two cases provided $F_0$ is appropriately\nrescaled\n\\cite{koch:circ}. As in \\cite{koch:circ} we observe that larger\nvalues of $F_0$ are necessary in LPM to produce a behaviour similar\nto that in CPM.\n\nTo complete the study, we have also examined how the\ntypical width and the typical shift change when the principal\nquantum number $n_0$ -- or equivalently, the microwave\nfrequency $\\omega=1\/n_0^3$ -- is changed.\nThe result is shown in Fig.~\\ref{fig7} for the circular polarization,\nfor a fixed scaled microwave field $F_0=0.0426.$ In this plot,\nthe classical dynamics of the system is completely fixed, the only\nvarying parameter being the effective Planck's constant\n$\\hbar_{\\rm eff}=1\/n_0.$ The striking phenomenon is the fast decrease\nof both the typical width and typical shift with $n_0.$ On the logarithmic\nscale of Fig.~\\ref{fig7}, it appears as a straight line indicating\nan exponential decrease with $n_0.$ Also, the two quantities\ndecrease along parallel lines, which -- according to \nEq.~(\\ref{typical}) --\nindicates that the tunneling rate $\\sigma$ is responsible\nfor this decrease.\nIn Fig.~\\ref{fig8}, we plot the dimensionless tunneling\nrate $\\sigma\/\\Delta$ and chaotic ionization rate $\\gamma\/\\Delta$ as\na 
function of $n_0.$ Note that $\\sigma\/\\Delta$ is plotted using a logarithmic\nscale and $\\gamma\/\\Delta$ on a linear scale!\nThe exponential decrease is of the form:\n\\begin{equation}\n\\sigma\/\\Delta \\simeq \\exp (-n_0S) = \\exp \\left( - \\frac{S}{\\hbar_{\\rm eff}}\\right)\n\\end{equation}\nwith $S\\approx 0.06 \\pm 0.01$\n(extracted from the plot).\n\nSuch a dependence is typical for a tunneling process. $S$ then\nrepresents the tunneling action from the stable fixed point\nwhere the wave packet sits to the chaotic region surrounding the\nresonance island. If complex orbits are used, \n$S$ can be thought of\nas the imaginary part of the action of a complex tunneling\norbit \\cite{whelan,PROF.DR.AMAURY-MOUCHET}. In our realistic system, finding such complex orbits\nis much more difficult than in the few\nmodel systems where this analysis has been done. We have not been able to find\nthe complex path associated with the tunneling process, but\nour purely quantum results may provide a guide in this search, as\nthey show that the imaginary action has to be of the order of 0.06\nfor $F_0=0.0426.$\n\nOn a linear scale, see Fig.~\\ref{fig9}, the dimensionless chaotic ionization\nrate\n$\\gamma\/\\Delta$ is a slowly increasing function of $n_0.$ A simple\nclassical analysis using the so-called Kepler map \\cite{i3e}, which is known\nto produce relatively good predictions for the ionization\nthreshold of Rydberg states by a microwave field \\cite{zgd96,bd93}\npredicts a linear dependence versus $n_0,$ while the numerical result\nsuggests rather a quadratic dependence. 
\nThis discrepancy\ncould be due either\nto the approximations done to obtain the Kepler map\nor to the fact that, for high $n_0,$ the statistical model\nused to extract $\\gamma\/\\Delta,$ see \nEqs.~(\\ref{eqs}),(\\ref{eqx}),(\\ref{eqx2}),(\\ref{typical}), is no\nlonger valid because several ionization channels are open (see next section).\n\nLet us note that to get 800 data points\nfor $n_0=100$ (enough to determine the typical shift\nand width) requires about 40 hours of Cray J98 single\nprocessor CPU time. The presented results are, in this sense,\nquite costly (the size of diagonalized matrices exceeded 200 000\nin this case).\n\n\n\\subsection{Limitations of the model}\n\n\nAlthough the simple statistical model well describes the fluctuations\nof the width and shift of the non-spreading wave packet\nfor a range of scaled microwave field ($F_0\\in [0.04,0.08]$ for\nLPM and $F_0\\in [0.02,0.06]$ for CPM, both for $n_0$ around 40) --\nthe region\nwhere experiments are usually done -- additional difficulties\nappear for lower and higher field values.\n\nFor lower $F_0$ values, a statistically\nsignificant part of the data show very small widths, at the limits\n of the numerical precision. \nAt the same time, \nplots of the wave packets similar to those shown \n in \\cite{dzb95} suggest that the states are more extended,\nextending far from the stable island. The \nsituation then may not correspond\nto a clear-cut case of the chaos assisted tunneling process. \nOur LPM data \nindicate \nthat in such cases the singularity \nfor small widths is much\nstronger than $\\Gamma^{-1\/2}$.\nSimilarly, in the CPM case,\nwe did not present the random matrix fit for $F_0=0.038,$ as a\nsignificant part of the data is \naffected there by a strong classical\n$1:3$ resonance \\cite{zbd96}. Thus we do not \nface a clear case\nof a single localized state but\nrather two strongly coupled localized states\ndecaying via a chaos assisted tunneling process. 
Since such a case\nis quite rare, we prefer to exclude it from the analysis and not to \nconstruct the extension of the random matrix theory. It could not\nbe tested convincingly on a single case, anyway.\n\nMost importantly, we could not extend the random model fits to\nhigher $F_0$ values for a very simple\nreason. There, we observed indications of the \nopening of other ionization channels\n(see Eq.~(\\ref{Gam})). A typical signature of such\na behaviour is the disappearance of the\nsingularity $\\Gamma^{-1\/2}$ in the distribution of the widths.\nTo understand this, note\nthat the typical Porter-Thomas distribution\nbehaves, for small widths, as \n$\\Gamma^{M\/2-1}$ with $M$ being\nthe number of open channels \\cite{brody,dupret}. \nIn the chaos assisted tunneling\nprocess leading to ionization, while the full distribution\ndiffers from the Porter-Thomas distribution, as exemplified\nearlier for the single channel case, the small width functional behaviour is\nsimilar in both cases.\n\nThe study of the available data reveals\nthat the opening of the second, and possibly third ionization\nchannel appears gradually with the increasing microwave amplitude,\n$F_0$. Thus the different possible ionization channels are not\nof equal importance, i.e., they are not equivalent (in the language\nof the random matrix theory \\cite{HILSS}). To build the random\nmatrix model of the process one then needs to introduce\nadditional free parameters describing the strength of the coupling\nto the additional ionization channels, i.e. various values\nfor the $\\gamma_k$ in Eq.~(\\ref{Gam}). Although such a procedure\nis quite straightforward, it is clear that fitting these parameters\nto two data sets (shifts and widths) provides little information\nand must be ambiguous. 
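The change of the small-width behaviour with the number of open channels, from the $\Gamma^{-1/2}$ singularity at $M=1$ to the suppression ("hole") at $M\geq 2$, can be illustrated with a toy Porter-Thomas simulation (ours, not the statistical model used for the fits): widths built from $M$ independent Gaussian decay amplitudes are $\chi^2_M$ variables and behave as $\Gamma^{M/2-1}$ near zero.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def toy_widths(M, n=200_000):
    """Porter-Thomas-like widths: sum of M squared Gaussian decay
    amplitudes, normalized to unit mean (chi-squared with M dof)."""
    a = rng.normal(size=(n, M))
    return (a**2).sum(axis=1) / M

for M in (1, 2, 3):
    frac = np.mean(toy_widths(M) < 1e-3)
    print(f"M={M}: fraction of widths below 1e-3 = {frac:.1e}")
# For M=1 the Gamma**(-1/2) singularity keeps this fraction sizeable;
# for M >= 2 it is strongly suppressed -- the hole at small widths.
```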
Typical distributions of the square root of\nthe width obtained for large microwave amplitudes are presented in\nFig.~\\ref{fig9} for LPM and CPM wave packets.\nNote the presence of the hole for small widths.\n\n\nOn the other hand, since in the perturbative limit, the level shifts\ndepend only on the real coupling between the localized state and\nthe remaining chaotic subspace, \nEqs.~(\\ref{s0ab}), (\\ref{typical}), \none can expect that the shifts will\nbe still well described by the Cauchy law, Eq.~(\\ref{eqs}), independently\nof the opening of additional ionization channels.\nIndeed, it is the case as exemplified in\nFig.~\\ref{fig9} for LPM and CPM wave packets.\n\nSimilarly, the opening of additional ionization channels is expected\nin the semiclassical limit. The limit is realized by decreasing\nthe microwave frequency, then the wave packet is composed of\ncircular states of higher $n_0$. While the data corresponding to\nthe single channel decay have been obtained for $n_0=40$ at\n$F_0=0.0426,$\nwe observed the opening of the second channel for the same $F_0$\nstarting at $n_0=60$. Panel (e) in Fig.~\\ref{fig9}\npresents the histogram of the square root of the width for $n_0=90$\nand shows the existence of at least two open channels.\nAgain the corresponding shift distribution is not affected and\nis well described by the Cauchy distribution [panel (b)].\n\n\n\\section{Physical Interpretation and Conclusions}\n\nWe have presented a statistical theory of ionization\ncatalyzed by chaos assisted tunneling. The corresponding\nphysical picture is built of a single state, localized on a stable\nisland and coupled (quantum mechanically, due to a finite value of\n$\\hbar$) to the surrounding chaotic sea. 
Once the tunneling\ninto the sea takes place, the diffusive \nchaotic excitation finally leads to\nionization.\n\nA random matrix theory model allows us to determine analytically\nthe distribution of the energy shifts \n(induced by the interaction with the chaotic sea)\nof the localized state, as well as\nthe distribution of its widths (ionization rates), in the\nperturbative limit. Non-perturbative corrections may also be\nunderstood and estimated. We concentrated on the simplest case of\nsingle channel ionization -- the model is then characterized \nby only a few parameters. In that case, both the distributions of shifts\nand widths have long algebraic tails\nexplaining the large scale fluctuations of both quantities.\nThese fluctuations are a characteristic feature of chaos assisted\ntunneling processes. Fluctuations (and universal properties of\nfluctuations) are well established properties of chaotic systems.\nIn the ionization brought about by chaos assisted tunneling,\nthe combination of a weak tunneling process with chaotic\ncoupling to the continuum dramatically increases the range\nof the fluctuations, by extending the distribution considerably \ntowards extremely small widths, i.e. metastable states.\n\nThe developed theory has been confronted with numerical data\nobtained for the shifts and widths of non-spreading wave packets --\nstates localized on a stable 1:1 resonance island between\nthe Kepler frequency of a Rydberg electron and the\nfrequency of an externally applied microwave field of either\nlinear or circular polarization\n-- a system accessible to present experiments. The numerical\ndata have been obtained for simplified models of the atom --\na one-dimensional atom in LPM and a two-dimensional atom in\nCPM. This allowed us to study the frequency range well in the\nexperimental region -- the important atomic states building\nthe wave packet correspond to the principal quantum numbers\nused in the experiments. 
The principal reason for the simplification\nis that fully three-dimensional numerical calculations -- although\npossible for a single set of parameters\nas exemplified by us before \\cite{zdb95} -- are\nstill prohibitive for present day computers.\nMore importantly, however, the statistical\nproperties of non-spreading wave packet states are not affected\nby the \nreduced dimensionality of the atom, as tested by us for the\nthree-dimensional CPM case.\n\nIt turns out that the developed statistical theory describes the\nnumerical data very well in the range of single channel decay, \nwhich makes us confident that it contains\nthe essential ingredients of the physical process. The quality of the fits\nallows us to extract order from chaos, that is, to extract from\nstrongly fluctuating quantities, see Fig.~\\ref{fig1}, the physical\nparameters describing the coupling of the non-spreading wave packet\nto the chaotic states\n(tunneling rate) and the \nionization rate of chaotic states. \nThese parameters exhibit reasonably smooth behaviour.\nFor example, we have shown\nthat secondary \nclassical resonances inside the regular island increase the tunneling rate.\nAs an unambiguous signature of a\ntunneling process\nwe could also demonstrate the exponential decrease of the tunneling rate\nwith the principal quantum number.\n\n\nLet us emphasize the importance of the fluctuations of the ionization\nrate (width) of non-spreading wave packets in the hydrogen atom.\nIn a real experiment, it is likely that the atoms will experience\nvarious values of the microwave field strength -- either\nbecause of spatial inhomogeneities or because they are prepared\nby a slow increase of the microwave strength as explained in\n\\cite{zd:jpb97} -- and more or less\naverage the short range fluctuations of the ionization rate.\nIn total, the residual ionization of the atom will be given\nby the average ionization rate, a quantity dominated by the\nfluctuations towards large ionization 
rates and which can be\n significantly larger than the typical ionization rate.\nFor example, for the data in Fig.~\\ref{fig1} discussed\nin this paper, the average ionization rate is about 6.4 times\nlarger than the typical ionization rate. In the limit\nof the perturbative regime, the ratio of the two even diverges!\nThis is an example of physical processes such as Levy flights where the\nphysics of a fluctuating system is dominated by rare events.\n\nFrom a practical point of view, the present study also tells us that\nthe lifetimes of the non-spreading wave packets either in CPM or in LPM\nare rather long. Indeed, for $n_0=60$\nand $F_0=0.0426$ -- these values are representative\nof what could be used in a real experiment, microwave frequency\naround 30 GHz and microwave field \namplitude of the order of 10 V\/cm --\nthe typical lifetime of the non-spreading wave packet in CPM,\ndue to ionization catalyzed\nby chaos assisted tunneling, is of the order of several microseconds, that is\nabout 100 000 Kepler periods. However, fluctuations\nby one or two orders of magnitude are expected around this typical value.\nEven the longest lifetimes should be shorter\nthan the natural lifetime, due to spontaneous emission, of the\norder of a fraction of a second. At higher $n_0\\simeq 100,$\nthe typical ionization lifetime is of the order of several milliseconds,\ni.e. 10 million Kepler periods, but still shorter than\nthe lifetime induced by spontaneous emission \\cite{ibb97}. \nHence, for practical\nexperiments in CPM, spontaneous emission should not be a problem. 
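As a cross-check of the orders of magnitude quoted above, the laboratory values follow from the atomic-unit relations $\omega=1/n_0^3$ and $F=F_0/n_0^4$; the conversion constants below are the standard atomic units of time and electric field.

```python
import math

AU_TIME = 2.418884e-17   # atomic unit of time, in seconds
AU_FIELD = 5.142206e11   # atomic unit of electric field, in V/m

n0, F0 = 60, 0.0426
omega_au = 1.0 / n0**3                               # angular frequency, a.u.
freq_GHz = omega_au / AU_TIME / (2 * math.pi) / 1e9
field_V_cm = F0 / n0**4 * AU_FIELD / 100.0
kepler_s = 2 * math.pi * n0**3 * AU_TIME             # one Kepler period

print(f"microwave frequency ~ {freq_GHz:.0f} GHz")      # ~ 30 GHz
print(f"field amplitude     ~ {field_V_cm:.0f} V/cm")   # order 10 V/cm
print(f"1e5 Kepler periods  ~ {1e5 * kepler_s:.1e} s")  # a few microseconds
```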
In LPM,\nspontaneous emission is a slightly stronger effect, but \nlargely dominated by chaos assisted tunneling ionization \nfor $n_0\\leq 100$\n\\cite{ab97}.\n\nFinally, the physical situation and the model described\nhere are not restricted to atomic non-spreading wave packets.\nIt should describe physical systems where a given state is weakly\ncoupled to a dense family of completely different other states\nwhich can decay on a rather long time scale. Then, the effective\ndecay rate of the initial state, induced by the coupling with\nthe family of decaying states, should present\nhuge fluctuations. An example is given in nuclear physics by the so-called\nsuper-deformed nuclei \\cite{superdeformed} where the ground state of a super-deformed\nnucleus can only decay by coupling to highly excited\n(hence chaotic) states of the non-deformed nucleus.\nOur model then predicts the distribution of lifetimes\nof super-deformed nuclei.\n\n\\acknowledgments\nCPU time on a Cray C98 computer has been\nprovided by IDRIS and RZG.\nLaboratoire Kastler Brossel de\nl'Universit\\'e Pierre\net Marie Curie et de l'Ecole Normale Sup\\'erieure is\nunit\\'e associ\\'ee 18 du CNRS. J.Z. acknowledges support of KBN\nunder project No.~2P03B~03810.\nThe additional support under the bilateral collaboration scheme\n(J.Z. and D.D.) 
of the French Embassy in Poland, no.76209 \nand the Programme International de Coop\\'eration Scientifique\n(CNRS) no.408 is appreciated.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbsjf b/data_all_eng_slimpj/shuffled/split2/finalzzbsjf new file mode 100644 index 0000000000000000000000000000000000000000..2a8209c863ea57f51cede8d65b1110f396bfdd21 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbsjf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{SEC:introduction}\n\n\nHelicity scattering amplitudes in Quantum Field Theory (QFT) encode the full dependence\non the spin degrees of freedom of the particles involved in the scattering, and are the building blocks \nfor computing various kinds of physical observables through which we try to understand \nthe interactions among particles observed in nature.\nThe incorporation of spin degrees of freedom, or polarization effects, in terms of \nspin- respectively polarization-dependent physical observables, leads to a richer phenomenology.\nSuch observables offer valuable means to discriminate different dynamical models, \nin particular for discovering potential Beyond-Standard-Model effects.\nFor a review of the role of particle polarizations in testing the Standard Model and searching for new physics, \nwe refer to refs.~\\cite{Lampe:1998eu,Leader:2001gr,Accomando:1997wt,MoortgatPick:2005cw} \nand references therein.\n\n\nUnlike physical observables, individual scattering amplitudes in QFT generally possess \ninfrared\\footnote{We use the term ``infrared'' (IR) to denote both soft and collinear divergences.} \n(IR) and ultraviolet (UV) divergences, and thus a regularization scheme (RS) for handling these\nintermediate divergences needs to be introduced. 
\nDimensional regularization~\\cite{tHooft:1972tcz,Bollini:1972ui} is by far the most convenient one \nto use in gauge theories as it respects gauge and Lorentz invariance\\footnote{We ignore the treatment of\n$\\gamma_5$ in dimensional regularization for the moment.}\nand allows one to handle both UV and IR divergences in the same manner.\nThe key ingredient of dimensional regularization is the analytic continuation of \nloop momenta to $D=4-2\\epsilon$ space-time dimensions with indefinite $\\epsilon$. \nHaving done this, one is still left with some freedom regarding the dimensionality of the momenta\nof the external particles, of algebraic objects like the space-time metric tensor and Dirac matrices, \nas well as the number of polarizations of both external and internal particles. \nThis gives rise to different dimensional regularization variants (for a review see e.g. \nref.~\\cite{Gnendiger:2017pys} and references therein), which in general leads to different \nexpressions for singular amplitudes. \nApparently the RS dependence is intimately connected to the singularity structures of amplitudes, \nwhich fortunately obey a nice factorization form at the amplitude level~\\cite{Sen:1982bt,Collins:1989bt,Catani:1998bh,Sterman:2002qn,Dixon:2008gr,Gardi:2009qi,Gardi:2009zv,Becher:2009cu,Becher:2009kw,Becher:2009qa,Feige:2014wja}.\nThe result for a physical quantity, such as a physical cross section which is free of any such divergence, \nmust not depend on the RS that has been used. \nHowever, in practice, such a result is obtained as a sum of several partial contributions, \nwhich usually are individually divergent and computed separately before being combined. 
\nTherefore, these intermediate results can depend on the RS, and have to be computed consistently \nto ensure the cancellation of the spurious RS-dependence.\n\n\nThe conventional dimensional regularization (CDR)\\footnote{By the acronym ``CDR'' \nwe refer in this article to the usual CDR~\\cite{Collins:1984xc} where, \nin addition, $\\gamma_5$ is treated by Larin's prescription~\\cite{Larin:1991tj,Larin:1993tq}.} \nscheme~\\cite{Collins:1984xc} is a very popular RS, where all vector bosons are treated as D-dimensional objects. \nIt is conceptually the simplest one and does guarantee a consistent treatment.\nIt is typically employed in calculating (unpolarized) amplitude interferences \nwhere the sum over the polarizations of an external particle is conveniently made\nby using the respective unpolarized Landau density matrix. \nFor computing helicity amplitudes at the loop level, the two commonly used RS \nare the \\textquotesingle t Hooft-Veltman (HV) scheme~\\cite{tHooft:1972tcz} \nand the Four-Dimensional-Helicity (FDH) scheme~\\cite{Bern:1991aq,Bern:2002zk}. \nIn the FDH, the usage of spinor-helicity representations~\\cite{DeCausmaecker:1981wzb,DeCausmaecker:1981jtq,Gunion:1985vca,Kleiss:1985yh,Xu:1986xb,Kleiss:1986qc,Dittmaier:1998nn,Schwinn:2005pi,Arkani-Hamed:2017jhn}\nand unitarity-cut based methods~\\cite{Bern:1994zx,Bern:1994cg,Bern:1997sc,Britto:2004nc,Bern:2007dw} lead to compact expressions \nfor helicity amplitudes, which are computationally very advantageous, \nwhile the proper renormalization procedure for non-supersymmetric theories beyond one loop order \nrequires some expertise~\\cite{Kilgore:2011ta,Kilgore:2012tb,Gnendiger:2016cpg,Gnendiger:2017rfh}.\nAnother widely used dimensional regularization variant, the Dimensional-Reduction (DRED) scheme~\\cite{Siegel:1979wq}, \nwas initially devised for application to supersymmetric theories and was later shown to be applicable also \nto non-supersymmetric theories~\\cite{Capper:1979ns,Jack:1993ws}. 
\nThe DRED and FDH have much in common, while there are also subtle differences \nbetween the two~\\cite{Bern:2002zk,Kilgore:2012tb,Broggio:2015dga,Gnendiger:2017pys}.\n\n\nFor computing D-dimensional helicity amplitudes, especially for amplitudes at the loop level, \none typically uses the projection method~\\cite{Karplus:1950zza,Kniehl:1990iva,Binoth:2002xg} \nwhich is based on Lorentz covariant tensor decomposition of scattering amplitudes (with external state vectors being stripped off).\nThe entire dependence of loop amplitudes on loop integrals is encoded in the Lorentz invariant \ndecomposition coefficients which multiply the relevant Lorentz tensor structures. \nLorentz tensor decomposition is employed, for instance, at one loop in the \nPassarino-Veltman reduction procedure~\\cite{Passarino:1978jh}, and also in \nthe systematic constructions of dimensionally regularized QCD helicity amplitudes~\\cite{Abreu:2018jgq,Boels:2018nrr}. \nOver the last decades, many computations of high-order QCD corrections to scattering amplitudes have been performed using the projection method, for example,~\\cite{Glover:2003cm,Glover:2004si,Binoth:2006hk,Gehrmann:2011aa,Gehrmann:2013vga,Gehrmann:2015ora,vonManteuffel:2015msa,Bernreuther:2004ih,Bernreuther:2004th,Bernreuther:2005gw,Bernreuther:2005rw,Aglietti:2004nj,Moch:2005id,Gehrmann:2005pd,Bonciani:2008wf,Bonciani:2008az,Asatrian:2008uk,Beneke:2008ei,Bell:2008ws,Assadsolimani:2014oga,Ablinger:2017hst,Liu:2017axv,Henn:2016tyf,Lee:2018nxa,Lee:2018rgs,Ablinger:2018yae,Ahmed:2015qpa,Borowka:2016ehy,Heinrich:2017bvg,Chen:2017jvi,Jones:2018hbb,Anastasiou:2018fjr,Maltoni:2018zvp,vonManteuffel:2019wbj}.\n\n\nDespite being very generic, versatile, and widely used in many high-order applications, \nthere are a few aspects of the Lorentz tensor decomposition approach that make the traditional projection method \ndifficult to carry out in certain cases, as will be discussed in the next section.\nFor example, besides facing 
complexities in deriving D-dimensional projectors for tensor \ndecomposition coefficients in some multiple-parton, multiple-scale scattering processes, \nevanescent Lorentz structures\\footnote{The evanescent Lorentz structures appearing in a Lorentz tensor decomposition should not be confused with \\textit{operator mixings} in the renormalization of composite operators in effective field theories \\cite{Collins:1984xc,Buras:1989xd}, nor with evanescent terms in the DRED or FDH regularized Lagrangian~\\cite{vanDamme:1984ig,Jack:1993ws,Jack:1994bn,Stockinger:2005gx,Harlander:2006rj,Bern:2002zk,Kilgore:2011ta,Kilgore:2012tb,Gnendiger:2016cpg,Gnendiger:2017rfh}.} can appear in the D-dimensional basis for the loop amplitudes in question. \nTheir presence can lead to intermediate spurious poles in the resulting D-dimensional projectors~\\cite{Binoth:2002xg,Glover:2004si,Gehrmann:2015ora}. \nFurthermore, when there are several external fermions involved in the scattering~\\cite{Glover:2004si,Abreu:2018jgq}, \nthe complete and linearly independent set of basis \nstructures in D dimensions will generally grow with the perturbative order at which the \nvirtual amplitude is computed (as the Dirac algebra is formally infinite-dimensional in non-integer D dimensions). \n\n\nGiven the impressively long list of high-order QCD calculations of great phenomenological importance \nperformed in CDR and, moreover, having in mind the aforementioned critical features of D-dimensional Lorentz tensor decomposition, it is well justified to think of possible add-ons that facilitate the computation of polarized amplitudes in a way fully compatible with CDR. \nIn this article we propose an alternative regularization prescription of external states \n(for both bosons and fermions) in order to avoid Lorentz tensor decomposition in the conventional projection method \nfor extracting helicity amplitudes. 
The prescription outlined below is devised to be fully compatible with CDR \nso that certain results known in CDR can be directly recycled.\n\n\nAs will become clear in the following sections, the idea is based on the following simple observation. \nIn 4 dimensions, there are only four linearly independent Lorentz 4-vectors, \nand hence any Lorentz 4-vector can be expressed linearly in terms of just three linearly independent \nLorentz 4-vectors together with the fourth independent direction generated from them by the Levi-Civita tensor.\nTherefore all polarization vectors can be built up by just using three linearly \nindependent external momenta in a Lorentz covariant way, provided that there are \nenough linearly independent momenta involved in the process. \nThis basic mathematical fact is of course well known, and unsurprisingly it was already \nexploited about forty years ago in calculating (tree-level) multiple photon bremsstrahlung processes \nin massless QED~\\cite{DeCausmaecker:1981wzb,DeCausmaecker:1981jtq}.\nIt was initially used for simplifying the massless QED vertex by rewriting the \nslashed photon polarization vector in terms of the slashed momenta of external charged fermions\n(from which the photon was radiated), a trick that served as a prelude to the introduction of the \n4-dimensional massless spinor-helicity formalism~\\cite{Gunion:1985vca,Kleiss:1985yh,Xu:1986xb,Kleiss:1986qc}.\nIn this article, instead of seeking simplifications of the gauge interaction vertices of fermions in \n4-dimensional massless theories, this mathematical fact is employed for finding a CDR-compatible \nway to directly project out polarized loop amplitudes, circumventing Lorentz tensor decomposition. 
\nFurthermore, despite being different from CDR, we would like to argue that thanks to the \namplitude-level factorization of UV and IR singularities, such a prescription can be used\nin a hybrid way together with results known in CDR to obtain RS-independent finite remainders \nof loop amplitudes, without the need to recalculate the integrated subtraction coefficients \ninvolved in an IR subtraction framework. \nIn other words, we will show that such a hybrid CDR-compatible prescription is unitary\nin the sense defined in refs.~\\cite{vanDamme:1984ig,Catani:1996pk}.~\\\\\n\n\nThe article is organized as follows. \nIn the next section, the conventional projection method for computing polarized\namplitudes is reviewed with comments on a few aspects which motivated the work\npresented in this article.\nIn section~\\ref{SEC:Prescription} the proposed prescription to obtain polarized \ndimensionally regularized scattering amplitudes is outlined in detail.\nSection~\\ref{SEC:unitarity} is devoted to the discussion of the unitarity of the \nhybrid regularization prescription of section~\\ref{SEC:Prescription}.\nIn particular we show that pole-subtracted RS-independent finite remainders\nare always obtained and furthermore demonstrate this feature in the context of an IR subtraction method.\nIn section~\\ref{SEC:examples}, we provide two simple examples of calculating finite remainders \nof one-loop virtual amplitudes in order to illustrate the usage of the prescription and to comment \non a few practical points worthy of attention. 
\nWe conclude in section~\\ref{SEC:conclusion}.\n\n\n\\section{A Recap of the Projection Method}\n\\label{SEC:projectionmethod}\n\nIn this section, we review the projection method for computing polarized amplitudes, \nand discuss a few aspects that motivated the work in this article.\n\n\nThe projection method~\\cite{Karplus:1950zza,Kniehl:1990iva,Binoth:2002xg}, \nbased on Lorentz covariant tensor decomposition, can be used to obtain helicity amplitudes \nfor a generic scattering process at any loop order. \nThe entire dependence of scattering amplitudes on loop integrals \nis encoded in their Lorentz-invariant decomposition coefficients that multiply \nthe corresponding Lorentz tensor structures and are independent of the external \nparticles' polarization vectors.\nThese Lorentz-invariant decomposition coefficients are sometimes called \\textit{form factors} \nof the amplitudes, a relativistic generalization of the concept of charge distributions. \nIn order to extract these form factors containing dimensionally regularized loop integrals, \nprojectors defined in D dimensions should be constructed and subsequently applied directly \nto the Feynman-diagrammatic expression of the amplitude, which can proceed diagram by diagram.\n\n\n\\subsection{Gram matrix and projectors}\n\\label{SEC:projectionmethod:recap}\n\nScattering amplitudes in QFT with Poincar\\'e symmetry are multi-linear in the \nstate vectors of the external particles, i.e., proportional to the tensor product \nof all external polarization vectors, to all loop orders in perturbative calculations, \nas manifestly shown by the Feynman diagram representations. \nThe color structure of QCD amplitudes can be conveniently described using\nthe color-decomposition~\\cite{Berends:1987cv,Mangano:1987xk,Mangano:1987kp,Mangano:1988kk,Bern:1990ux} \nor the color-space formalism of ref.~\\cite{Catani:1996vz}. 
\nQCD amplitudes are thus viewed as abstract vectors in the color space of external colored particles.\nSince projecting QCD amplitudes onto the factorized color space and spin (Lorentz) structures \ncan be done independently of each other, we suppress for ease of notation possible color \nindices of scattering amplitudes in the following discussions.\n\n \nAs nicely summarized and exploited in~\\cite{Boels:2017gyc,Boels:2018nrr}, \nevery scattering amplitude is a vector in a linear space spanned by a finite set of Lorentz \ncovariant structures, in dimensional regularization at any given perturbative order. \nThese structures are constrained by physical requirements such as on-shell kinematics \nand symmetries of the dynamics. Scattering amplitudes can thus be written as a linear combination of \na set of chosen Lorentz basis structures, where the decomposition coefficients are functions \nof Lorentz invariants of external kinematics. All non-rational dependence\nof the decomposition coefficients on external kinematics appears via loop integrals. \nThis implies the following linear ansatz for a scattering amplitude \n$\\hat{\\mathcal{M}}$ at a fixed perturbative order, \n\\begin{eqnarray} \\label{EQ:ampFFsprimitive}\n\\hat{\\mathcal{M}} = \\sum_{n=1}^{N_P} c_n ~\\hat{T}_n \\, , \n\\end{eqnarray}\nwhere each form factor $c_n$ is a function of Lorentz invariants of external momenta, \nand each Lorentz structure $\\hat{T}_n$ is multi-linear in the external polarization \nstate vectors. In general, $\\hat{T}_n$ contains contractions of external gauge bosons' \npolarization state vectors either with the space-time metric tensor connecting two different \npolarizations or with external momenta, and contains also products of Dirac matrices sandwiched \nbetween external on-shell spinors. \nThe Levi-Civita tensor can also occur if the scattering process involves \nparity-violating vertices. 
The complete and linearly independent set of Lorentz structures \nfor $\\hat{\\mathcal{M}}$ at any given perturbative order depends on its \nsymmetry properties as well as the Lorentz and Dirac algebra in use.\n\nNote that, as discussed in detail for the four-quark scattering amplitude \n$q \\bar{q} \\rightarrow Q\\bar{Q}$ in~\\cite{Glover:2004si,Abreu:2018jgq}, \nthe complete and linearly independent set of D-dimensional basis structures must in general \nbe enlarged according to the perturbative order at which $q \\bar{q} \\rightarrow Q\\bar{Q}$ \nis computed, because the Dirac algebra is infinite-dimensional for non-integer dimensions. \nAt each perturbative order only a finite number of linearly independent Lorentz structures \ncan appear in an amplitude, as is evident from inspecting the corresponding Feynman diagrams, \nof which there are only finitely many.~\\\\\n\n\nTo be specific, we consider in the following the Lorentz tensor decomposition of scattering amplitudes \nin CDR at fixed order in perturbation theory.\nIn the following discussion of the projection method, we also investigate how to uncover linear relations \namong a set of (preliminarily chosen) Lorentz tensor structures arising from on-shell constraints, \nwithout making explicit reference to the origin of these linear dependencies.\n\n\nLet us assume that by construction the set of the $N_P$ Lorentz structures \n$\\hat{T}_n$ in eq.~(\\ref{EQ:ampFFsprimitive}), denoted by $\\mathbf{T}_{P} \\equiv \\{\n\\hat{T}_1, \\cdots, \\hat{T}_{N_P}\\}$, is linearly complete for the $\\hat{\\mathcal{M}}$ in question,\nbut the $\\hat{T}_n$ may not be linearly independent of each other. \nFor an analogy we recall the representation of QCD amplitudes in terms of a set of \ncolor structures in color space without demanding linear independence of these color structures.\nLet us thus call eq.~(\\ref{EQ:ampFFsprimitive}) a \\textit{primitive} Lorentz covariant decomposition \nof $\\hat{\\mathcal{M}}$. 
\nPossible linear relations among the $N_P$ Lorentz structures $\\hat{T}_n$ due to Lorentz and\/or Dirac algebra \nand also on-shell constraints, such as equations of motion as well as transversality satisfied \nby external state vectors, can be uncovered by computing their $N_P \\times N_P$ Gram matrix \n$\\hat{\\mathrm{\\mathbf{G}}}$, whose matrix elements are defined as \n\\begin{eqnarray} \\label{EQ:grammatrix}\n\\hat{\\mathrm{\\mathbf{G}}}_{ij} = \\langle \\hat{T}^{\\dagger}_i , \\hat{T}_j \\rangle.\n\\end{eqnarray} \nThe symbol $\\langle \\hat{T}^{\\dagger}_i, \\hat{T}_j \\rangle$ denotes the Lorentz invariant inner product \nbetween these two linear Lorentz structures. It is typically defined as the trace of the matrix product \nof $\\hat{T}_i$'s Hermitian conjugate, i.e. $\\hat{T}^{\\dagger}_i$, and $\\hat{T}_j$ with tensor products \nof external state vectors (spinors) being substituted by the corresponding unpolarized Landau density matrices. \nIn other words, this Lorentz invariant quantity can be viewed as the unpolarized interference\nbetween two linear Lorentz structures $\\hat{T}_i$ and $\\hat{T}_j$ over all helicity states of \nexternal particles in accordance with certain polarization sum rules (encoded in the unpolarized Landau \ndensity matrices). \n\n\nThis $N_P \\times N_P$ Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}$ in eq.~(\\ref{EQ:grammatrix}) can then be \nused to determine the linearly independent subset of $\\mathbf{T}_{P}$ spanning the \nvector space where the considered amplitude $\\hat{\\mathcal{M}}$ lives. \nIf the determinant of $\\hat{\\mathrm{\\mathbf{G}}}$ is not identically zero, then the set $\\mathbf{T}_{P}$\nis both complete and linearly independent, and thus qualifies as a basis of the vector space \nwhere $\\hat{\\mathcal{M}}$ lives. 
\nOtherwise, $\\hat{\\mathrm{\\mathbf{G}}}$ is not a full-rank matrix, and its matrix rank \n$N_R \\equiv \\mathrm{R}[\\hat{\\mathrm{\\mathbf{G}}}]$ tells us the number of \nlinearly independent members of $\\mathbf{T}_{P}$. \nSince $\\mathbf{T}_{P}$ is assumed to be linearly complete w.r.t. $\\hat{\\mathcal{M}}$ by \nconstruction, $N_R$ is thus the number of basis elements of a linear basis of the vector space that \ncontains $\\hat{\\mathcal{M}}$.\n\n\nThe $N_P-N_R$ linear relations among the members of $\\mathbf{T}_{P}$ can be extracted from \nthe \\textit{null-space} of this Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}$. \nTechnically, the null-space of a matrix (not necessarily a square matrix) is \nthe solution space of the corresponding homogeneous system of linear algebraic equations\ndefined by taking this matrix as the system's coefficient matrix. \nThe null-space of $\\hat{\\mathrm{\\mathbf{G}}}$ can be conveniently represented as \na list of linearly independent $N_P$-dimensional basis vectors of the \nsolution space of the homogeneous linear algebraic system defined by \n$\\hat{\\mathrm{\\mathbf{G}}}$. The number of members of this list is equal to the dimension of \n$\\hat{\\mathrm{\\mathbf{G}}}$ minus its matrix rank, i.e., $N_P-N_R$. \nFor the information we would like to extract\\footnote{To just\nidentify the linearly dependent columns and\/or rows of the multivariate Gram matrix, numerical samples of this matrix \nat a few test points are usually enough.}, this null-space provides \nthe complete set of linear combination coefficients (being rational in the external kinematics) \nof the column vectors of $\\hat{\\mathrm{\\mathbf{G}}}$ that lead to vanishing $N_P$-dimensional vectors. \nAfter having removed those linearly dependent columns (and their corresponding transposed rows),\n we end up with a reduced full-rank Gram matrix \namong the thus-selected linearly independent set of Lorentz structures, denoted by $\\mathbf{T}_{R}$. 
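The rank and null-space steps just described can be sketched with a toy example: three stand-in "structures" $t_1$, $t_2$ and $t_3 = t_1 + t_2$ (Euclidean vectors invented purely for illustration, playing the role of the $\hat{T}_n$):

```python
import sympy as sp

# Toy stand-ins for three Lorentz structures, with t3 = t1 + t2 redundant;
# G_ij = <t_i, t_j> under the ordinary Euclidean inner product.
t1, t2 = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0])
t3 = t1 + t2
structures = [t1, t2, t3]
G = sp.Matrix(3, 3, lambda i, j: structures[i].dot(structures[j]))

N_P = G.shape[0]
N_R = G.rank()              # number of linearly independent structures
null = G.nullspace()        # one basis vector per linear relation

print(N_P, N_R, len(null))  # 3 2 1
print(list(null[0]))        # coefficients of the relation, here [-1, -1, 1]

# The relation -t1 - t2 + t3 = 0 flags t3 as redundant; dropping its row and
# column leaves the reduced, full-rank Gram matrix of T_R = {t1, t2}:
G_R = G[:N_R, :N_R]
print(G_R.det() != 0)       # True
```

Deleting the redundant column and its transposed row is exactly the trimming that turns $\mathbf{T}_{P}$ into $\mathbf{T}_{R}$.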
\nThe set $\\mathbf{T}_{R}$ can then be directly taken as the basis of the vector space of $\\hat{\\mathcal{M}}$. \n\n\nElimination of redundancies in the set $\\mathbf{T}_{P}$ for $\\hat{\\mathcal{M}}$ involving external gauge bosons, \ndue to Ward identities of local gauge interactions, can be effectively accounted for by choosing physical \npolarization sum rules for those external gauge bosons (with their reference vectors chosen as momenta of other external particles).\nThis point can be easily seen once we realize that any unphysical structure, \nwhich may happen to be just one specific $\\hat{T}_n$ or a linear combination of some of them\n(with rational coefficients in external kinematics), \ngets nullified by the physical polarization sum rules of external gauge bosons.\nNotice, however, that the reduction in the number of linearly independent basis structures \nof $\\hat{\\mathcal{M}}$ due to additional process-specific symmetries such as \ncharge, parity, and\/or Bose symmetry is not achieved by analyzing $\\hat{\\mathrm{\\mathbf{G}}}$ in this way. \nInstead, such symmetries have to be accounted for from the outset when determining \nthe primitive set $\\mathbf{T}_P$ in eq.~(\\ref{EQ:ampFFsprimitive}). 
~\\\n\n\nIn terms of the thus-determined basis $\\mathbf{T}_{R}$, the linear decomposition of $\\hat{\\mathcal{M}}$ \ncan be recast into \n\\begin{eqnarray} \\label{EQ:ampFFs}\n\\hat{\\mathcal{M}} = \\sum_{n=1}^{N_R} \\tilde{c}_n ~\\hat{T}_n ~,~\n\\end{eqnarray}\nand the Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}_R$ of $\\mathbf{T}_{R}$ with matrix elements defined \nsimilarly as eq.~(\\ref{EQ:grammatrix}) is now an invertible $N_R \\times N_R$ matrix.~\\\\\n\n\nNow we are ready to discuss projectors $\\hat{P}_n$ for the \nLorentz decomposition coefficients (or form factors) $\\tilde{c}_n$ of $\\hat{T}_n$ \nin eq.~(\\ref{EQ:ampFFs}).\nThey are defined by\n\\begin{eqnarray} \\label{EQ:projectordef}\n\\tilde{c}_n = \\langle \\hat{P}^{\\dagger}_n , \\hat{\\mathcal{M}} \\rangle~ ~~ \n\\text{for any $n ~\\in~ \\{1, \\cdots, N_R\\}$} \\, ,\n\\end{eqnarray} \nwhere the same Lorentz-invariant inner product operation as in eq.~(\\ref{EQ:grammatrix}) is \nused in the above projection. \nThe defining equation (\\ref{EQ:projectordef}) of $\\hat{P}_n$ holds for any linear object \n from the vector space spanned by the basis $\\mathbf{T}_{R}$, rather than just for a particular \nscattering amplitude $\\hat{\\mathcal{M}}$.\nInserting eq.~(\\ref{EQ:ampFFs}) into \neq.~(\\ref{EQ:projectordef}) and taking the aforementioned property into account, the defining equation \n for the projectors translates into \n\\begin{eqnarray} \\label{EQ:projectordefequvi}\n\\langle \\hat{P}^{\\dagger}_n , \\hat{T}_m \\rangle = \\delta_{nm} \\quad \\text{for any $n,m ~\\in~ \\{1, \\cdots, N_R\\}$} \\,. 
\n\\end{eqnarray}\n\n\nEach projector $\\hat{P}^{\\dagger}_n$ can be expressed in terms of a linear combination \nof Hermitian conjugate members of $\\mathbf{T}_{R}$ that span also a vector space.\nWe thus write \n\\begin{eqnarray} \\label{EQ:projectoransatz}\n\\hat{P}^{\\dagger}_n = \\sum_{k=1}^{N_R} \\left(\\hat{\\mathrm{\\mathbf{H}}}\\right)_{nk} ~ \\hat{T}^{\\dagger}_k \\, ,\n\\end{eqnarray}\nwhere the elements $\\left(\\hat{\\mathrm{\\mathbf{H}}}\\right)_{nk}$ are to be determined. \nInserting eq.~(\\ref{EQ:projectoransatz}) into eq.~(\\ref{EQ:projectordefequvi}) and\n using the definition of Gram matrix elements we get\n\\begin{eqnarray}\n\\sum_{k=1}^{N_R} \\left(\\hat{\\mathrm{\\mathbf{H}}}\\right)_{nk} ~ \\left(\\hat{\\mathrm{\\mathbf{G}}}_R\\right)_{km} = \\delta_{nm}~, \n \\quad \\text{i.e.,} \\; \\hat{\\mathrm{\\mathbf{H}}} ~ \\hat{\\mathrm{\\mathbf{G}}}_R = \\hat{1}. \n\\end{eqnarray}\nRecall that $\\hat{\\mathrm{\\mathbf{G}}}_R$ is invertible thanks to the aforementioned trimming procedure, and hence $\\hat{\\mathrm{\\mathbf{H}}} = \\hat{\\mathrm{\\mathbf{G}}}_R^{-1}$. \nThis answers the question of how to construct the projectors $\\hat{P}^{\\dagger}_n$ \n from linear combinations of the Hermitian conjugates of $\\mathbf{T}_{R}$.\nIn the special and ideal case of a norm-orthogonal basis $\\mathbf{T}_{R}$, \nits Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}_R$ is equal to the identity matrix of dimension $N_R$ \nand hence $\\hat{\\mathrm{\\mathbf{H}}} = \\hat{\\mathrm{\\mathbf{G}}}_R^{-1} = \\hat{1} $.\nSubsequently, we have $\\hat{P}^{\\dagger}_n = \\hat{T}^{\\dagger}_n$, as is well known\nfor a norm-orthogonal basis.~\\\\\n\n\nBy taking the Dirac traces and keeping all Lorentz indices in D dimensions in the projection, \nthese Lorentz-invariant tensor decomposition coefficients, or form factors, \nare evaluated in D dimensions. These form factors are independent of the external polarization vectors, \nand all their non-rational dependence on external momenta is confined to loop integrals. 
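A minimal sympy sketch of the construction above, using a hypothetical invertible (but non-orthogonal) $2\times 2$ Gram matrix in place of an actual $\hat{\mathrm{\mathbf{G}}}_R$, shows that the choice $\hat{\mathrm{\mathbf{H}}}=\hat{\mathrm{\mathbf{G}}}_R^{-1}$ indeed projects out the form factors:

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')  # the (symbolic) form factors of M

# Hypothetical Gram matrix G_R of a two-element basis T_R (illustrative only).
G_R = sp.Matrix([[2, 1],
                 [1, 3]])

H = G_R.inv()                 # projector coefficients: H G_R = 1
print(H * G_R == sp.eye(2))   # True

# The raw inner products <T_i^dag, M> of M = c1 T1 + c2 T2 equal (G_R c)_i;
# contracting with H recovers the form factors, <P_n^dag, M> = c_n.
c = sp.Matrix([c1, c2])
projections = G_R * c
recovered = sp.expand(H * projections)
print(list(recovered))        # [c1, c2]
```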
\nScalar loop integrals appearing in these form factors can be reduced to a finite \nset of master integrals with the aid of linear integration-by-parts identities~\\cite{Tkachov:1981wb,Chetyrkin:1981qh}. \nOnce these dimensionally regularized form factors have been determined, external particles' state vectors \ncan be conveniently chosen in 4 dimensions, leading to helicity amplitudes in accordance \nwith the HV scheme. \nIn fact, once the (renormalized) virtual amplitudes are at hand in such a \nD-dimensional tensor-decomposed form (with all (singular) Lorentz-invariant form factors computed \nin D dimensions), then changing the regularization convention for the external particles' states \nconsistently in both the virtual amplitude and the corresponding IR-subtraction terms \nshould not alter the finite remainder that is left after subtracting all poles, \nalthough the individual singular pieces do change accordingly.\n\n\n\\subsection{Comments on the D-dimensional projection}\n\\label{SEC:projectionmethod:comments}\n\nWe now discuss a few delicate aspects of the Lorentz tensor decomposition in D dimensions \nthat motivated the work presented in this article. \n\n\nIn general, the Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}$ or $\\hat{\\mathrm{\\mathbf{G}}}_R$ \ncomputed using Lorentz and Dirac algebra in CDR depends on the space-time dimension D. \n We can examine its 4-dimensional limit by inserting \n$D=4-2\\epsilon$ and checking whether its determinant power-expanded in $\\epsilon$ is zero or not\nin the limit $\\epsilon = 0$. A determinant vanishing at $\\epsilon = 0$ implies\nthe presence of Lorentz structures in the D-dimensional \nlinearly independent basis set $\\mathbf{T}_{R}$ that are redundant in 4 dimensions. 
\n\n\nTo be more specific, we can compute the matrix rank of $\\hat{\\mathrm{\\mathbf{G}}}_R$ \nat $D =4$, denoted by $\\mathrm{R}[\\hat{\\mathrm{\\mathbf{G}}}_R^{D=4}]$, and \nthe difference $N_R -\\mathrm{R}[\\hat{\\mathrm{\\mathbf{G}}}_R^{D=4}]$\ntells us the number of Lorentz structures \nappearing in $\\mathbf{T}_R$ that are redundant in D=4. \nFurthermore, if we compute the null-space of the 4-dimensional limit of \n$\\hat{\\mathrm{\\mathbf{G}}}$, then we can explicitly uncover all these special linear \nrelations among $\\hat{T}_n$ due to the constraint of integer dimensionality\\footnote{Any potential non-linear \nrelation among the $\\hat{T}_n$ is irrelevant here as we use a linear basis.} \nin a similar way as one identifies $\\mathbf{T}_{R}$ out of $\\mathbf{T}_{P}$.\nThese special linear relations can be used to construct exactly \n$N_R -\\mathrm{R}[\\hat{\\mathrm{\\mathbf{G}}}_R^{D=4}]$ \nevanescent Lorentz structures out of $\\mathbf{T}_{R}$, \n which are non-vanishing in D dimensions but vanish in 4 dimensions. \n In this way, the original basis set $\\mathbf{T}_R$ \ncan be recast into a union of two subsets: one is linearly independent and complete \nin 4 dimensions, and the other one only consists of $N_R -\\mathrm{R}[\\hat{\\mathrm{\\mathbf{G}}}_R^{D=4}]$ evanescent Lorentz structures. \nSuch a reformulation of the Lorentz tensor decomposition basis in D dimensions \ncan be very useful in exhibiting the additional non-four-dimensional structures\ninvolved in the virtual amplitude.~\\\\\n\n\nIn case the number of structures in $\\mathbf{T}_{R}$ is not very small \n(say, not less than 10) and if there are several kinematic variables involved, \nalgebraically inverting $\\hat{\\mathrm{\\mathbf{G}}}_R$ can be computationally \nquite cumbersome~\\cite{Boels:2018nrr}. \nMoreover, the resulting projectors constructed in the above fashion may hardly be \nusable if the amplitudes themselves are already quite complicated. 
\nThis situation occurs naturally in multiple-parton multiple-scale scattering processes.\nPossible simplifications may be obtained by suitably recombining the linear basis structures \nin $\\mathbf{T}_{R}$ into several groups, such that the groups are mutually orthogonal \nor decoupled from each other~\\cite{Boels:2018nrr}. \nFor example, we could divide the set of tensor structures into symmetric and anti-symmetric\nsectors, and also choose the anti-symmetrized product basis for strings of Dirac matrices~\\cite{Dugan:1990df,Gehrmann:2011aa}. \nThis amounts to choosing the basis structures in $\\mathbf{T}_{R}$ such that a partial triangularization \nof the corresponding Gram matrix $\\hat{\\mathrm{\\mathbf{G}}}_R$ is achieved already by construction. \nThis will facilitate the subsequent inversion operation, and also make the results simpler. \nIn addition, in case the tensor structures all take factorized forms in terms of products \nof a smaller set of lower-rank tensor structures, this factorization can also be exploited \nto greatly facilitate the construction of projectors~\\cite{Binoth:2002xg}. \nAlternatively, it is also a good practice to ``compactify'' the vector space as much as possible, \nbefore the aforementioned construction procedure is applied, by employing all possible physical constraints \nand symmetries, such as parity and\/or charge symmetry of the amplitudes in question, \nand also by fixing the gauge of the external gauge bosons~\\cite{Gehrmann:2013vga,vonManteuffel:2015msa,Boels:2017gyc,Boels:2018nrr}. \n\n\nOther than the aforementioned technical complexity in inverting the Gram matrix,\nthere is another delicate point about the Lorentz tensor decomposition approach in D dimensions,\nas already briefly mentioned above. 
\nIn cases where the external state consists only of bosons, a list of a fixed number of Lorentz tensor structures \nis indeed linearly complete in D dimensions to all orders in perturbation theory~\\cite{Binoth:2002xg,Gehrmann:2013vga}. \nHowever, if there are external fermions involved in the scattering, the complete and linearly independent set of basis \nstructures will generally increase with the perturbative order at which the scattering amplitude is computed, \nbecause the Dirac algebra is formally infinite-dimensional in non-integer D dimensions, as discussed \nfor the four-quark scattering amplitude $q \\bar{q} \\rightarrow Q\\bar{Q}$ in~\\cite{Glover:2004si,Abreu:2018jgq}.\nOf course, at each given perturbative order only a finite number of linearly \nindependent Lorentz structures can appear in an amplitude, because the corresponding Feynman diagrams \nform just a finite set. These additional D-dimensional Lorentz structures \nare either evanescent by themselves or will lead to additional evanescent structures of \nthe same number computed by the procedure discussed above. \n\n\nThe last comment we would like to make about the projection method in D dimensions \nis the possible appearance of intermediate spurious poles in these projectors~\\cite{Binoth:2002xg,Glover:2004si,Gehrmann:2015ora}, \nwhich are closely related to the presence of the aforementioned evanescent Lorentz structures \nin the D-dimensional linearly independent basis. \nSince the presence of evanescent Lorentz structures in the D-dimensional basis\nimplies a Gram matrix whose determinant vanishes in 4 dimensions, \none expects that the projectors resulting from its inverse can contain poles in $D-4=-2\\epsilon$, \nas observed for instance in~\\cite{Binoth:2002xg} for four-photon scattering. 
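The link between an evanescent structure, a Gram determinant that vanishes at D = 4, and a spurious pole in the projectors can be mimicked on a toy D-dependent Gram matrix (invented purely for illustration, not derived from any actual amplitude):

```python
import sympy as sp

D = sp.symbols('D')

# Toy Gram matrix of two D-dimensional structures T1, T2 (illustrative only).
G = sp.Matrix([[D - 2, 2],
               [2,     2]])

print(sp.factor(G.det()))        # 2*(D - 4): nonzero for generic D
print(G.rank())                  # 2 -> T1, T2 linearly independent in D dims

G4 = G.subs(D, 4)
print(G4.rank())                 # 1 -> one structure is redundant at D = 4
ns = G4.nullspace()[0]
print(list(ns))                  # [-1, 1]: E = -T1 + T2 is evanescent

# Projectors built from G^{-1} then carry a spurious pole at D = 4:
print(sp.factor(G.inv()[0, 0]))  # 1/(D - 4)
```

In a physical quantity such spurious poles must cancel between the projected form factors, as stated in the text.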
\nOf course, all intermediate spurious poles generated this way in individual \nform factors projected out should cancel in the physical amplitudes composed out of them, \nsuch as helicity amplitudes or linearly polarized amplitudes.~\\\\\n\n\nAll these sometimes cumbersome issues discussed above motivated the work that will be presented in the following: \nthe construction of simple and general polarized amplitude projectors in D dimensions \nthat avoids conventional Lorentz tensor decomposition, yet is still fully compatible with CDR.\n\n\n\\section{The Prescription}\n\\label{SEC:Prescription}\n\nThe idea behind the proposed prescription to obtain polarized dimensionally\nregularized scattering amplitudes can be briefly outlined as follows, \nwith details spelled out in the subsequent subsections. \n\n\n\nFor external gauge bosons of a scattering amplitude, massless and\/or massive, \nwe decompose each external polarization vector \nin terms of external momenta. \nWe then keep the form of the Lorentz covariant decomposition fixed while formally promoting \nall its open Lorentz indices, which are now all carried by external momenta, \nfrom 4 dimensions to D dimensions, like every Lorentz vector in CDR. \nIf external fermions are present in the scattering amplitude, \nstrings of Dirac matrices sandwiched between external on-shell spinors will show up. \nFor each open fermion line, we first rewrite this quantity as a trace of products of \nDirac matrices with the aid of external spinors' Landau density matrices, \nup to an overall Lorentz-invariant normalization factor.\nThe space-like polarization vectors of a massive spinor can also be represented \nin terms of external momenta. 
Again, once such a momentum basis representation \nis established in 4 dimensions, the Lorentz covariant form will be kept fixed \nwhile all open Lorentz and Dirac indices, carried by either external momenta or \nDirac matrices, will be respectively promoted in accordance with CDR.\n\n\nAs scattering amplitudes are multi-linear in the state vectors of the external particles \nto all loop orders in perturbation theory, the tensor products of momentum basis representations \nof all external gauge bosons and all properly re-written external spinor products, \nwith their open indices promoted accordingly as in CDR, \nwill be taken as the \\textit{external} projectors for polarized amplitudes. \nHelicity amplitude projectors of a generic scattering process defined in this way\nnaturally obey a simple factorized pattern as the tensor product of \nthe respective polarization projector of each external gauge boson and open fermion line. \nFeatures and subtleties worthy of attention during these rewriting procedures \nwill be discussed and explained below.\n\n\n\\subsection{Momentum basis representations of polarization vectors}\n\\label{SEC:prescription:MBR}\n\nLet us start with the cases with only bosons in the external states. \nWe recall that the polarization vector $\\varepsilon^{\\mu}_{\\lambda}(p)$ of a physical vector-boson state \nof momentum $p^\\mu$ has to satisfy $\\varepsilon^{\\mu}_{\\lambda}(p)~ p_{\\mu} = 0$. 
\nHere the subscript $\\lambda$ labels the physical spin degrees of freedom, \ni.e., $\\lambda=1,2,3$ for a massive vector boson in D = 4.\nBy convention the physical polarization state vectors are orthogonal and normalized by \n$\\varepsilon^{*}_{\\lambda}(p) \\cdot \\varepsilon_{\\lambda'}(p) = -\\delta_{\\lambda \\lambda'}$.\nThe polarization vectors of a massless gauge boson obey an additional condition \nin order to encode the correct number of physical spin degrees of freedom.\nIn practice, this additional condition is usually implemented by introducing an auxiliary \nreference vector $\\hat{r}_{\\mu}$ that is not aligned with the boson's momentum \nbut otherwise arbitrary, to which the physical polarization vectors have to be orthogonal, \n$\\varepsilon^{\\mu}_{\\lambda}(p)~ \\hat{r}_{\\mu} = 0$.\nThus, the reference vector $\\hat{r}_{\\mu}$ and the bosons' momentum $p_{\\mu}$ define \na plane to which the massless gauge boson's physical polarization vectors are orthogonal.\nWe also recall that in CDR the number of physical polarizations of a massless gauge\nboson in D dimensions is taken to be D-2. 
This is in contrast to our prescription, \nwhere the number of physical polarizations remains two in D dimensions, see below.\n\n\\subsubsection{The $2\\to 2$ scatterings among massless gauge bosons}\n\\label{SEC:prescription:MBR:2to2}\n\nLet us first consider a prototype $2\\to 2$ scattering among 4 external \nmassless gauge bosons:\n\\begin{align} \\label{momentaassignment}\ng_1(p_1) + g_2(p_2) \\rightarrow g_3(p_3) + g_4(p_4), \n\\end{align}\nwith on-shell conditions $p_j^2 = 0,~ j=1,...,4.$\nThe Mandelstam variables associated with \\eqref{momentaassignment}\n\\begin{eqnarray} \\label{EQ:kinematicinvariants}\n s \\equiv \\left(p_1 + p_2 \\right)^2 = \\left(p_3 + p_4 \\right)^2 ~, \\qquad\n t \\equiv \\left(p_2 - p_3 \\right)^2 = \\left(p_1 - p_4 \\right)^2\n\\end{eqnarray} \nencode the independent external kinematic invariants.\n\nThe representation of the gauge bosons' polarization state vectors \nin terms of three linearly independent external momenta, $p_1, p_2, p_3$ \n can be determined in the following way. \nWe first write down a Lorentz covariant parameterization ansatz for the linear representation \nand then solve the aforementioned orthogonality and normalization conditions \nfor the linear decomposition coefficients. Once we have established a definite Lorentz \ncovariant decomposition form in 4 dimensions solely in terms of external momenta and \nkinematic invariants, this form will be used as the definition of \nthe corresponding polarization state vector in D dimensions. \n\n\nWhile the decomposition of polarization state vectors in terms of external momenta \nis Lorentz covariant, it is always helpful to have in mind \na particular reference frame so that a clear geometric picture can be used to\nillustrate the choices of and constraints on polarization state vectors. 
\nTo this end, we consider in the following discussion the center-of-mass frame of \nthe two incoming particles, as illustrated in figure~\\ref{FIG:coordinatesystem}, \nwhere the beam axis is taken as the Z-axis with its positive direction set along $p_1$.\nFurthermore, the scattering plane determined by $p_1$ and $p_3$ is set as the X-O-Z plane \nwith $p_3$ having a non-negative X-component by definition. \nThe positive direction of the Y-axis of the coordinate system is determined \naccording to the \\textit{right-hand} rule.~\\\\\n\n\\begin{figure}[tbh!]\n\\centering\n\\includegraphics[width=12cm,height=6cm]{CMSframecoordinates.png}\n\\caption{The chosen coordinate system in the center-of-mass reference\nframe of the two incoming particles.}\n\\label{FIG:coordinatesystem}\n\\end{figure}\n\nLet us now come to the momentum basis representations of the polarization vectors \nin this reference frame. \nThere are two common basis choices regarding the transverse polarization states, \nthe linear and the circular polarization basis, \nwith the latter representing helicity eigenstates of gauge bosons.\nThese two bases can be related via a ${\\pi}\/{2}$ rotation in the complex plane. 
\nIn the following, we will first establish a Lorentz covariant decomposition of \na set of elementary linear polarization state vectors in terms of external 4-momenta \nand then compose circular polarization states of all external gauge bosons, \ni.e., their helicity eigenstates, out of these elementary ones.\nIt is beneficial to postpone such an explicit basis transformation \nuntil the very last step of the calculation, as will become clear from \nthe later discussions.\n\n\nFor the two initial-state massless gauge bosons $g_1(p_1)$ and $g_2(p_2)$, \nwhose momenta are taken as the reference momenta for each other, \nwe first introduce a common linear polarization state vector $\\varepsilon_{X}^{\\mu}$ \nalong the X-axis direction, i.e., transverse to the beam axis but within the X-O-Z plane. \nThe set of equations that determines $\\varepsilon_{X}^{\\mu}$ reads: \n\\begin{eqnarray} \\label{EQ:XpolEQs}\n&&\\varepsilon^{\\mu}_{X} = c^{X}_1~ p^{\\mu}_1 + c^{X}_2~ p^{\\mu}_2 + c^{X}_3~ p^{\\mu}_3 \\, ,\\nonumber\\\\ \n&&\\varepsilon_{X} \\cdot p_1 = 0 \\, ,\\nonumber\\\\\n&&\\varepsilon_{X} \\cdot p_2 = 0 \\, ,\\nonumber\\\\\n&&\\varepsilon_{X} \\cdot \\varepsilon_{X} = -1 \\, .\n\\end{eqnarray}\nSolving eq.~(\\ref{EQ:XpolEQs}) for the coefficients $c^{X}_1,~ c^{X}_2,~ c^{X}_3$,\nand subsequently inserting the solution back to the first line of eq.~(\\ref{EQ:XpolEQs}), \nwe obtain the following momentum basis representation for $\\varepsilon_{X}$: \n\\begin{eqnarray} \\label{EQ:XpolMBR}\n\\varepsilon^{\\mu}_{X} = \\mathcal{N}_{X} \n\\Big(t~ p^{\\mu}_1 + (-s-t)~ p^{\\mu}_2 + s~ p^{\\mu}_3 \\Big) \\, ,\n\\end{eqnarray}\nwhere $\\mathcal{N}^{-2}_{X} = -t s (s + t)$.\nNotice that, as will be made clear later, the overall Lorentz-invariant normalization \nfactor $\\mathcal{N}_{X}$ needs to be included only in the very last step \nof the computation of polarized loop amplitudes, for instance after UV renormalization \nand IR subtraction if an IR subtraction method is 
employed. \nTherefore, we never have to deal with $\\mathcal{N}_{X}$, i.e., with an explicit square root, \nin the intermediate stages. \n If we choose to incorporate the overall normalization factors only at the level of \nsquared amplitudes (or interferences), then square roots of kinematic invariants never appear.\nFurthermore, we can always, for convenience, define this overall normalization factor \nsuch that the coefficients exhibited in eq.~(\\ref{EQ:XpolMBR}) are polynomials\nin the external kinematic invariants (rather than rational functions). This can be helpful \nas computer algebra systems are typically more efficient when dealing with polynomials only.\n\n\n\nConcerning the two final-state massless gauge bosons $g_3(p_3)$ and $g_4(p_4)$, \nwhose momenta are also taken as reference momenta for each other, \nwe can introduce a common linear polarization state vector $\\varepsilon_{T}^{\\mu}$\ndefined to be transverse to $p_3$ and $p_4$ but still lying within the X-O-Z plane,\n in analogy to $\\varepsilon_{X}^{\\mu}$. 
\nThe definition of $\\varepsilon_{T}^{\\mu}$ then translates into the following set of equations:\n\\begin{eqnarray} \\label{EQ:TpolEQs}\n&&\\varepsilon^{\\mu}_{T} = c^{T}_1~ p^{\\mu}_1 + c^{T}_2~ p^{\\mu}_2 + c^{T}_3~ p^{\\mu}_3 \\, , \\nonumber\\\\ \n&&\\varepsilon_{T} \\cdot p_3 = 0 \\, , \\nonumber\\\\\n&&\\varepsilon_{T} \\cdot p_4 = 0 \\, , \\nonumber\\\\\n&&\\varepsilon_{T} \\cdot \\varepsilon_{T} = -1 \\, .\n\\end{eqnarray}\nSolving eq.~(\\ref{EQ:TpolEQs}) for the coefficients $c^{T}_1,~ c^{T}_2,~ c^{T}_3$, \n one obtains\n\\begin{eqnarray} \\label{EQ:TpolMBR}\n\\varepsilon^{\\mu}_{T} = \\mathcal{N}_{T} \n\\Big(t~ p^{\\mu}_1 + (s+t)~ p^{\\mu}_2 + (-s-2 t)~ p^{\\mu}_3 \\Big) \\, ,\n\\end{eqnarray}\nwhere $\\mathcal{N}_{T}^{~-2} = -t s (s + t)$.\nThe comments given above on $\\varepsilon_{X}^{\\mu}$ apply here as well.\n\n\nThe last elementary polarization state vector needed for constructing\n helicity eigenstates of all four external massless gauge bosons is the one \northogonal to $p_1$, $p_2$, and $p_3$, denoted by $\\varepsilon_{Y}$,\nwhich is thus perpendicular to the X-O-Z plane. \nIn 4 dimensions, we obtain it using the Levi-Civita tensor:\n\\begin{eqnarray} \\label{EQ:YpolMBR}\n\\varepsilon^{\\mu}_{Y} = \\mathcal{N}_{Y}~ \\epsilon^{\\nu\\rho\\sigma\\mu} p_{1\\nu} p_{2 \\rho} p_{3 \\sigma} \n= \\mathcal{N}_{Y}~ \\epsilon^{\\mu}_{p_1 p_2 p_3} \\, ,\n\\end{eqnarray}\nwhere $\\mathcal{N}_{Y}^{~-2} = -s t (s + t)\/4$, \nand in the last line we introduced the notation \n$\\epsilon^{\\mu}_{p_1 p_2 p_3} \\equiv \\epsilon^{\\nu\\rho\\sigma\\mu} p_{1\\nu} p_{2 \\rho} p_{3 \\sigma}$. \n We use the convention $\\epsilon^{0123} = +1$ and \n$\\epsilon_{\\mu\\nu\\rho\\sigma} = -\\epsilon^{\\mu\\nu\\rho\\sigma}$.\n\n\nA comment concerning $\\epsilon^{\\mu\\nu\\rho\\sigma}$ is appropriate here.\nThe above polarization state vectors will be eventually used in D-dimensional calculations. 
\nTo this end, following~\\cite{Larin:1991tj,Larin:1993tq,Moch:2015usa}, \nwe will treat $\\epsilon^{\\mu\\nu\\rho\\sigma}$ merely as a symbol denoting an object whose \nalgebraic manipulation rules consist of the following two statements.\n\\begin{itemize}\n\\item \nAntisymmetry: it changes sign under any odd permutation of its indices \nand is unchanged under even permutations.\n\\item \nContraction Rule\\footnote{There is a subtle point concerning this when there are \nmultiple Levi-Civita tensors in the contraction, related to the choice of pairing, \nas will be briefly commented on in section \\ref{SEC:examples:eeQQ}.}: \nthe product of two $\\epsilon^{\\mu\\nu\\rho\\sigma}$ is replaced by a combination \nof products of space-time metric tensors $g^{\\mu\\nu}$ of the same tensor rank according to \nthe following fixed pattern:\n\\begin{eqnarray} \\label{EQ:LeviCivitaContRule}\n\\epsilon^{\\mu\\nu\\rho\\sigma} \\epsilon^{\\mu'\\nu'\\rho'\\sigma'} \n= \\mathrm{Det}\\Big[g^{\\alpha \\alpha'} \\Big]~, \n\\text{~ with $\\alpha = \\mu,~\\nu,~\\rho,~\\sigma$ and $\\alpha' = \\mu',~\\nu',~\\rho',~\\sigma'$,} \n\\end{eqnarray}\nwhich agrees with the well-known mathematical identity for Levi-Civita tensors in 4 dimensions.\n\\end{itemize}\n\n\nUsing eq.~(\\ref{EQ:LeviCivitaContRule}) with the D-dimensional space-time metric tensor \nin determining $\\mathcal{N}_{Y}$ in eq.~(\\ref{EQ:YpolMBR}), \none gets ${\\mathcal{N}_{Y}^{-2}}=(3-D) s t (s + t)\/4$ with $D = 4 - 2 \\epsilon$. 
\n Because $\\mathcal{N}_{Y}$ is an overall normalization factor \nwhich must be used consistently in computing the (singular) \nvirtual loop amplitudes, the UV-renormalization counter-terms, \nand potential IR subtraction terms, \n it is merely a normalization convention \nwhether the explicit $D$ appearing in $\\mathcal{N}_{Y}$ is set to 4 or to $4 - 2 \\epsilon$, \non which the final 4-dimensional finite remainder should not depend\n(albeit the individual singular objects do of course differ).\nThis point can be made even more transparent if one chooses to incorporate this overall normalization \nfactor only in the very last stage of the consistent computation of finite remainders \nwhere the 4-dimensional limit has already been explicitly taken. \n~\\\\\n\n\nThe circular polarization state vectors of all four external massless gauge bosons,\nnamely their helicity eigenstates, can be easily constructed from the three\n linear polarization states given above by a suitable ${\\pi}\/{2}$ rotation in the complex plane. \n The two helicity eigenstates of each gauge boson are given by \n\\begin{eqnarray} \\label{EQ:LP2HLmassless}\n\\varepsilon_{\\pm}(p_1;p_2) &=& \\frac{1}{\\sqrt{2}} \\left( \\varepsilon_{X} \\pm \ni * \\varepsilon_{Y} \\right) \\, , \\nonumber\\\\\n\\varepsilon_{\\pm}(p_2;p_1) &=& \\frac{1}{\\sqrt{2}} \\left( \\varepsilon_{X} \\mp \ni * \\varepsilon_{Y} \\right) \\, ,\\nonumber\\\\\n\\varepsilon_{\\pm}(p_3;p_4) &=& \\frac{1}{\\sqrt{2}} \\left( \\varepsilon_{T} \\pm \ni * \\varepsilon_{Y} \\right) \\, ,\\nonumber\\\\\n\\varepsilon_{\\pm}(p_4;p_3) &=& \\frac{1}{\\sqrt{2}} \\left( \\varepsilon_{T} \\mp \ni * \\varepsilon_{Y} \\right) \\, ,\n\\end{eqnarray}\nwhere the first argument of $\\varepsilon_{\\pm}(p;r)$ is the particle's momentum while the second \ndenotes the reference momentum. 
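As a quick numerical sanity check of the momentum basis representations in eqs.~(\\ref{EQ:XpolMBR}), (\\ref{EQ:TpolMBR}) and (\\ref{EQ:YpolMBR}), one can evaluate them at an explicit 4-dimensional phase-space point and test the defining orthogonality and normalization conditions. The following Python sketch does this; the frame, energy and scattering angle are our own arbitrary illustrative choices:

```python
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])                 # mostly-minus Minkowski metric
dot = lambda a, b: a @ g @ b

# massless 2 -> 2 kinematics in the centre-of-mass frame (arbitrary test point)
E, th = 1.0, 2*np.pi/5
p1 = np.array([E, 0.0, 0.0,  E])
p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E, E*np.sin(th), 0.0, E*np.cos(th)])
p4 = p1 + p2 - p3
s, t = dot(p1 + p2, p1 + p2), dot(p2 - p3, p2 - p3)

# eqs. (EQ:XpolMBR) and (EQ:TpolMBR) including their normalization factors
eX = (t*p1 + (-s - t)*p2 + s*p3) / np.sqrt(-t*s*(s + t))
eT = (t*p1 + (s + t)*p2 + (-s - 2*t)*p3) / np.sqrt(-t*s*(s + t))

def perm_sign(p):
    """Signature of a permutation of distinct integers."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# eq. (EQ:YpolMBR): eps^{nu rho sigma mu} p1_nu p2_rho p3_sigma, with eps^{0123} = +1
eY = np.zeros(4)
for nu, rho, sig, mu in itertools.permutations(range(4)):
    eY[mu] += perm_sign((nu, rho, sig, mu)) * (g @ p1)[nu] * (g @ p2)[rho] * (g @ p3)[sig]
eY /= np.sqrt(-s*t*(s + t)/4)

# defining conditions, cf. eqs. (EQ:XpolEQs) and (EQ:TpolEQs)
assert abs(dot(eX, p1)) < 1e-12 and abs(dot(eX, p2)) < 1e-12
assert abs(dot(eT, p3)) < 1e-12 and abs(dot(eT, p4)) < 1e-12
assert all(abs(dot(eY, p)) < 1e-12 for p in (p1, p2, p3))
for e in (eX, eT, eY):
    assert abs(dot(e, e) + 1) < 1e-12                # normalized to -1
```

With $p_1$ along the positive Z-axis, $\\varepsilon_{X}$ comes out as the unit vector along the X-axis and $\\varepsilon_{Y}$ as a unit vector perpendicular to the X-O-Z plane, as expected from the geometric picture above.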
Eq.~(\\ref{EQ:LP2HLmassless}) shows that the helicity flips \nwhen the particle's 3-momentum is reversed or when the polarization vector is complex conjugated. \nFurthermore, owing to the Ward identities fulfilled by the gauge amplitudes, the representations\nof helicity state vectors in eq.~(\\ref{EQ:LP2HLmassless}) can be further reduced \nfor each gauge boson by removing the component proportional to the gauge boson's own 4-momentum. \nFor instance, for the gauge boson $g_1$ with 4-momentum $p_1$, the component of $\\varepsilon_{X}$ \nproportional to $p_1^{\\mu}$ in eq.~(\\ref{EQ:XpolMBR}) can be safely dropped when \nconstructing $\\varepsilon_{\\pm}(p_1;p_2)$, and similar reductions hold also for the \nother gauge bosons. \nHowever, as will become clear in the following discussions, it is beneficial to project \npolarized amplitudes first in the linear polarization basis and then have the helicity amplitudes\ncomposed at the last stage of the computation.\nSince these elementary linear polarization state vectors will be used to construct helicity states \nof several scattered particles, we should keep their complete momentum basis representation forms \nas given by eqs. \\eqref{EQ:XpolMBR}, \\eqref{EQ:TpolMBR}, and \\eqref{EQ:YpolMBR}.~\\\\\n\n \n\n\nWe emphasize again that in our prescription the number of physical polarizations in D dimensions \nof a massless gauge boson remains two, see eq.~\\eqref{EQ:LP2HLmassless}. \nIn order to illustrate the resulting differences to CDR, \nlet us do a simple exercise with polarization sums. \nIn CDR the sum over the physical polarizations of a massless gauge boson $g_1$ with 4-momentum \n$p_1^{\\mu}$ and gauge reference vector $r^{\\mu}=p^{\\mu}_2$ (cf. 
eq.~\\eqref{momentaassignment}) is \n\\begin{eqnarray}\\label{EQ:polsumCDRPhys}\n\\sum_{\\bar{\\lambda} = \\pm,~ D-4} \\bar{\\varepsilon}_{\\bar{\\lambda}}^{~\\mu}(p_1;p_2)\\bar{\\varepsilon}_{\\bar{\\lambda}}^{~*\\nu}(p_1;p_2)\n&=& -g^{\\mu\\nu} + \\frac{p_1^{\\mu} p_2^{\\nu} + p_2^{\\mu} p_1^{\\nu}}{p_1 \\cdot p_2} \\nonumber\\\\\n&=& -g^{\\mu\\nu} + \\frac{2}{s} \\left(p_1^{\\mu} p_2^{\\nu} + p_2^{\\mu} p_1^{\\nu} \\right)\n\\end{eqnarray}\nwhich is also the unpolarized Landau density matrix of the polarization states of $g_1$.\nAll Lorentz indices in \\eqref{EQ:polsumCDRPhys} are D dimensional and the $\\bar{\\lambda}$ labels\nthe polarization states in D dimensions.\nOn the other hand, in our prescription we sum over just the two transverse polarization states of \n$g_1$ that are defined by their respective momentum basis representations in \neqs.~(\\ref{EQ:XpolMBR}) and \\eqref{EQ:YpolMBR}. \nWe get \n\\begin{eqnarray}\\label{EQ:polsumMBR}\n\\sum_{\\lambda = X,Y} \\varepsilon^{~\\mu}_{\\lambda}(p_1;p_2)\\varepsilon^{~*\\nu}_{\\lambda}(p_1;p_2) \n&=& \\frac{1}{D-3} \\left(-g^{\\mu\\nu} + \\frac{D-2}{s} \\left(p_1^{\\mu} p_2^{\\nu} + p_2^{\\mu} p_1^{\\nu} \\right) \\right) \\nonumber\\\\\n&+&\\frac{4-D}{D-3} \n\\Bigg(\\frac{t}{s (s+t)} p_1^{\\mu} p_1^{\\nu} \n+\\frac{s+t}{s t} p_2^{\\mu} p_2^{\\nu}\n+\\frac{s}{t (s+t)} p_3^{\\mu} p_3^{\\nu} \n\\nonumber\\\\&&~~~\n+\\frac{1}{s+t} \\left(p_1^{\\mu} p_3^{\\nu} + p_3^{\\mu} p_1^{\\nu} \\right)\n-\\frac{1}{t} \\left(p_2^{\\mu} p_3^{\\nu} + p_3^{\\mu} p_2^{\\nu} \\right)\n\\Bigg)\n\\end{eqnarray}\nwhere, as part of the definition of this expression, we have rewritten the \nproduct of two Levi-Civita tensors in $\\varepsilon^{\\mu}_{Y}(p_1;p_2)\\varepsilon^{*\\nu}_{Y}(p_1;p_2)$ \nin terms of space-time metric tensors. 
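At D = 4 the agreement between the two polarization sums can be confirmed numerically. The following sketch (our own illustration, at an arbitrarily chosen centre-of-mass phase-space point) builds the left-hand side of eq.~(\\ref{EQ:polsumMBR}) from the explicit transverse vectors and compares it with eq.~(\\ref{EQ:polsumCDRPhys}):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

E, th = 1.0, 2*np.pi/5
p1 = np.array([E, 0.0, 0.0,  E])
p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E, E*np.sin(th), 0.0, E*np.cos(th)])
s, t = dot(p1 + p2, p1 + p2), dot(p2 - p3, p2 - p3)

# the two transverse states of g1 in this frame: eps_X from eq. (EQ:XpolMBR),
# and eps_Y along the Y-axis (its overall sign is irrelevant in the outer product)
eX = (t*p1 + (-s - t)*p2 + s*p3) / np.sqrt(-t*s*(s + t))
eY = np.array([0.0, 0.0, 1.0, 0.0])

lhs = np.outer(eX, eX) + np.outer(eY, eY)            # sum over lambda = X, Y
# CDR physical polarization sum, eq. (EQ:polsumCDRPhys); at D = 4 the two agree
rhs = -g + (2.0/s)*(np.outer(p1, p2) + np.outer(p2, p1))
assert np.allclose(lhs, rhs, atol=1e-12)
```

Both sides reduce to the projector onto the X-Y plane, diag(0, 1, 1, 0), in this frame.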
\nApparently eq.~(\\ref{EQ:polsumMBR}) is not identical to eq.~(\\ref{EQ:polsumCDRPhys}),\\footnote{Note that \nwith our prescription unpolarized squared amplitudes are supposed to be computed by incoherently summing over\nhelicity amplitudes, and not by using polarization sums like \\eqref{EQ:polsumMBR}.} but the two \nexpressions agree of course in D = 4 dimensions.~\\\\ \n\n\n\nBefore we move on to establish explicit momentum basis representations of longitudinal polarization\nvectors for massive vector bosons and also for massive fermions, let us emphasize that \nby construction these momentum basis representations of polarization state vectors \nfulfill all the defining physical constraints, i.e., orthogonality to momenta and \nreference vectors, which are assured even if the open Lorentz indices (carried\nby either the external momenta or the Levi-Civita symbol) are taken to be D-dimensional. \n\n\nMathematically, the procedure of determining norm-orthogonal polarization vectors\neqs.~\\eqref{EQ:XpolEQs}, \\eqref{EQ:TpolEQs} from a given set of linearly independent momenta \nin 4 dimensions resembles the Gram-Schmidt orthogonalization procedure. \nOur key insight here is that we establish these Lorentz covariant decomposition \nrepresentations in 4 dimensions in a form that facilitates the subsequent promotion \nof their open Lorentz indices from 4 to D, resulting in expressions which will be \ntaken as their definitions in D dimensions.\n\n\n\\subsubsection{Massive particles in the final state}\n\nNext we consider the scattering process eq.~(\\ref{momentaassignment}) but \nwith massive final-state vector bosons, for instance W or Z bosons, \nwith on-shell conditions \n\\begin{eqnarray} \\label{EQ:onshellmassive}\np_1^2 = p_2^2 = 0~,~~ p_3^2 = p_4^2 = m^2 \\, . 
\n\\end{eqnarray}\n\n\nConcerning the three elementary transverse polarization state vectors, \n$\\varepsilon^{\\mu}_{X}~,~ \\varepsilon^{\\mu}_{T}~,~ \\varepsilon^{\\mu}_{Y}$, \nthe above constructions can be repeated but with slightly different \nkinematics.\nIt is straightforward to arrive at the following explicit representations:\n\\begin{eqnarray} \\label{EQ:TranspolMBRmassive}\n\\varepsilon^{\\mu}_{X} &=& \\mathcal{N}_{X} \\, \n\\Big((t-m^2)~ p^{\\mu}_1 + (-s-t+m^2)~ p^{\\mu}_2 + s~ p^{\\mu}_3 \\Big) \\, ,\\nonumber\\\\\n\\varepsilon^{\\mu}_{T} &=& \\mathcal{N}_{T} \n\\Big((t+m^2)~ p^{\\mu}_1 + (s+t-3m^2)~ p^{\\mu}_2 + (-s-2 t+2m^2)~ p^{\\mu}_3 \\Big) \\, ,\\nonumber\\\\\n\\varepsilon^{\\mu}_{Y} &=& \\mathcal{N}_{Y}~ \\epsilon^{\\mu}_{p_1 p_2 p_3} \\, ,\n\\end{eqnarray}\nwith the normalization factors \n\\begin{eqnarray}\n\\mathcal{N}_{X}^{~-2} &=& s \\left(-t (s + t) + 2 m^2 t - m^4 \\right) \\, ,\\nonumber\\\\\n\\mathcal{N}_{T}^{~-2} &=& - s t (s + t) + 2 m^2 t (3 s + 2 t) - m^4 (s + 8 t) + 4 m^6 \\, , \\nonumber\\\\\n\\mathcal{N}_{Y}^{~-2} &=& \\frac{1}{4} s \\left(-t (s + t) + 2 m^2 t - m^4\\right) \\, ,\n\\end{eqnarray}\nwhich, as already emphasized above, can always be conveniently chosen to be \nincorporated only at the very last stage of the computation.~\\\\\n\n\nCompared to the massless case, the helicity eigenstates of massive gauge bosons are \nreference-frame dependent and their helicities are not Lorentz-invariant. 
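The massive-case representations of eq.~(\\ref{EQ:TranspolMBRmassive}) and their normalization factors can be checked numerically in the same way as in the massless case; the sketch below (an illustration of our own, with arbitrarily chosen mass and scattering angle) verifies transversality and normalization:

```python
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

# massless initial state, massive final state: p3^2 = p4^2 = m^2
m2, E, th = 1.0, np.sqrt(2.5), 2*np.pi/5
k = np.sqrt(E**2 - m2)                                # final-state 3-momentum modulus
p1 = np.array([E, 0.0, 0.0,  E]); p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E, k*np.sin(th), 0.0, k*np.cos(th)]); p4 = p1 + p2 - p3
s, t = dot(p1 + p2, p1 + p2), dot(p2 - p3, p2 - p3)

# eq. (EQ:TranspolMBRmassive) with the quoted normalization factors
eX = ((t - m2)*p1 + (-s - t + m2)*p2 + s*p3) / np.sqrt(s*(-t*(s + t) + 2*m2*t - m2**2))
eT = ((t + m2)*p1 + (s + t - 3*m2)*p2 + (-s - 2*t + 2*m2)*p3) / np.sqrt(
    -s*t*(s + t) + 2*m2*t*(3*s + 2*t) - m2**2*(s + 8*t) + 4*m2**3)

def perm_sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# eps_Y via the 4-dimensional Levi-Civita tensor, eps^{0123} = +1
eY = np.zeros(4)
for nu, rho, sig, mu in itertools.permutations(range(4)):
    eY[mu] += perm_sign((nu, rho, sig, mu)) * (g @ p1)[nu] * (g @ p2)[rho] * (g @ p3)[sig]
eY /= np.sqrt(s*(-t*(s + t) + 2*m2*t - m2**2)/4)

assert abs(dot(eX, p1)) < 1e-12 and abs(dot(eX, p2)) < 1e-12
assert abs(dot(eT, p3)) < 1e-12 and abs(dot(eT, p4)) < 1e-12
assert all(abs(dot(eY, p)) < 1e-12 for p in (p1, p2, p3))
for e in (eX, eT, eY):
    assert abs(dot(e, e) + 1) < 1e-12
```

The check also confirms implicitly that all three normalization arguments are positive at a physical phase-space point, so no imaginary normalization factors arise there.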
\nHelicity eigenstates constructed from the above elementary linear polarization state \nvectors are defined in the center-of-mass reference frame of the two colliding particles.\nThe third physical polarization state of a massive gauge boson is described by \nthe longitudinal polarization vector (defined in the same reference frame), \nwhich has its spatial part aligned with the momentum of the boson.\nFor the massive particle $g_3(p_3)$ these conditions translate into \nthe following set of equations for its longitudinal polarization vector $\\varepsilon_{L3}^{\\mu}$:\n\\begin{eqnarray} \\label{EQ:L3polEQs}\n&&\\varepsilon_{L3}^{\\mu} = c^{L3}_1 \\left( p^{\\mu}_1 + p^{\\mu}_2 - p^{\\mu}_3 \\right) \n+ c^{L3}_2 p^{\\mu}_3 \\, , \\nonumber\\\\ \n&&\\varepsilon_{L3} \\cdot p_3 = 0 \\, ,\\nonumber\\\\\n&&\\varepsilon_{L3} \\cdot \\varepsilon_{L3} = -1 \\, .\n\\end{eqnarray}\nSolving eq.~(\\ref{EQ:L3polEQs}) for $c^{L3}_1,~ c^{L3}_2$, one obtains \n\\begin{eqnarray} \\label{EQ:L3polMBR}\n\\varepsilon^{\\mu}_{L3} = \\mathcal{N}_{L3} \n\\Big(-2 m^2~ \\left( p^{\\mu}_1 + p^{\\mu}_2 \\right) + s~ p^{\\mu}_3 \\Big) \\, ,\n\\end{eqnarray}\nwhere $\\mathcal{N}_{L3}^{~-2} = s m^2 (s-4 m^2)$. \nFor the massive vector boson $g_4(p_4)$ one gets for its longitudinal polarization vector $\\varepsilon_{L4}^{\\mu}$: \n\\begin{eqnarray} \\label{EQ:L4polMBR}\n\\varepsilon^{\\mu}_{L4} = \\mathcal{N}_{L4} \n\\Big((s -2 m^2)~ \\left( p^{\\mu}_1 + p^{\\mu}_2 \\right) - s~ p^{\\mu}_3 \\Big) \\, ,\n\\end{eqnarray}\nwhere $\\mathcal{N}_{L4} = \\mathcal{N}_{L3}$.\n By construction the defining physical properties, such as orthogonality to the momenta, \nare fulfilled by these momentum basis representations, even if \ntheir open Lorentz indices are taken to be D-dimensional. 
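For the longitudinal vectors, eqs.~(\\ref{EQ:L3polMBR}) and (\\ref{EQ:L4polMBR}), the defining conditions (orthogonality to the boson's own momentum, unit norm, and spatial part parallel to the boson's 3-momentum in the center-of-mass frame) can again be verified at an explicit phase-space point; the numerical values below are our own illustrative choice:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

m2, E, th = 1.0, np.sqrt(2.5), 2*np.pi/5              # s = 10 > 4 m^2
k = np.sqrt(E**2 - m2)
p1 = np.array([E, 0.0, 0.0,  E]); p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E, k*np.sin(th), 0.0, k*np.cos(th)]); p4 = p1 + p2 - p3
s = dot(p1 + p2, p1 + p2)

NL = 1.0/np.sqrt(s*m2*(s - 4*m2))
eL3 = NL*(-2*m2*(p1 + p2) + s*p3)                     # eq. (EQ:L3polMBR)
eL4 = NL*((s - 2*m2)*(p1 + p2) - s*p3)                # eq. (EQ:L4polMBR)

for e, p in ((eL3, p3), (eL4, p4)):
    assert abs(dot(e, p)) < 1e-10                     # orthogonal to its own momentum
    assert abs(dot(e, e) + 1) < 1e-10                 # normalized to -1
    # spatial part parallel to the boson's 3-momentum in this frame
    assert np.allclose(np.cross(e[1:], p[1:]), 0.0, atol=1e-10)
```

Since eq.~(\\ref{EQ:PLpolMBR}) coincides with eq.~(\\ref{EQ:L3polMBR}), the same checks also cover the massive fermion's polarization vector $S_k^{\\mu}$ discussed below.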
\nWe emphasize that in our prescription the number of physical polarizations of a massive vector boson \nremains three in D dimensions.~\\\\\n\n\nThere are also polarization vectors associated with massive fermions. \nThe helicity eigenstate of a massive fermion with 4-momentum $k$ can be described by \na Dirac spinor, e.g.~$u(k,S_k)$, characterized by the normalized space-like polarization \nvector $S_k^{\\mu}$. In components, \n\\begin{eqnarray} \nS_k^{\\mu} = \\left(\\frac{|\\vec{k}|}{m}, \\frac{k^0}{m} \\frac{\\vec{k}}{|\\vec{k}|}\\right) \\, ,\n\\end{eqnarray} \nwhere $k^0$ and $m$ are, respectively, the energy and mass of the massive fermion,\nwhile $\\vec{k}$ represents its 3-momentum. \nInterestingly, this polarization vector has the same momentum basis decomposition form \nas the longitudinal polarization vector of a massive vector boson \n(of the same momentum), provided the same external kinematic configuration applies. \nBy identifying $p_3^{\\mu} = k^{\\mu}$, eq.~(\\ref{EQ:L3polMBR}) can be viewed as the momentum \nbasis representation of $S_k^{\\mu}$ for the same external kinematic configuration as above. \nNamely, \n\\begin{eqnarray} \\label{EQ:PLpolMBR}\nS_k^{\\mu} = \\mathcal{N}_{S_k} \n\\Big(-2 m^2~\\left( p^{\\mu}_1 + p^{\\mu}_2 \\right) + s~ k^{\\mu} \\Big)\n\\end{eqnarray} \nwith $\\mathcal{N}_{S_k}^{~-2} = s m^2 (s-4 m^2)$.\nThis is because the set of norm-orthogonal conditions that $S_k^{\\mu}$ has to fulfill, \nnamely $k \\cdot S = 0~,~ S \\cdot S = -1~,~ \\vec{S} \\parallel \\vec{k}$, \nwhich are sufficient to determine it up to an overall phase, are exactly \nthe same as those that the longitudinal polarization vector \nin eq.~(\\ref{EQ:L3polMBR}) has to fulfill. \n\n\n\\subsection{Normalized tensor products of external spinors}\n\\label{SEC:prescription:NTS}\n\n\nIn cases where external fermions are involved in scattering amplitudes, \nstrings of Dirac matrices sandwiched between external on-shell spinors \nwill show up. 
In order to evaluate each open fermion line using \ntrace techniques, we employ the standard trick of multiplying and dividing \nthis quantity by appropriate auxiliary Lorentz-invariant spinor inner products,\nwhich can be traced back to~ref.~\\cite{Bjorken:1966kh}.\nPulling out the chosen overall Lorentz-invariant normalization factor, \nthe rest can be cast into a trace of products of Dirac matrices with the aid \nof Landau density matrices of external spinors.\nThe momentum basis representations of a (massive) fermion's space-like \npolarization vector, such as eq.~(\\ref{EQ:PLpolMBR}), can be used \n in these density matrices.\nFor massless fermions, the spin density matrices are reduced to \nleft- and right-chirality projectors, which spares us from introducing \nany explicit polarization vector in this case. \nThis is because helicity states of massless fermions coincide with chiral spinors. ~\\\\\n\n\nFrom a single open fermion line in a Feynman diagram, we get a contribution \nwhich can be generically written as $\\langle \\psi_A|\\hat{\\mathrm{M}}|\\psi_B \\rangle$.\nThe symbol $\\hat{\\mathrm{M}}$ denotes a product of Dirac matrices with their Lorentz indices \neither contracted or left open, and $|\\psi_A\\rangle,~|\\psi_B \\rangle$ stand for \nthe two external on-shell Dirac spinors, either of $u$-type or $v$-type. 
\nViewed as a spinor inner product, $\\langle \\psi_A|\\hat{\\mathrm{M}}|\\psi_B \\rangle$ \ncan always be rewritten as a trace of a product of Dirac-matrices \nin the Dirac-spinor space:\n\\begin{eqnarray} \\label{EQ:SFLtrace1}\n\\langle \\psi_A|\\hat{\\mathrm{M}}|\\psi_B \\rangle = \\mathrm{Tr} \\Big{[} \n|\\psi_B \\rangle \\langle \\psi_A|\\hat{\\mathrm{M}} \\Big{]}.\n\\end{eqnarray}\nThis formal rewriting is not really useful unless we can further \nexploit the matrix structure of the external spinors' tensor product $|\\psi_B \\rangle \\langle \\psi_A|$ \nin the spinor space (explicitly in terms of elementary Dirac matrices). \nTo this end, we rewrite $|\\psi_B \\rangle \\langle \\psi_A|$ \nby introducing an auxiliary spinor inner product along the following line:\n\\begin{eqnarray} \\label{EQ:TPextsps1}\n|\\psi_B \\rangle \\langle \\psi_A| \n&=& \n\\frac{\\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle }{\\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle }\n|\\psi_B \\rangle \\langle \\psi_A| \\nonumber\\\\\n&=& \\frac{1}{\\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle}\n|\\psi_B \\rangle \\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle \\langle \\psi_A| \\nonumber\\\\\n&=& \\mathcal{N}_{AB}~\n|\\psi_B \\rangle \\langle \\psi_B| \\hat{\\mathrm{N}} |\\psi_A \\rangle \\langle \\psi_A| \\, , \n\\end{eqnarray}\nwhere $\\mathcal{N}_{AB} \\equiv ({\\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle})^{-1}$.\n The auxiliary matrix $\\hat{\\mathrm{N}}$ is only required to have a non-vanishing\nmatrix element $\\langle \\psi_B|\\hat{\\mathrm{N}}|\\psi_A \\rangle$ and otherwise can be chosen \nto be as simple as desired.\nFor instance, for massive external spinors of some particular helicity configurations,\n $\\hat{\\mathrm{N}}$ may be chosen to be the identity matrix in spinor space, provided that \nthe spinor inner products between those helicity spinors are not vanishing. 
\nA generally valid and simple choice is $\\hat{\\mathrm{N}} = \\gamma_{\\mu} p^{\\mu}$ \nwith a 4-momentum $p^{\\mu}$ that is linearly independent of the on-shell momenta $p_A$ and $p_B$\nof $\\langle \\psi_A|$ and $|\\psi_B \\rangle$, respectively. \n\n\nWe manipulate eq.~(\\ref{EQ:TPextsps1}) further by first substituting Landau density matrices for \n$|\\psi_A \\rangle \\langle \\psi_A|$ and $|\\psi_B \\rangle \\langle \\psi_B|$, conventionally given by \n\\begin{eqnarray} \\label{EQ:LDMofDSP}\nu(p,S_p) \\otimes \\bar{u}(p,S_p) &=& \\left(\\slashed{p} + m \\right) \\frac{1+\\gamma_5 \\slashed{S}_p}{2} \\, , \\nonumber\\\\\nv(p,S_p) \\otimes \\bar{v}(p,S_p) &=& \\left(\\slashed{p} - m \\right) \\frac{1+\\gamma_5 \\slashed{S}_p}{2} \\, .\n\\end{eqnarray} \nThen we simplify the resulting composite Dirac matrix object \nbefore finally obtaining a form that is suitable for being unambiguously used \nin eq.~(\\ref{EQ:SFLtrace1}) with the trace to be done in D dimensions. \n\n\nThere are several equivalent forms of these on-shell Dirac-spinors' projectors \nin 4 dimensions. In particular, one may commute the on-shell projection \noperator $\\slashed{p} \\pm m$ and the polarization projection operator \n$({1+\\gamma_5 \\slashed{S}_p})\/{2}$ using $p \\cdot S_p = 0$.\nHowever, it is well known that a fully anticommuting $\\gamma_5$ cannot be\nimplemented in dimensional regularization in an algebraically consistent way~\\cite{Collins:1984xc}, \nif we still want this object to coincide with the usual $\\gamma_5$ in 4 dimensions. \nIn this article, we adopt the $\\gamma_5$ prescription of refs.~\\cite{Larin:1991tj,Larin:1993tq}, \nconventionally known as Larin's scheme, whose equivalent but more efficient implementations in high-order \nperturbative calculations are discussed in ref.~\\cite{Moch:2015usa}. 
\nIn our work, all appearances of $\\gamma_5$ matrices, originating either from \ninteraction vertices or external polarization projectors, should be \nregarded just for bookkeeping purposes and their interpretations shall be based \non~\\cite{Larin:1991tj,Larin:1993tq,Moch:2015usa}. \nAs a consequence of this prescription, $\\gamma_5$ no longer anticommutes with all \nDirac $\\gamma$ matrices, and 4-dimensional equivalent forms of eq.~(\\ref{EQ:LDMofDSP}) \nare no longer necessarily algebraically equivalent in D dimensions. \n\n\nIn order to eliminate potential ambiguities --- after having simplified \\eqref{EQ:SFLtrace1}, \\eqref{EQ:LDMofDSP}\nusing 4-dimensional Lorentz and Dirac algebra as much as possible ---, we should agree on one definite fixed form of \n\\eqref{EQ:SFLtrace1}, \\eqref{EQ:LDMofDSP}, solely in terms of a string of Dirac $\\gamma$ matrices with \nfixed product ordering, the Levi-Civita tensor, and external momenta. \nWe may call these their canonical forms in 4 dimensions. \nThis allows an unambiguous interpretation\\footnote{This is up to a potential subtlety \nrelated to the contraction order among multiple Levi-Civita tensors~\\cite{Moch:2015usa}, \nas will be commented on in section~\\ref{SEC:examples:eeQQ}.} \nof the expression in D dimensions where it will be manipulated according to the $D$-dimensional algebra \nafter being inserted back into eq.~(\\ref{EQ:SFLtrace1}).\n\n\nLet us now be more specific about this by working out a representative case, \na single open fermion line with two massive external $u$-type spinors, $u(p_A,S_A)$ and $u(p_B,S_B)$.\nWe choose $\\hat{\\mathrm{N}} = \\slashed{q}$ where $q^{\\mu}$ is a 4-momentum \nthat is linearly independent of $p_A$ and $p_B$. 
\nPulling out the normalization factor $\\mathcal{N}_{AB} = \\left({\\bar{u}(p_A,S_A) \\slashed{q} u(p_B,S_B)}\\right)^{-1}$, \neq.~(\\ref{EQ:TPextsps1}) reads in this case:\n\\begin{eqnarray}\\label{EQ:TPextspsaux}\n\\frac{1}{\\mathcal{N}_{AB}}~ u(p_B,S_B) \\otimes \\bar{u}(p_A,S_A) \n&=& \n\\left(\\slashed{p}_B + m \\right) \n\\frac{1+\\gamma_5 \\slashed{S}_B}{2}\n\\slashed{q}\n\\frac{1+\\gamma_5 \\slashed{S}_A}{2} \n\\left(\\slashed{p}_A + m \\right) \\, ,\n\\end{eqnarray}\n which can be brought into the form\n\\begin{eqnarray} \\label{EQ:TPextsps2}\n\\frac{1}{\\mathcal{N}_{AB}}~ u(p_B,S_B) \\otimes \\bar{u}(p_A,S_A) \n&=& \\left(\\slashed{p}_B + m \\right) \n\\frac{1}{4}\\slashed{q}\n\\left(\\slashed{p}_A + m \\right) \\nonumber\\\\\n&+& \\left(\\slashed{p}_B + m \\right) \n\\frac{1}{4} \n\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_B} \\right) \\slashed{q}\n\\left(\\slashed{p}_A + m \\right) \\nonumber\\\\\n&+& \\left(\\slashed{p}_B + m \\right) \n\\frac{1}{4}\\slashed{q} \n\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_A} \\right)\n\\left(\\slashed{p}_A + m \\right) \\nonumber\\\\\n&+& \\left(\\slashed{p}_B + m \\right) \n\\frac{1}{4} \\slashed{S}_B \\slashed{q} \\slashed{S}_A \n\\left(\\slashed{p}_A + m \\right) \\, .\n\\end{eqnarray} \nStrictly speaking, eq.~(\\ref{EQ:TPextsps2}) is identical to \\eqref{EQ:TPextspsaux}\nonly in 4 dimensions. The unambiguous eq.~(\\ref{EQ:TPextsps2}), which no longer contains \nany explicit $\\gamma_5$, will be taken as the definition of $u(p_B,S_B) \\otimes \\bar{u}(p_A,S_A)$ \nwhen it is inserted into eq.~(\\ref{EQ:SFLtrace1}) and manipulated in accordance with \nthe D-dimensional algebra.\n\n\nNotice that in eq.~\\eqref{EQ:TPextspsaux} the auxiliary matrix \n$\\slashed{q}$ and the polarization projection operators were \nplaced between the on-shell projection operators $\\slashed{p}_{\\tiny{A\/B}} + m$,\na point which will be explained in section~\\ref{SEC:unitarity:PSA}. 
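The two rewritings underlying this construction, eq.~(\\ref{EQ:SFLtrace1}) and eq.~(\\ref{EQ:TPextsps1}), are pure linear algebra in the 4-dimensional spinor space and can be checked directly with explicit Dirac matrices. In the sketch below (our own illustration) the external spinors are replaced by arbitrary complex 4-component vectors, which suffices for these two identities since they do not rely on the on-shell conditions:

```python
import numpy as np

# gamma matrices in the Dirac representation (any representation works here)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
slash = lambda p: p[0]*gam[0] - p[1]*gam[1] - p[2]*gam[2] - p[3]*gam[3]   # gamma . p

rng = np.random.default_rng(1)
psiA = rng.standard_normal(4) + 1j*rng.standard_normal(4)   # stand-in for |psi_A>
psiB = rng.standard_normal(4) + 1j*rng.standard_normal(4)   # stand-in for |psi_B>
barA, barB = psiA.conj() @ g0, psiB.conj() @ g0             # Dirac adjoints <psi|
M = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))  # generic string M-hat
Nhat = slash(np.array([1.0, 0.3, -0.2, 0.5]))               # auxiliary N-hat = q-slash

# eq. (EQ:SFLtrace1): the spinor sandwich equals a spinor-space trace
assert np.isclose(barA @ M @ psiB, np.trace(np.outer(psiB, barA) @ M))

# eq. (EQ:TPextsps1): |psi_B><psi_A| rebuilt from the two density-matrix-like factors
NAB = 1.0/(barB @ Nhat @ psiA)
rebuilt = NAB * (np.outer(psiB, barB) @ Nhat @ np.outer(psiA, barA))
assert np.allclose(rebuilt, np.outer(psiB, barA))
```

The D-dimensional subtleties discussed in the text enter only once the density matrices of eq.~(\\ref{EQ:LDMofDSP}) and the $\\gamma_5$ prescription are substituted; the identities checked here hold for any choice of $\\hat{\\mathrm{N}}$ with a non-vanishing matrix element.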
\nWe emphasize again that the momentum basis representations of the\nmassive fermions' helicity polarization vectors $S^{\\mu}_A$ and $S^{\\mu}_B$\nwill eventually be inserted, whose open Lorentz indices are carried \nby external momenta that are assumed to be D dimensional.\nSimilar rewritings and definitions like eq.~(\\ref{EQ:TPextsps2}) can be made \nalso for fermion lines with external $v$-type spinors, whose Landau density \nmatrices are given in eq.~(\\ref{EQ:LDMofDSP}). \n\n\n\nIn practice, it is very convenient to keep projections associated with each of \nthe four terms in eq.~(\\ref{EQ:TPextsps2}) separate from each other, for two reasons.\nFirst, this organization is in accordance with the power of the Levi-Civita tensor \nappearing in the terms, which is advantageous especially when Feynman diagram expressions are \nalso split into terms with even and odd products of $\\gamma_5$ (arising from axial vertices). \nSecond, for a fermion with fixed momentum, its polarization vector, e.g.~$S_A$ or $S_B$ \nin eq.~(\\ref{EQ:TPextsps2}), changes just by an overall minus sign when its helicity is flipped.\nTherefore, the expressions of eq.~(\\ref{EQ:SFLtrace1}) for the four different helicity configurations\ncan all be obtained by suitably combining the traces in eq.~(\\ref{EQ:SFLtrace1}) of the product \nof $\\hat{\\mathrm{M}}$ and each of the four terms in eq.~(\\ref{EQ:TPextsps2}).\n\nNotice that in general the normalization factor $\\mathcal{N}_{AB}$ in eq.~(\\ref{EQ:SFLtrace1}) \ndepends on the helicities of the external fermions A and B, \nas will be explicitly shown in the example given in section~\\ref{SEC:examples:eeQQ}.\nUsing \\eqref{EQ:TPextsps2} we need to compute these four individual projections separately just once, \nout of which all four different helicity configurations can be obtained. 
\n\n\nOnce a definite unambiguous form of the right-hand side of eq.~(\\ref{EQ:TPextsps2}) has been established \nin 4 dimensions, it will be kept fixed while all open Lorentz and Dirac indices will be promoted \nin accordance with computations in CDR. \nAdditionally, just like the aforementioned normalization factors associated with the gauge boson's polarization vectors, \nthe factor $\\mathcal{N}_{AB}$ in eq.~(\\ref{EQ:SFLtrace1}) is an overall normalization factor \nwhich must be adopted consistently in computing all amplitudes involved in the calculations of finite remainders. \nIf one chooses to incorporate this overall normalization factor only in the very last stage of calculating \nfinite remainders where the 4 dimensional limit can already be taken, \nit is then evident that we can evaluate these Lorentz invariant factors in 4 dimensions.~\\\\\n\n\nAs already mentioned above, in the massless limit the spin density matrices in eq.~(\\ref{EQ:LDMofDSP}) \nare reduced to left- or right-chiral projectors. Thus no polarization vectors are needed.\nFor instance, the massless limit of eq.~(\\ref{EQ:TPextsps2}) with $++$ helicity configuration reads: \n\\begin{eqnarray} \\label{EQ:TPextsps2massless}\n\\frac{1}{\\mathcal{N}_{AB}}~ u(p_B,+) \\otimes \\bar{u}(p_A,+) \n&=& \n\\slashed{p}_B \n\\frac{1-\\gamma_5 }{2}\n\\slashed{q}\n\\frac{1+\\gamma_5}{2} \n\\slashed{p}_A \\nonumber\\\\\n&=& \\frac{1}{2} \n\\Bigg(\n\\slashed{p}_B \n\\slashed{q} \n\\slashed{p}_A \n- \\slashed{p}_B \n\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma q} \\right)\n\\slashed{p}_A \\Bigg). 
\n\\end{eqnarray}\nThe remarks below eq.~\\eqref{EQ:TPextsps2} concerning the use of this equation in D-dimensional calculations\napply also to \\eqref{EQ:TPextsps2massless}.\nThe above reformulations of tensor products of two external helicity spinors, \nsuch as eq.~(\\ref{EQ:TPextsps2}) and eq.~(\\ref{EQ:TPextsps2massless}), \ncan be applied to each single open fermion line, in addition to using, for each external boson,\na momentum basis representation of its polarization vector. ~\\\n\n\n\nTo summarize, the tensor product of momentum basis representations of all external gauge bosons' polarization vectors \nand all properly re-written external spinor products, such as those given by \neqs.~\\eqref{EQ:XpolMBR}--\\eqref{EQ:YpolMBR} and eqs.~(\\ref{EQ:TPextsps2}) and (\\ref{EQ:TPextsps2massless}), \nwith their open indices promoted in accordance with CDR, will be taken as the external projectors for polarized amplitudes. \nPolarized amplitudes are thus first projected in the linear polarization basis for external gauge bosons \nand in the basis indicated by eqs.~(\\ref{EQ:TPextsps2}) and (\\ref{EQ:TPextsps2massless}) for each open fermion line.\nIt is good practice to first combine Levi-Civita tensors that appear in external projectors in order to \nreach an unambiguous canonical form that is homogeneous in the Levi-Civita tensor with power at most one\\footnote{See the end of \nsection~\\ref{SEC:examples:eeQQ} for more discussion of this point.}. \nHelicity amplitudes can subsequently be obtained from these polarized amplitudes by\nlinear combinations, such as those implied in eq.~(\\ref{EQ:LP2HLmassless}). \nFor instance, the transformation matrix of polarized scattering amplitudes \namong four massless gauge bosons from the linear to the circular polarization basis \nis a $16\\times16$ constant matrix that can be extracted from eq.~(\\ref{EQ:LP2HLmassless}). 
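The size and structure of such a basis change can be illustrated with a short numerical sketch. The per-boson phase convention $\varepsilon_{\pm} = (\mp\varepsilon_{X} - i\,\varepsilon_{Y})/\sqrt{2}$ used below is one common choice and is an assumption here (the actual phases are fixed by eq.~(\ref{EQ:LP2HLmassless})); the $16\times16$ matrix arises as the four-fold Kronecker product of the single-boson $2\times2$ matrix:

```python
import numpy as np

# single-boson change of basis (eps_X, eps_Y) -> (eps_+, eps_-),
# assuming the common phase convention eps_pm = (-/+ eps_X - 1j*eps_Y)/sqrt(2)
U = np.array([[-1, -1j],
              [ 1, -1j]], dtype=complex) / np.sqrt(2)
assert np.allclose(U @ U.conj().T, np.eye(2))   # unitary

# four external gauge bosons: the basis change acts independently on each,
# so the full transformation is a 4-fold Kronecker product, a 16x16 matrix
M = U
for _ in range(3):
    M = np.kron(M, U)

assert M.shape == (16, 16)
assert np.allclose(M @ M.conj().T, np.eye(16))  # still unitary, hence constant and invertible
```

Because the matrix is constant (kinematics-independent) and unitary, applying it to the sixteen linearly polarized amplitudes is a trivial post-processing step.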
\nLikewise, constant transformation matrices can also be extracted from eq.~(\\ref{EQ:TPextsps2}) \nfor massive fermion lines and eq.~(\\ref{EQ:TPextsps2massless}) for massless fermion lines.~\\\n\n\nEventually every helicity amplitude thus composed is manifestly given as a function of Lorentz-invariant \nvariables made solely out of external momenta. \nThis is because the momentum basis representations of polarization vectors \nallow us to find a Lorentz-covariant representation of the tensor product of external particle states \nsolely in terms of external momenta and algebraic constants (such as the metric tensor, the\nLevi-Civita tensor and Dirac matrices). \nThis in turn makes it feasible to directly take these objects as the external polarization projectors.\nFrom the point of view of the projection method as outlined in section~\\ref{SEC:projectionmethod:recap},\nthe set of external polarization projectors described above might be loosely viewed as \na special choice of Lorentz decomposition basis whose elements are by construction mutually orthogonal.\nConsequently, the corresponding Gram matrix is diagonal and its inversion is trivial. \nFurthermore, each structure that arises from such a decomposition is directly related to a physical quantity, \nand therefore its singularity pattern is protected by physical constraints obeyed by these physical quantities. \nIn this way the issues related to the conventional form factor decomposition as discussed \nin section~\\ref{SEC:projectionmethod:comments} are circumvented. 
\n\n\n\\subsection{Comments on other processes} \n\\label{SEC:prescription:comments}\n\n\nIn the preceding subsections we have discussed a prototype $2 \\to 2$ scattering process \nwhere there are only three linearly independent external momenta and consequently \nthe Lorentz-invariant scattering amplitude cannot contain a term composed of one Levi-Civita tensor \nfully contracted with external momenta.\nThis fact can lead to a reduction of terms that are to be included in external projectors. \nFor instance, if the $2\\to 2$ scattering process is parity-invariant, then all terms in the external projectors \nthat are linear in the Levi-Civita tensor can be dropped from the outset. \nThis simplification no longer occurs if there are more than four particles involved in the scattering, \ne.g. in a $2\\to 3$ process.\nWe comment on a few technical aspects of handling these cases.~\\\\ \n\n \n\\subsubsection*{$2\\to 3$ scattering.} \n\nWe have seen in sections~\\ref{SEC:prescription:MBR} and \\ref{SEC:prescription:NTS} \nthat three linearly independent external momenta are sufficient to build momentum basis representations \nof external polarization vectors, and the concrete decomposition coefficients depend on the particular \nkinematics. For constructing momentum basis representations of polarization vectors for final-state particles, \nit is convenient to take a group of three linearly independent external momenta of which two are always \nchosen to be the momenta of the initial-state (massless) particles and the third is the momentum of the \nfinal-state particle in question. 
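Constructions of this kind are straightforward to validate numerically. The sketch below implements the generic representations documented in eq.~(\ref{EQ:TranspolMBRmassiveGeneric}) below, at an arbitrary illustrative kinematic point with metric signature $(+,-,-,-)$, and checks the normalization to $-1$ together with the orthogonality relations built into the construction (note that cross-orthogonality between polarization vectors of different particles, e.g. $\varepsilon_X \cdot \varepsilon_T$, is not among them):

```python
import numpy as np
from itertools import permutations

g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric, signature (+,-,-,-)
dot = lambda a, b: a @ g @ b

# generic 2 -> 3 kinematics: massless beams p1, p2 and a massive final-state p3
# (the numerical values are arbitrary illustrative choices)
E, E3, m, th, ph = 1.0, 0.7, 0.3, 0.8, 0.5
k = np.sqrt(E3**2 - m**2)
p1 = np.array([E, 0.0, 0.0,  E])
p2 = np.array([E, 0.0, 0.0, -E])
p3 = np.array([E3, k*np.sin(th)*np.cos(ph), k*np.sin(th)*np.sin(ph), k*np.cos(th)])
s12, s13, s23 = 2*dot(p1, p2), 2*dot(p1, p3), 2*dot(p2, p3)

# eq. (EQ:TranspolMBRmassiveGeneric) together with its normalization factors
eX = (-s23*p1 - s13*p2 + s12*p3) / np.sqrt(s12*(s13*s23 - m**2*s12))
eT = ((s23*(s13+s23) - 2*m**2*s12)*p1 + (-s13*(s13+s23) + 2*m**2*s12)*p2
      + s12*(s13-s23)*p3) / np.sqrt(s12*(s13*s23*(s13+s23)**2
      - m**2*s12*(s13**2 + 6*s13*s23 + s23**2) + 4*m**4*s12**2))
eL3 = (-2*m**2*(p1+p2) + (s13+s23)*p3) / np.sqrt(m**2*((s13+s23)**2 - 4*m**2*s12))

# eps^mu_{p1 p2 p3}; the overall sign convention drops out of all checks below
lev = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i+1, 4))
    lev[perm] = (-1.0)**inv
eY = np.einsum('abcd,b,c,d->a', lev, g@p1, g@p2, g@p3)
eY = eY / np.sqrt(0.25*s12*(s13*s23 - m**2*s12))

# normalization to -1 and the orthogonality relations of the construction
for e in (eX, eT, eY, eL3):
    assert abs(dot(e, e) + 1) < 1e-10
for a, b in [(eX, p1), (eX, p2), (eT, p3), (eL3, p3),
             (eY, p1), (eY, p2), (eY, p3), (eT, eL3)]:
    assert abs(dot(a, b)) < 1e-10
```

Such spot checks are cheap insurance against sign and normalization slips when transcribing the decomposition coefficients for a new kinematic configuration.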
\n\n\nFor convenience, let us document below the momentum basis representations of linear polarization \nvectors introduced in section~\\ref{SEC:prescription:MBR}, but without specializing the external kinematic \nconfiguration.\nWe consider a generic configuration with two massless initial-state particles with momenta $p_1$ and $p_2$ \n(which applies to most phenomenologically interesting high-energy scattering processes), \nwhile the mass of the particular final-state particle in question, with momentum labeled $p_3$, is \nleft unspecified. These three external momenta are assumed to be linearly independent. \nNo specification is made of the kinematics of the other particles in the final state. \n\n\nWe introduce the following symbols for kinematic invariants:\n\\begin{eqnarray}\ns_{12} = 2~ p_1 \\cdot p_2 ~,~~ s_{13} = 2~ p_1 \\cdot p_3 ~,~~ \ns_{23} = 2~ p_2 \\cdot p_3 ~,~~ m^2 = p_3 \\cdot p_3 ~,\n\\end{eqnarray} \nwhich are assumed to be independent of each other. Repeating the construction made \nin section~\\ref{SEC:prescription:MBR}, we obtain for this generic kinematic setting:\n\\begin{eqnarray} \\label{EQ:TranspolMBRmassiveGeneric}\n\\varepsilon^{\\mu}_{X} &=& \\mathcal{N}_{X} \n\\Big((-s_{23})~ p^{\\mu}_1 + (-s_{13})~ p^{\\mu}_2 + s_{12}~ p^{\\mu}_3 \\Big) , \\nonumber\\\\\n\\varepsilon^{\\mu}_{T} &=& \\mathcal{N}_{T} \n\\Big((s_{23} (s_{13} + s_{23}) - 2 m^2 s_{12} )~ p^{\\mu}_1 + (-s_{13} (s_{13} + s_{23}) + 2 m^2 s_{12})~ p^{\\mu}_2 + (s_{12} (s_{13} - s_{23}))~ p^{\\mu}_3 \\Big), \\nonumber\\\\\n\\varepsilon^{\\mu}_{Y} &=& \\mathcal{N}_{Y}~ \\epsilon^{\\mu}_{p_1 p_2 p_3}, \\nonumber\\\\\n\\varepsilon^{\\mu}_{L3}&=& \\mathcal{N}_{L3} \n\\Big(-2 m^2~ \\left( p^{\\mu}_1 + p^{\\mu}_2 \\right) + (s_{13} + s_{23}) ~ p^{\\mu}_3 \\Big) \\, ,\n\\end{eqnarray}\nwith normalization factors \n\\begin{eqnarray}\n\\mathcal{N}_{X}^{~-2} &=& s_{12} \\left(s_{13} s_{23} - m^2 s_{12}\\right) \\, ,\\nonumber\\\\\n\\mathcal{N}_{T}^{~-2} &=& s_{12} 
\\left( s_{13} s_{23} (s_{13} + s_{23})^2 - m^2 s_{12} (s_{13}^2 + 6 s_{13} s_{23} + s_{23}^2) + 4 m^4 s_{12}^2 \\right) \\, , \\nonumber\\\\\n\\mathcal{N}_{Y}^{~-2} &=& \\frac{1}{4} s_{12} \\left(s_{13} s_{23} - m^2 s_{12}\\right) \\, , \\nonumber\\\\\n\\mathcal{N}_{L3}^{~-2} &=& m^2 \\left((s_{13} + s_{23})^2 - 4 m^2 s_{12}\\right) \\, .\n\\end{eqnarray}\nAll previous comments on polarization vectors and normalization factors apply here as well.\n~\\\n\n\nDue to the presence of four linearly independent external momenta in a $2\\to 3$ process, \nthe aforementioned reduction of terms in the external projectors for $2\\to 2$ processes \nno longer applies in general. \nHowever, this fact also offers an opportunity to eliminate the explicit appearance of \nthe Levi-Civita tensor $\\epsilon^{\\mu\\nu\\rho\\sigma}$ from external projectors \n(which may already be pre-processed to be at most linear in $\\epsilon^{\\mu\\nu\\rho\\sigma}$),\nby applying the trick used in defining the van Neerven-Vermaseren basis~\\cite{vanNeerven:1983vr}.\nTo be more specific, let us consider a $2\\to 3$ scattering process where the four linearly independent \n4-momenta are denoted by $p_1, p_2, p_3, p_4$.\nA single power of $\\epsilon^{\\mu\\nu\\rho\\sigma}$ in external polarization projectors can be rewritten as \n\\begin{eqnarray} \\label{EQ:NVtrick}\n\\epsilon^{\\mu\\nu\\rho\\sigma} &=& \n\\frac{\\epsilon_{p_1 p_2 p_3 p_4}}{\\epsilon_{p_1 p_2 p_3 p_4}} \n~\\epsilon^{\\mu\\nu\\rho\\sigma} \\nonumber\\\\\n&=& \\Delta ~ \n\\Big(\np_1^{\\rho } p_2^{\\nu } p_3^{\\mu } p_4^{\\sigma }-p_1^{\\nu } p_2^{\\rho } p_3^{\\mu } p_4^{\\sigma }-p_1^{\\rho } p_2^{\\mu } p_3^{\\nu } p_4^{\\sigma }+p_1^{\\mu } p_2^{\\rho } p_3^{\\nu } p_4^{\\sigma }+p_1^{\\nu } p_2^{\\mu } p_3^{\\rho } p_4^{\\sigma }-p_1^{\\mu } p_2^{\\nu} p_3^{\\rho } p_4^{\\sigma }\n\\nonumber\\\\&&~~~~~\n-p_1^{\\rho } p_2^{\\nu } p_3^{\\sigma } p_4^{\\mu }+p_1^{\\nu } p_2^{\\rho } p_3^{\\sigma } p_4^{\\mu}+p_1^{\\rho } p_2^{\\sigma } 
p_3^{\\nu } p_4^{\\mu }-p_1^{\\sigma } p_2^{\\rho } p_3^{\\nu } p_4^{\\mu }-p_1^{\\nu } p_2^{\\sigma }\np_3^{\\rho } p_4^{\\mu }+p_1^{\\sigma } p_2^{\\nu } p_3^{\\rho } p_4^{\\mu }\n\\nonumber\\\\&&~~~~~\n+p_1^{\\rho } p_2^{\\mu } p_3^{\\sigma } p_4^{\\nu }-p_1^{\\mu }p_2^{\\rho } p_3^{\\sigma } p_4^{\\nu }-p_1^{\\rho } p_2^{\\sigma } p_3^{\\mu } p_4^{\\nu }+p_1^{\\sigma } p_2^{\\rho } p_3^{\\mu } p_4^{\\nu\n }+p_1^{\\mu } p_2^{\\sigma } p_3^{\\rho } p_4^{\\nu }-p_1^{\\sigma } p_2^{\\mu } p_3^{\\rho } p_4^{\\nu }\n\\nonumber\\\\&&~~~~~\n-p_1^{\\nu } p_2^{\\mu } p_3^{\\sigma } p_4^{\\rho }+p_1^{\\mu } p_2^{\\nu } p_3^{\\sigma } p_4^{\\rho }+p_1^{\\nu } p_2^{\\sigma } p_3^{\\mu } p_4^{\\rho}-p_1^{\\sigma } p_2^{\\nu } p_3^{\\mu } p_4^{\\rho }-p_1^{\\mu } p_2^{\\sigma } \np_3^{\\nu } p_4^{\\rho }+p_1^{\\sigma } p_2^{\\mu } p_3^{\\nu } p_4^{\\rho } \\Big),\n\\nonumber\\\\\n\\end{eqnarray}\nwhere the normalization factor $\\Delta \\equiv \\frac{1}{\\epsilon_{p_1 p_2 p_3 p_4}}$ \ncan be conveniently pulled out and grouped together with other normalization factors of external projectors\n(and used consistently throughout the whole calculation). \nThe treatment of the Levi-Civita tensor in eq.~(\\ref{EQ:NVtrick}) complies with the two rules listed\nin section~\\ref{SEC:prescription:MBR}. \nIn this way, no Levi-Civita tensor appears in external polarization projectors \nfor $2 \\to 3$ scattering amplitudes any more, up to a global normalization factor, \nand hence it is manifest that the form of external projectors can be unambiguously constructed.\n\n\n\\subsubsection*{$1\\to 2$ decay.} \n\nFor a $1\\to 2$ decay amplitude, the conventional Lorentz tensor decomposition and projection method \ncan be carried out quite simply (due to the limited number of basis structures and scales). \nFor instance, for the fermion's gauge interaction vertex a general form factor decomposition\ncan be found in~\\cite{Hollik:1998vz}. 
\nHere we briefly comment on how one can compute polarized $1\\to 2$ decay amplitudes if one wants \nto use the above prescription. \n\n\nThe computation requires the introduction of an intermediate auxiliary reference-vector, \ndenoted by $\\hat{r}^{\\mu}$, which will be formally treated on the same footing \nas an external 4-momentum.\nThe reference-vector $\\hat{r}^{\\mu}$ may be associated with the polarization vector of \nthe decaying particle (in which case it has a physical meaning), or chosen to be \nan auxiliary coordinate-system-specific vector merely for intermediate usage. \nThe important point we would like to emphasize here is that \nthe definition of $\\hat{r}^{\\mu}$ can be achieved by simply specifying the values of \na complete set of quadratic Lorentz-invariant products involving $\\hat{r}^{\\mu}$ \nand two linearly independent external momenta, which we denote by $p_1$ and $p_2$. \nFor instance, the normalized space-like $\\hat{r}^{\\mu}$ can be implicitly specified by \n\\begin{eqnarray} \\label{EQ:defRV}\n\\hat{r} \\cdot p_1 = 0~, ~~ \\hat{r} \\cdot p_2 = 0~,~~ \\hat{r} \\cdot \\hat{r} = -1, \n\\end{eqnarray}\nwhich guarantees that it lies in the plane transverse to $p_1$ and $p_2$. \nThis set of assignments \\eqref{EQ:defRV} is sufficient to algebraically manipulate\n$\\hat{r}$ in the computation of polarized $1\\to 2$ decay amplitudes. \nThere is no need for its explicit component-wise specification in a definite coordinate-reference system. \nWith the aid of the thus-defined $\\hat{r}$, all procedures outlined above for the \n$2 \\to 2$ scattering processes, discussed in section~\\ref{SEC:prescription:MBR}, can be repeated here. 
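The implicit definition \eqref{EQ:defRV} is easy to visualize in a concrete frame. In the minimal numerical sketch below (metric signature $(+,-,-,-)$; back-to-back massless momenta along the $z$-axis are an illustrative assumption), every unit vector in the transverse $x$--$y$ plane satisfies all three conditions, for any azimuthal angle:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # metric, signature (+,-,-,-)
dot = lambda a, b: a @ g @ b

# illustrative frame: massless back-to-back momenta along the z-axis
p1 = np.array([1.0, 0.0, 0.0,  1.0])
p2 = np.array([1.0, 0.0, 0.0, -1.0])

# the conditions of eq. (EQ:defRV) fix rhat only up to rotations
# in the plane transverse to p1 and p2: any azimuthal angle works
for phi in np.linspace(0.0, 2*np.pi, 7):
    rhat = np.array([0.0, np.cos(phi), np.sin(phi), 0.0])
    assert abs(dot(rhat, p1)) < 1e-12
    assert abs(dot(rhat, p2)) < 1e-12
    assert abs(dot(rhat, rhat) + 1.0) < 1e-12
```

Since only the invariants fixed by eq.~(\ref{EQ:defRV}) ever enter the algebra, no particular azimuthal orientation needs to be singled out, which is exactly why physical decay rates come out independent of the auxiliary $\hat{r}$.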
\nTo be a bit more specific, in this case the set of three linearly independent 4-vectors \n$\\{ p_1, p_2, \\hat{r}\\}$ will take over the roles that were played by the three linearly independent \nexternal momenta $\\{ p_1, p_2, p_3\\}$ in the $2 \\to 2$ scattering processes.\nIn fact, the $\\hat{r}$ defined in eq.~(\\ref{EQ:defRV}) fulfills the same set of conditions\nthat $\\varepsilon_{X}$ in eq.~(\\ref{EQ:XpolEQs}) satisfies. \nMoreover, it never appears in Feynman propagators\\footnote{This means that the sectors of loop \nintegrals appearing in the projected amplitudes will not be enlarged by the introduction \nof this external reference-vector $\\hat{r}$.}, and the Lorentz invariants appearing in \nthe resulting projections are still just those made out of $p_1$ and $p_2$ (as the right-hand sides \nof eq.~(\\ref{EQ:defRV}) are all constants).\nIn the end the physical decay rates are independent of the choice of this auxiliary $\\hat{r}$. \nIn the case of a scalar decaying into a pair of fermions, the introduction of such an auxiliary vector \ncould be avoided because the helicity polarization vector of a massive fermion, eq.~(\\ref{EQ:PLpolMBR}), \nmakes no reference at all to any transverse direction w.r.t. its momentum. \n\n\n\\section{Unitarity of the Prescription}\n\\label{SEC:unitarity}\n\nThe potential RS dependence of amplitudes is intimately connected to the\nstructure of their UV and IR singularities. Fortunately, in QCD these singularities take a \nfactorized form at the amplitude level~\\cite{Sen:1982bt,Collins:1989bt,Catani:1998bh,Sterman:2002qn,Dixon:2008gr,Gardi:2009qi,Gardi:2009zv,Becher:2009cu,Becher:2009kw,Becher:2009qa,Feige:2014wja}. \nThe final result for a physical quantity, for instance a cross section, \nis of course finite and must not depend on the RS used. 
\n\n\nThe use of the polarization projectors defined in the previous sections \nyields helicity amplitudes that differ in general from those defined in many existing dimensional regularization \nvariants, in particular CDR. In this section, we argue that our prescription \nof external state vectors will however lead to the same RS-independent finite remainders as for instance\nin CDR, and can therefore be used in a hybrid way with CDR to achieve maximal convenience \nowing to the amplitude-level factorization of UV and IR singularities in QCD amplitudes. \n\n\\subsection{Pole-subtracted amplitudes}\n\\label{SEC:unitarity:PSA}\n\n\nWe recall that in the D-dimensional Lorentz decomposition representation \nof a scattering amplitude, the Lorentz-invariant form factors encode all dependence \non dimensionally regularized loop integrals and are independent of the external polarization vectors.\nOnce the (renormalized) loop amplitudes are available in such a tensor-decomposed form, \nwith all (singular) Lorentz-invariant form factors computed in D dimensions, \nthen merely changing the RS for the external particles' state vectors, consistently both for the loop amplitudes \nand the corresponding IR subtraction terms, should not alter the finite remainders resulting from \nsubtracting all poles and subsequently taking the 4-dimensional limit\\footnote{The equivalence between CDR and HV \nin leading to the same RS-independent finite remainders with the identical set of renormalization constants \nand anomalous dimensions~\\cite{Kilgore:2012tb,Broggio:2015dga} can be appreciated this way, \nand the same arguments apply here as well.}. 
\nBecause in the form-factor representation of an amplitude the loop-integral-dependent part \nis separated from the part depending on the external states, it is unambiguous to implement \nany non-CDR convention for external state vectors in the computation of singular amplitudes.\nThe crucial question for our purpose is \nwhether our non-CDR prescription for external state vectors can still be unambiguously and \ndirectly applied in the computation of amplitudes without performing the form factor decomposition first. \n\n\nIn our prescription all open Lorentz indices of the polarization projectors \ndefined in section~\\ref{SEC:Prescription} are set to be D-dimensional and no dimensional \nsplitting is ever introduced, just like in CDR. Thus, commutation between Lorentz \nindex contraction and loop integration is preserved within our prescription. \nThis means that applying our polarization projectors directly to the original \nFeynman-diagrammatic representation of a loop amplitude should lead to the same \npolarized amplitudes as would be obtained by applying these projectors \nto the D-dimensional form-factor decomposition representation of that amplitude.\nWhether or not evanescent Lorentz structures appear explicitly \nor implicitly in the form-factor decomposition of the loop amplitude, \nthey are taken into account exactly as they are in the original Feynman-diagrammatic \nrepresentation of this amplitude.\nFrom this perspective we could already expect to end up with the same \n(4-dimensional) finite remainder as one would obtain from a computation purely within CDR.~\\\n\n\nBelow we demonstrate this crucial point more clearly by providing an alternative formulation of \nthe finite remainders introduced in the proposed prescription, which also helps to clarify a few points \nalluded to in the preceding section.\nLet us consider the finite remainders of amplitudes in CDR as defined by the celebrated amplitude-level \nfactorization formula. 
\nSingularities of dimensionally regularized QCD amplitudes are known to \nfactorize~\\cite{Sen:1982bt,Collins:1989bt,Catani:1998bh,Sterman:2002qn,Dixon:2008gr,Gardi:2009qi,Gardi:2009zv,Becher:2009cu,Becher:2009kw,Becher:2009qa,Feige:2014wja}.\nFor our purpose, we can sketch this factorization property \nof a bare QCD scattering amplitude $\\hat{\\mathcal{A}}(\\epsilon)$ among several resolved \nexternal particles (with fixed external kinematics) schematically as follows:\n\\begin{eqnarray}\\label{EQ:AmpPoleFactorization}\n\\hat{\\mathcal{A}}(\\epsilon) = \\hat{\\mathcal{Z}}_{\\mathrm{IR}}(\\epsilon)~ \n\\mathcal{Z}_{\\mathrm{UV}}(\\epsilon)~ \\hat{\\mathcal{F}}(\\epsilon) \\, ,\n\\end{eqnarray}\nwhere\\footnote{The need for mass renormalization in the case of massive quarks is understood.}\nwe have suppressed the dependence of the quantities on external kinematics and masses\nas well as on auxiliary dimensional scales except the dimensional regulator $\\epsilon$ \n(for a detailed exposition, see e.g.~\\cite{Catani:1998bh,Feige:2014wja,Broggio:2015dga,Magnea:2018ebr} \nand references therein).\nThe bare amplitude $\\hat{\\mathcal{A}}(\\epsilon)$ and the finite pole-subtracted amplitude \n$\\hat{\\mathcal{F}}(\\epsilon)$ should be viewed as vectors in the color space of the external particles, \nand the multiplicative singular IR-factor $\\hat{\\mathcal{Z}}_{\\mathrm{IR}}(\\epsilon)$ as a matrix. \nThe RS-dependent singular factors $\\mathcal{Z}_{\\mathrm{UV}}(\\epsilon)$ and $\\hat{\\mathcal{Z}}_{\\mathrm{IR}}(\\epsilon)$ \nencode all UV and IR pole-singularities of $\\hat{\\mathcal{A}}(\\epsilon)$, and are independent \nof the detailed kinematic configuration, such as polarization states, of the external resolved particles. 
\n(This is the meaning of ``factorization''.)\nBy the very meaning of pole factorization in eq.~(\\ref{EQ:AmpPoleFactorization}), \n$\\hat{\\mathcal{F}}(\\epsilon)$ is regular in $\\epsilon$ and has a finite 4-dimensional limit, \n$\\hat{\\mathcal{F}}(\\epsilon = 0)$. \nWe call this quantity the (4-dimensional) \\textit{finite remainder} of $\\hat{\\mathcal{A}}(\\epsilon)$ \ndefined by subtracting all poles minimally by the multiplicative factors \nas sketched in eq.~(\\ref{EQ:AmpPoleFactorization}). \n\n\nWe may summarize this by the following expression for the finite remainder \n$\\hat{\\mathcal{F}}_{4} \\equiv \\hat{\\mathcal{F}}(\\epsilon = 0)$, namely \n\\begin{eqnarray}\\label{EQ:AmpsFiniteRemainder}\n\\hat{\\mathcal{F}}_{4} = \n\\Bigg(\n\\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}(\\epsilon)~ \n\\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}(\\epsilon)~\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon) \n\\Bigg)_{\\epsilon = 0}\n\\end{eqnarray}\nwhere we added the subscript ``CDR'' to all singular RS-dependent quantities given in CDR. \nFor the point to be demonstrated here, the concrete expressions of these singular multiplicative factors \ntaken from CDR are irrelevant. \nThe claim is that replacing all CDR-regularized external states of the fixed-angle bare scattering\namplitude $\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon)$ by their respective counterparts \ngiven in terms of momentum basis representations defined in section~\\ref{SEC:Prescription} \nwill still result in the same finite remainder $\\hat{\\mathcal{F}}_{4}$, \nwhere all poles have been subtracted in a minimal way by the same untouched \n$\\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}(\\epsilon) ~ \\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}(\\epsilon)$, \nwithout appealing to the Lorentz tensor decomposition representation of $\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon)$. 
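The mechanics of eq.~(\ref{EQ:AmpsFiniteRemainder}) can be mimicked symbolically with made-up Laurent coefficients (a toy sketch, not the actual QCD $\mathcal{Z}$ factors): as long as the very same singular factors that multiply the regular remainder are later divided out, the $\epsilon \to 0$ limit recovers that remainder exactly.

```python
import sympy as sp

eps = sp.symbols('epsilon')

# toy singular factors and a toy regular remainder (made-up coefficients)
Z_UV = 1 - sp.Rational(3, 2)/eps
Z_IR = 1 - 2/eps + sp.Rational(1, 2)/eps**2
F = 5 + 7*eps + 11*eps**2            # regular, so F(0) is the finite remainder

# "bare amplitude" as in eq. (EQ:AmpPoleFactorization): singular as eps -> 0
A = sp.expand(Z_IR * Z_UV * F)

# minimal subtraction with the same untouched factors, then the 4-dim limit
F4 = sp.limit(sp.cancel(A / (Z_IR * Z_UV)), eps, 0)
assert F4 == 5
```

The point carried over to the real calculation is that the subtraction factors must be adopted consistently and left untouched; only then is the finite remainder independent of how the bare amplitude itself was represented.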
\n\n\n\nIn order to facilitate the discussion, let us exhibit the dependence of \n$\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon)$ on the CDR-regularized polarization state \n$\\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i)$ of a \\textit{representative} external \nmassless gauge boson with momentum $p_i$ and reference vector $r_i$. \nBecause the bare scattering amplitude $\\hat{\\mathcal{A}}_{\\mathrm{CDR}}$ is linear in \n$\\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i)$, \nwe write \n\\begin{equation} \\label{eq:ACDReps}\n\\hat{\\mathcal{A}}_{\\mathrm{CDR}} \n\\Big(\\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)\n= g_{\\mu \\nu}~\n\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)^{\\mu} ~ \n\\bar{\\varepsilon}^{~\\nu}_{\\bar{\\lambda}}(p_i,r_i) \\, ,\n\\end{equation}\nwhere we have introduced the compact notation $\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)^{\\mu}$.\nFor the pole-subtracted amplitude we have \n\\begin{eqnarray}\\label{EQ:AmpsFiniteRemainderCDR}\n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big) &\\equiv& \n\\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}\\left(\\epsilon\\right)~ \n\\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}\\left(\\epsilon\\right)~\n\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\n\\Big(\\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big) \\nonumber\\\\\n&=& \\hat{\\mathcal{F}}_{4}\\Big(\\varepsilon_{\\lambda}(p_i,r_i) \\Big) + \\mathcal{O}(\\epsilon) \\, ,\n\\end{eqnarray} \nwhose limit at $\\epsilon = 0$ is precisely the finite remainder $\\hat{\\mathcal{F}}_{4}$ \nin eq.~(\\ref{EQ:AmpsFiniteRemainder}) with 4-dimensional external polarization vector\n$\\varepsilon_{\\lambda}(p_i,r_i)$. 
\nNow we multiply this regular finite quantity by a generalized D-dependent \nLorentz-invariant norm-orthogonal factor $\\Delta_{\\bar{\\lambda} \\lambda}$ defined by\n\\begin{eqnarray}\\label{EQ:generalizedDelta}\n\\Delta_{\\bar{\\lambda} \\lambda} \n&\\equiv& - \\bar{\\varepsilon}^{~*}_{\\bar{\\lambda}}(p_i,r_i) \\cdot \n\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i) \\nonumber\\\\\n&=& \\delta_{\\bar{\\lambda} \\lambda} + \\mathcal{O}(\\epsilon). \n\\end{eqnarray}\nHere $\\varepsilon^{\\text{MBR}}_{\\lambda}$ refers to a polarization vector for a massless\ngauge boson of our prescription\\footnote{The acronym ``MBR'' denotes\nmomentum basis representation.} of section~\\ref{SEC:Prescription}, and the dot product in \n\\eqref{EQ:generalizedDelta} refers to the D-dimensional Minkowski scalar product. \nWe recall that the polarization index $\\bar{\\lambda}$ labels the $D-2$ polarization\nstates of CDR, while in our prescription the index $\\lambda$ of $\\varepsilon^{\\text{MBR}}_{\\lambda}$\ntakes only two values for massless gauge bosons (and three for massive ones), \nnamely $\\pm$ in the helicity basis. \nThe 4-dimensional limits of these simple Lorentz-invariant contractions \n$\\Delta_{\\bar{\\lambda} \\lambda}$ are the norm-orthogonal factors \n(i.e.~the Kronecker deltas) among different 4-dimensional physical \npolarization\\/helicity states. \n\n\nNext we consider the sum of products \n\\begin{equation} \\label{EQ:AmpsFiniteRemainderMBRstartpoint}\n\\sum_{\\bar{\\lambda} = \\pm,~ D-4}\n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)\n~\\Delta_{\\bar{\\lambda} \\lambda}.\n\\end{equation} \nAs exhibited in eqs.~(\\ref{EQ:AmpsFiniteRemainderCDR}) and \\eqref{EQ:generalizedDelta}, both \n$\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)$ \nand $\\Delta_{\\bar{\\lambda} \\lambda}$ are regular in $\\epsilon$. 
\nThus they can be power expanded in $\\epsilon$, and their 4-dimensional limits can be taken separately before being \nmultiplied together and subsequently summed over polarizations. \nProceeding in this way, we first insert the $\\epsilon$-expanded expressions of these two factors given\nabove, and the resulting quantity is precisely the finite remainder \n$\\hat{\\mathcal{F}}_{4}\\Big(\\varepsilon_{\\lambda}(p_i,r_i) \\Big)$ of eq.~(\\ref{EQ:AmpsFiniteRemainder}):\n\\begin{equation} \\label{EQ:AmpsFiniteRemainderMBRleft}\n\\sum_{\\bar{\\lambda} = \\pm,~ D-4} \n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)\n~\\Delta_{\\bar{\\lambda} \\lambda}\n= \\hat{\\mathcal{F}}_{4}\\Big(\\varepsilon_{\\lambda}(p_i,r_i) \\Big) + \\mathcal{O}(\\epsilon). \n\\end{equation}\n\n\nOn the other hand, we can first perform the polarization sum in \\eqref{EQ:AmpsFiniteRemainderMBRstartpoint}\nin D dimensions and take the 4-dimensional limit afterwards. 
\nProceeding this way, we have \n\\begin{eqnarray}\\label{EQ:AmpsFiniteRemainderMBRright1}\n&&\\sum_{\\bar{\\lambda} = \\pm,~ D-4}\n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon; \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)\n~\\Delta_{\\bar{\\lambda} \\lambda} \\nonumber\\\\\n&=& \n - \\sum_{\\bar{\\lambda} = \\pm,~ D-4} \n\\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}\\left(\\epsilon\\right)~ \n\\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}\\left(\\epsilon\\right)~\n\\hat{\\mathcal{A}}_{\\mathrm{CDR}} \n\\Big(\\epsilon;~ \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)~ \n\\bar{\\varepsilon}^{~*}_{\\bar{\\lambda}}(p_i,r_i) \\cdot \n\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i) \\nonumber\\\\\n&=& \n- \\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}\\left(\\epsilon\\right)~ \n\\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}\\left(\\epsilon\\right)~\n\\sum_{\\bar{\\lambda} = \\pm,~ D-4} \n\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)_{\\mu} ~ \n\\bar{\\varepsilon}_{\\bar{\\lambda}}^{~\\mu}(p_i,r_i) ~\n\\bar{\\varepsilon}^{~*\\nu}_{\\bar{\\lambda}}(p_i,r_i) ~\n\\varepsilon^{\\text{MBR}}_{\\lambda,~\\nu} (p_i,r_i) \\, ,\\nonumber\\\\\n\\end{eqnarray} \nwhere we have used the fact that $\\hat{\\mathcal{A}}_{\\mathrm{CDR}} \n\\Big(\\epsilon;~ \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)$ is linear \nin the external polarization vector $\\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i)$.\nNow we employ eq.~(\\ref{EQ:polsumCDRPhys}) for summing over the physical polarizations\nof the CDR-regularized external gauge boson\\footnote{Note that here we should sum over \nphysical polarizations only, especially in the case of gluons, which ensures that unphysical components \nsuch as scalar and longitudinal polarizations are absent from the outset. 
\nWith this choice there is no need to incorporate diagrams involving ghost fields \nin the external states (when there are multiple external non-Abelian gauge bosons).}\nand obtain\n\\begin{eqnarray}\\label{EQ:AmpsFiniteRemainderMBRright2}\n&& -\\sum_{\\bar{\\lambda} = \\pm,~ D-4} \n\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)_{\\mu} ~ \n\\bar{\\varepsilon}_{\\bar{\\lambda}}^{~\\mu}(p_i,r_i) ~\n\\bar{\\varepsilon}^{~*\\nu}_{\\bar{\\lambda}}(p_i,r_i) ~\n\\varepsilon^{\\text{MBR}}_{\\lambda,~\\nu} (p_i,r_i) \\nonumber\\\\\n&=& \n\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)_{\\mu} \n~\\Bigg(\ng^{\\mu\\nu} - \\frac{p_i^{\\mu} r_i^{\\nu} + r_i^{\\mu} p_i^{\\nu}}{p_i \\cdot r_i}\n\\Bigg)~ \n\\varepsilon^{\\text{MBR}}_{\\lambda,~\\nu} (p_i,r_i) \\nonumber\\\\\n&=& \n\\Big(\\hat{\\mathcal{A}}_{\\mathrm{CDR}}\\Big)_{\\mu} \n~\\Bigg(\ng^{\\mu\\nu} \n\\Bigg)~ \n\\varepsilon^{\\text{MBR}}_{\\lambda,~\\nu} (p_i,r_i) \\nonumber\\\\\n&=& \\hat{\\mathcal{A}}_{\\mathrm{CDR}} \n\\Big(\\epsilon;~ \\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i)\\Big)\n\\end{eqnarray} \nwhere we have used the orthogonality of $\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i)$ \nw.r.t. 
the particle's momentum $p_i$ and its reference vector $r_i$ in D dimensions, \nwhich $\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i)$ has to satisfy by construction.\nInserting eq.~(\\ref{EQ:AmpsFiniteRemainderMBRright2}) back into \neq.~(\\ref{EQ:AmpsFiniteRemainderMBRright1}) we end up with \n\\begin{eqnarray}\\label{EQ:AmpsFiniteRemainderMBRright3}\n\\sum_{\\bar{\\lambda} = \\pm,~ D-4}\n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon;~ \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)\n~\\Delta_{\\bar{\\lambda} \\lambda} \n&=& \n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon;~ \\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i) \\Big) \\, ,\n\\end{eqnarray} \nwhose left-hand side has, according to eq.~(\\ref{EQ:AmpsFiniteRemainderMBRleft}), \na 4-dimensional limit that is equal to the finite remainder \n$\\hat{\\mathcal{F}}_{4}\\Big(\\varepsilon_{\\lambda}(p_i,r_i) \\Big)$ given in eq.~(\\ref{EQ:AmpsFiniteRemainder}).\nNotice that eq.~(\\ref{EQ:AmpsFiniteRemainderMBRright3}) is an identity holding to all orders in $\\epsilon$.\nThe right-hand side of \\eqref{EQ:AmpsFiniteRemainderMBRright3}, more explicitly, \n\\begin{equation} \\label{eq:rhsFcdr}\n\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon;~ \\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i) \\Big) \n= \\hat{\\mathcal{Z}}^{-1}_{\\mathrm{IR};\\mathrm{CDR}}\\left(\\epsilon\\right)~ \n\\mathcal{Z}^{-1}_{\\mathrm{UV};\\mathrm{CDR}}\\left(\\epsilon\\right)~\n\\hat{\\mathcal{A}}_{\\mathrm{CDR}} \n\\Big(\\epsilon;~ \\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i)\\Big)\n\\end{equation}\nis exactly the quantity suggested by our prescription.\nIn order to avoid confusion we emphasize that the subscript ``CDR'' on $\\hat{\\mathcal{F}}_{\\mathrm{CDR}}$\nat the right-hand side of \\eqref{EQ:AmpsFiniteRemainderMBRright3}, and on $\\hat{\\mathcal{F}}_{\\mathrm{CDR}}$\nand $\\hat{\\mathcal{A}}_{\\mathrm{CDR}}$ in eq.~\\eqref{eq:rhsFcdr} means that these are the respective \nCDR expressions with the exception that 
the CDR polarization vector of the external gluon with momentum $p_i$\nis replaced by the polarization vector of our MBR prescription. If there are more gluons in the external\nstate, then the procedure outlined by eqs.~\\eqref{eq:ACDReps}--\\eqref{eq:rhsFcdr} can be iterated.\n\n\nWhat the above reformulations show is that, to all orders in $\\epsilon$, \n$\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big(\\epsilon;~\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i) \\Big)$ \ncan be formally viewed as an unpolarized interference between $\\hat{\\mathcal{F}}_{\\mathrm{CDR}}\\Big( \\epsilon;~ \\bar{\\varepsilon}_{\\bar{\\lambda}}(p_i,r_i) \\Big)$ \nand the Lorentz-invariant generalized norm-orthogonal factor defined in eq.~(\\ref{EQ:generalizedDelta}), \nusing physical polarization sum rules for all CDR external states. \nThe unpolarized Landau density matrices of external gauge bosons reduce to the spacetime metric tensor \nby virtue of the built-in orthogonality between $\\varepsilon^{\\text{MBR}}_{\\lambda}(p_i,r_i)$ and $p_i,r_i$.~\\\n\n\n\nAn analogous reformulation can be made for external fermions in the scattering amplitude.\nIn fact, for each open fermion line such a reformulation is more straightforward than \nin the above gauge boson case, because there is no redundancy in the spinor representation of the Lorentz algebra \nand the number of polarization\\/helicity states of a fermion is two both in CDR and in our prescription.\nThe unpolarized Landau density matrix of an external fermion is the well-known projection operator \nonto the space of on-shell Dirac-spinors. 
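For a $u$-type spinor this operator is $\slashed{p}+m$; since $\slashed{p}\slashed{p}=p^2\,\mathbf{1}$, on shell it satisfies $(\slashed{p}+m)^2 = 2m\,(\slashed{p}+m)$, i.e.~it is a projector up to the conventional normalization. A quick numerical sketch in the Dirac representation (metric signature $(+,-,-,-)$; the momentum values are arbitrary illustrations):

```python
import numpy as np

# Dirac representation of the gamma matrices; metric signature (+,-,-,-)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gvec = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

def slash(v):  # v_mu gamma^mu for v = (v^0, v^1, v^2, v^3)
    return v[0] * g0 - sum(v[k + 1] * gvec[k] for k in range(3))

# an arbitrary on-shell momentum with mass m
m = 1.3
pvec = np.array([0.2, 0.7, -0.5])
p = np.array([np.sqrt(m**2 + pvec @ pvec), *pvec])

P_on = slash(p) + m * np.eye(4)               # u-type on-shell projection operator
assert np.allclose(slash(p) @ slash(p), m**2 * np.eye(4))  # pslash^2 = p^2 * 1
assert np.allclose(P_on @ P_on, 2 * m * P_on)  # (pslash+m)^2 = 2m (pslash+m)
```

The normalization factors $1/(2 f m)$ appearing in the fermion-line rewriting below are precisely what removes the $2m$ on the right-hand side of this relation.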
\nAfter performing a similar reformulation of an open fermion line in the scattering amplitude, \ndenoted by $\\langle \\psi_A|\\hat{\\mathrm{M}}|\\psi_B \\rangle$ as in eq.~(\\ref{EQ:SFLtrace1}),\nwe end up with the following replacement:\n\\begin{eqnarray} \\label{EQ:fermionlinerewritten}\n\\langle \\psi^{\\mathrm{CDR}}_A|\\hat{\\mathrm{M}}|\\psi^{\\mathrm{CDR}}_B \\rangle \n~\\longrightarrow~ \n\\mathrm{Tr} \\Big{[} \n\\frac{\\hat{P}_{on}\\left(p_B,m_B\\right)}{2 f_B~ m_B}\n\\Big(\n~|\\psi^{\\mathrm{MBR}}_B \\rangle \\langle \\psi^{\\mathrm{MBR}}_A|~\n\\Big)\n\\frac{\\hat{P}_{on}\\left(p_A,m_A\\right)}{2 f_A~ m_A}\n~\\hat{\\mathrm{M}}\n\\Big{]} \\nonumber\\\\\n\\end{eqnarray}\nwhere $\\hat{P}_{on}(p,m)=(\\slashed{p} \\pm m)$ denotes the aforementioned on-shell projection operator \nfor a $u$- respectively $v$-type Dirac spinor with momentum $p$ and mass $m$,\nand $|\\psi^{\\mathrm{MBR}}_B \\rangle \\langle \\psi^{\\mathrm{MBR}}_A|$\nis exactly the matrix \\eqref{EQ:TPextsps1} that was further discussed in \neqs.~\\eqref{EQ:LDMofDSP} - (\\ref{EQ:TPextsps2}).\nThe appearance of ${1}\/{(2 f_A~ m_A)}$ and ${1}\/{(2 f_B~ m_B)}$ in eq.~(\\ref{EQ:fermionlinerewritten})\nis due to the conventional choice of normalization factors of on-shell Dirac spinors.\nHere the factors $f_A, f_B = 1 ~(-1)$ when the fermion $A$ respectively\n$B$ is associated with a $u$-type ($v$-type) spinor.\n\n\nQuantities that are sandwiched between the pair of on-shell projection operators, \n$\\hat{P}_{on}(p_A,m_A)$ and $\\hat{P}_{on}(p_B,m_B)$, associated with the two external spinors of \nthe open fermion line, can be manipulated and simplified according to the 4-dimensional \nLorentz\/Dirac-algebra. We just have to agree on one definite form that will be\ntaken as its canonical form (out of all its forms that are equivalent in 4 dimensions)\nand used unambiguously in D-dimensional algebraic computations. 
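The normalization in eq.~(\\ref{EQ:fermionlinerewritten}) can be checked directly: using the on-shell condition $p^2 = m^2$, valid in D dimensions, the Dirac algebra gives \n\\begin{eqnarray}\n\\Big(\\slashed{p} \\pm m\\Big)^2 = p^2 \\pm 2 m~ \\slashed{p} + m^2 = 2 m^2 \\pm 2 m~ \\slashed{p} = 2 f~ m~ \\Big(\\slashed{p} \\pm m\\Big) \\, , \\quad f = \\pm 1 \\, , \\nonumber\n\\end{eqnarray}\nso that $\\hat{P}_{on}(p,m)\/(2 f~ m)$ is idempotent; inserting the pair of normalized on-shell projection operators in eq.~(\\ref{EQ:fermionlinerewritten}) therefore does not alter the content of the fermion line. 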
\nThis pair of on-shell projection operators sets the domain where matrices related to \nexternal fermions' states, namely $|\\psi^{\\mathrm{MBR}}_B \\rangle \\langle \\psi^{\\mathrm{MBR}}_A|$,\ncan be manipulated and moved around using just the 4-dimensional Lorentz\/Dirac-algebra.\nMoving any of these matrices beyond this range must, in general, be done in accordance with the \nD-dimensional Lorentz\/Dirac-algebra, so as not to introduce artificial terms by mistake. \nFor instance, the object $\\gamma_{\\mu} \\gamma_{\\nu} \\gamma_{\\rho} S_{\\sigma} \\epsilon^{\\mu\\nu\\rho\\sigma}$ \ncommutes with $\\slashed{P}$ in 4 dimensions because of the orthogonality condition\n$S \\cdot P = 0$. However, this is no longer true w.r.t. the D-dimensional algebra, \nand there is thus a non-vanishing evanescent commutator resulting from \ninterchanging the product order between the two.\nIn section~\\ref{SEC:examples:eeQQ} we will briefly comment on this subtle point again. \n\n\nFinally, in order to bring the external projector in eq.~(\\ref{EQ:fermionlinerewritten}) into a form analogous to \neq.~(\\ref{EQ:SFLtrace1}) with the tensor product of external spinors given by eq.~(\\ref{EQ:TPextsps2}),\nthe following defining property of the on-shell projection operators, valid for $p^2=m^2$ in D dimensions, \ncan be used:\n\\begin{eqnarray}\\label{EQ:onshellprojectorID}\n\\hat{P}_{on}(p,m) \\frac{\\hat{P}_{on}(p,m)}{2 f~ m} = \\hat{P}_{on}(p,m) \\, ,\n\\end{eqnarray}\nwhere $f =\\pm 1$ depending on whether ${\\hat P}_{on}$ is associated with a $u$-type or $v$-type spinor.\nNotice also that this identity has a continuous limit as $m \\rightarrow 0$, \ndespite the superficial appearance of the singular $\\frac{1}{m}$ factor, \nwhich prevents setting $m=0$ directly in eq.~(\\ref{EQ:onshellprojectorID}).\nThis alternative perspective thus helps to explain the choice made \nin eq.~\\eqref{EQ:TPextsps2} where the polarization projection operators \nwere placed inside the 
on-shell projection operators.~\\\\\n\n\n\nWe have thus achieved the aim of this subsection.\nWe found an alternative formulation of pole-subtracted finite amplitudes \nwhich helps to prove the following claim:\nalthough the use of the polarization projectors defined \nin section~\\ref{SEC:Prescription} results in helicity amplitudes different from those in CDR, \nreplacing all CDR-regularized external polarization states of \n$\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon)$ in eq.~(\\ref{EQ:AmpPoleFactorization})\nby their counterparts given in terms of momentum basis representations \nconstructed in section~\\ref{SEC:Prescription} still results in the same \nRS-independent finite remainder, where all poles are chosen to be subtracted \nby the same factorized (singular) coefficients given in CDR, without appealing \nto Lorentz tensor decomposition representations of $\\hat{\\mathcal{A}}_{\\mathrm{CDR}}(\\epsilon)$.\nThe validity of this statement is not confined \nto one-loop or next-to-leading order (NLO) corrections to a Born-level scattering \namplitude, but holds as long as the amplitude-level factorization \nformula sketched in eq.~(\\ref{EQ:AmpPoleFactorization}) holds in CDR.\n\n\n\\subsection{Finite remainders in an IR subtraction framework}\n\\label{SEC:unitarity:FRIR}\n\n\nIn this subsection, we move on and analyze finite remainders defined in an IR-subtraction method \nthat are obtained with our MBR prescription for external polarization vectors.\nWe will then show that this hybrid CDR-compatible prescription is unitary in the sense of refs.~\\cite{vanDamme:1984ig,Catani:1996pk}.~\\\\ \n\n\nIn practice, the finite RS-independent physical observables at NLO and beyond are usually computed \nas combinations of separate, in general UV and\/or IR divergent contributions living in different \npartonic phase spaces. 
(UV renormalization is understood in what follows.)\nTo render the individual contributions from each partonic phase space IR-finite and RS-independent, \none can add and subtract properly defined auxiliary IR-subtraction terms. \nThese auxiliary terms are designed to ensure the cancellation of all intermediate \nIR-divergences of amplitudes in each partonic phase space, while on the other hand \nthey leave no trace in the final properly combined physical observables. \nThis is the idea of IR-subtraction methods~\\cite{Ellis:1980wv,Kunszt:1992tn}, \nwhich are nowadays available in many different versions (e.g.,~\\cite{Frixione:1995ms,Catani:1996vz,Nagy:2003qn,Kosower:1997zr,GehrmannDeRidder:2005cm,Czakon:2010td,Czakon:2014oma,Caola:2017dug,Magnea:2018hab,Herzog:2018ily,Catani:2007vq,Somogyi:2006da}). \n\n\nLet us now sketch an IR-subtraction method, being explicit only about aspects that are relevant for \nshowing that our MBR prescription of external states is unitary. \n\n\nAssume that the Born-level scattering amplitude $\\mathcal{A}_n$ lives in an n-particle \nphase space, and we consider an IR-safe observable defined by the measurement function $F_J$. \nThe leading-order (LO) observable $\\sigma_{\\rm LO}$ is given by \n\\begin{eqnarray}\\label{EQ:LOamp}\n\\sigma_{\\rm LO} & = & \\int_{d\\Phi_n} |\\mathcal{A}_n|^2 ~ \\mathit{F}_{\\mathit{J}}^{~(n)},\n\\end{eqnarray}\nwhere we suppressed all pre-factors related to spin averaging for the initial state \nand the incident flux.\nThe NLO QCD correction $\\sigma_{\\rm NLO}$ consists of the real-radiation contribution $\\int_{d\\Phi_{n+1}} d\\sigma_{\\rm NLO}^{\\mathcal{R}}$\nin the (n+1)-particle phase space and the (renormalized) virtual corrections \n$\\int_{d\\Phi_{n}} d\\sigma_{\\rm NLO}^{\\mathcal{V}}$ in the n-particle phase space. \nTo render the individual contributions in each of these two phase spaces finite, \none adds and subtracts an appropriate IR-subtraction term $d\\sigma^{\\mathcal{S}}$. 
\nSubsequently $\\sigma_{\\rm NLO}$ can then be rewritten in an IR subtraction method as follows\\footnote{\nFor the sake of simplicity, we suppressed here an initial-state collinear subtraction \nterm related to the (re)definition of parton-distribution functions, \nwhich does not add any additional conceptual complexity to what we want to show.} \n\\begin{eqnarray}\\label{EQ:NLOamp}\n\\sigma_{\\rm NLO} & = & \\int_{d\\Phi_{n+1}} d\\sigma_{\\rm NLO}^{\\mathcal{R}} + \n \\int_{d\\Phi_{n}} d\\sigma_{\\rm NLO}^{\\mathcal{V}} \\nonumber\\\\ \n & = & \\int_{d\\Phi_{n+1}} |\\mathcal{A}_{n+1}^{\\mathcal{R}}|^2 ~ \n \\mathit{F}_{\\mathit{J}}^{~(n+1)} \n + \\left( \n \\int_{d\\Phi_{n+1}} d\\sigma^{\\mathcal{S}} ~\\mathit{F}_{\\mathit{J}}^{~(n)}\n - \n \\int_{d\\Phi_{n+1}} d\\sigma^{\\mathcal{S}} ~\\mathit{F}_{\\mathit{J}}^{~(n)}\n \\right) \n + \\int_{d\\Phi_{n}} 2\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right] ~ \n \\mathit{F}_{\\mathit{J}}^{~(n)} \\nonumber\\\\\n & = & \\int_{d\\Phi_{n+1}} \\Bigg[ \\left( \n |\\mathcal{A}_{n+1}^{\\mathcal{R}}|^2 ~ \n \\mathit{F}_{\\mathit{J}}^{~(n+1)}\\right)_{\\epsilon = 0} \n - \n \\left( d\\sigma^{\\mathcal{S}} ~ \\mathit{F}_{\\mathit{J}}^{~(n)}\n \\right)_{ \\epsilon = 0} \\Bigg] \n + \\int_{d\\Phi_{n}} \\left[ 2~ \\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right] \n + \\int_{1} d\\sigma^{\\mathcal{S}} \n \\right]_{ \\epsilon = 0} \\mathit{F}_{\\mathit{J}}^{~(n)}. \\nonumber\\\\\n\\end{eqnarray}\nBy construction, the subtraction term $d\\sigma^{\\mathcal{S}}$ should have the same local \nIR-singular behavior as the squared real-radiation matrix $|\\mathcal{A}_{n+1}^{\\mathcal{R}}|^2$ \neverywhere in the (n+1)-particle phase space (subject to the constraint implied by $\\mathit{F}_{\\mathit{J}}$). 
\nConsequently, the resulting subtracted phase-space integrand \n$\\left[ \\left(|\\mathcal{A}_{n+1}^{\\mathcal{R}}|^2 ~ \\mathit{F}_{\\mathit{J}}^{~(n+1)}\\right)_{\\epsilon = 0} - \n\\left( d\\sigma^{\\mathcal{S}} ~\\mathit{F}_{\\mathit{J}}^{~(n)} \\right)_{ \\epsilon = 0} \\right]$ \ncan be numerically evaluated and integrated over the phase space in 4 dimensions, \nas indicated by $\\epsilon = 0$.\nNotice that it is $\\mathit{F}_{\\mathit{J}}^{~(n)}$ that is associated with \n$d\\sigma^{\\mathcal{S}}$, the same as for virtual corrections living in n-particle phase space.\nThe integration of $d\\sigma^{\\mathcal{S}}$ over the unresolved phase space \nhas to be done in D dimensions with the IR unresolved partonic d.o.f. regularized in the same way as \nthose in the virtual correction $2~\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right]$,\nfollowing from the unitarity constraint. \nThe resulting IR singularities that appear as poles in $\\epsilon$ must cancel \nthose appearing in $2~\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right]$, \nwhich renders the quantity in the second square bracket of \nthe last line of eq.~(\\ref{EQ:NLOamp}) finite in 4 dimensions as well.\n\n\nIn order that eq.~(\\ref{EQ:NLOamp}) is useful in practice, one must be able to perform \nthe D-dimensional integration $\\int_{1} d\\sigma^{\\mathcal{S}}$,\neither analytically or numerically. 
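The add-and-subtract mechanism of eq.~(\\ref{EQ:NLOamp}) can be mimicked by a standard one-dimensional toy model, sketched below purely for illustration (the toy integrand and the value of $\\epsilon$ are arbitrary choices, not tied to any particular subtraction scheme): a ``real-emission'' integral $\\int_0^1 dx~ x^{\\epsilon-1} f(x)$ with smooth $f$ is split into a subtracted piece that is integrable at $\\epsilon = 0$ and an integrated subtraction term $f(0)\/\\epsilon$ that carries the explicit pole.

```python
# One-dimensional toy model of the add-and-subtract structure:
#   I(eps) = \int_0^1 dx x^(eps-1) f(x)
#          = f(0)/eps + \int_0^1 dx (f(x) - f(0))/x + O(eps).
# The subtracted integral is evaluated directly at eps = 0, like the first
# square bracket of the NLO master formula; the f(0)/eps piece mimics the
# integrated subtraction term carrying the explicit pole.

def simpson(g, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def f(x):
    return 1.0 + x + x**2        # smooth toy "matrix element", f(0) = 1

def subtracted(x):
    # (f(x) - f(0))/x is finite as x -> 0; supply the limit f'(0) = 1 at x = 0
    return (f(x) - f(0.0)) / x if x > 1e-12 else 1.0

eps = 1e-3
finite_part = simpson(subtracted, 0.0, 1.0)   # integrable in "4 dimensions" (eps = 0)
combined = f(0.0) / eps + finite_part         # pole restored by the integrated term

# exact: \int_0^1 x^(eps-1) (1 + x + x^2) dx = 1/eps + 1/(1+eps) + 1/(2+eps)
exact = 1.0 / eps + 1.0 / (1.0 + eps) + 1.0 / (2.0 + eps)

print(finite_part, combined, exact)   # combined reproduces exact up to O(eps)
```

In the full construction, the role of $f(0)\/\\epsilon$ is played by the integrated subtraction term, whose explicit poles cancel against those of the virtual correction $2~\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right]$.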
\nThanks to the IR factorization, $d\\sigma^{\\mathcal{S}}$ and likewise its integrated counterpart \n$\\int_{1} d\\sigma^{\\mathcal{S}}$ can be constructed, schematically,\nas a convolution of a universal (process-independent) multiplicative coefficient \nwith the (process-specific) squared Born amplitude \n$|\\mathcal{A}_n|^2$:\n\\begin{eqnarray}\\label{EQ:IRsubtractionterm}\nd\\sigma^{\\mathcal{S}} &=& \\left( d \\hat{I}_{RS} \\right) \\otimes |\\mathcal{A}_n|^2~,\\nonumber\\\\\n\\int_{1} d\\sigma^{\\mathcal{S}} &=& \\hat{I}_{RS} \\otimes |\\mathcal{A}_n|^2.\n\\end{eqnarray}\nThe factor $\\hat{I}_{RS}$ plays a role similar to that of the multiplicative factors \n$\\hat{\\mathcal{Z}}_{\\mathrm{IR}}(\\epsilon)$ in eq.~(\\ref{EQ:AmpPoleFactorization}).\nAt NLO it encodes all IR pole-singularities and is to be viewed \nas an operator in the color space of the external particles.\n\n\nIn fact, each variant of an IR-subtraction method can be seen as providing a concrete constructive\nprescription for the integral representations of factorized IR-subtraction coefficients, \nlike the factor $\\hat{I}_{RS}$, that contain all the explicit pole-singularities of the \nloop amplitudes (after multiplication with certain relevant process-dependent hard-scattering amplitudes). 
\nThe crucial point relevant for the following discussion is that \nthese integral representations are based on the amplitude-level IR factorization, \nand are manifestly independent of the polarization states of the external particles that \nappear in the (remaining) hard-scattering matrix elements.\\footnote{The dependence of factorized collinear \n$\\epsilon$-pole singularities on the polarization of a parent parton drops out once one\nsums over the polarizations of all other particles and also integrates over all \nunresolved degrees of freedom in the collinear limit, notably the transverse plane of the radiated partons \n(which essentially eliminates any preference in the transverse direction).}\n\n\nAll quantities in eq.~(\\ref{EQ:NLOamp}) that contain explicit IR-divergences, i.e.~poles in $\\epsilon$, contain RS-dependent pieces in their truncated \nLaurent series to order $\\epsilon^0$, in particular the integrated $\\hat{I}_{RS}$. \nAt NLO, this concerns only $\\int_{d\\Phi_{n}} d\\sigma_{\\rm NLO}^{\\mathcal{V}}$ and \n$\\int_{1} d\\sigma^{\\mathcal{S}} = \\hat{I}_{RS} \\otimes |\\mathcal{A}_n|^2$\nthat live in the same n-particle phase space. 
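The mechanism behind this RS-dependence can be made explicit with a schematic example (a generic illustration, not tied to any particular scheme): suppose two regularization schemes yield singular quantities differing only at order $\\epsilon$, say $\\mathcal{Q}^{\\mathrm{RS}_2}(\\epsilon) = \\mathcal{Q}^{\\mathrm{RS}_1}(\\epsilon) + \\epsilon~ \\delta$. Multiplying by a pole coefficient and truncating at order $\\epsilon^0$ gives \n\\begin{eqnarray}\n\\left[ \\frac{c_{-1}}{\\epsilon}~ \\mathcal{Q}^{\\mathrm{RS}_2}(\\epsilon) \\right]_{\\epsilon^0} \n= \\left[ \\frac{c_{-1}}{\\epsilon}~ \\mathcal{Q}^{\\mathrm{RS}_1}(\\epsilon) \\right]_{\\epsilon^0} + c_{-1}~ \\delta \\, , \\nonumber\n\\end{eqnarray}\ni.e.~an $\\mathcal{O}(\\epsilon)$ scheme difference leaves a finite remnant once it multiplies a pole. Such remnants cancel between $\\int_{d\\Phi_{n}} d\\sigma_{\\rm NLO}^{\\mathcal{V}}$ and $\\int_{1} d\\sigma^{\\mathcal{S}}$ only if both pieces are computed with one consistent prescription. 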
\nBy appealing to an IR-subtraction method, the unitarity constraint, originally imposed \nbetween the calculations of $\\int_{d\\Phi_{n+1}} d\\sigma_{\\rm NLO}^{\\mathcal{R}}$ and \n$\\int_{d\\Phi_{n}} d\\sigma_{\\rm NLO}^{\\mathcal{V}}$, is translated into the following ``locally distributed'' \nversion: \nwe just need to make sure that contributions associated with the same partonic phase space \nare computed consistently with a unitarity-respecting prescription, while \npole-subtracted 4-dimensional remainders living in different partonic phase spaces \ncan be computed independently of each other (using different methods).\nThus, as argued in ref.~\\cite{Catani:1996pk}, IR subtraction methods offer a \nconvenient way to isolate and investigate the RS-dependence of individual \nsingular pieces and subsequently ensure the unitarity of regularization prescriptions \nused in the calculation.~\\\\ \n\n\nWith the skeleton of an IR-subtraction framework ready, we can discuss \nhow each of the two square brackets in the last line of eq.~(\\ref{EQ:NLOamp}) should be evaluated \nwith our proposed prescription in order to ensure a correct NLO observable $\\sigma_{\\rm NLO}$.\n \n\nFirst, the subtraction of implicit IR-singularities in $d\\sigma_{\\rm NLO}^{\\mathcal{R}}$, \ni.e. terms in the first square bracket of the last line of eq.~(\\ref{EQ:NLOamp}), \nis to be done at the integrand level of phase-space integrals. This results in a subtracted \nreal-radiation contribution that is numerically integrable in 4 dimensions. \nIn the 4-dimensional limit ($\\epsilon = 0$) the external polarization states \ndefined by the momentum basis representations given in section~\\ref{SEC:Prescription} \nall coincide with their respective standard 4-dimensional expressions. 
\nTherefore the RS-independence of the finite remainders of real-radiation contributions \nassociated with the 4-dimensional (n+1)-particle phase space is manifest, \nas dimensional regularization can be avoided from the outset. \nThus we just have to make sure that in this hybrid prescription, \nthe integral-level subtraction of explicit $\\epsilon$-pole singularities \nin $2~\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right]$, \ni.e. the second square bracket of the last line of eq.~(\\ref{EQ:NLOamp}), \nis also done in a unitarity-respecting way so as to lead to the \ncorrect RS-independent finite remainder in the n-particle phase space.\n\n\nTo this end, we can proceed in two ways. \nWe could devise a proof analogous to that of the previous subsection, but now applied to the finite remainder \n$\\left[ 2\\mathrm{Re}\\left[\\mathcal{A}_{n}^{*} \\mathcal{A}_{n}^{\\mathcal{V}}\\right] + \\hat{I}_{RS} \\otimes |\\mathcal{A}_n|^2 \\right]_{\\epsilon = 0}$, \nwhere the integrated factor $\\hat{I}_{RS}$ plays a role similar to that of\nthe perturbatively-expanded multiplicative factor $\\hat{\\mathcal{Z}}_{\\mathrm{IR}}(\\epsilon)$ in eq.~(\\ref{EQ:AmpPoleFactorization}). \nAlternatively, we argue in this subsection that the unitarization recipe of ref.~\\cite{Catani:1996pk} \nis indeed respected by our hybrid prescription. \nWe now examine its requirements one by one. 
\n\\begin{enumerate}\n\\item \nThe external partons in the Born-level hard-scattering matrix element $\\mathcal{A}_n$ \nof the factorized IR-subtraction term $\\hat{I}_{RS} \\otimes |\\mathcal{A}_n|^2$ have to be \ntreated like the external partons in the virtual loop amplitude $\\mathcal{A}_{n}^{\\mathcal{V}}$ \n(of the same external kinematic configuration).\n\n\nThis is guaranteed by applying the same set of polarization projectors defined in \nsection~\\ref{SEC:Prescription} consistently to $\\mathcal{A}_n$ at LO and $\\mathcal{A}^{\\mathcal{V}}_n$ \nat NLO, computed respectively to the required powers in $\\epsilon$.\n\n\\item \nThe parent parton and its (soft and collinear) daughter partons involved in the \nintegral representation of the factorized process-independent (singular) coefficient function \n$\\hat{I}_{RS}$ have to be treated like the corresponding partons inside the\nloop integrals of $\\mathcal{A}_{n}^{\\mathcal{V}}$.\n\n\nThis is guaranteed by consistently regularizing all integrals involving IR-unresolved d.o.f.\nwith CDR. In particular, the phase-space integrals \nin $\\hat{I}_{RS}$ are done in D dimensions like D-dimensional loop integrals \nsubject to Cutkosky cuts. \n\\end{enumerate}\n\nConcerning the first point, as long as there is an unambiguous and consistent way of \ndirectly applying such a non-CDR regularization convention of external states \nin the computation of the virtual loop amplitude $\\mathcal{A}_{n}^{\\mathcal{V}}$ \n(without appealing to its Lorentz tensor decomposition representation), \nthe demonstration is complete. \nAs in section~\\ref{SEC:unitarity:PSA}, this point is guaranteed in our\nprojection prescription by the fact that all open Lorentz indices \nof the polarization projectors defined in section~\\ref{SEC:Prescription} are taken to be\nD-dimensional, and no dimensional splitting is ever introduced, just like in CDR. 
\n\n\n\nThus we have argued that our hybrid prescription can be conveniently \nused in an NLO IR subtraction framework to correctly obtain all RS-independent \nfinite remainders needed for computing physical observables, \nwith the (process-independent) integrated IR-subtraction coefficients directly \ntaken from CDR. \nIn other words, we have argued that our hybrid CDR-compatible prescription is unitary.~\\\\\n\n\nAlthough beyond the scope of this article, it is possible, by analogy to the NLO case, \nto ensure unitarity of the prescription at NNLO and beyond, owing to the following generic features \nof an IR subtraction method (which the above NLO discussions essentially rely on).\n\\begin{itemize}\n\\item \nIn a typical IR subtraction framework, all \\textit{explicit} IR-singularities \nin loop amplitudes, manifested as poles in $\\epsilon$, are always subtracted \nby IR subtraction terms whose construction is based on amplitude-level singularity factorization \nformulae, and the factorized IR-subtraction coefficients are independent of all \nexternal polarization states;\n\\item \nAny potential \\textit{implicit} IR singularity of the ($\\epsilon$-pole-free) finite remainders \nwill always be further subtracted at the integrand level of phase-space integrals \nover the external kinematics, and will be directly evaluated in 4 dimensions without employing \ndimensional regularization.\n\\end{itemize}\nThus, concerning the 4-dimensional integrand-level subtractions of implicit IR-singularities\nin those finite remainders, their $\\epsilon$-suppressed terms are never needed \nbecause the phase-space integration over the external kinematics is done (numerically) in 4 dimensions.\nWe leave a detailed exposition of this at NNLO for a future publication.\n\n\n\n\\section{Examples at NLO QCD}\n\\label{SEC:examples}\n\nThe polarization projectors constructed in section~\\ref{SEC:Prescription} are independent of \nthe loop order of virtual amplitudes, regardless of 
possible evanescent Lorentz structures \nthat may be generated in D dimensions. To illustrate their usage without being overwhelmed by \nirrelevant complications, we consider two prototypical examples of NLO QCD virtual amplitudes,\nthe 1-loop QCD corrections to $gg \\rightarrow gg$ and to $e^+ e^- \\rightarrow Q \\bar{Q}$, \nin order to show that RS-independent finite remainders are indeed obtained as \ndiscussed in the preceding sections. Along the way we will comment on points worthy of attention. \n\n\n\\subsection{$gg \\rightarrow gg$}\n\\label{SEC:examples:gggg}\n\nBecause singular amplitudes are RS-dependent, it is only meaningful for our purpose\nto compare properly defined finite remainders between computations done \nusing different regularization schemes. \nAs discussed in section~\\ref{SEC:unitarity:FRIR}, the IR-subtracted \nreal-radiation contribution at NLO is obviously RS-independent. \nThus we just need to show how the same finite remainders \nof the virtual corrections are obtained in computations using our \nhybrid prescription and in CDR.~\\\\\n\nWe consider the scattering process among 4 massless gluons:\n\\begin{equation} \\label{EQ:gggg}\ng_1(p_1)~+~ g_2(p_2) \\to g_3(p_3)~+~g_4(p_4),\n\\end{equation}\nin QCD without fermions for simplicity. The Mandelstam variables are given in \neq.~(\\ref{EQ:kinematicinvariants}).\n\n\nThe corresponding scattering amplitude perturbatively expanded up to NLO reads \n\\begin{eqnarray} \\label{EQ:ggggAMP}\n\\Big|\\mathcal{A}_{gggg} \\Big\\rangle = \\Big|\\mathcal{A}^{[\\text{tree}]}_{gggg} \\Big\\rangle + \\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle \n+ \\mathcal{O}(\\alpha^{3}_s) \\, , \n\\end{eqnarray}\nwhich is a vector in the color space of the external gluons. 
\nThe 1-loop virtual amplitudes were computed in refs.~\\cite{Bern:1991aq,Schubert:2001he,Binoth:2006hk}.\nFor representing color structures of multi-gluon scattering amplitudes, \nlike eq.~(\\ref{EQ:ggggAMP}), it is most convenient to perform a color decomposition \nusing the choice of basis of refs.~\\cite{Berends:1987cv,Mangano:1987xk,Mangano:1987kp,Mangano:1988kk,Bern:1990ux}. \nIt is well known that tree-level QCD amplitudes with n external gluons can be decomposed\ninto color-ordered partial amplitudes, multiplied by associated single color traces \n(over all noncyclic permutations of fundamental color generators).\nDecomposition of color structures of one-loop QCD amplitudes can be done in a similar\nway but with an extended color basis including products of two \ncolor traces\\footnote{This can be easily understood by combining the statement about tree-level color decomposition\nand the Fierz identities of SU(N) color algebra.}. \n\n\nFor the amplitude eq.~(\\ref{EQ:ggggAMP}) we generate symbolic expressions \nof all contributing Feynman diagrams using QGRAF~\\cite{Nogueira:1991ex}, \nand subsequently decompose them as follows:\n\\begin{eqnarray}\\label{EQ:colordecomposition}\n\\Big|\\mathcal{A}^{[\\text{tree}]}_{gggg} \\Big\\rangle &=& \\sum_{i=1}^{6} \\mathcal{A}^{[\\text{tree}]}_{gggg}\n(i) ~ |c_i \\rangle~,~ \\nonumber\\\\\n\\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle &=& \\sum_{i=1}^{9} \\mathcal{A}^{[\\text{1-loop}]}_{gggg} (i) ~ |c_i \\rangle~,~\n\\end{eqnarray}\nusing the following basis of 9 color structures: \n\\begin{eqnarray}\\label{EQ:colorbasis}\n|c_1\\rangle &=& \\mathrm{Tr}\\Big[T_1~ T_2~ T_3~ T_4 \\Big]~,~~\n|c_2\\rangle = \\mathrm{Tr}\\Big[T_1~ T_2~ T_4~ T_3 \\Big] ~,~~\n|c_3\\rangle = \\mathrm{Tr}\\Big[T_1~ T_3~ T_4~ T_2 \\Big] \\nonumber\\\\\n|c_4\\rangle &=& \\mathrm{Tr}\\Big[T_1~ T_3~ T_2~ T_4 \\Big] ~,~~\n|c_5\\rangle = \\mathrm{Tr}\\Big[T_1~ T_4~ T_3~ T_2 \\Big] ~,~~\n|c_6\\rangle = \\mathrm{Tr}\\Big[T_1~ T_4~ T_2~ T_3 
\\Big] \\nonumber\\\\\n|c_7\\rangle &=& \\mathrm{Tr}\\Big[T_1~ T_2\\Big] \\mathrm{Tr}\\Big[T_3~ T_4 \\Big]~,~\n|c_8\\rangle = \\mathrm{Tr}\\Big[T_1~ T_3\\Big] \\mathrm{Tr}\\Big[T_2~ T_4 \\Big] ~,~\n|c_9\\rangle = \\mathrm{Tr}\\Big[T_1~ T_4\\Big] \\mathrm{Tr}\\Big[T_2~ T_3 \\Big] ~,\\nonumber\\\\\n\\end{eqnarray}\nwhere the subscripts of the color generators label the associated gluons while \ntheir explicit color indices are suppressed.\nThese 9 color structures are linearly independent, \nas can be checked by computing their Gram matrix. The amplitude\n$\\Big|\\mathcal{A}^{[\\text{tree}]}_{gggg} \\Big\\rangle$ involves \nonly the first 6 non-cyclic single color traces given in eq.~\\eqref{EQ:colorbasis}, \nwhich can be further reduced to 3 structures by reflection symmetries. The color structures\n$|c_7\\rangle,~ |c_8\\rangle,~|c_9\\rangle$ are needed in addition to represent \n$\\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle$. \n\n\n\nEach of the color decomposition coefficients $\\mathcal{A}^{[\\text{tree}]}_{gggg}\n(i),~\\mathcal{A}^{[\\text{1-loop}]}_{gggg} (i)$ is a function of external kinematics \nand polarization state vectors, to which we now apply the polarization \nprojectors prescribed in section~\\ref{SEC:Prescription}.\nWe extract polarized amplitudes in the linear polarization basis for all four external gluons (cf. section~\\ref{SEC:prescription:MBR:2to2}), \nfrom which helicity amplitudes can be easily obtained. \nBecause the reaction (\\ref{EQ:gggg}) is parity-invariant, the scattering amplitude does not contain terms involving \n$\\gamma_5$ or an odd number of Levi-Civita tensors. 
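As an aside, the linear independence of the nine color structures in eq.~(\\ref{EQ:colorbasis}) can be cross-checked numerically at a fixed number of colors. The sketch below is our own illustration: it builds generalized Gell-Mann generators, assembles the nine color tensors, and verifies that their $9 \\times 9$ Gram matrix has full rank (the choice $N_c = 5$ is an assumption, made to stay away from possible degeneracies of the trace basis at small $N_c$).

```python
import numpy as np

def su_n_generators(N):
    """Generalized Gell-Mann generators T^a of SU(N), with Tr[T^a T^b] = delta_ab/2."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            s = np.zeros((N, N), dtype=complex); s[i, j] = s[j, i] = 1.0
            a = np.zeros((N, N), dtype=complex); a[i, j] = -1j; a[j, i] = 1j
            gens += [s / 2, a / 2]
    for l in range(1, N):                      # diagonal generators
        d = np.zeros((N, N), dtype=complex)
        d[:l, :l] = np.eye(l)
        d[l, l] = -l
        gens.append(d * np.sqrt(2.0 / (l * (l + 1))) / 2)
    return np.array(gens)

N = 5                  # assumed fixed rank for the numerical check
T = su_n_generators(N)
B = np.einsum('aij,bjk,ckl,dli->abcd', T, T, T, T, optimize=True)  # Tr[Ta Tb Tc Td]
D2 = np.einsum('aij,bji->ab', T, T)                                # Tr[Ta Tb]

# the nine color structures as tensors in the adjoint indices (a1, a2, a3, a4)
basis = [
    B,                            # Tr[T1 T2 T3 T4]
    B.transpose(0, 1, 3, 2),      # Tr[T1 T2 T4 T3]
    B.transpose(0, 2, 3, 1),      # Tr[T1 T3 T4 T2]
    B.transpose(0, 2, 1, 3),      # Tr[T1 T3 T2 T4]
    B.transpose(0, 3, 2, 1),      # Tr[T1 T4 T3 T2]
    B.transpose(0, 3, 1, 2),      # Tr[T1 T4 T2 T3]
    np.einsum('ab,cd->abcd', D2, D2),   # Tr[T1 T2] Tr[T3 T4]
    np.einsum('ac,bd->abcd', D2, D2),   # Tr[T1 T3] Tr[T2 T4]
    np.einsum('ad,bc->abcd', D2, D2),   # Tr[T1 T4] Tr[T2 T3]
]
gram = np.array([[np.einsum('abcd,abcd->', ci, cj.conj()) for cj in basis]
                 for ci in basis])
rank = np.linalg.matrix_rank(gram)
print(rank)   # full rank (9) signals linear independence of the nine structures
```

With symbolic $N_c$ the same Gram matrix can instead be evaluated with the SU(N) Fierz identity; the numerical sketch above is merely a quick consistency test of the basis.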
\nWe thus need to consider only the following 8 linear polarization projectors, \nwhich are even in $\\varepsilon_Y$, respectively in the number of Levi-Civita tensors:\n\\begin{eqnarray}\\label{EQ:LPgggg8}\n&&\\varepsilon^{\\mu_1}_X \\varepsilon^{\\mu_2}_X \\varepsilon^{\\mu_3}_T \\varepsilon^{\\mu_4}_T ~,~ \n\\varepsilon^{\\mu_1}_X \\varepsilon^{\\mu_2}_X \\varepsilon^{\\mu_3}_Y \\varepsilon^{\\mu_4}_Y ~,~ \n\\varepsilon^{\\mu_1}_X \\varepsilon^{\\mu_2}_Y \\varepsilon^{\\mu_3}_T \\varepsilon^{\\mu_4}_Y ~,~ \n\\varepsilon^{\\mu_1}_X \\varepsilon^{\\mu_2}_Y \\varepsilon^{\\mu_3}_Y \\varepsilon^{\\mu_4}_T ~,~ \\nonumber\\\\\n&&\\varepsilon^{\\mu_1}_Y \\varepsilon^{\\mu_2}_X \\varepsilon^{\\mu_3}_T \\varepsilon^{\\mu_4}_Y ~,~\n\\varepsilon^{\\mu_1}_Y \\varepsilon^{\\mu_2}_X \\varepsilon^{\\mu_3}_Y \\varepsilon^{\\mu_4}_T ~,~\n\\varepsilon^{\\mu_1}_Y \\varepsilon^{\\mu_2}_Y \\varepsilon^{\\mu_3}_T \\varepsilon^{\\mu_4}_T ~,~\n\\varepsilon^{\\mu_1}_Y \\varepsilon^{\\mu_2}_Y \\varepsilon^{\\mu_3}_Y \\varepsilon^{\\mu_4}_Y.\n\\end{eqnarray}\nFor the sake of simplicity of notation, the arguments of these polarization vectors are suppressed \nwhile their subscripts at the open Lorentz indices indicate the associated gluons.\n\n\nThe number of linear polarization projectors in eq.~(\\ref{EQ:LPgggg8}) equals\nthe number of independent helicity amplitudes, taking into account the parity symmetry of \nthe scattering amplitude. 
We do not consider additional relations among the \nlinearly polarized amplitudes arising from Bose symmetry, which involve kinematic crossings.\nThe set of 8 linear polarization projectors in eq.~(\\ref{EQ:LPgggg8}) is sufficient for \nany parity-even scattering amplitude among four external massless bosons to any loop order, \nirrespective of any possible (evanescent) Lorentz structures therein.\\footnote{In case a $2\\to 2$ amplitude \ninvolves parity-violating couplings, 8 linear polarization projectors containing an odd number of \n$\\varepsilon_Y$ (or Levi-Civita tensors) can be used in addition.} \n\n\nWe insert the expressions (\\ref{EQ:XpolMBR}), \\eqref{EQ:TpolMBR}, and \\eqref{EQ:YpolMBR} for the \npolarization vectors in (\\ref{EQ:LPgggg8}). \nAll the Lorentz algebra is carried out using FORM~\\cite{Vermaseren:2000nd}. \nLet us emphasize again that, in order to avoid possible ambiguities in the definition and application \nof these external projectors, all pairs of Levi-Civita tensors in eq.~(\\ref{EQ:LPgggg8})\nare replaced according to the contraction rule eq.~(\\ref{EQ:LeviCivitaContRule}) \nbefore being used in the projection. Then the projectors (\\ref{EQ:LPgggg8}) are expressed \nsolely in terms of external momenta and space-time metric tensors.\nAfter pulling out the normalization factors as prescribed in section~\\ref{SEC:Prescription}, \nthe resulting tensor projectors (which have only a polynomial dependence on external momenta and kinematics) \nare then applied to the color-stripped amplitudes \n$\\mathcal{A}^{[\\text{tree}]}_{gggg}(i), ~\\mathcal{A}^{[\\text{1-loop}]}_{gggg} (i)$. \nWe use the convention of setting the variable D=4 in the projectors (\\ref{EQ:LPgggg8}), \nin particular in the normalization factors that are pulled out. 
Of course, this convention is used both \nfor the amplitudes and the associated UV and\/or IR subtraction terms.\nThen the normalization factors pulled out from the respective projectors \\eqref{EQ:LPgggg8} are \n\\begin{eqnarray}\\label{EQ:LPnormfactors}\n&&\\mathcal{N}_{XXTT} = \\frac{1}{s^2 t^2 (s + t)^2} ~,~\n\\mathcal{N}_{XXYY} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\nonumber\\\\\n&&\n\\mathcal{N}_{XYTY} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\mathcal{N}_{XYYT} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\nonumber\\\\\n&&\n\\mathcal{N}_{YXTY} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\mathcal{N}_{YXYT} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\nonumber\\\\\n&&\n\\mathcal{N}_{YYTT} = \\frac{4}{s^2 t^2 (s + t)^2} ~,~\n\\mathcal{N}_{YYYY} = \\frac{16}{s^2 t^2 (s + t)^2} ~.\n\\end{eqnarray}\n\n\nThe linearly polarized amplitudes projected out by applying eq.~(\\ref{EQ:LPgggg8}) \nto $\\mathcal{A}^{[\\text{1-loop}]}_{gggg}(i)$ contain both UV and IR singularities,\nmanifested as poles in $\\epsilon$. We are only interested in the finite remainders\ndefined by subtracting all these singularities in accordance with a certain convention.\nFor our purpose, there is no need to stick to \na specific IR-subtraction scheme. All we need to know is a factorization formula \nproviding us with a set of terms that capture all singularities in \n$\\mathcal{A}^{[\\text{1-loop}]}_{gggg}(i)$ \n(with the process-independent singular coefficients obtained in CDR). 
\nTo be specific, we choose to define the finite remainders of the virtual amplitude\n$\\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle$ by the following explicit \non-shell UV and IR subtraction terms:\n\\begin{eqnarray}\\label{EQ:ggggUVIRcounterterms}\n\\Big| \\mathcal{A}^{[\\text{UV}]}_{gggg} \\Big\\rangle \n&=& \\Big(-\\frac{11}{3} \\mathrm{N}_c \\frac{1}{\\epsilon} + \\mathcal{O}(\\epsilon^0) \\Big)~\n\\Big|\\mathcal{A}^{[\\text{tree}]}_{gggg} \\Big\\rangle \\, , \\nonumber\\\\ \n\\Big|\\mathcal{A}^{[\\text{IR}]}_{gggg} \\Big\\rangle &=& 2\\mathrm{N}_c\n\\left(\\frac{-2}{\\epsilon^2} + \\frac{1}{\\epsilon}\\left(\\frac{-11}{3} + \\mathrm{log}\\left(\\frac{-s_{12}}{\\mu^2_{DR}}\\right) + \\mathrm{log}\\left(\\frac{+s_{23}}{\\mu^2_{DR}}\\right)\\right) + \\mathcal{O}(\\epsilon^0) \\right)\n2 \\mathcal{A}^{[\\text{tree}]}_{gggg}(1;\\epsilon) ~ \\Big|c_1 \\Big\\rangle \\, \\nonumber\\\\\n&+& 2\\mathrm{N}_c\n\\left(\\frac{-2}{\\epsilon^2} + \\frac{1}{\\epsilon}\\left(\\frac{-11}{3} + \\mathrm{log}\\left(\\frac{-s_{12}}{\\mu^2_{DR}}\\right) + \\mathrm{log}\\left(\\frac{+s_{24}}{\\mu^2_{DR}}\\right)\\right) + \\mathcal{O}(\\epsilon^0) \\right)\n2 \\mathcal{A}^{[\\text{tree}]}_{gggg}(2;\\epsilon) ~ \\Big|c_2 \\Big\\rangle \\, \\nonumber\\\\\n&+& 2\\mathrm{N}_c\n\\left(\\frac{-2}{\\epsilon^2} + \\frac{1}{\\epsilon}\\left(\\frac{-11}{3} + \\mathrm{log}\\left(\\frac{+s_{13}}{\\mu^2_{DR}}\\right) + \\mathrm{log}\\left(\\frac{+s_{23}}{\\mu^2_{DR}}\\right)\\right) + \\mathcal{O}(\\epsilon^0) \\right)\n \\mathcal{A}^{[\\text{tree}]}_{gggg}(4;\\epsilon) ~ \\Big|c_4 \\Big\\rangle \\, \\nonumber\\\\\n&+& 2\\mathrm{N}_c\n\\left(\\frac{-2}{\\epsilon^2} + \\frac{1}{\\epsilon}\\left(\\frac{-11}{3} + \\mathrm{log}\\left(\\frac{+s_{14}}{\\mu^2_{DR}}\\right) + \\mathrm{log}\\left(\\frac{+s_{24}}{\\mu^2_{DR}}\\right)\\right) + \\mathcal{O}(\\epsilon^0) \\right)\n \\mathcal{A}^{[\\text{tree}]}_{gggg}(6;\\epsilon) ~ \\Big|c_6 \\Big\\rangle \\, 
,\\nonumber\\\\\n\\end{eqnarray}\nwhere $s_{ij} \\equiv 2 p_i \\cdot p_j$, and $\\mu_{DR}$ denotes the auxiliary mass parameter of dimensional regularization. \nThe IR factorization coefficients listed in eq.~(\\ref{EQ:ggggUVIRcounterterms}) are\nextracted from the known singular part of $\\mathcal{A}^{[\\text{1-loop}]}_{gggg}(i)$ \ngiven in ref.~\\cite{Bern:1991aq}. The IR singular pieces of eq.~(\\ref{EQ:ggggUVIRcounterterms}) \nare the same for all IR subtraction methods (for the same renormalized virtual amplitude).\nIn our setup, the finite remainders of the virtual amplitudes are defined by \nsubtracting all pole singularities of the loop amplitude~(\\ref{EQ:colordecomposition}) by means of \neq.~(\\ref{EQ:ggggUVIRcounterterms}).~\\\\\n\n\nWith these ingredients and prescriptions it is straightforward to obtain the analytic results for all \n8 non-vanishing finite remainders of the interferences between \n$\\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle$ and $\\Big|\\mathcal{A}^{[\\text{tree}]}_{gggg} \\Big\\rangle$ \nin the linear polarization basis.\\footnote{We used the library from the Package-X~\\cite{Patel:2015tea} for one-loop integrals.}\nThe finite remainder of the unpolarized interferences in 4 dimensions is obtained by summing over these 8 quantities.\nOn the other hand, the finite remainder of the unpolarized interferences of the same scattering process \ncan be computed within CDR using a polarization sum formula like \\eqref{EQ:polsumCDRPhys} for each of the \n4 external gluons. We have checked analytically that both ways lead to the same finite expression.~\\\\\n\n\nThe constant transformation matrix from the linearly polarized amplitudes projected out \nusing eq.~(\\ref{EQ:LPgggg8}) to helicity amplitudes can be read off from the defining relations \neq.~(\\ref{EQ:LP2HLmassless}). 
\nIn order to obtain the finite remainders of helicity amplitudes it is advantageous to perform such a \ntransformation only at the very last stage of the computation, e.g.~at the level of finite remainders \nof linearly polarized amplitudes.\n\n\nFinally we remark that we also computed the helicity amplitudes \nby first obtaining the Lorentz tensor decomposition representation \nof $\\Big|\\mathcal{A}^{[\\text{1-loop}]}_{gggg} \\Big\\rangle$, \nusing the form factor projectors\\footnote{We did not \nconsider reductions owing to Bose symmetry involving kinematic crossings, and hence we\nextracted an over-complete set of 20 form factors that are left after\nimposing the transversality constraint and gauge-fixings indicated by choices of reference vectors in \neq.~(\\ref{EQ:LP2HLmassless}).} given in ref.~\\cite{Binoth:2002xg},\nand then evaluating contractions between Lorentz structures and external polarization vectors in 4 dimensions. \nThis amounts to obtaining helicity amplitudes defined in the HV scheme. \nWe confirm numerically that for all helicity amplitudes defined by the \nsingularity subtraction terms listed in eq.~(\\ref{EQ:ggggUVIRcounterterms}) the same finite remainders \nare obtained at a few chosen test points (while the unsubtracted helicity amplitudes \ndiffer starting from the subleading power in $\\epsilon$). \n\n\n\\subsection{$e^+ e^- \\rightarrow Q \\bar{Q}$}\n\\label{SEC:examples:eeQQ}\n\nNext we consider quark-pair production in $e^+e^-$ collisions:\n\\begin{equation} \\label{EQ:eeQQ}\ne^-(p_1) ~+~ e^+(p_2) \\to Z^* \\to Q(p_3)~+~{\\bar Q}(p_4)~,\n\\end{equation}\nmediated by a Z boson, where $Q$ denotes a massive quark with mass \n$m$, i.e., $p_3^2 = p_4^2 = m^2$, and the electron (positron) is taken to be massless. 
\nThe corresponding bare scattering amplitude perturbatively expanded to NLO in QCD reads\n\\begin{eqnarray}\\label{EQ:eeQQAMP} \n\\Big|\\mathcal{A}_{eeQQ} \\Big\\rangle &=& \n\\mathcal{A}^{[\\text{tree}]}_{eeQQ}(1_{e^-}, 2_{e^+}, 3_{Q}, 4_{\\bar{Q}}) ~\\delta_{i_3 i_4} \\nonumber\\\\\n&+&\\frac{\\alpha_s}{4 \\pi} \\bar{C}( \\epsilon ) \n\\mathcal{A}^{[\\text{1-loop}]}_{eeQQ}(1_{e^-}, 2_{e^+}, 3_{Q}, 4_{\\bar{Q}}) ~2\\mathrm{C}_F \\delta_{i_3 i_4}\n+ \\mathcal{O}(\\alpha_s^2) \\, ,\n\\end{eqnarray} \nwhere $i_3$ ($i_4$) denotes the color index of the heavy quark (antiquark), \n$\\mathrm{C}_F=({N_c^2-1})\/({2N_c})$, and $\\bar{C}(\\epsilon) \\equiv \n( 4 \\pi )^\\epsilon e^{-\\epsilon \\gamma_E }$ with $\\gamma_E = 0.57721 \\ldots $ \ndenoting the Euler--Mascheroni constant.\nIn eq.~(\\ref{EQ:eeQQAMP}) we introduced symbolic labels $i_X$ in order to encode the dependence on \nthe momentum $p_i$ and helicity $\\lambda_i$ of an external particle $i$ of type $X$.\nThese 1-loop QCD corrections were first computed in ref.~\\cite{Jersak:1981sp}.\n\n\nBecause we work to the lowest order in electroweak couplings, the UV renormalization counterterms \ncan be introduced by the following replacement of the bare coupling vertex of the Z boson and the heavy quark:\n\\begin{eqnarray} \\label{EQ:eeQQUVcounterterms}\n\\Big( v_Q \\gamma^{\\mu} + a_Q \\gamma^{\\mu} \\gamma_5 \\Big) \n\\rightarrow \nZ^{[1]}_{\\psi,OS}(\\epsilon,\\alpha_s)\n\\Big( v_Q \\gamma^{\\mu} + Z^{ns}_5(\\alpha_s) ~a_Q \n\\frac{-i}{3!} \\epsilon^{\\mu \\nu \\rho \\sigma} \n\\gamma_{\\nu} \\gamma_{\\rho} \\gamma_{\\sigma} \\Big) \\, .\n\\end{eqnarray}\nHere $v_Q$ and $a_Q$ denote the vector and axial vector couplings of $Q$,\n\\begin{equation*}\n Z^{[1]}_{\\psi,OS}(\\epsilon,\\alpha_s) = \n-\\frac{\\alpha_s}{4\\pi} (4\\pi)^\\epsilon ~ \\Gamma(1+\\epsilon)\\frac{1}{\\epsilon} \n\\left(\\frac{\\mu_{DR}^2}{m^2}\\right)^\\epsilon \\mathrm{C}_F \\frac{(3-2\\epsilon)}{(1-2\\epsilon)} + \\mathcal{O}(\\alpha_s^2) \\, 
,\n\\end{equation*}\nand we use Larin's prescription~\\cite{Larin:1991tj,Larin:1993tq} for the non-singlet axial vector current \nwhich involves $Z^{ns}_5(\\alpha_s) = 1 + \\frac{\\alpha_s}{4\\pi} \\left( -4 \\mathrm{C}_F \\right)+ \\mathcal{O}(\\alpha_s^2)$.\n\n\nFor subtracting the IR singularities of the renormalized 1-loop amplitude $\\mathcal{A}^{[\\text{1-loop,R}]}_{eeQQ}$, \nwe use the antenna subtraction method \\cite{Kosower:1997zr,GehrmannDeRidder:2005cm}.\nThe antenna subtraction term needed here reads \\cite{GehrmannDeRidder:2009fz}: \n\\begin{eqnarray}\\label{EQ:eeQQIRcounterterms} \n\\Big|\\mathcal{A}^{[\\text{IR}]}_{eeQQ} \\Big\\rangle &=& \n\\frac{\\alpha_s}{4 \\pi} \\bar{C}( \\epsilon ) \n\\mathcal{A}^0_3\\left(\\epsilon, \\frac{\\mu_{DR}^2}{s}; y \\right)\n\\mathcal{A}^{[\\text{tree}]}_{eeQQ}(1_{e^-}, 2_{e^+}, 3_{Q}, 4_{\\bar{Q}}) ~2\\mathrm{C}_F \\delta_{i_3 i_4}\n+ \\mathcal{O}(\\alpha_s^2), \n\\end{eqnarray} \nwhere $y=\\frac{1-\\beta}{1+\\beta}~,~ \\beta=\\sqrt{1-4m^2\/s}$, and $\\mathcal{A}^0_3\\left(\\epsilon, \\frac{\\mu_{DR}^2}{s}; y \\right)$ denotes the \nintegrated three-parton tree-level massive quark-antiquark antenna function \ngiven in \\cite{GehrmannDeRidder:2009fz,Abelof:2011jv}. \n\n\nBecause we take the leptons to be massless, there are only 8 non-vanishing helicity amplitudes\nwhich, in the absence of parity symmetry\\footnote{In the Standard Model the 1-loop scattering\namplitude of (\\ref{EQ:eeQQ}) still respects the combined symmetry of parity and charge conjugation,\nwhich relates the helicity amplitude with helicity configuration $+- ++$ to $+- --$, and similarly $-+ ++$ to $-+ --$.},\ndiffer from each other. 
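\nThis counting can be made explicit: for massless leptons, the vector and axial vector currents connect only electron and positron states of opposite helicity, so that $\\lambda_{e^+} = -\\lambda_{e^-}$ is fixed and each non-vanishing configuration is labeled by\n\\begin{equation*}\n(\\lambda_{e},~ \\lambda_{Q},~ \\lambda_{\\bar{Q}}) \\in \\{+,-\\}^{3} \\, ,\n\\end{equation*}\nwhich yields the $2^3 = 8$ helicity amplitudes just mentioned.\n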
\nWe now consider the extraction of polarized amplitudes in the helicity basis both at the tree level and the 1-loop level.\nFollowing the discussion of section~\\ref{SEC:prescription:NTS}, \nwe choose to attach an auxiliary spinor inner product \n\\begin{equation}\\label{EQ:flHLnormfactors}\n\\mathcal{N}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}} = \n\\bar{u}(p_1, \\lambda_{e}) \\slashed{p}_3 v(p_2, -\\lambda_{e}) \\otimes \n\\bar{v}(p_4, \\lambda_{\\bar{Q}}) \\slashed{p}_1 u(p_3, \\lambda_{Q}) \n\\end{equation}\nto each helicity amplitude characterized by $\\lambda_{e},~ \\lambda_{Q},~\\lambda_{\\bar{Q}}$. \nThis factor is to be removed by numerical division at the end of the computation in 4 dimensions. \nPulling off $\\mathcal{N}^{~ -1}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}}$ \nfrom each helicity amplitude, the polarization projections can be most conveniently performed, in analogy to eq.~\\eqref{EQ:SFLtrace1}, using \nthe following 8 regrouped projectors according to eqs.~(\\ref{EQ:TPextsps2}), \\eqref{EQ:TPextsps2massless}:\n\\begin{eqnarray} \\label{EQ:LPeeQQ8}\n\\hat{\\mathrm{P}}_1 &=& \\Big(\\slashed{p}_1 \\slashed{p}_3 \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\slashed{p}_1 \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n \\hat{\\mathrm{P}}_2 &=& \\Big(\\slashed{p}_1 \\slashed{p}_3 \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{\\bar{Q}}} \\right) \\slashed{p}_1 \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n\\hat{\\mathrm{P}}_3 &=& \\Big(\\slashed{p}_1 \\slashed{p}_3 \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\slashed{p}_1 \\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{Q}} \\right) \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n \\hat{\\mathrm{P}}_4 &=& \\Big(\\slashed{p}_1 \\slashed{p}_3 \\slashed{p}_2 \\Big) \\otimes\n\\Big( 
\\left(\\slashed{p}_4 - m \\right) \\slashed{S}_{\\bar{Q}} \\slashed{p}_1 \\slashed{S}_{Q} \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n\\hat{\\mathrm{P}}_5 &=& \\Big(\\slashed{p}_1 \\frac{i}{3!}\\epsilon_{\\gamma \\gamma \\gamma p_3} \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\slashed{p}_1 \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n \\hat{\\mathrm{P}}_6 &=& \\Big(\\slashed{p}_1 \\frac{i}{3!}\\epsilon_{\\gamma \\gamma \\gamma p_3} \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{\\bar{Q}}} \\right) \\slashed{p}_1 \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n\\hat{\\mathrm{P}}_7 &=& \\Big(\\slashed{p}_1 \\frac{i}{3!}\\epsilon_{\\gamma \\gamma \\gamma p_3} \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\slashed{p}_1 \\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{Q}} \\right) \\left(\\slashed{p}_3 + m \\right) \\Big)~,~\\nonumber\\\\\n \\hat{\\mathrm{P}}_8 &=& \\Big(\\slashed{p}_1 \\frac{i}{3!}\\epsilon_{\\gamma \\gamma \\gamma p_3} \\slashed{p}_2 \\Big) \\otimes\n\\Big( \\left(\\slashed{p}_4 - m \\right) \\slashed{S}_{\\bar{Q}} \\slashed{p}_1 \\slashed{S}_{Q} \\left(\\slashed{p}_3 + m \\right) \\Big)~,\n\\end{eqnarray}\nwhere the momentum basis representations of the two helicity polarization vectors \n$S_{Q}^{\\mu}$ and $S_{\\bar{Q}}^{\\mu}$, in analogy to eq.~(\\ref{EQ:PLpolMBR}), \nwill be inserted during the computation\\footnote{This insertion can \nconveniently be done after having performed the Dirac traces and having used \n$p_3 \\cdot S_{Q}=p_4 \\cdot S_{\\bar{Q}}=0$ and $S_{Q} \\cdot S_{Q} = S_{\\bar{Q}} \\cdot S_{\\bar{Q}} = -1$.}\nso that eventually the resulting projections are functions of the external momenta only.\nOf course, the manipulation of Dirac matrices associated with two disconnected fermion lines \n(separated by $\\otimes$ in 
eq.~(\\ref{EQ:LPeeQQ8})) can be performed independently and should not be confused. \nNotice that the set of polarization projectors in eq.~(\\ref{EQ:LPeeQQ8}) is also sufficient for computing virtual \namplitudes that involve box contributions, for instance $q(p_1)~\\bar{q}(p_2) \\to Q(p_3)~{\\bar Q}(p_4)$ in QCD, \nirrespective of any possible evanescent \nLorentz structure that can be generated at high loop orders in D dimensions. \nIn case $q(p_1)~\\bar{q}(p_2) \\to Q(p_3)~{\\bar Q}(p_4)$ is parity invariant, \nwhich is the case if one considers only QCD interactions, \nthen $\\hat{\\mathrm{P}}_2~,\\hat{\\mathrm{P}}_3~,\\hat{\\mathrm{P}}_5~,\\hat{\\mathrm{P}}_8$ \ncan be safely discarded and only 4 projectors are needed. \n\n\nIn the simple example considered here, where the amplitude \\eqref{EQ:eeQQAMP} involves only 3-point\nvertex functions, there is not much technical advantage in using eq.~(\\ref{EQ:LPeeQQ8}) instead of \nthe conventional form factor decomposition. If one nevertheless chooses to use the projectors \n(\\ref{EQ:LPeeQQ8}) for computing helicity amplitudes including QCD corrections, \none can compute the trace \\eqref{EQ:SFLtrace1} of the string of Dirac matrices along the lepton line,\nboth for the renormalized amplitude and the IR subtraction term \\eqref{EQ:eeQQIRcounterterms},\nin 4 dimensions, because the lepton line receives no QCD correction and remains purely tree level. \nIn this case we can replace $\\frac{i}{3!}\\epsilon_{\\gamma \\gamma \\gamma p_3}$ \nin eq.~(\\ref{EQ:LPeeQQ8}) by $\\slashed{p}_3 \\gamma_5$. \n\n\nHelicity amplitudes can be assembled by linear combinations of the \nprojections made with (\\ref{EQ:LPeeQQ8}), and the linear combination coefficients can be \nread off from eqs.~(\\ref{EQ:TPextsps2}), \\eqref{EQ:TPextsps2massless}. \nIt is convenient to perform such a transformation only in the \nlast stage of the computation at the level of 4-dimensional finite remainders. 
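\nSchematically, and purely as an illustrative sketch (the symbols $\\mathcal{P}_k$ and $c_k$ are introduced only here), denoting by $\\mathcal{P}_k$ the projection obtained with $\\hat{\\mathrm{P}}_k$ of eq.~(\\ref{EQ:LPeeQQ8}), the assembly described above takes the form\n\\begin{equation*}\n\\mathcal{A}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}} \\,=\\, \\mathcal{N}^{~-1}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}} ~ \\sum_{k=1}^{8} c_k(\\lambda_{e}, \\lambda_{Q}, \\lambda_{\\bar{Q}}) ~ \\mathcal{P}_k \\, ,\n\\end{equation*}\nwith constant coefficients $c_k$ that can be read off from eqs.~(\\ref{EQ:TPextsps2}), \\eqref{EQ:TPextsps2massless}.\n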
\nThe explicit form of the overall normalization factor given in \neq.~(\\ref{EQ:flHLnormfactors}) is usually needed only at the level \nof squared amplitudes (or interferences).\nThe squared modulus of $\\mathcal{N}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}}$ is\n\\begin{eqnarray}\n\\Big| \\mathcal{N}_{\\lambda_{e} \\lambda_{Q} \\lambda_{\\bar{Q}}} \\Big|^2 &=& \n-\\frac{m^4 - 2 m^2 t + t (s + t)}{2} \\Bigg(\n\\lambda_{Q} \\lambda_{\\bar{Q}} ~\\Big(\n2(m^2 - t) \n(p_1 \\cdot S_{Q} ~ p_3 \\cdot S_{\\bar{Q}} - p_1 \\cdot S_{\\bar{Q}} ~ p_4 \\cdot S_{Q}) \\nonumber\\\\\n&&~~~~ + \n2 s ~(p_1 \\cdot S_{\\bar{Q}} ~ p_4 \\cdot S_{Q} - p_1 \\cdot S_{Q} ~ p_1 \\cdot S_{\\bar{Q}} ) \n\\Big) \\nonumber\\\\\n&&~~ + (m^2 - t) (m^2 - s - t) \\left(-1 + \\lambda_{Q} \\lambda_{\\bar{Q}} ~ S_{Q} \\cdot S_{\\bar{Q}} \\right)\n\\Bigg)\\nonumber\\\\\n&=&\n\\frac{1}{2} (m^2 - t) (m^2 - s - t) (m^4 - 2 m^2 t + t (s + t)) \\nonumber\\\\\n&-& \\lambda_{Q} \\lambda_{\\bar{Q}}~\n\\frac{ (m^4 - 2 m^2 t + t (s + t)) (4 m^6 + s t (s + t) - m^4 (3 s + 8 t) + m^2 (s^2 + 2 s t + 4 t^2))} {2 (s - 4 m^2)}, \\nonumber\\\\\n\\end{eqnarray}\nwhere we have inserted momentum basis representations of \n$S_{Q}^{\\mu}$ and $S_{\\bar{Q}}^{\\mu}$ that are given in analogy to eq.~(\\ref{EQ:PLpolMBR}). 
\nIn case the normalization factors are to be included at the amplitude level, \nwe can use for their computation either the concrete 4-dimensional representations\nof spinors and Dirac matrices, as listed for instance in~\\cite{Murayama:1992gi}, \nor employ the 4-dimensional spinor-helicity representation of \nthese objects~\\cite{Gunion:1985vca,Kleiss:1985yh,Xu:1986xb,Kleiss:1986qc,Dittmaier:1998nn,Schwinn:2005pi,Arkani-Hamed:2017jhn}.\n\n\n\nWith the ingredients just outlined we computed the finite remainders of the interferences between the \ntree-level and 1-loop helicity amplitudes, multiplied, for convenience, by the inverse square of the Z-boson propagator: \n\\begin{equation}\\label{EQ:tr1lohel}\n\\left(s-m^2_Z\\right)^2 \\times 2~ {\\rm Re}\\Big[\\mathcal{A}^{[\\text{tree}]*}_{eeQQ}(1_{e^-}, 2_{e^+}, 3_{Q}, 4_{\\bar{Q}})~\n \\mathcal{A}^{[\\text{1-loop}]}_{eeQQ}(1_{e^-}, 2_{e^+}, 3_{Q}, 4_{\\bar{Q}}) \\Big] \\, .\n\\end{equation}\nWe calculated \\eqref{EQ:tr1lohel} analytically using FORM~\\cite{Vermaseren:2000nd} and evaluated the involved loop integrals \nwith Package-X~\\cite{Patel:2015tea}. 
Table~\\ref{TAB:numbersFinInf} contains the finite remainders of \\eqref{EQ:tr1lohel}\nfor all helicity configurations evaluated at the test point $m=17.3$ GeV, $s= 10^6$ $({\\rm GeV})^2$, $t = -90$ $({\\rm GeV})^2$.\n($v_e$ and $a_e$ denote the vector and axial vector couplings of electron.)\n\n\\vspace{2mm}\n\\begin{table}[tbh!]\n\\begin{center}\n\\begin{tabular}{|c|l|}\n\\hline\nHelicities & $~~~~~~~~~~~~~~~~~~~$ Finite remainders of the interferences \\eqref{EQ:tr1lohel} in units of $({\\rm GeV})^2$ \\\\\n\\hline \n$+-,++$ & $-1.4211829*10^6 ~a_e^2 v_Q^2 - 2.8423658*10^6 ~a_e v_e v_Q^2-1.4211829*10^6 ~v_e^2 v_Q^2$ \\\\\n\\hline\n\\multirow{3}{*}{$+-,+-$} & $~~ 2.4731876*10^4 ~a_e^2 a_Q^2 + 4.9463752*10^4 ~a_e a_Q^2 v_e + 2.4731876*10^4 ~a_Q^2 v_e^2 $ \\\\\n & $~+ 4.9178930*10^4 ~a_e^2 a_Q v_Q + 9.8357861*10^4 ~a_e a_Q v_e v_Q + 4.9178930*10^4 ~a_Q v_e^2 v_Q $ \\\\ \n & $~+ 2.4446875*10^4 ~a_e^2 v_Q^2 + 4.8893750*10^4 ~a_e v_e v_Q^2 + 2.4446875*10^4 ~v_e^2 v_Q^2$ \\\\\n\\hline\n\\multirow{3}{*}{$+-,-+$} & $~~ 3.0551961 *10^{12}~a_e^2 a_Q^2 + 6.1103923 *10^{12}~a_e a_Q^2 v_e + 3.0551961 *10^{12}~a_Q^2 v_e^2 $ \\\\\n & $~+ 6.0752075 *10^{12}~a_e^2 a_Q v_Q -1.2150415 * 10^{13}~a_e a_Q v_e v_Q -6.0752075 *10^{12}~a_Q v_e^2 v_Q $ \\\\ \n & $~+ 3.0199891 *10^{12}~a_e^2 v_Q^2 + 6.0399783 *10^{12}~a_e v_e v_Q^2 + 3.0199891 *10^{12}~v_e^2 v_Q^2$ \\\\\n\\hline\n$+-,--$ & $-1.4211829*10^6 ~a_e^2 v_Q^2 - 2.8423658*10^6 ~a_e v_e v_Q^2-1.4211829*10^6 ~v_e^2 v_Q^2$ \\\\\n\\hline\n$-+,++$ & $-1.4211829*10^6 ~a_e^2 v_Q^2 + 2.8423658*10^6 ~a_e v_e v_Q^2-1.4211829*10^6 ~v_e^2 v_Q^2$ \\\\\n\\hline\n\\multirow{3}{*}{$-+,+-$} & $~~ 3.0551961 *10^{12}~a_e^2 a_Q^2 - 6.1103923 *10^{12}~a_e a_Q^2 v_e + 3.0551961 *10^{12}~a_Q^2 v_e^2 $ \\\\\n & $~+ 6.0752075 *10^{12}~a_e^2 a_Q v_Q -1.2150415 * 10^{13}~a_e a_Q v_e v_Q +6.0752075 *10^{12}~a_Q v_e^2 v_Q $ \\\\ \n & $~+ 3.0199891 *10^{12}~a_e^2 v_Q^2 - 6.0399783 *10^{12}~a_e v_e v_Q^2 + 3.0199891 *10^{12}~v_e^2 v_Q^2$ 
\\\\\n\\hline\n\\multirow{3}{*}{$-+,-+$} & $~~ 2.4731876*10^4 ~a_e^2 a_Q^2 - 4.9463752*10^4 ~a_e a_Q^2 v_e + 2.4731876*10^4 ~a_Q^2 v_e^2 $ \\\\\n & $~+ 4.9178930*10^4 ~a_e^2 a_Q v_Q + 9.8357861*10^4 ~a_e a_Q v_e v_Q - 4.9178930*10^4 ~a_Q v_e^2 v_Q $ \\\\ \n & $~+ 2.4446875*10^4 ~a_e^2 v_Q^2 - 4.8893750*10^4 ~a_e v_e v_Q^2 + 2.4446875*10^4 ~v_e^2 v_Q^2$ \\\\\n\\hline\n$-+,--$ & $-1.4211829*10^6 ~a_e^2 v_Q^2 + 2.8423658*10^6 ~a_e v_e v_Q^2-1.4211829*10^6 ~v_e^2 v_Q^2$ \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{TAB:numbersFinInf} \nNumerical values of the finite remainders of the interferences \\eqref{EQ:tr1lohel}\nat the test point $m=17.3$ GeV, $s= 10^6$ $({\\rm GeV})^2$, $t = -90$ $({\\rm GeV})^2$.}\n\\end{center}\n\\end{table} \nThe interferences were computed to about 30 significant digits while only the\nfirst 8 significant digits are shown in table~\\ref{TAB:numbersFinInf}\nfor simplicity.\\footnote{For this reason there is no rounding in the shown digits.} \n CP invariance dictates that the helicity configurations \n$+- ++$ and $+- --$ yield identical expressions, and likewise $-+ ++$ and $-+ --$.\nThe large differences between the values of these helicity amplitudes are \ndue to the particular kinematic point considered: it corresponds to \na high-energy (small mass) limit of the scattering amplitude \nin the near-forward scattering region.\n\n\nWe computed also the finite remainder of the \nunpolarized interferences \\eqref{EQ:tr1lohel} within CDR at the same kinematic point\nwith the renormalized virtual amplitudes from refs.~\\cite{Bernreuther:2004ih,Bernreuther:2004th} \navailable in a form factor decomposed form.\nFor this unpolarized interference we obtain\n\\begin{equation*}\n6.1103923*10^{12} ~\\Big(a_e^2 a_Q^2 + v_e^2 a_Q^2 \\Big)\n~-~ 2.4300829*10^{13} ~a_e v_e a_Q v_Q ~+~ \n6.0399727*10^{12} ~\\Big(a_e^2 v_Q^2 + v_e^2 v_Q^2 \\Big),\n\\end{equation*}\nwhich precisely reproduces the sum of all helicity configurations listed in 
table~\\ref{TAB:numbersFinInf}. \n(The expressions coincide in the first 26 of the 30 computed significant digits.)~\\\\\n\n\nLet us comment on a point that was already alluded to in section~\\ref{SEC:prescription:NTS} \nand discussed in section~\\ref{SEC:unitarity:PSA}. It concerns the placing of Dirac matrices \nbetween pairs of on-shell projection operators.\nMoving the matrix $\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{Q}} \\right)$ around \nin the external projectors in eq.~\\eqref{EQ:LPeeQQ8} according to the 4-dimensional algebra \nbetween the pair of on-shell projection operators, \n$\\left(\\slashed{p}_4 - m \\right)$ and $\\left(\\slashed{p}_3 + m \\right)$,\nalways leads to the same finite remainders documented in table~\\ref{TAB:numbersFinInf} \nusing the fixed singularity subtraction terms defined by \n\\eqref{EQ:eeQQUVcounterterms} and \\eqref{EQ:eeQQIRcounterterms}. \nYet, as expected, these different choices result in different (unsubtracted) singular virtual amplitudes.\nOnce we decide to move $\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{Q}} \\right)$ \nbeyond $\\left(\\slashed{p}_4 - m \\right)$ or $\\left(\\slashed{p}_3 + m \\right)$,\nthis operation has to be performed in accordance with the D-dimensional algebra in order to end up \nwith the same finite remainders. \nFor instance, the commutator between \n$\\left(\\frac{-i}{3!} \\epsilon_{\\gamma \\gamma \\gamma S_{Q}} \\right)$\nand $\\slashed{p}_3$, which vanishes in 4 dimensions because of\n$p_3 \\cdot S_{Q} = 0$, must not be omitted.~\\\\\n\n\nWe conclude this subsection with a remark on a subtle point concerning the specification \nof a definite contraction order among multiple Levi-Civita tensors, in order to reach \nan unambiguous canonical form for a projector as well as for the resulting projection in D dimensions. 
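\nAs a reminder (the sign convention assumed here is $\\epsilon^{0123}=+1$), a single pair of Levi-Civita tensors is contracted according to\n\\begin{equation*}\n\\epsilon^{\\mu_1 \\mu_2 \\mu_3 \\mu_4} \\, \\epsilon_{\\nu_1 \\nu_2 \\nu_3 \\nu_4} \\,=\\, - \\mathrm{det}\\big( \\delta^{\\mu_i}_{~\\nu_j} \\big)_{i,j=1,\\ldots,4} \\, ,\n\\end{equation*}\nwhere, following Larin's prescription, the Kronecker deltas on the right-hand side are taken to be D-dimensional.\n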
\nAs discussed in ref.~\\cite{Moch:2015usa}, the contraction of four Levi-Civita tensors \ncan lead to different expressions in D dimensions depending on the choice of pairings, \nwhich are not algebraically identical due to the lack of a Schouten identity.\nThis issue is of no concern for the amplitude of (\\ref{EQ:eeQQAMP}), \nespecially if we do the trace over the lepton line using 4-dimensional Dirac algebra \nbefore dealing with the heavy quark line. \nNevertheless, in more general situations to which our projector prescriptions also apply,\none should pair Levi-Civita tensors from inner vertices (of the same fermion line) \nin the contraction~\\cite{Moch:2015usa}, while all other Levi-Civita tensors, which appear \nin the external projectors, form a separate category and are to be contracted among themselves. \nOnce a definite choice of pairing and ordering of Levi-Civita tensors in the contraction \nis made, it should be consistently applied in the computations of all terms that contribute to\na (renormalized and subtracted) helicity amplitude.\nAlternatively, if several $\\gamma_5$ matrices from axial non-singlet current vertices\nand\/or pseudoscalar vertices are present along a fermion line, one can resort to \na fully anti-commuting $\\gamma_5$ and use the rule $\\gamma_5^2 = 1$ in $D$ dimensions~\\cite{Chanowitz:1979zu}, \nin order to bring the power of $\\gamma_5$ down to 0 or 1 before further manipulations. \nThis leads to the same final result as a thorough implementation of Larin's \nprescription of non-singlet axial vector vertices and pseudoscalar vertices~\\cite{Larin:1993tq,Moch:2015usa}, \nwhile being computationally more convenient.\n\n\n\\section{Conclusions}\n\\label{SEC:conclusion}\n\nHelicity amplitudes encode the full dependence of scattering processes \non the polarizations of the external particles with spin, and these amplitudes \nenrich the phenomenology of high-energy physics. 
\nDifferent methods for computing dimensionally regularized helicity amplitudes exist, \ndepending on the particular regularization scheme used. \nFor the computation of helicity loop amplitudes with a scheme such as HV, \nthe projection method based on Lorentz covariant tensor decomposition is a widely used choice.\nDespite being very generic and versatile, the Lorentz tensor decomposition has a few delicate aspects, \nas discussed in section~\\ref{SEC:projectionmethod:comments}, \nwhich can make the D-dimensional projection cumbersome to carry out \nfor certain multi-parton multi-scale scattering processes.~\\\\\n\n\nThe aim of this article was to formulate an alternative prescription to obtain polarized \ndimensionally regularized amplitudes, providing a recipe for constructing simple \nand general polarized amplitude projectors in D dimensions, which circumvents \nthe conventional Lorentz tensor decomposition and the difficulties associated with it.\nThe polarization projectors devised in section~\\ref{SEC:Prescription} are based on \nthe momentum basis representations of external state vectors, and all their open \nLorentz indices are taken to be D-dimensional. This avoids dimensional splitting \nwhen applied to loop amplitudes. \nThe momentum basis representations of the polarization vectors of external gauge bosons, \nas well as those of massive fermions, were discussed in detail\nin section~\\ref{SEC:prescription:MBR}. \n\n\nAs shown in section~\\ref{SEC:Prescription}, it is quite straightforward to construct these projectors,\nand their structures depend only on the masses and spins of the external particles. 
\nThe construction procedure requires almost no knowledge of the Lorentz structures \npresent in the loop amplitude, nor of whether or not they are linearly independent of each other (in D dimensions).\nIn particular, there is no need to trim any unphysical Lorentz structure off the original Feynman-diagrammatic \nrepresentation of the amplitude before applying these external projectors. \nThe number and forms of these projectors are truly independent of the loop order of the virtual amplitude \nas well as of possible evanescent Lorentz structures that could be generated in D dimensions. \nConstraints from symmetry properties such as parity symmetry can be accounted for in a simple way \nin terms of this set of projectors. \n\n\nFrom the point of view of the projection method as recapitulated in section~\\ref{SEC:projectionmethod:recap},\nthe set of projectors prescribed in this article may be loosely viewed as a special choice of \nLorentz decomposition basis structures which by construction are orthogonal to each other. \nFurthermore, each of these decomposition structures is directly related to a physical quantity, \nand thus patterns of (explicit and\/or implicit) singularities therein are protected by \nthe physical conditions obeyed by these quantities. \nIn this way the issues related to the conventional form factor decomposition as discussed \nin section~\\ref{SEC:projectionmethod:comments} are avoided.~\\\\\n\n\n\nThe usage of these D-dimensional polarized amplitude projectors results in helicity amplitudes \nwhich are eventually expressed solely in terms of Lorentz invariants made out of external momenta. 
\nThe resulting helicity amplitudes (and the incoherent sum of their squared moduli) \nare, however, different from those defined in many existing dimensional regularization schemes, \nin particular CDR.\nDespite being different from CDR, owing to the amplitude-level factorization of UV and IR singularities \ncombined with the crucial commutation between D-dimensional Lorentz index contraction and loop integration, \nour prescription for external states can be used in a hybrid way with CDR to obtain the same \nfinite remainders of loop amplitudes as in CDR, without having to re-calculate the (process-independent) \npole-subtraction coefficients. \nThis was demonstrated in section~\\ref{SEC:unitarity:PSA} in a formal way for minimally pole-subtracted amplitudes. \nThe validity of our argumentation is not confined to one-loop corrections to the Born amplitudes, but \npersists as long as the amplitude-level factorization formulae as sketched in eq.~(\\ref{EQ:AmpPoleFactorization}) \nhold in CDR. \nSubsequently, the same issue was discussed in section~\\ref{SEC:unitarity:FRIR} for finite remainders \ndefined in an IR subtraction method, where we argued that the unitarization recipe in ref.~\\cite{Catani:1996pk} \nis properly respected by our method. Thus we have shown that our hybrid CDR-compatible prescription is unitary.\nWe emphasize again that in order to unambiguously and consistently apply our prescription for\nexternal states to the calculation of loop amplitudes in D dimensions, there is no need to appeal to \ntheir Lorentz tensor decomposition representations.\n\n\n\nIn order to illustrate the usage of our hybrid prescription in practical applications, \nwe discussed in section~\\ref{SEC:examples} the construction of polarization projectors \nfor $gg \\rightarrow gg$ and $e^+ e^- \\rightarrow Q \\bar{Q}$, \nand computed their RS-independent finite remainders at one-loop order. 
\nWhile the arguments presented in section~\\ref{SEC:unitarity:FRIR}, as well as \nthe examples of section~\\ref{SEC:examples}, refer to NLO computations,\nit is possible to ensure unitarity of the prescription at NNLO in QCD and beyond,\nwith the aid of an IR-subtraction method, as briefly commented on at the end of section~\\ref{SEC:unitarity:FRIR}. \nThis is, however, beyond the scope of the current article, and we leave a detailed exposition \nto a future publication on polarized calculations in which an NNLO subtraction method \nwill be employed.~\\\\\n\n\nGiven the impressive list of calculations of unpolarized observables done using CDR \n(with Larin's prescription for $\\gamma_5$), we hope that, with this add-on, the resulting hybrid \nCDR-compatible prescription offers a convenient and efficient set-up for computing \nphysical observables associated with polarization effects for phenomenologically interesting processes \nin perturbative QCD.\n\n\n\\section*{Acknowledgments}\n\nThe author is grateful to W.~Bernreuther, M.~Czakon, and G.~Heinrich for discussions and comments on the manuscript.\nThe author also wishes to thank S.~Jahn, S.~Jones and M.~Kerner for helpful discussions and feedback on the draft, \nand T.~Ahmed, M.~Capozi, H.~Luo, J.~Schlenk, Z.G.~Si, Y.~Zhang for reading the manuscript. 
\n\n\n\n\\bibliographystyle{JHEP}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAfter its hype finally receded about half a decade ago, rather few advances in Semantic Desktop (SemDesk) research have been reported.\nAn overview of (modern) SemDesks can be found in \\cite{DraganD12}:\nExisting implementations are, for example, reproached for being rather complicated to use, not scaling well (thus draining lots of system resources), and there is still no real \"killer app\" available.\nConcerning SemDesk applications, two categories could be observed:\nnewly created semantic applications and plug-ins to enhance traditional, non-semantic ones \\cite{DraganD12}.\n\nAs a successor to the \\textit{NEPOMUK Semantic Desktop}\\footnote{\\url{www.semanticdesktop.org}}, DFKI's Smart Data \\& Knowledge Services department developed its own prototype\\footnote{\nmeanwhile spanning over six years of permanent usage in the department and a group knowledge graph having approx. 2.6 million triple statements\n} \\cite{maus2013weaving} making SemDesk technology ready for 24\/7 usage in practice, covering both, private and corporate scenarios.\nAfter lessons learned in past\\footnote{\ne.g. \\textit{ForgetIT} (\\url{www.forgetit-project.eu}) and \\textit{supSpaces} (\\url{www.supspaces.de})\n} and still ongoing projects\\footnote{\ne.g. \\textit{Managed Forgetting} (\\url{www.spp1921.de\/projekte\/p4.html.de})\n}, we now propose \\textit{Context Spaces} as an extension of this prototype addressing the issues mentioned before.\n\n\\section{Approach}\n\\noindent\n\\textbf{Context Spaces.}\nOne of SemDesk's cornerstones is the Personal Information Model (PIMO) \\cite{sauermann2007pimo}, which tries to represent a user's mental model as good as possible.\nInformation items (files, mails, bookmarks, ...) 
that are related to each other in a person's mind, but are separated on their computer (file system, mail client, web browser, ...), can thus be interlinked.\nWith \\textit{Context Spaces} (or \\textit{cSpaces} for short) we extend this idea by explicitly (and additionally) associating items with contexts of the user (see lower left of Figure \\ref{fig_cspaces}).\n\\vspace{-0.4cm}\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\textwidth]{img\/cspaces.png}\n\\caption{Conceptual overview of the cSpaces Semantic Desktop}\n\\label{fig_cspaces}\n\\end{figure}\n\\vspace{-0.4cm}\nThis is based on the intuition that every activity is performed in a certain context.\nHence, each information item stored on a person's computing device can be associated with one or more contexts (association strength may vary depending on the user's current context awareness).\nWe therefore assume that users are explicitly aware of the concept of context \\cite{GomezPerez2009} and that they are also aware of their current context (at least most of the time).\nExamples of such contexts are: \\textit{Spain holiday 2017}, \\textit{prepare ESWC18 paper}, or \\textit{my childhood memories}.\nWe do not enforce a certain definition of context: users should be able to stick to their own conceptualization as much as possible.\nHowever, we do assume that contexts express a certain relatedness of its elements.\nBesides being a kind of container for things, they may also be strongly related to (calendar) events, tasks or cases.\nContext hierarchies are also possible.\nMore details about our context model, which is an extension of \\cite{SchwarzContextModel}, will be presented in another paper.\nInstead of just having context as passive metadata, we in addition see it as an accessible element users can interact with (create new (sub)contexts, split or merge them, add\/remove elements, etc.).\n\nSemDesk user studies \\cite{SauermannEval} revealed that people omitted rather specific relations in 
favor of basic ones (like \\textit{isRelatedTo} or \\textit{isPartOf}), whereas the system is formal where possible, e.g. representing calendar events or address book contacts.\nThis matches our idea of providing a low-effort opportunity to already keep things a bit more tidied up when simply associating them with a certain context (or multiple).\nAdditionally, some of these associations may also be inferred by the system, reducing manual effort even further, e.g. a received email reply can automatically be associated with the original mail's context.\nMore advanced features supporting the user will be discussed in the section after next.\n\n\\noindent\n\\textbf{Transparent Integration.}\nUsing contexts as an explicit interaction element only makes sense if applications also respect them.\nAs illustrated in Figure \\ref{fig_cspaces}, we therefore integrate cSpaces into the rest of the system using standard protocols like \\textit{Server Message Block (SMB)} for files, \\textit{IMAP} for mails, \\textit{CalDAV} for calendar entries, and \\textit{CardDAV} for contacts.\nFor web browsers, we use \\textit{Web\\-Extensions}\\footnote{\\url{https:\/\/wiki.mozilla.org\/WebExtensions}}, which provide cross-browser functionality and an integration level similar to having an underlying protocol.\nApplications are thus able to transparently operate on the knowledge graph (PIMO) managed by our app.\nEspecially in corporate scenarios, it is very convenient if users can simply work with the resources in their contexts without caring whether they are actually spread across various sources, such as intranet shares.\n\nUtilizing only standard protocols has certain limitations due to their rather basic, low-level character.\nSome activities, like writing a note or comment about a resource, can become inconvenient or non-intuitive.\nTo avoid this, we provide an additional sidebar as a single interaction point for using advanced features.\nUsers therefore do not need to learn a 
new (plugged-in) interface for each of their applications.\nThey can just keep using them in the usual way, with only the sidebar as a new UI to familiarize themselves with.\nFrom the development point of view, the effort of creating and maintaining the plug-ins needed for higher-level functionality is low compared to that of earlier SemDesks.\nThey can be realized as \\textit{headless plug-ins} with very little functionality, often just the capability of \\textit{sending out} in-app events to the sidebar (which is why we also call them \"plug-outs\").\nIn addition, their corresponding UI elements and logic are located in the sidebar, where they can be easily reused.\nPlug-outs for different mail clients could, for example, share the same tagging UI.\n\n\\noindent\n\\textbf{Self-Reorganization.}\nFeatures discussed so far primarily aim at our system's ease of use.\nThe other aspects mentioned in the beginning (scalability, missing \"killer app\") will be addressed using \\textit{Managed Forgetting}, by which we understand an escalating set of measures: temporal hiding, condensation, adaptive synchronization, archiving and deletion of resources and parts of the knowledge graph \\cite{forgetitbook}.\nBy having users work on cSpaces, we gather rich contextual information about all of their resources, which allows the system to semi-automatically help them in organizing their stuff.\nThus, cSpaces are continuously spawned, retracted, merged, condensed, or forgotten.\nAs an example, let us assume we do a consulting job for company XY.\nThe contract involves five meetings about different topics.\nOur system could represent this by having an overall cSpace containing general information about XY, e.g. contact and contract information.\nFor each meeting, there could be an individual sub-cSpace about its respective topic.\nSeveral months after the job has been completed, the system starts to remove details, e.g. 
train schedule to get to the meeting or auxiliary material for doing the presentation.\nAfter some years have passed, the sub-cSpaces could be merged with their parent, since the separation into different meetings is not relevant anymore.\nOnly the most important items, e.g. individual reports or an overall final report, are kept.\nAll other items are either condensed, moved to an archive or deleted completely (which can be adjusted by the user on a general level).\nAn item's current and estimated future value for the user are therefore continuously assessed, resulting in different forgetting measures like temporal hiding (e.g. of some items during one of the meetings), deletion, etc.\nThis means that the system is able to reorganize itself to a certain extent, which includes a kind of tidying-up-itself functionality.\nSome of the described features have already been implemented and successfully used in our research and industry prototypes \\cite{manaforge, pimodiary}; however, most of them are still under heavy development.\\\\[-0.3cm]\n\n\\noindent\n\\textbf{Demo.}\nIn an early proof-of-concept implementation based on \\cite{maus2013weaving}, we have already implemented some of the file system, browser and calendar parts.\nThe screenshots in Figure \\ref{fig_scr} show a typical feature of our system:\n\\vspace{-0.45cm}\n\\begin{figure}\n\\centering\\includegraphics[width=1\\textwidth]{img\/screenshot.png}\n\n\\caption{\nScreenshot showing sidebar, file explorer and browser before (left half) and after a context switch (right half), illustrating the effects of a dynamic reorganization of the system.\n(Note: windows were rearranged for easier comparison.)\n}\n\\label{fig_scr}\n\\end{figure}\n\\vspace{-0.45cm}\nthe user selects a different context using the sidebar.\nAs a consequence, the \\textit{current context}, available as a folder in the file system as well as in the browser, is dynamically reorganized by our app.\nNote that the system tries to present 
meaningful views on the current context in each app: e.g., the view in the browser only contains web links.\nTo get an impression of what the interaction with the system looks like, we kindly refer the reader to our online demo video\\footnote{\\url{https:\/\/pimo.opendfki.de\/cSpaces\/}}, which also shows some additional features.\n\n\\section{Conclusion \\& Outlook}\nIn this paper, we presented a new SemDesk prototype based on context spaces that users directly interact with and work on.\nThe system is transparently integrated using mostly standard protocols, complemented by a sidebar for advanced features.\nUsers may thus stick to their favorite applications, which should strongly contribute to the overall ease of use.\nLearning efforts are presumably low due to the sidebar being the only new UI that is introduced.\nBy exploiting its collected context information and applying features of Managed Forgetting, the system is able to dynamically reorganize itself, which also includes a kind of tidying-up-itself functionality.\nWe therefore expect it to be more scalable than its predecessors while providing new levels of user support.\n\nNevertheless, a lot of functionality still needs to be fully implemented and evaluated.\nWe plan to do extensive user studies once the system matures.\\\\\n\n\\noindent\n\\textbf{Acknowledgements.}\nThis work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- DE 420\/19-1.\n\n\\bibliographystyle{splncs03}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}The study of algorithmic\nrandomness has been an\nactive area of research in recent years. The basic problem is to\nquantify the randomness of a single real number. Here we think of a\nreal $r \\in [0,1]$ as an infinite sequence of 0's and 1's, i.e.\\ as an\nelement in $\\TN$. 
There are three basic approaches to algorithmic\nrandomness: the measure-theoretic approach of \\ml tests,\nthe incompressibility approach of Kolmogorov complexity, and the\nbetting approach in terms of martingales. All three approaches have\nbeen shown to yield the same notion of (algorithmic) randomness. The\npresent paper will consider only the measure-theoretic approach. A\nreal $x$ is \\ml random if for any effective sequence $S_1,\nS_2, \\dots$ of c.e.\\ open sets with $\\mu(S_n) \\leq 2^{-n}$, $x \\notin\n\\cap_n S_n$. For background and history of algorithmic randomness we\nrefer to \\cite{DH:book,Nies:book}.\n\nIn a series of recent papers \\cite{BCRW08,BCD06,BCDW08,BCR06},\nG. Barmpalias, S. Dashti, R. Weber and the authors have defined a\nnotion of (algorithmic) randomness for closed sets and continuous\nfunctions on $2^{\\N}$. Some definitions are needed. For a finite\nstring $\\sigma \\in\\{0,1\\}^n$, let $|\\sigma| = n$. For two strings\n$\\sigma,\\tau$, say that $\\tau$ \\emph{extends} $\\sigma$ and write\n$\\sigma \\sqsubseteq \\tau$ if $|\\sigma| \\leq |\\tau|$ and $\\sigma(i) =\n\\tau(i)$ for $i < |\\sigma|$. For $x \\in \\TN$, $\\sigma \\sqsubset x$\nmeans that $\\sigma(i) = x(i)$ for $i < |\\sigma|$. Let $\\sigma^{\\frown}\n\\tau$ denote the concatenation of $\\sigma$ and $\\tau$ and let\n$\\sigma^{\\frown} i$ denote $\\sigma^{\\frown}(i)$ for $i=0,1$. Let\n$x\\lceil n =(x(0),\\dots,x(n-1))$. Two reals $x$ and $y$ may be coded\ntogether into $z = x \\oplus y$, where $z(2n) = x(n)$ and $z(2n+1) =\ny(n)$ for all $n$.\nFor a finite string $\\sigma$, let $I(\\sigma)$ denote $\\{x \\in\n\\TN:\\sigma \\sqsubset x\\}$. We shall call $I(\\sigma)$ the\n\\emph{interval} determined by $\\sigma$. Each such interval is a clopen\nset and the clopen sets are just finite unions of intervals. 
We let\n$\\B$ denote the Boolean algebra of clopen sets.\n\nNow a closed set $P$ may be identified with a tree\n$T_P\\subseteq \\{0,1\\}^*$ where $T_P = \\{\\sigma: P \\cap I(\\sigma)\n\\neq\\emptyset\\}$. Note that $T_P$ has no dead ends. That is, if\n$\\sigma\\in T_P$, then either $\\sigma^{\\frown}0 \\in T_P$ or\n$\\sigma^{\\frown}1\\in T_P$.\nFor a tree $T$, let $[T]$ denote the\nset of infinite paths through $T$. It is well-known that $P \\subseteq\n\\TN$ is a closed set if and only if $P = [T]$ for some tree $T$. $P$\nis a $\\Pi^0_1$ class, or an effectively closed set, if $P = [T]$ for\nsome computable tree $T$; equivalently $T_P$ is a \\pz set. The\ncomplexity of the closed set $P$ is generally identified with that of\n$T_P$. Thus $P$ is said to be a $\\Pi^0_2$ closed set if $T_P$ is\n$\\Pi^0_2$; in this case $P = [T]$ for some $\\Delta^0_2$ tree $T$. The\ncomplement of a $\\Pi^0_1$ class is sometimes called a c.e. open\nset. We remark that if $P$ is a $\\Pi^0_1$ class, then $T_P$ is a\n$\\Pi^0_1$ set, but it is not, in general, computable.\nThere is a natural effective enumeration $P_0, P_1, \\dots$ of the\n$\\Pi^0_1$ classes and thus an enumeration of the c.e. open sets. Thus\nwe can say that a sequence $S_0,S_1,\\dots$ of c.e. open sets is\n\\emph{effective} if there is a computable function, $f$, such that\n$S_n = \\TN - P_{f(n)}$ for all $n$. For a detailed development of\n$\\Pi^0_1$ classes, see~\\cite{Ceta}.\n\nIt was observed in \\cite{BCD06} that there is a natural isomorphism\nbetween the space $\\C$ of nonempty closed subsets of $\\{0,1\\}^{\\N}$\nand the space $\\{0,1,2\\}^{\\N}$ (with the product topology)\ndefined as follows. Given a nonempty closed\n$Q\\subseteq \\TN$, let $T = T_Q$ be the tree without dead ends such\nthat $Q =[T]$. Let $\\sigma_0, \\sigma_1, \\ldots$ enumerate the elements\nof $T$ in order, first by length and then lexicographically. 
We then\ndefine the code $x = x_Q = x_T$ by recursion such that for each $n$,\n$x(n) =2$ if both $\\sigma_n\\fr 0$ and $\\sigma_n\\fr 1$ are in $T$,\n$x(n) =1$ if $\\sigma_n\\fr 0 \\notin T$ and $\\sigma_n\\fr 1 \\in T$, and\n$x(n) =0$ if $\\sigma_n\\fr 0 \\in T$ and $\\sigma_n\\fr 1 \\notin T$. For a\nfinite tree $T \\subseteq \\{0,1\\}^{\\leq n}$, the finite code $\\rho_T$ is\nsimilarly defined, ending with $\\rho_T(k)$ where $\\sigma_k$ is the\nlexicographically last element of $T \\cap \\{0,1\\}^n$.\n\nWe defined in \\cite{BCD06} a measure $\\mu^*$ on the space ${\\mathcal\nC}$ of closed subsets of $\\TN$ as follows.\n\\begin{equation}\n\\mu^*({\\mathcal X}) = \\mu(\\{x_Q:Q \\in {\\mathcal X}\\})\n\\end{equation}\nfor any ${\\mathcal X} \\subseteq {\\mathcal C}$ and $\\mu$ is the\nstandard measure on $\\{0,1,2\\}^{\\N}$. Informally this means that\ngiven $\\sigma \\in T_Q$, there is probability $\\frac13$ that both\n$\\sigma^{\\frown}0 \\in T_Q$ and $\\sigma^{\\frown}1 \\in T_Q$ and, for\n$i=0,1$, there is probability $\\frac13$ that only $\\sigma^{\\frown}i\n\\in T_Q$. In particular, this means that $Q \\cap I(\\sigma) \\neq\n\\emptyset$ implies that for $i=0,1$, $Q \\cap I(\\sigma^{\\frown}i) \\neq\n\\emptyset$ with probability $\\frac23$.\n\nBrodhead, Cenzer, and Dashti \\cite{BCD06} defined a closed set\n$Q\\subseteq 2^{\\N}$ to be (Martin-L\\\"{o}f) random if $x_Q$ is\n(Martin-L\\\"{o}f) random. Note that the equal probability of $\\frac13$\nfor the three cases of branching allows the application of Schnorr's\ntheorem that \\ml randomness is equivalent to prefix-free Kolmogorov\nrandomness. Then in \\cite{BCD06,BCDW08}, the following results are\nproved. No $\\Pi^0_1$ class is random but there is a random\n$\\Delta^0_2$ closed set. Every random closed set contains a random\nmember but not every member is random. Every random real belongs to\nsome random closed set. Every random $\\Delta^0_2$ closed set contains\na random $\\Delta^0_2$ member. 
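As an illustrative aside (not from the original paper), the coding of trees just described is easy to implement. The following Python sketch computes the code of a finite tree, given as a prefix-closed set of binary strings with no dead ends below the given height, enumerating the nodes first by length and then lexicographically; the function name and string representation are our own choices.

```python
def tree_code(tree, height):
    """Code a finite binary tree as in the paper: the tree is a
    prefix-closed set of strings over {'0','1'} with no dead ends
    below `height`.  For each node (enumerated by length, then
    lexicographically) emit 2 if both children are in the tree,
    1 if only the right child '1' is, and 0 if only the left child
    '0' is."""
    nodes = sorted((s for s in tree if len(s) < height),
                   key=lambda s: (len(s), s))
    code = []
    for s in nodes:
        left, right = (s + '0') in tree, (s + '1') in tree
        code.append(2 if left and right else (1 if right else 0))
    return code
```

For example, the full binary tree of height 2 is coded by all 2's, and the tree of the single path $000\dots$ restricted to height 2 is coded by all 0's, matching the definition above.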
Every random closed set is perfect and\ncontains no computable elements (in fact, it contains no\n$n$-c.e.\\ elements). Every random closed set has measure 0. A random\nclosed set is a specific type of random recursive construction, as\nstudied by Graf, Mauldin and Williams \\cite{GMW88}. McLinden and\nMauldin \\cite{MM09} showed that the Hausdorff dimension of a random\nclosed set is $\\log_2(4\/3)$.\n\nJust as an effectively closed set in $\\TN$ may be viewed as the set of\ninfinite paths through a computable tree $T \\subseteq \\{0,1\\}^*$, an\nalgorithmically random closed set in $\\TN$ may be viewed as the set of\ninfinite paths through an algorithmically random tree $T$. Diamondstone and\nKjos-Hanssen \\cite{DK09,KH09} give an alternative definition of random closed\nsets according to the Galton-Watson distribution and show that this definition\nproduces the same family of algorithmically random closed sets. The effective\nHausdorff dimension of members of random closed sets is studied in \\cite{DK09}.\n\nIn the present paper we will examine the notion of computable capacity\nand its relation to computable measures on the space $\\C$ of nonempty\nclosed sets. In section two, we present a family of computable\nmeasures on $\\C$ and show how they induce capacities. We define the\nnotion of computable capacity and present an effective version of\nChoquet's theorem that every capacity can be obtained from a measure\n$\\mu^*$ on the space of closed sets. The main theorem of section three\ngives conditions under which the capacity $\\T(Q)$ of a $\\mu^*$-random\nclosed set $Q$ is either equal to $0$ or $> 0$. 
We also construct a\n$\\Pi^0_1$ class with Lebesgue measure zero but with positive capacity,\nfor each capacity of a certain type.\n\n\\section{Computable Measure and Capacity on the Space of Closed Sets}\n\nIn this section, we describe the hit-or-miss topology on the space\n$\\C$ of closed sets, we define certain probability measures $\\mu_d$ on\nthe space $\\{0,1,2\\}^{\\N}$ and the corresponding measures $\\mu^*_d$ on\nthe homeomorphic space $\\C$. We present an effective version of\nChoquet's theorem connecting measure and capacity.\n\nThe standard (\\emph{hit-or-miss}) topology \\cite{D77} (p. 45)\non the space $\\C$ of closed sets is given by a sub-basis of sets of two types,\nwhere $U$ is any open set in $2^{\\N}$.\n\\[\nV(U) = \\{K: K \\cap U \\neq \\emptyset\\}; \\qquad \\qquad W(U) = \\{K: K \\subseteq U\\}\n\\]\n\nNote that $W(\\emptyset) = \\{\\emptyset\\}$ and that $V(\\TN) = \\C\n\\setminus \\{\\emptyset\\}$, so that $\\emptyset$ is an isolated element\nof $\\C$ under this topology. Thus we may omit $\\emptyset$ from $\\C$\nwithout complications.\n\nA basis for the hit-or-miss topology may be formed by taking finite\nintersections of the basic open sets. We want to work with the\nfollowing simpler basis. For each $n$ and each finite tree $A\n\\subseteq\\{0,1\\}^{\\leq n}$, let\n\n\\[\nU_A = \\{K\\in \\C: (\\forall \\sigma \\in A) (K \\cap I(\\sigma) \\neq \\emptyset)\\\n\\&\\ (\\forall \\sigma \\notin A) (K \\cap I(\\sigma) = \\emptyset) \\}.\n\\]\nThat is,\n\\[\nU_A = \\{K \\in \\C: T_K \\cap \\{0,1\\}^{\\leq n} = A\\}.\n\\]\nNote that the sets $U_A$ are in fact clopen. That is, for any tree $A\n\\subseteq \\{0,1\\}^{\\leq n}$, define the tree $A' = \\{\\sigma \\in\n\\{0,1\\}^{\\leq n}: (\\exists \\tau \\in \\{0,1\\}^n \\setminus A) \\sigma\n\\sqsubseteq \\tau\\}$. Then $U_{A'}$ is the complement of $U_A$.\n\nFor any finite $n$ and any tree $T \\subseteq \\{0,1\\}^{\\leq n}$, define the clopen set\n$[T] = \\cup_{\\sigma \\in T} I(\\sigma)$. 
Then $K \\cap [T] \\neq \\emptyset$\nif and only if there exists some $A \\subseteq \\{0,1\\}^{\\leq n}$ such that $K\n\\in U_A$ and $A \\cap T \\neq \\emptyset$. That is,\n\\[\nV([T]) = \\bigcup\\{U_A: A \\cap T \\neq \\emptyset\\}.\n\\]\nSimilarly, $K \\subseteq [T]$\nif and only if there exists some $A\\subseteq \\{0,1\\}^n$ such that $K\n\\in U_A$ and $A \\subseteq T$. That is,\n\\[\nW([T]) = \\bigcup \\{U_A: A \\subseteq T\\}.\n\\]\nThe following lemma can now be easily verified.\n\n\\begin{lemma} The family of sets $\\{U_A: A \\subseteq \\{0,1\\}^{\\leq n}\\ \\text{a tree}\\}$\nis a basis of clopen sets for the hit-or-miss topology on $\\C$.\n\\end{lemma}\n\nRecall the mapping from $\\C$ to $\\{0,1,2\\}^{\\N}$ taking $Q$ to $x_Q$.\nIt can be shown that this is in fact a homeomorphism.\n(See Axon \\cite{Ax10} for details.) Let $\\B^*$ be the family of clopen sets\nin $\\C$; each set is a finite union of basic sets of the form $U_A$ and thus $\\B^*$\nis a computable atomless Boolean algebra.\n\n\\begin{proposition} The space $\\C$ of nonempty closed subsets of $\\TN$ is homeomorphic\nto the space $\\{0,1,2\\}^{\\N}$. Furthermore, the corresponding map from $\\B$ to $\\B^*$\nis a computable isomorphism.\n\\end{proposition}\n\nNext we consider probability measures $\\mu$ on the space $\\{0,1,2\\}^{\\N}$ and the\ncorresponding measures $\\mu^*$ on $\\C$ induced by $\\mu$.\n\nA probability measure on $\\{0,1,2\\}^{\\N}$ may be defined as in \\cite{RSta}\nfrom a function $d: \\{0,1,2\\}^* \\to [0,1]$ such that $d(\\lambda) = 1$ and,\nfor any $\\sigma \\in\\{0,1,2\\}^*$,\n\\[\nd(\\sigma) = \\sum_{i=0}^2 d(\\sigma \\fr i).\n\\]\nThe corresponding measure $\\mu_d$ on $\\{0,1,2\\}^{\\N}$ is then defined\nby letting $\\mu_d(I(\\sigma)) = d(\\sigma)$. Since the intervals\n$I(\\sigma)$ form a basis for the standard product topology on\n$\\{0,1,2\\}^{\\N}$, this will extend to a measure on all Borel sets. If\n$d$ is computable, then $\\mu_d$ is said to be computable. 
The measure\n$\\mu_d$ is said to be \\emph{nonatomic} or \\emph{continuous} if\n$\\mu_d(\\{x\\}) = 0$ for all $x \\in \\{0,1,2\\}^{\\N}$. We will say that\n$\\mu_d$ is \\emph{bounded} if there exist bounds $b,c \\in (0,1)$ such\nthat, for any $\\sigma \\in \\{0,1,2\\}^*$ and $i \\in \\{0,1,2\\}$,\n\\[\nb \\cdot d(\\sigma) < d(\\sigma \\fr i) < c \\cdot d(\\sigma).\n\\]\nIt is easy to see that any bounded measure must be continuous. We will\nsay that the measure $\\mu_d$ is \\emph{regular} if there exist constants\n$b_0,b_1,b_2$ with $b_0+b_1+b_2 = 1$ such that for all $\\sigma$ and\nfor $i \\leq 2$, $d(\\sigma \\fr i) = b_i d(\\sigma)$.\n\nNow let $\\mu_d^*$ be defined by \\[\n\\mu_d^*({\\mathcal X}) = \\mu_d(\\{x_Q: Q \\in {\\mathcal X}\\}).\n\\]\nLet us say that a measure $\\mu^*$ on $\\C$ is computable if the\nrestriction of $\\mu^*$ to $\\B^*$ is computable.\n\n\\begin{proposition}\nFor any computable $d$, the measure $\\mu^*_d$ is a computable measure on $\\C$.\n\\end{proposition}\n\n\\begin{proof}\nFor any tree $A \\subseteq \\{0,1\\}^{\\leq n}$, it is easy to see that\n\n\\[\nK \\in U_A \\iff \\rho_A \\sqsubset x_K,\n\\]\nso that $\\mu_d^*(U_A) = \\mu_d(I(\\rho_A))$.\n\\end{proof}\n\nWe are now ready to define capacity. 
For details on capacity and random set\nvariables, see \\cite{Ng06}.\n\n\\begin{definition}\nA \\emph{capacity} on $\\C$ is a function $\\T: \\C \\to [0,1]$ with\n$\\T(\\emptyset) =0$ such that\n\\begin{itemize}\n\\item[(i)] $\\T$ is monotone increasing, that is,\n\\[\nQ_1 \\subseteq Q_2\n \\longrightarrow \\T (Q_1) \\leq \\T(Q_2).\n\\]\n\\item[(ii)] $\\T$ has the \\emph{alternating of infinite order} property, that is,\nfor $n \\geq 2$ and any $Q_1, \\dots, Q_n \\in \\C$\n\\[\n\\T(\\bigcap_{i=1}^n Q_i) \\leq \\sum \\{(-1)^{|I|+1} \\T(\\bigcup_{i \\in\n I}Q_i): \\emptyset \\neq I \\subseteq \\{1,2,\\dots,n\\} \\}.\n\\]\n\\item[(iii)] If $Q = \\cap_n Q_n$ and $Q_{n+1} \\subseteq Q_n$ for all\n$n$, then $\\T(Q) = lim_{n \\to \\infty} \\T(Q_n)$.\n\\end{itemize}\n\\end{definition}\n\nWe will also assume, unless otherwise specified, that the capacity\nsatisfies $\\T(\\TN) = 1$.\n\nWe will say that a capacity $\\T$ is computable if it is computable on\nthe family of clopen sets, that is, if there is a computable function $F$ from\nthe Boolean algebra $\\B$ of clopen sets into $[0,1]$ such that\n$F(B) = \\T(B)$ for any $B \\in \\B$.\n\nDefine $\\T_{d}(Q)=\\mu_d^*(V(Q))$. That is, $\\T_{d}(Q)$ is the\nprobability that a randomly chosen closed set meets $Q$. Here is\nthe first result connecting measure and capacity.\n\n\\begin{theorem} \\label{th1}\nIf $\\mu^{*}_{d}$ is a (computable) probability measure on $\\C$, then\n$\\T_{d}$ is a (computable) capacity.\n\\end{theorem}\n\n\\begin{proof}\nCertainly $\\T_d(\\emptyset) = 0$. The\nalternating property follows by basic probability. For (iii), suppose\nthat $Q = \\cap_n Q_n$ is a decreasing intersection. Then by\ncompactness, $Q \\cap K \\neq \\emptyset$ if and only if $Q_n \\cap K \\neq\n\\emptyset$ for all $n$. Furthermore, $V(Q_{n+1}) \\subseteq V(Q_n)$ for\nall $n$. 
Thus\n\n\\[\n\\T_d(Q) = \\mu^*_d(V(Q)) = \\mu^*_d(\\cap_n V(Q_n)) = lim_n \\mu^*_d(V(Q_n)) =\nlim_n\\T_d(Q_n).\n\\]\nIf $d$ is computable, then $\\T_d$ may be computed as\nfollows. For any clopen set $I(\\sigma_1) \\cup \\dots \\cup\nI(\\sigma_k)$ where each $\\sigma_i \\in \\{0,1\\}^n$, we compute the\nprobability distribution for all trees of height $n$ and add the\nprobabilities of those trees which contain one of the\n$\\sigma_i$.\n\\end{proof}\n\nChoquet's Capacity Theorem states that any capacity $\\T$ is determined by a measure,\nthat is $\\T = \\T_d$ for some $d$. See \\cite{Ng06} for details. We now give an\neffective version of Choquet's theorem.\n\n\\begin{theorem} [Effective Choquet Capacity Theorem] \\label{th2}\nIf $\\T$ is a computable capacity, then there is a computable measure\n$\\mu_d^*$ on the space of closed sets such that $\\T =\n\\T_d$. \\end{theorem}\n\n\\begin{proof}\nGiven the values $\\T(U)$ for all clopen sets $I(\\sigma_1)\\cup \\dots\n\\cup I(\\sigma_k)$ where each $\\sigma_i \\in \\{0,1\\}^n$, there is in\nfact a unique probability measure $\\mu_d$ on these clopen sets such\nthat $\\T = \\T_d$ and this can be computed as follows.\n\nSuppose first that $\\T(I(i)) = a_i$ for $i < 2$ and note that each\n$a_i \\leq 1$ and $a_0 + a_1 \\geq 1$ by the alternating property. If\n$\\T = \\T_d$, then we must have $d((0)) + d((2)) = a_0$ and $d((1)) +\nd((2)) = a_1$ and also $d((0)) + d((1)) + d((2)) = 1$, so that $d((2))\n= a_0 + a_1 - 1$, $d((0)) = 1 - a_1$ and $d((1)) = 1 - a_0$. This will\nimply that $\\T(I(\\tau)) = \\T_d(I(\\tau))$ when $|\\tau| = 1$. Now suppose that\nwe have defined $d(\\tau)$ and that $\\tau$ is the code for a finite\ntree with elements $\\sigma_0,\\dots,\\sigma_n =\\sigma$, so that $d(\\tau\n\\fr i)$ gives the probability that $\\sigma$ will have one or both\nimmediate successors. We proceed as above. Let $\\T(I(\\sigma \\fr i)) =\na_i \\cdot \\T(I(\\sigma))$ for $i<2$. 
Then as above $d(\\tau \\fr 2) =\nd(\\tau) \\cdot (a_0 + a_1 - 1)$ and $d(\\tau \\fr i) = d(\\tau) \\cdot (1 -\na_i)$ for each $i$.\n\\end{proof}\n\n\n\\section{When is $\\T(Q)=0$?}\n\nIn this section, we compute the capacity of a random closed set under certain\nprobability measures. We construct a \\pz class with measure zero but\nwith positive capacity.\n\nWe say that $K \\in \\C$ is \\emph{$\\mu_d^*$-random} if $x_K$ is \\ml random\nwith respect to the measure $\\mu_d$. (See \\cite{RSta} for details.)\n\nOur next result shows that the $\\T_d$ capacity of a $\\mu^*_d$-random closed set\ndepends on the particular measure.\n\n\\begin{theorem}\\label{th4}\nLet $d$ be the uniform measure with $b_0 = b_1 = b > 0$ and $b_2 =\n1-2b > 0$ and let $\\ob = 1 - \\frac{\\sqrt 2}2$. Then\n\\begin{itemize}\n\\item[(a)] If $b \\geq \\ob$, then\nfor any $\\mu_d^*$-random closed set $R$, $\\T_d(R) = 0$.\n\\item[(b)] If $b < \\ob$, then there is a\n$\\mu_d^*$-random closed set $R$ with $\\T_d(R) > 0$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\nFix $d$ as described above so that $d(\\sigma \\fr i) = d(\\sigma) \\cdot\nb$ and let $\\mu^* = \\mu_d^*$. We will compute the probability,\ngiven two closed sets $Q$ and $K$, that $Q \\cap K$ is nonempty.\nHere we define the usual product measure\non the product space $\\C \\times \\C$ of pairs $(Q,K)$ of nonempty closed sets\nby letting $\\mu^2(U_A \\times U_B) = \\mu^*(U_A) \\cdot \\mu^*(U_B)$\nfor arbitrary subsets $A,B$ of $\\{0,1\\}^n$.\n\nLet\n\\[\nQ_n = \\bigcup \\{I(\\sigma): \\sigma \\in \\{0,1\\}^n\\ \\&\\ Q \\cap I(\\sigma)\\neq \\emptyset\\}\n\\]\nand similarly for $K_n$. Then $Q \\cap K \\neq \\emptyset$ if and only if\n$Q_n \\cap K_n \\neq \\emptyset$ for all $n$. Let $p_n$ be the\nprobability that $Q_n \\cap K_n \\neq \\emptyset$ for two arbitrary\nclosed sets $K$ and $Q$, relative to our measure $\\mu^*$. 
It is\nimmediate that $p_1 = 1 - 2b^2$, since $Q_1 \\cap K_1 = \\emptyset$ only\nwhen $Q_1 = I(i)$ and $K_1 = I(1-i)$. Next we will determine the\nquadratic function $f$ such that $p_{n+1} = f(p_n)$. There are 9\npossible cases for $Q_1$ and $K_1$, which break down into 4 distinct\ncases in the computation of $p_{n+1}$.\n\n\\medskip\n\n{\\bf Case (i)}: As we have seen, $Q_1 \\cap K_1 = \\emptyset$ with\nprobability $1 - 2b^2$.\n\n\\medskip\n\n{\\bf Case (ii)}: There are two chances that $Q_1 = K_1 = I(i)$, each\nwith probability $b^2$, so that $Q_{n+1} \\cap K_{n+1} \\neq \\emptyset$\nwith (relative) probability $p_n$.\n\n\\medskip\n\n{\\bf Case (iii)}: There are four chances where $Q_1 = \\TN$ and $K_1\n=I(i)$ or vice versa, each with probability $b \\cdot (1-2b)$, so that\nonce again $Q_{n+1} \\cap K_{n+1} \\neq\\emptyset$ with relative probability\n$p_n$.\n\n\\medskip\n\n{\\bf Case (iv)}: There is one chance that $Q_1 = K_1 = \\TN$, with\nprobability $(1 - 2b)^2$, in which case $Q_{n+1} \\cap K_{n+1} \\neq\n\\emptyset$ with relative probability $1 - (1 -p_n)^2 = 2p_n -\np_n^2$. This is because $Q_{n+1} \\cap K_{n+1} = \\emptyset$ if and only\nif $Q_{n+1} \\cap I(i) \\cap K_{n+1} = \\emptyset$ for both $i=0$\nand $i=1$.\n\n\\medskip\n\nAdding these cases together, we see that\n\\[\np_{n+1} = [2b^2 + 4b (1-2b)] p_n + (1 - 2b)^2 (2p_n - p_n^2) = (2b^2 -\n4b + 2) p_n - (1 - 4b + 4b^2) p_n^2.\n\\]\n\nNext we investigate the limit of the computable sequence $\\la p_n \\ra_{n \\in\n \\omega}$. Let $f(p) = (2b^2 - 4b + 2) p - (1 - 4b + 4b^2) p^2$.\nNote that $f(0) = 0$ and $f(1) = 1 - 2b^2 < 1$.\nIt is easy to see that the fixed points of $f$ are $p=0$ and $p =\n \\frac{2b^2 -4b+1}{(1-2b)^2}$. Note that since $b < \\frac 12$, the\n denominator is not zero and hence is always positive.\n\nNow consider the function $g(b) = 2b^2 - 4b +1 = 2 (b-1)^2 - 1$, which has\npositive root $\\ob$ and is decreasing for $0 \\leq b \\leq 1$. 
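Before the case analysis, the behaviour of the iteration $p_{n+1} = f(p_n)$ can be checked numerically. The following sketch is for intuition only and is not part of the proof; the sample values $b=0.3$ and $b=0.2$ are arbitrary points on either side of $\ob = 1 - \frac{\sqrt 2}2 \approx 0.293$.

```python
def limit_p(b, n=2000):
    """Iterate p_{n+1} = (2b^2 - 4b + 2) p_n - (1 - 2b)^2 p_n^2
    starting from p_0 = 1 (any two nonempty closed sets meet at
    level 0); returns an approximation of lim_n p_n."""
    p = 1.0
    for _ in range(n):
        p = (2*b*b - 4*b + 2) * p - (1 - 2*b)**2 * p * p
    return p
```

For $b \geq \ob$ the iterates approach $0$, while for $b < \ob$ they approach the positive fixed point $(2b^2-4b+1)/(1-2b)^2$, in line with the case analysis that follows.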
There are\nthree cases to consider when comparing $b$ with $\\ob$.\n\n\\medskip\n\n{\\bf Case 1}: If $b > \\ob$, then $g(b) < 0$ and hence the other fixed\npoint of $f$ is negative. Furthermore, $2b^2 - 4b +2 < 1$ so that\n$f(p) < p$ for all $p > 0$. It follows that the sequence $\\{p_n: n \\in\n\\N\\}$ is decreasing with lower bound zero and hence must converge to a\nfixed point of $f$ (since $p_{n+1} = f(p_n)$). Thus $lim_n p_n = 0$.\n\n{\\bf Case 2}: If $b = \\ob$, then $g(b) = 0$ and $f(p) = p - (4b-1)\np^2$, so that $p=0$ is the unique fixed point of $f$. Furthermore,\n$4b-1 = 3 - 2 \\sqrt2 > 0$, so again $f(p) < p$ for all $p$. It follows\nagain that $lim_n p_n = 0$.\n\nIn these two cases, we can define a \\ml test to prove that $\\T_d(R) =\n0$\nfor any $\\mu^*$-random closed set $R$.\n\nFor each $m, n \\in \\omega$, let\n\\[\nB_m = \\{(K,Q): K_m \\cap Q_m \\neq \\emptyset\\},\n\\]\nso that $\\mu^2(B_m) = p_m$ and let\n\\[\nA_{m,n} = \\{Q: \\mu^*(\\{K: K_m \\cap Q_m \\neq \\emptyset\\}) \\geq\n2^{-n}\\}.\n\\]\n\n\\begin{claim} \\label{c1} For each $m$ and $n$, $\\mu^*(A_{m,n}) \\leq 2^n \\cdot p_m$.\n\\end{claim}\n\n\\emph{Proof of Claim \\ref{c1}}. Define the Borel measurable function $F_m: \\C\n\\times \\C \\to \\{0,1\\}$ to be the characteristic function of\n$B_m$. Then\n\\[\np_m = \\mu^2(B_m) = \\int_{Q \\in \\C} \\int_{K \\in \\C} F_m(Q,K) dK dQ.\n\\]\nNow for fixed $Q$,\n\\[\n\\mu^*(\\{K: K_m \\cap Q_m \\neq \\emptyset\\}) = \\int_{K \\in \\C} F_m(Q,K) dK,\n\\]\nso that for $Q \\in A_{m,n}$, we have $\\int_{K \\in \\C} F_m(Q,K) dK \\geq 2^{-n}$.\nIt follows that\n\\begin{align*}\np_m = \\int_{Q \\in \\C} \\int_{K \\in \\C} F_m(Q,K) dK dQ &\\geq \\int_{Q \\in\n A_{m,n}} \\int_{K \\in \\C} F_m(Q,K) dK dQ \\\\\n&\\geq \\int_{Q \\in A_{m,n}}\n2^{-n} dQ = 2^{-n} \\mu^*(A_{m,n}).\n\\end{align*}\nMultiplying both sides by $2^n$ completes the proof of Claim \\ref{c1}. 
$\\qed$\n\n\\medskip\n\nSince the computable sequence $\\la p_n \\ra_{n \\in \\omega}$ converges to 0,\nthere must be a computable subsequence $m_0,m_1,\\dots$ such that\n$p_{m_n} < 2^{-2n-1}$ for all $n$. We can now define our \\ml test. Let\n\n\\[\nS_r = A_{m_r,r}\n\\]\nand let\n\\[\nV_n = \\cup_{r>n} S_r.\n\\]\nIt follows from Claim \\ref{c1} that\n\\[\n\\mu^*(S_n) \\leq 2^{n} p_{m_n} < 2^{n}\\, 2^{-2n-1} = 2^{-n-1}\n\\]\nand therefore\n\\[\n\\mu^*(V_n) \\leq \\sum_{r>n} 2^{-r-1} = 2^{-n-1} \\leq 2^{-n}.\n\\]\nNow suppose that $R$ is a random closed set. The sequence $\\la V_n\n\\ra_{n \\in \\omega}$ is a computable sequence of c.e. open sets with\nmeasure $\\leq 2^{-n}$, so that there is some $n$ such that $R \\notin\nV_n$. Thus for all $r > n$, $\\mu^*(\\{K: K_{m_r} \\cap R_{m_r} \\neq\n\\emptyset\\}) < 2^{-r}$ and it follows that\n\\[\n\\mu^*(\\{K: K \\cap R \\neq \\emptyset\\}) = lim_n \\mu^*(\\{K: K_{m_n} \\cap\nR_{m_n} \\neq \\emptyset\\}) = 0.\n\\]\nThus $\\T_d(R) = 0$, as desired.\n\n\\medskip\n\n{\\bf Case 3}:\nFinally, suppose that $b < \\ob$. Then $0 < 2b^2 - 4b +1 < 1$, so that\n$f$ has a positive fixed point $m_b = \\frac{2b^2 -4b+1}{(1-2b)^2}$.\nIt is clear that $f(p) > p$ for $0 < p < m_b$ and $f(p) < p$ for $m_b\n< p$. Furthermore, the function $f$ has its maximum at $p =\n[\\frac{1-b}{1-2b}]^2 > 1$, so that $f$ is monotone increasing on\n$[0,1]$ and hence $f(p) > f(m_b) = m_b$ whenever $p > m_b$. Observe that $p_0\n= 1 > m_b$ and hence the sequence $\\{p_n: n \\in \\N\\}$ is decreasing\nwith lower bound $m_b$. It follows that $lim_n p_n = m_b > 0$.\n\nNow $B = \\{(Q,K): Q \\cap K \\neq \\emptyset\\} = \\cap_n B_n$ is the\nintersection of a decreasing sequence of sets and hence $\\mu^2(B) =\nlim_n p_n = m_b >0$.\n\n\\begin{claim} \\label{c2} $\\mu^*(\\{Q: \\mu^*(\\{K: K \\cap Q \\neq\n\\emptyset\\}) > 0\\}) \\geq m_b$.\n\\end{claim}\n\n\\emph{Proof of Claim \\ref{c2}}. 
Let $B = \\{(K,Q): K \\cap Q \\neq \\emptyset$,\nlet $A = \\{Q: \\mu^*(\\{K: K \\cap Q \\neq \\emptyset\\}) > 0\\}$ and suppose\nthat $\\mu^*(A) < m_b$. As in the proof of Claim \\ref{c1}, we have\n\\[\nm_b = \\mu^2(B) = \\int_{Q \\in \\C} \\int_{K \\in \\C} F(Q,K) dK dQ.\n\\]\nFor $Q \\notin A$, we have $\\int_{K \\in Q} F(Q,K) dK = \\mu^*(\\{K: K\n\\cap Q \\neq \\emptyset\\}) = 0$, so that\n\\[\nm_b = \\int_{Q \\in A} \\int_{K \\in Q} F(Q,K) dK dQ \\leq \\int_{Q \\in A} dQ = \\mu^*(A),\n\\]\nwhich completes the proof of Claim \\ref{c2}. $\\qed$\n\n\\begin{claim} \\label{c3} $\\{Q: \\T_d(Q) \\geq m_b\\}$ has positive measure.\n\\end{claim}\n\n\\emph{Proof of Claim \\ref{c3}}. Recall that $T_d(Q) = \\mu^*(\\{K: Q \\cap K \\neq \\emptyset\\})$.\nLet $B = \\{(K,Q): K \\cap Q \\neq \\emptyset$, let $A = \\{Q: T_d(Q) \\geq m_b\\}$\nand suppose that $\\mu^*(A) = 0$. As\nin the proof of Claim \\ref{c1}, we have\n\\[\nm_b = \\mu^2(B) = \\int_{Q \\in \\C} T_d(Q) dQ.\n\\]\nSince $\\mu^*(A) = 0$, it follows that for any $B \\subseteq \\C$, we have\n\\[\n\\int_{Q \\in B} T_d(Q) dQ \\leq m_b \\mu^*(B).\n\\]\nFurthermore, $T_d(Q) < m_b$ for almost all $Q$, so there exists some $P$ with\n$T_d(P) < m_b - \\epsilon$ for some positive $\\epsilon$. This means that for some\n$n$, $\\mu^*(\\{K: P_n \\cap K_n \\neq \\emptyset\\}) < m_b - \\epsilon$. Then for \\emph{any}\nclosed set $Q$ with $Q_n = P_n$, we have $T_d(Q) < m_b - \\epsilon$. But\n$E = \\{Q: Q_n = P_n\\}$ has positive measure, say $\\delta > 0$. Then we have\n\n\\begin{align*}\nm_b = \\int_{Q \\in \\C} T_d(Q) dQ &= \\int_{Q \\in E} T_d(Q) dQ\\ +\\ \\int_{Q \\notin E} T_d(Q) dQ \\\\\n&\\leq \\ \\delta (m_b - \\epsilon) + (1- \\delta) m_b = m_b - \\epsilon \\delta < m_b.\n\\end{align*}\n\nThis contradiction demonstrates Claim \\ref{c3}. 
$\\qed$\n\nSince the set of $\\mu^*$-random closed sets has measure one, there must\nbe a random closed set $R$ such that $\\T_d(R) \\geq m_b$ and in\nparticular, there is a $\\mu^*$-random closed set with positive capacity.\n\\end{proof}\n\nThus for certain measures, there exists a random closed set with\nmeasure zero but with positive capacity. For the standard measure, a random closed set\nhas capacity zero.\n\n\\begin{corollary}\n\\label{th3}\nLet $d$ be the uniform measure with $b_0 = b_1 = b_2 = \\frac13$. Then\nfor any $\\mu_d^*$-random closed set $R$, $\\T_d(R) = 0$.\n\\end{corollary}\n\nA random closed set may not be effectively closed. But we can also\nconstruct an effectively closed set with measure zero and with\npositive capacity.\n\n\\begin{theorem} \\label{th5}\nFor the regular measure $\\mu_d$ with $b = b_1 = b_2$, there is a\n$\\Pi^0_1$ class $Q$ with Lebesgue measure zero and positive capacity\n$\\T_d(Q).$\n\\end{theorem}\n\n\\begin{proof}\nFirst let us compute the capacity of $X_n = \\{x: x(n) =0\\}$. For\n$n=0$, we have $\\T_d(X_0) = 1 - b$. That is, $Q$ meets $X_0$ if and\nonly if $Q_0 = I(0)$ (which occurs with probability $b$), or $Q_0 =\n\\TN$ (which occurs with probability $1 - 2b$). Now the probability\n$\\T_d(X_{n+1})$ that an arbitrary closed set $K$ meets $X_{n+1}$ may\nbe calculated in two distinct cases. As in the proof of Theorem\n\\ref{th3}, let\n\\[\nK_n = \\bigcup \\{I(\\sigma): \\sigma \\in \\{0,1\\}^n\\ \\&\\ K \\cap I(\\sigma)\\neq \\emptyset\\}.\n\\]\n\n{\\bf Case I}: If $K_0 = \\TN$, then $\\T_d(X_{n+1}) = 1 - (1- \\T_d(X_n))^2$.\n\n\\medskip\n\n{\\bf Case II}: If $K_0 = I((i))$ for some $i<2$, then $\\T_d(X_{n+1}) = \\T_d(X_n)$.\n\n\\medskip\n\nIt follows that\n\\begin{align*}\n\\T_d(X_{n+1}) &= 2b \\T_d(X_n) + (1-2b) (2 \\T_d(X_n)\n- (\\T_d(X_n))^2) \\\\\n&= (2-2b) \\T_d(X_n) - (1-2b) (\\T_d(X_n))^2.\n\\end{align*}\n\nNow consider the function $f(p) = (2-2b) p - (1-2b) p^2$, where $0 < b\n < \\frac 12$. 
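The recursion for $\T_d(X_{n+1})$ just derived can also be checked numerically. The following Python sketch (an illustration added here, not part of the proof; the function and parameter names are ours) iterates $f(p) = (2-2b)p - (1-2b)p^2$ from the initial value $\T_d(X_0) = 1-b$ and confirms that the iterates climb to the fixed point $1$ for sample values of $0 < b < \frac12$.

```python
# Iterate the capacity recursion T_d(X_{n+1}) = f(T_d(X_n)) with
# f(p) = (2 - 2b) p - (1 - 2b) p^2 and initial value T_d(X_0) = 1 - b.
def f(p, b):
    return (2 - 2 * b) * p - (1 - 2 * b) * p ** 2

def capacity_limit(b, n_iter=200):
    p = 1.0 - b  # T_d(X_0) = 1 - b
    for _ in range(n_iter):
        p = f(p, b)
    return p

for b in (0.1, 0.25, 0.4):
    print(b, capacity_limit(b))  # approaches 1 in each case
```

Since $f'(1) = 2b < 1$, the fixed point $1$ is attracting and the convergence is linear with rate $2b$.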
This function has the properties that $f(0) = 0$, $f(1) =\n 1$ and $f(p) > p$ for $0 < p < 1$. Since $\\T_d(X_{n+1})\n= f(\\T_d(X_n))$, it follows that $lim_n \\T_d(X_n) = 1$ and is the limit\nof a computable sequence.\n\nFor any $\\sigma = (n_0, n_1, \\dots, n_k)$, with $n_0 < n_1 < \\cdots\n0$. We have\nalso constructed for each such measure an effectively closed set with\npositive capacity and with Lebesgue measure zero.\n\n\nIn future work, we plan to extend our results to more general measures\nwhere for each string $\\sigma \\in T_Q$, the probability that $\\sigma\n\\fr i \\in T_Q$ depends on $\\sigma$. For example, such a measure on the\nspace of closed sets may be defined by making the probability that\nboth extensions $\\sigma \\fr i$ of a node $\\sigma \\in T$ belong to $T$\nequal to $1 - \\frac 2n$ and the probability that just one extension\nbelongs to $T$ equal to $\\frac 1n$, where $n = |\\sigma|$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\nThis work was supported in parts by the Siberian Branch of the Russian Academy of Science (project No. 0305-2017-0021), a Leverhulme Trust Research Project Grant RPG-2017-143 and by STFC (AWAKE-UK, Cockroft Institute core and UCL consolidated grants), United Kingdom; a Deutsche Forschungsgemeinschaft project grant PU 213-6\/1 ``Three-dimensional quasi-static simulations of beam self-modulation for plasma wakefield acceleration''; the National Research Foundation of Korea (Nos.\\ NRF-2015R1D1A1A01061074 and NRF-2016R1A5A1013277); the Portuguese FCT---Foundation for Science and Technology, through grants CERN\/FIS-TEC\/0032\/2017, PTDC-FIS-PLA-2940-2014, UID\/FIS\/50010\/2013 and SFRH\/IF\/01635\/2015; NSERC and CNRC for TRIUMF's contribution; and the Research Council of Norway. M. Wing acknowledges the support of the Alexander von Humboldt Stiftung and DESY, Hamburg. 
The AWAKE collaboration acknowledge the SPS team for their excellent proton delivery.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $W=\\left\\{ W_{t}:t\\geq 0\\right\\} $ be a standard Brownian motion\ninitialized at zero, set $W_{t}^{\\ast }=\\max_{s\\leq t}\\left\\vert\nW_{s}\\right\\vert $ and write $\\mathcal{F}_{t}^{W}=\\sigma \\left\\{ W_{u}:u\\leq\nt\\right\\} $, $t\\geq 0$. In \\cite{DS}, Davis and Suh proved the following\nresult.\n\n\\begin{theorem}[{\\protect\\cite[Th. 1.1]{DS}}]\n\\label{Th : DavisSuh}For every $p>0$ and every $c\\in \\mathbb{R}$, set\n\\begin{eqnarray}\nY_{t} &=&Y_{t}\\left( c,p\\right) =\\left( W_{t}^{\\ast }\\right) ^{p-2}\\left[\nW_{t}^{2}-t\\right] +c\\left( W_{t}^{\\ast }\\right) ^{p}\\text{, \\ \\ }t>0\\text{,}\n\\label{Y} \\\\\nY_{0}\\left( c,p\\right) &=&Y_{0}=0. \\notag\n\\end{eqnarray}\n\n\\begin{enumerate}\n\\item For every $p\\in (0,2]$, the process $Y_{t}$ is a $\\mathcal{F}_{t}^{W}$%\n-submartingale if, and only if, $c\\geq \\frac{2-p}{p}.$\n\n\\item For every $p\\in \\lbrack 2,+\\infty )$, the process $Y_{t}$ is a $%\n\\mathcal{F}_{t}^{W}$-supermartingale if, and only if, $c\\leq \\frac{2-p}{p}.$\n\\end{enumerate}\n\\end{theorem}\n\n\\bigskip\n\nAs pointed out in \\cite[p. 314]{DS} and in Section \\ref{S : BDG} below, part\n1 of Theorem \\ref{Th : DavisSuh} can be used to derive explicit expressions\nof the constants appearing in the Burkholder-Davis-Gundy (BDG) inequalities\n(see \\cite{Burk73}, or \\cite[Ch. IV, \\S 4]{RY}). The proof of Theorem \\ref%\n{Th : DavisSuh} given in \\cite{DS} uses several delicate estimates related\nto a class of Brownian hitting times: such an approach can be seen as a\nramification of the discrete-time techniques developed in \\cite{Burkhj}. 
In\nparticular, in \\cite{DS} it is observed that the submartingale (or\nsupermartingale) characterization of $Y_{t}\\left( c,p\\right) $ basically\nrelies on the properties of the random subset of $[0,+\\infty )$ composed of\nthe instants $t$ where $\\left\\vert W_{t}\\right\\vert =W_{t}^{\\ast }$. The aim\nof this note is to bring this last connection into further light, by\nproviding an elementary proof of Theorem \\ref{Th : DavisSuh}, based on a\ndirect application of It\\^{o} formula and on an appropriate version of the\nDoob-Meyer decomposition of submartingales. We will see that our techniques\nlead naturally to some substantial generalizations (see Theorem \\ref{T : DS}\nbelow).\n\n\\bigskip\n\nThe rest of the paper is organized as follows. In Section \\ref{S : Gen} we\nstate and prove a general result involving a class of stochastic processes\nthat are functions of a positive submartingale and of a monotone\ntransformation of its maximum. In Section \\ref{S : Proof} we focus once\nagain on the Brownian setting, and establish a generalization of Theorem \\ref%\n{Th : DavisSuh}. Section \\ref{S : BDG} deals with an application of the\nprevious results to (strong) BDG inequalities. Finally, in Section \\ref{S :\nBal} we provide an explicit connection with some classic \\textsl{balayage\nformulae} for continuous-time semimartingales (see e.g. \\cite{MY}).\n\nAll the objects appearing in the subsequent sections are defined on a common\nprobability space $\\left( \\Omega ,\\mathfrak{A},\\mathbb{P}\\right) $.\n\n\\section{A general result \\label{S : Gen}}\n\nThroughout this section, $\\mathcal{F}=\\left\\{ \\mathcal{F}_{t}:t\\geq\n0\\right\\} $ stands for a filtration satisfying the usual conditions. We will\nwrite $X=\\left\\{ X_{t}:t\\geq 0\\right\\} $ to indicate a \\textsl{continuous\\ }$%\n\\mathcal{F}_{t}$-\\textsl{submartingale }issued from zero and such that $%\n\\mathbb{P}\\left\\{ X_{t}\\geq 0\\text{, \\ }\\forall t\\right\\} =1$. 
We will\nsuppose that the Doob-Meyer decomposition of $X$ (see for instance \\cite[Th.\n1.4.14]{KS}) is of the type $X_{t}=M_{t}+A_{t}$, $t\\geq 0$, where $M$ is a\n\\textsl{square-integrable} continuous $\\mathcal{F}_{t}$-martingale issued\nfrom zero, and $A$ is an increasing (integrable) natural process. We assume\nthat $A_{0}=M_{0}=0$; the symbol $\\left\\langle M\\right\\rangle =\\left\\{\n\\left\\langle M\\right\\rangle _{t}:t\\geq 0\\right\\} $ stands for the quadratic\nvariation of $M$. We note $X_{t}^{\\ast }=\\max_{s\\leq t}X_{s}$, and we also\nsuppose that $\\mathbb{P}\\left\\{ X_{t}^{\\ast }>0\\right\\} =1$ for every $t>0$.\nThe following result is an extension of Theorem \\ref{Th : DavisSuh}.\n\n\\begin{theorem}\n\\label{Th : general}Fix $\\varepsilon >0$.\n\n\\begin{enumerate}\n\\item Suppose that the function $\\phi :(0,+\\infty )\\mapsto \\mathbb{R}$ is of\nclass $C^{1}$, non-increasing, and such that\n\\begin{equation}\n\\mathbb{E[}\\int_{\\varepsilon }^{T}\\phi \\left( X_{s}^{\\ast }\\right)\n^{2}d\\left\\langle M\\right\\rangle _{s}]<+\\infty \\text{,} \\label{int}\n\\end{equation}%\nfor every $T>\\varepsilon $. 
For every $x\\geq z>0$, we set\n\\begin{equation}\n\\Phi \\left( x,z\\right) =-\\int_{z}^{x}y\\phi ^{\\prime }\\left( y\\right) dy;\n\\label{PHI}\n\\end{equation}%\nthen, for every $\\alpha \\geq 1$ the process\n\\begin{equation}\nZ_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) =\\phi \\left( X_{t}^{\\ast\n}\\right) \\left( X_{t}-A_{t}\\right) +\\alpha \\Phi \\left( X_{t}^{\\ast\n},X_{\\varepsilon }^{\\ast }\\right) \\text{, \\ }t\\geq \\varepsilon \\text{,}\n\\label{subsup}\n\\end{equation}%\nis a $\\mathcal{F}_{t}$-submartingale on $[\\varepsilon ,+\\infty )$.\n\n\\item Suppose that the function $\\phi :(0,+\\infty )\\mapsto \\mathbb{R}$ is of\nclass $C^{1}$, non-decreasing and such that (\\ref{int}) holds for every $%\nT>\\varepsilon .$ Define $\\Phi \\left( \\cdot ,\\cdot \\right) $ according to (%\n\\ref{PHI}), and $Z_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) $ according\nto (\\ref{subsup}). Then, for every $\\alpha \\geq 1$ the process $%\nZ_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) $ is a $\\mathcal{F}_{t}$%\n-supermartingale on $[\\varepsilon ,+\\infty )$.\n\\end{enumerate}\n\\end{theorem}\n\n\\bigskip\n\n\\textbf{Remarks. }(i) Note that the function $\\phi \\left( y\\right) $ (and $%\n\\phi ^{\\prime }\\left( y\\right) $) need not be defined at $y=0$.\n\n(ii) In Section \\ref{S : Proof}, where we will focus on the Brownian\nsetting, we will exhibit specific examples where the condition $\\alpha \\geq\n1 $ is necessary and sufficient to have that the process $Z_{\\varepsilon\n}\\left( \\alpha ,\\phi ;t\\right) $ is a submartingale (when $\\phi $ is\nnon-increasing) or a supermartingale (when $\\phi $ is non-decreasing).\n\n\\bigskip\n\n\\textbf{Proof of Theorem \\ref{Th : general}. 
}(\\textsl{Proof of Point 1.})\nObserve first that, since $M_{t}=X_{t}-A_{t}$ is a continuous martingale, $%\nX^{\\ast }$ is non-decreasing and $\\phi $ is differentiable, then a standard\napplication of It\\^{o} formula gives that\n\\begin{eqnarray}\n\\phi \\left( X_{t}^{\\ast }\\right) \\left( X_{t}-A_{t}\\right) -\\phi \\left(\nX_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon }-A_{\\varepsilon\n}\\right) &=&\\phi \\left( X_{t}^{\\ast }\\right) M_{t}-\\phi \\left(\nX_{\\varepsilon }^{\\ast }\\right) M_{\\varepsilon } \\notag \\\\\n&=&\\int_{\\varepsilon }^{t}\\phi (X_{s}^{\\ast })dM_{s}+\\int_{\\varepsilon\n}^{t}\\left( X_{s}-A_{s}\\right) \\phi ^{\\prime }\\left( X_{s}^{\\ast }\\right)\ndX_{s}^{\\ast }. \\label{uyu}\n\\end{eqnarray}%\nThe assumptions in the statement imply that the application $\\widetilde{M}%\n_{\\varepsilon ,t}:=\\int_{\\varepsilon }^{t}\\phi (X_{s}^{\\ast })dM_{s}$ is a\ncontinuous square integrable $\\mathcal{F}_{t}$-martingale on $[\\varepsilon\n,+\\infty )$. Moreover, the continuity of $X$ implies that the support of the\nrandom measure $dX_{t}^{\\ast }$ (on $[0,+\\infty )$) is contained in the\n(random) set $\\left\\{ t\\geq 0:X_{t}=X_{t}^{\\ast }\\right\\} $, thus yielding\nthat\n\\begin{eqnarray*}\n\\int_{\\varepsilon }^{t}\\left( X_{s}-A_{s}\\right) \\phi ^{\\prime }\\left(\nX_{s}^{\\ast }\\right) dX_{s}^{\\ast } &=&\\int_{\\varepsilon }^{t}\\left(\nX_{s}^{\\ast }-A_{s}\\right) \\phi ^{\\prime }\\left( X_{s}^{\\ast }\\right)\ndX_{s}^{\\ast } \\\\\n&=&-\\int_{\\varepsilon }^{t}A_{s}\\phi ^{\\prime }\\left( X_{s}^{\\ast }\\right)\ndX_{s}^{\\ast }-\\Phi \\left( X_{t}^{\\ast },X_{\\varepsilon }^{\\ast }\\right) ,\n\\end{eqnarray*}%\nwhere $\\Phi $ is defined in (\\ref{PHI}). 
As a consequence,\n\\begin{equation}\nZ_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) =\\widetilde{M}_{\\varepsilon\n,t}+\\int_{\\varepsilon }^{t}(-A_{s}\\phi ^{\\prime }\\left( X_{s}^{\\ast }\\right)\n)dX_{s}^{\\ast }+\\left( \\alpha -1\\right) \\Phi \\left( X_{t}^{\\ast\n},X_{\\varepsilon }^{\\ast }\\right) \\text{.} \\label{a}\n\\end{equation}%\nNow observe that the application $t\\mapsto \\Phi \\left( X_{t}^{\\ast\n},X_{\\varepsilon }^{\\ast }\\right) $ is non-decreasing (a.s.-$\\mathbb{P)}$,\nand also that, by assumption, $-A_{s}\\phi ^{\\prime }\\left( X_{s}^{\\ast\n}\\right) \\geq 0$ for every $s>0$. This entails immediately that $%\nZ_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) $ is a $\\mathcal{F}_{t}$%\n-submartingale for every $\\alpha \\geq 1$.\n\n(\\textsl{Proof of Point 2.}) By using exactly the same line of reasoning as\nin the proof of Point 1., we obtain that\n\\begin{equation}\nZ_{\\varepsilon }\\left( \\phi ,\\alpha ;t\\right) =\\int_{\\varepsilon }^{t}\\phi\n(X_{s}^{\\ast })dM_{s}+\\int_{\\varepsilon }^{t}(-A_{s}\\phi ^{\\prime }\\left(\nX_{s}^{\\ast }\\right) )dX_{s}^{\\ast }+\\left( \\alpha -1\\right) \\Phi \\left(\nX_{t}^{\\ast },X_{\\varepsilon }^{\\ast }\\right) . \\label{aa}\n\\end{equation}%\nSince (\\ref{int}) is in order, we deduce that $t\\mapsto \\int_{\\varepsilon\n}^{t}\\phi (X_{s}^{\\ast })dM_{s}$ is a continuous (square-integrable) $%\n\\mathcal{F}_{t}$-martingale on $[\\varepsilon ,+\\infty )$. Moreover, $%\n-A_{s}\\phi ^{\\prime }\\left( X_{s}^{\\ast }\\right) \\leq 0$ for every $s>0$,\nand we also have that $t\\mapsto \\Phi \\left( X_{t}^{\\ast },X_{\\varepsilon\n}^{\\ast }\\right) $ is a.s. decreasing. This implies that $Z_{\\varepsilon\n}\\left( \\phi ,\\alpha ;t\\right) $ is a $\\mathcal{F}_{t}$-supermartingale for\nevery $\\alpha \\geq 1$. \\ \\ $\\blacksquare $\n\n\\bigskip\n\nThe next result allows to characterize the nature of the process $Z$\nappearing in (\\ref{subsup}) on the whole positive axis. 
Its proof can be\nimmediately deduced from formulae (\\ref{a}) (for Part 1) and (\\ref{aa}) (for\nPart 2).\n\n\\begin{proposition}\n\\label{P : epsilon}Let the assumptions and notation of this section prevail.\n\n\\begin{enumerate}\n\\item Consider a decreasing function $\\phi :(0,+\\infty )\\mapsto \\mathbb{R}$\nverifying the assumptions of Part 1 of Theorem \\ref{Th : general} and such\nthat\n\\begin{equation}\n\\Phi \\left( x,0\\right) :=-\\int_{0}^{x}y\\phi ^{\\prime }\\left( y\\right) dy%\n\\text{ is finite }\\forall x>0\\text{.} \\label{f1}\n\\end{equation}%\nAssume moreover that\n\\begin{equation}\n\\mathbb{E[}\\int_{0}^{T}\\phi \\left( X_{s}^{\\ast }\\right) ^{2}d\\left\\langle\nM\\right\\rangle _{s}]<+\\infty \\text{,} \\label{f2}\n\\end{equation}%\nand also\n\\begin{eqnarray}\n&&\\phi \\left( X_{\\varepsilon }^{\\ast }\\right) M_{\\varepsilon }=\\phi \\left(\nX_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon }-A_{\\varepsilon\n}\\right) \\text{ converges to zero in }L^{1}\\left( \\mathbb{P}\\right) ,\\text{\nas }\\varepsilon \\downarrow 0, \\label{f3} \\\\\n&&\\Phi \\left( X_{t}^{\\ast },0\\right) \\in L^{1}\\left( \\mathbb{P}\\right) \\text{%\n.} \\label{f4}\n\\end{eqnarray}%\nThen, for every $\\alpha \\geq 1$ the process\n\\begin{equation}\nZ\\left( \\phi ,\\alpha ;t\\right) =\\left\\{\n\\begin{array}{ll}\n0 & \\text{for }t=0 \\\\\n\\phi \\left( X_{t}^{\\ast }\\right) \\left( X_{t}-A_{t}\\right) +\\alpha \\Phi\n\\left( X_{t}^{\\ast },0\\right) & \\text{for }t>0%\n\\end{array}%\n\\right. , \\label{zz}\n\\end{equation}%\nis a $\\mathcal{F}_{t}$-submartingale.\n\n\\item Consider an increasing function $\\phi :(0,+\\infty )\\mapsto \\mathbb{R}$\nas in Part 2 of Theorem \\ref{Th : general} and such that assumptions (\\ref%\n{f1})--(\\ref{f4}) are satisfied. 
Then, for every $\\alpha \\geq 1$ the process\n$Z\\left( \\phi ,\\alpha ;t\\right) $ appearing in (\\ref{zz}) is a $\\mathcal{F}%\n_{t}$-supermartingale.\n\\end{enumerate}\n\\end{proposition}\n\n\\bigskip\n\n\\textbf{Remarks. }(i) A direct application of the Cauchy-Schwarz inequality\nshows that a sufficient condition to have (\\ref{f3}) is the following:%\n\\begin{equation}\n\\lim_{\\varepsilon \\downarrow 0}\\mathbb{E}\\left[ \\phi \\left( X_{\\varepsilon\n}^{\\ast }\\right) ^{2}\\right] \\times \\mathbb{E}\\left[ M_{\\varepsilon }^{2}%\n\\right] =\\lim_{\\varepsilon \\downarrow 0}\\mathbb{E}\\left[ \\phi \\left(\nX_{\\varepsilon }^{\\ast }\\right) ^{2}\\right] \\times \\mathbb{E}\\left[\n\\left\\langle M\\right\\rangle _{\\varepsilon }\\right] =0 \\label{cv0}\n\\end{equation}%\n(observe that $\\lim_{\\varepsilon \\downarrow 0}\\mathbb{E}\\left[\nM_{\\varepsilon }^{2}\\right] =0$, since $M_{0}=0$ by assumption). In other\nwords, when (\\ref{cv0}) is verified the quantity $\\mathbb{E}\\left[\nM_{\\varepsilon }^{2}\\right] $ `takes care' of the possible explosion of $%\n\\varepsilon \\mapsto \\mathbb{E}\\left[ \\phi \\left( X_{\\varepsilon }^{\\ast\n}\\right) ^{2}\\right] $ near zero.\n\n(ii) Let $\\phi $ be non-increasing or non-decreasing on $\\left( 0,+\\infty\n\\right) $, and suppose that $\\phi $ satisfies the assumptions of Theorem \\ref%\n{Th : general} and Proposition \\ref{P : epsilon}. Then, the process $%\nt\\mapsto \\int_{0}^{t}\\phi (X_{s}^{\\ast })dM_{s}$ is a continuous\nsquare-integrable $\\mathcal{F}_{t}^{W}$-martingale. 
Moreover, for any choice\nof $\\alpha \\in \\mathbb{R}$, the process $Z\\left( \\phi ,\\alpha ;t\\right) $, $%\nt\\geq 0$, defined in (\\ref{zz}) is a semimartingale, with canonical\ndecomposition given by\n\\begin{equation*}\nZ\\left( \\phi ,\\alpha ;t\\right) =\\int_{0}^{t}\\phi (X_{s}^{\\ast\n})dM_{s}+\\int_{0}^{t}\\left( (\\alpha -1)X_{s}^{\\ast }-A_{s}\\right) \\phi\n^{\\prime }\\left( X_{s}^{\\ast }\\right) dX_{s}^{\\ast }.\n\\end{equation*}\n\n\\section{A generalization of Theorem \\protect\\ref{Th : DavisSuh} \\label{S :\nProof}}\n\nThe forthcoming Theorem \\ref{T : DS} is a generalization of Theorem \\ref{Th\n: DavisSuh}. Recall the notation: $W$ is a standard Brownian motion issued\nfrom zero, $W_{t}^{\\ast }=\\max_{s\\leq t}\\left\\vert W_{s}\\right\\vert $ and $%\n\\mathcal{F}_{t}^{W}=\\sigma \\left\\{ W_{u}:u\\leq t\\right\\} $. We also set for\nevery $m\\geq 1$, every $p>0$ and every $c\\in \\mathbb{R}$:\n\\begin{eqnarray}\nJ_{t} &=&J_{t}\\left( m,c,p\\right) =\\left( W_{t}^{\\ast }\\right) ^{p-m}\\left[\n\\left\\vert W_{t}\\right\\vert ^{m}-A_{m,t}\\right] +c\\left( W_{t}^{\\ast\n}\\right) ^{p}\\text{, \\ \\ }t>0\\text{,} \\label{ggei} \\\\\nJ_{0}\\left( m,c,p\\right) &=&J_{0}=0, \\notag\n\\end{eqnarray}%\nwhere $t\\mapsto A_{m,t}$ is the increasing natural process in the Doob-Meyer\ndecomposition of the $\\mathcal{F}_{t}^{W}$-submartingale $t\\mapsto\n\\left\\vert W_{t}\\right\\vert ^{m}$. 
Of course, $J_{t}\\left( 2,c,p\\right)\n=Y_{t}\\left( c,p\\right) $, as defined in (\\ref{Y}).\n\n\\begin{theorem}\n\\label{T : DS}Under the above notation:\n\n\\begin{enumerate}\n\\item For every $p\\in (0,m]$, the process $J_{t}$ is a $\\mathcal{F}_{t}^{W}$%\n-submartingale if, and only if, $c\\geq \\frac{m-p}{p}.$\n\n\\item For every $p\\in \\lbrack m,+\\infty )$, the process $J_{t}$ is a $%\n\\mathcal{F}_{t}^{W}$-supermartingale if, and only if, $c\\leq \\frac{m-p}{p}.$\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nRecall first the following two facts: (i) $W_{t}^{\\ast }\\overset{law}{=}%\n\\sqrt{t}W_{1}^{\\ast }$ (by scaling), and (ii) there exists $\\eta >0$ such\nthat $\\mathbb{E}\\left[ \\exp (\\eta \\left( W_{1}^{\\ast }\\right) ^{-2})\\right]\n<+\\infty $ (this can be deduced e.g. from \\cite[Ch. II, Exercice 3.10]{RY}),\nso that the random variable $\\left( W_{1}^{\\ast }\\right) ^{-1}$ has finite\nmoments of all orders. Note also that the conclusions of both Point 1 and\nPoint 2 are trivial in the case where $p=m$. In the rest of the proof we\nwill therefore assume that $p\\neq m$.\n\nTo prove Point 1, we shall apply Theorem \\ref{Th : general} and Proposition %\n\\ref{P : epsilon} in the following framework: $X_{t}=\\left\\vert\nW_{t}\\right\\vert ^{m}$ and $\\phi \\left( x\\right) =x^{\\frac{p-m}{m}}=x^{\\frac{%\np}{m}-1}$. In this case, the martingale $M_{t}=\\left\\vert W_{t}\\right\\vert\n^{m}-A_{m,t}$ is such that $\\left\\langle M\\right\\rangle\n_{t}=m^{2}\\int_{0}^{t}W_{s}^{2m-2}ds$, $t\\geq 0$, and $\\Phi \\left(\nx,z\\right) =-\\int_{z}^{x}y\\phi ^{\\prime }\\left( y\\right) dy=-\\left( \\frac{p}{%\nm}-1\\right) \\int_{z}^{x}y^{\\frac{p}{m}-1}dy=\\frac{m-p}{p}\\left( x^{\\frac{p}{m%\n}}-z^{\\frac{p}{m}}\\right) $. 
Also, for every $T>\\varepsilon >0$%\n\\begin{eqnarray}\n\\mathbb{E[}\\int_{\\varepsilon }^{T}\\phi \\left( X_{s}^{\\ast }\\right)\n^{2}d\\left\\langle M\\right\\rangle _{s}] &=&m^{2}\\mathbb{E[}\\int_{\\varepsilon\n}^{T}\\left( W_{s}^{\\ast }\\right) ^{2p-2m}W_{s}^{2m-2}ds] \\notag \\\\\n&\\leq &m^{2}\\mathbb{E[}\\int_{\\varepsilon }^{T}\\left( W_{s}^{\\ast }\\right)\n^{2p-2}ds]=m^{2}\\mathbb{E[}\\left( W_{1}^{\\ast }\\right) ^{2p-2}\\mathbb{]}%\n\\int_{\\varepsilon }^{T}s^{\\frac{p}{2}-1}ds\\text{,} \\label{jjj}\n\\end{eqnarray}%\nso that $\\phi $ verifies (\\ref{int}) and (\\ref{f2}). Relations (\\ref{f1})\nand (\\ref{f4}) are trivially satisfied. To see that (\\ref{f3}) holds, use\nthe relations\n\\begin{eqnarray*}\n\\mathbb{E}\\left\\{ \\left\\vert \\phi \\left( X_{\\varepsilon }^{\\ast }\\right)\n\\left( X_{\\varepsilon }-A_{\\varepsilon }\\right) \\right\\vert \\right\\} &=&%\n\\mathbb{E\\{}\\left\\vert \\left( W_{\\varepsilon }^{\\ast }\\right) ^{p-m}\\left[\n\\left\\vert W_{\\varepsilon }\\right\\vert ^{m}-A_{m,\\varepsilon }\\right]\n\\right\\vert \\} \\\\\n&=&\\mathbb{E}\\left\\{ \\left\\vert \\left( W_{\\varepsilon }^{\\ast }\\right)\n^{p-m}M_{\\varepsilon }\\right\\vert \\right\\} \\leq \\mathbb{E}\\left\\{ \\left(\nW_{\\varepsilon }^{\\ast }\\right) ^{2p-2m}\\right\\} ^{1\/2}\\mathbb{E}\\left\\{\n\\left\\langle M\\right\\rangle _{\\varepsilon }\\right\\} ^{1\/2} \\\\\n&=&m\\mathbb{E}\\left\\{ W_{1}^{2m-2}\\right\\} ^{1\/2}\\mathbb{E}\\left\\{ \\left(\nW_{1}^{\\ast }\\right) ^{2p-2m}\\right\\} ^{1\/2}\\varepsilon ^{\\frac{p}{2}-\\frac{m%\n}{2}}\\left( \\int_{0}^{\\varepsilon }s^{m-1}ds\\right) ^{1\/2} \\\\\n&\\rightarrow &0\\text{, \\ as }\\varepsilon \\downarrow 0\\text{.}\n\\end{eqnarray*}%\nFrom Point 1 of Proposition \\ref{P : epsilon}, we therefore deduce that the\nprocess $Z\\left( t\\right) $ defined as $Z\\left( 0\\right) =0$ and, for $t>0$,%\n\\begin{eqnarray}\nZ\\left( t\\right) &=&\\phi \\left( \\left( W_{t}^{\\ast }\\right) 
^{m}\\right)\n\\left[ \\left\\vert W_{t}\\right\\vert ^{m}-A_{m,t}\\right] +\\alpha \\Phi \\left(\n\\left( W_{t}^{\\ast }\\right) ^{m},0\\right) \\label{g} \\\\\n&=&\\left( W_{t}^{\\ast }\\right) ^{p-m}\\left[ \\left\\vert W_{t}\\right\\vert\n^{m}-A_{m,t}\\right] +\\alpha \\frac{m-p}{p}\\left( W_{t}^{\\ast }\\right) ^{p}%\n\\text{,} \\label{gg}\n\\end{eqnarray}%\nis a $\\mathcal{F}_{t}^{W}$-submartingale for every $\\alpha \\geq 1$. By\nwriting $c=\\alpha \\frac{m-p}{p}$ in the previous expression, and by using\nthe fact that $\\frac{m-p}{p}\\geq 0$ by assumption, we deduce immediately\nthat $J_{t}\\left( m,c;p\\right) $ is a submartingale for every $c\\geq \\frac{%\nm-p}{p}$. Now suppose $c<\\frac{m-p}{p}$. One can use formulae (\\ref{a}), (%\n\\ref{g}) and (\\ref{gg}) to prove that\n\\begin{eqnarray*}\nJ_{t}\\left( m,c;p\\right) &=&\\int_{0}^{t}\\phi (X_{s}^{\\ast\n})dM_{s}+\\int_{0}^{t}[-A_{m,s}\\phi ^{\\prime }\\left( (W_{s}^{\\ast\n})^{m}\\right) ]d(W_{s}^{\\ast })^{m}+\\left( \\alpha -1\\right) \\Phi \\left(\n\\left( W_{t}^{\\ast }\\right) ^{m},0\\right) \\\\\n&=&\\int_{0}^{t}(W_{s}^{\\ast })^{p-m}dM_{s} \\\\\n&&+\\left( \\frac{p}{m}-1\\right) \\int_{0}^{t}[\\left( 1-\\alpha \\right) \\left(\nW_{s}^{\\ast }\\right) ^{m}-A_{m,s}](W_{s}^{\\ast })^{p-2m}d(W_{s}^{\\ast })^{m}%\n\\text{,}\n\\end{eqnarray*}%\nwhere $1-\\alpha =1-pc\/(m-p)>0$. Note that $\\int_{0}^{t}(W_{s}^{\\ast\n})^{p-m}dM_{s}$ is a square-integrable martingale, due to (\\ref{jjj}). 
To\nconclude that, in this case, $J_{t}\\left( m,c;p\\right) $ cannot be a\nsubmartingale (nor a supermartingale), it is sufficient to observe that (for\nevery $m\\geq 1$ and every $\\alpha <1$) the paths of the finite variation\nprocess\n\\begin{equation*}\nt\\mapsto \\int_{0}^{t}[\\left( 1-\\alpha \\right) \\left( W_{s}^{\\ast }\\right)\n^{m}-A_{m,s}](W_{s}^{\\ast })^{p-2m}d(W_{s}^{\\ast })^{m}\\text{ }\n\\end{equation*}%\nare neither non-decreasing nor non-increasing, with $\\mathbb{P}$-probability\none.\n\nTo prove Point 2, one can argue in exactly the same way, and use Point 2 of\nProposition \\ref{P : epsilon} to obtain that the process $Z\\left( t\\right) $\ndefined as $Z\\left( 0\\right) =0$ and, for $t>0$,%\n\\begin{equation*}\nZ\\left( t\\right) =\\left( W_{t}^{\\ast }\\right) ^{p-m}\\left[ \\left\\vert\nW_{t}\\right\\vert ^{m}-A_{m,t}\\right] +\\alpha \\frac{m-p}{p}\\left( W_{t}^{\\ast\n}\\right) ^{p}\n\\end{equation*}%\nis a $\\mathcal{F}_{t}^{W}$-supermartingale for every $\\alpha \\geq 1$. By\nwriting once again $c=\\alpha \\frac{m-p}{p}$ in the previous expression, and\nsince $\\frac{m-p}{p}\\leq 0$, we immediately deduce that $J_{t}\\left(\nm,c;p\\right) $ is a supermartingale for every $c\\leq \\frac{m-p}{p}$. One can\nshow that $J_{t}\\left( m,c;p\\right) $ cannot be a supermartingale, whenever $%\nc>\\frac{m-p}{p}$, by using arguments analogous to those displayed in the\nlast part of the proof of Point 1.\n\\end{proof}\n\n\\bigskip\n\nThe following result is obtained by specializing Theorem \\ref{T : DS} to the\ncase $m=1$ (via Tanaka's formula).\n\n\\begin{corollary}\n\\label{C : Tanaka}Denote by $\\left\\{ \\ell _{t}:t\\geq 0\\right\\} $ the local\ntime at zero of the Brownian motion $W$. 
Then, the process\n\\begin{eqnarray*}\nJ_{t}\\left( p\\right) &=&\\left( W_{t}^{\\ast }\\right) ^{p-1}\\left[ \\left\\vert\nW_{t}\\right\\vert -\\ell _{t}\\right] +c\\left( W_{t}^{\\ast }\\right) ^{p}\\text{,\n}t>0\\text{,} \\\\\nJ_{0}\\left( p\\right) &=&0\\text{,}\n\\end{eqnarray*}%\nis such that: (i) for $p\\in (0,1]$, $J_{t}\\left( p\\right) $ is a $\\mathcal{F}%\n_{t}^{W}$-submartingale if, and only if, $c\\geq 1\/p-1$, and (ii) for $p\\in\n\\lbrack 1,+\\infty )$, $J_{t}\\left( p\\right) $ is a $\\mathcal{F}_{t}^{W}$%\n-supermartingale if, and only if, $c\\leq 1\/p-1.$\n\\end{corollary}\n\n\\section{Burkholder-Davis-Gundy (BDG) inequalities\\label{S : BDG}}\n\nWe reproduce an argument taken from \\cite[p. 314]{DS}, showing that the\nfirst part of Theorem \\ref{T : DS} can be used to obtain a strong version of\nthe BDG inequalities (see e.g. \\cite[Ch. IV, \\S 4]{RY}).\n\nFix $p\\in (0,2)$ and define $c=(2-p)\/p=2\/p-1.$ Since, according to the first\npart of Theorem \\ref{T : DS}, $Y_{t}=Y_{t}(c,p)$ is a $\\mathcal{F}_{t}^{W}$%\n-submartingale starting from zero, we deduce that, for every bounded and\nstrictly positive $\\mathcal{F}_{t}^{W}$-stopping time $\\tau $, one has $%\n\\mathbb{E}(Y_{\\tau })\\geq 0$. In particular, this yields%\n\\begin{equation}\n\\mathbb{E}\\left( \\frac{\\tau }{(W_{\\tau }^{\\ast })^{2-p}}\\right) \\leq \\frac{2%\n}{p}\\mathbb{E}\\left( (W_{\\tau }^{\\ast })^{p}\\right) \\text{.} \\label{zio}\n\\end{equation}%\nFormula (\\ref{zio}), combined with an appropriate use of H\\\"{o}lder's\ninequality, entails finally that, for $0<p<2$,\n\\begin{equation}\n\\mathbb{E}\\left( \\tau ^{p\/2}\\right) \\leq \\left( \\frac{2}{p}\\right) ^{p\/2}%\n\\mathbb{E}\\left( (W_{\\tau }^{\\ast })^{p}\\right) \\text{.} \\label{gi}\n\\end{equation}%\nAs observed in \\cite{DS}, the constant $\\sqrt{3}$ appearing in\nBurkholder's inequality \\cite{Burk73} should be compared with (\\ref{gi}) for $p=1$, which yields\nthe relation\n $\\mathbb{E}\\left( \\tau ^{1\/2}\\right) \\leq \\sqrt{2}\\mathbb{E}%\n(W_{\\tau }^{\\ast })$ for every stopping time $\\tau $. 
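Inequality (\ref{zio}) is easy to probe by simulation. The following Python sketch (a numerical illustration added here, not part of the original argument) estimates both sides of (\ref{zio}) for $p=1$ and the deterministic stopping time $\tau = 1$, i.e. it checks $\mathbb{E}(1/W_{1}^{\ast}) \leq 2\,\mathbb{E}(W_{1}^{\ast})$; the path count and step size are arbitrary choices of ours.

```python
# Monte Carlo check of E[ tau / (W*_tau)^{2-p} ] <= (2/p) E[ (W*_tau)^p ]
# for p = 1 and the deterministic time tau = 1, with W* the running
# maximum of |W| along an Euler discretization of Brownian motion.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, T = 50_000, 500, 1.0
dt = T / n_steps

W = np.zeros(n_paths)
Wstar = np.zeros(n_paths)
for _ in range(n_steps):
    W += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    np.maximum(Wstar, np.abs(W), out=Wstar)

lhs = np.mean(1.0 / Wstar)   # E[ tau / (W*)^{2-p} ] with tau = 1, p = 1
rhs = 2.0 * np.mean(Wstar)   # (2/p) E[ (W*)^p ]
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}")
```

With these parameters the left-hand side comes out well below the right-hand side: $\mathbb{E}(W_{1}^{\ast}) = \sqrt{\pi/2}\approx 1.25$, so the right-hand side is about $2.5$, consistent also with $\mathbb{E}(\tau^{1/2}) = 1 \leq \sqrt{2}\,\mathbb{E}(W_{\tau}^{\ast})$.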
This shows\nthat in such a framework, involving uniquely continuous\nmartingales, the constant $\\sqrt{3}$ is no longer optimal.\n\n\\section{Balayage \\label{S : Bal}}\n\nKeep the assumptions and notation of Section \\ref{S : Gen} and Theorem \\ref%\n{Th : general}, fix $\\varepsilon >0$ and consider a finite variation\nfunction $\\psi :(0,+\\infty )\\mapsto \\mathbb{R}.$ In this section we focus on\nthe formula%\n\\begin{equation}\n\\psi \\left( X_{t}^{\\ast }\\right) \\left( X_{t}-A_{t}\\right) -\\psi \\left(\nX_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon }-A_{\\varepsilon\n}\\right) =\\int_{\\varepsilon }^{t}\\psi (X_{s}^{\\ast })d\\left(\nX_{s}-A_{s}\\right) +\\int_{\\varepsilon }^{t}\\left( X_{s}^{\\ast }-A_{s}\\right)\nd\\psi (X_{s}^{\\ast }), \\label{d}\n\\end{equation}%\nwhere $\\varepsilon >0$. Note that by choosing $\\psi =\\phi $ in (\\ref{d}),\nwhere $\\phi \\in C^{1}$ is monotone, one recovers formula (\\ref{uyu}), which\nwas crucial in the proof of Theorem \\ref{Th : general}. We shall now show that (%\n\\ref{d}) can be obtained by means of the \\textsl{balayage formulae} proved\nin \\cite{MY}.\\textit{\\ }\n\nTo see this, let $U=\\left\\{ U_{t}:t\\geq 0\\right\\} $ be a continuous $%\n\\mathcal{F}_{t}$-semimartingale issued from zero. For every $t>0$ we define\nthe random time\n\\begin{equation}\n\\sigma \\left( t\\right) =\\sup \\left\\{ s<t:U_{s}=0\\right\\} . \\label{tauti}\n\\end{equation}%\nThe following statement is a version of the balayage formula proved in \\cite{MY}.\n\n\\begin{proposition}\nConsider a process $K=\\left\\{ K_{t}:t>0\\right\\} $ such that the\nrestriction $\\left\\{ K_{t}:t\\geq \\varepsilon \\right\\} $ is locally bounded\nand $\\mathcal{F}_{t}$-predictable on $\\left[ \\varepsilon ,+\\infty \\right) $\nfor every $\\varepsilon >0$. 
Then, for every fixed $\\varepsilon >0$, the\nprocess $K_{\\sigma \\left( t\\right) }$, $t\\geq \\varepsilon $, is locally\nbounded and $\\mathcal{F}_{t}$-predictable, and moreover%\n\\begin{equation}\nU_{t}K_{\\sigma \\left( t\\right) }=U_{\\varepsilon }K_{\\sigma \\left(\n\\varepsilon \\right) }+\\int_{\\varepsilon }^{t}K_{\\sigma \\left( s\\right)\n}dU_{s}\\text{.} \\label{balabala}\n\\end{equation}\n\\end{proposition}\n\n\\bigskip\n\nTo see how (\\ref{d}) can be recovered from (\\ref{balabala}), set $%\nU_{t}=X_{t}-X_{t}^{\\ast }$ and $K_{t}=\\psi \\left( X_{t}^{\\ast }\\right) $.\nThen, $K_{t}=K_{\\sigma \\left( t\\right) }=\\psi (X_{\\sigma \\left( t\\right)\n}^{\\ast })$ by construction, where $\\sigma \\left( t\\right) $ is defined as\nin (\\ref{tauti}). As a consequence, (\\ref{balabala}) gives\n\\begin{equation*}\n\\psi \\left( X_{t}^{\\ast }\\right) \\left( X_{t}-X_{t}^{\\ast }\\right) =\\psi\n\\left( X_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon\n}-X_{\\varepsilon }^{\\ast }\\right) +\\int_{\\varepsilon }^{t}\\psi (X_{s}^{\\ast\n})d\\left( X_{s}-X_{s}^{\\ast }\\right) \\text{.}\n\\end{equation*}%\nFinally, a standard integration by parts applied to $\\psi \\left( X_{t}^{\\ast\n}\\right) \\left( X_{t}^{\\ast }-A_{t}\\right) $ yields\n\\begin{eqnarray*}\n\\psi \\left( X_{t}^{\\ast }\\right) \\left( X_{t}-A_{t}\\right) &=&\\psi \\left(\nX_{t}^{\\ast }\\right) \\left( X_{t}-X_{t}^{\\ast }\\right) +\\psi \\left(\nX_{t}^{\\ast }\\right) \\left( X_{t}^{\\ast }-A_{t}\\right) \\\\\n&=&\\psi \\left( X_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon\n}-X_{\\varepsilon }^{\\ast }\\right) +\\int_{\\varepsilon }^{t}\\psi (X_{s}^{\\ast\n})d\\left( X_{s}-X_{s}^{\\ast }\\right) \\\\\n&&+\\psi \\left( X_{\\varepsilon }^{\\ast }\\right) \\left( X_{\\varepsilon }^{\\ast\n}-A_{\\varepsilon }\\right) +\\int_{\\varepsilon }^{t}\\psi (X_{s}^{\\ast\n})d\\left( X_{s}^{\\ast }-A_{s}\\right) \\\\\n&&+\\int_{\\varepsilon }^{t}\\left( X_{s}^{\\ast 
}-A_{s}\\right) d\\psi \\left(\nX_{s}^{\\ast }\\right) \\text{,}\n\\end{eqnarray*}%\nwhich is equivalent to (\\ref{d}).\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzceun b/data_all_eng_slimpj/shuffled/split2/finalzzceun new file mode 100644 index 0000000000000000000000000000000000000000..32ce03a6ba73d4a91ffab09792347631aa5a794b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzceun @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:intro}\n\nParticle acceleration and heating are common phenomena in plasma. For example, magnetic reconnection converts free magnetic energy into particle energy and thus leads to the acceleration of particles. In this case, the energy source is the free magnetic energy stored in the unstable anti-parallel magnetic field. Another example is turbulence, which results from the nonlinear interaction of structures and waves, and transports energy from large-scale to small-scale flow energy (or in other words, large eddies to small eddies). The energy is eventually dissipated as heat at very small scales, which leads to the heating of plasma. Magnetic reconnection and turbulence are frequently invoked to explain space observations of particle acceleration and the heating of solar corona and solar wind (e.g., Ref \\cite{Parker1988, Matthaeus1999, Zank2018}).\n\nIt is well known that the typical space plasma has a very long collisional mean free path (e.g., on the order of au, or astronomical unit, in the solar wind near Earth) and thus is considered as collisionless \\cite{Marsch1991}. A consequent question is then how to characterize the dissipation process in a collisionless system. An increase in temperature corresponds to plasma heating, but it does not necessarily represent physical dissipation as it might be adiabatic. 
The pressure-strain interaction was recently suggested as a proxy of dissipation \\cite{Yang2017}, but its validity needs to be further tested.\n\nClassical statistical physics tells us that the dissipation process can be described by entropy, which is a nondecreasing function for an isolated macroscopic system according to the second law of thermodynamics. In kinetic theory, the entropy is often related to Boltzmann's H-function \\cite{Wolf2009, Guo2017}. Boltzmann's H-theorem states that the H-function is nonincreasing, and that it decreases only in the presence of a collision operator. In this sense, entropy is conserved in collisionless plasma. The conservation of entropy has been verified by recent kinetic simulations \\cite{Liang2019}.\n\nIn fluid dynamics, the entropy is frequently defined as $S \\sim \\log(p\/\\rho^{\\gamma})$, where $p$ is the pressure, $\\rho$ is the density, and $\\gamma$ is the adiabatic index (ratio of specific heats). This expression, which we refer to as fluid entropy, has the advantage that it is easy to calculate. However, as we will show in this paper, the fluid entropy may not be a good proxy for entropy in plasma. In Section \\ref{sec:entropy}, we show a derivation of the fluid entropy as well as its gyrotropic extension from the thermodynamic perspective. From basic kinetic theory, it is straightforward to derive evolution equations for the fluid entropy, as shown in Section \\ref{sec:equation}. These equations suggest several mechanisms that are responsible for the change of fluid entropy. The relation between entropy and the previously discussed pressure-strain interaction \\cite{Yang2017} is thus established in this study. In addition, we discuss the role of heat flux, which has been neglected in most previous studies. In Section \\ref{sec:simulation}, we demonstrate our results with a collisionless particle-in-cell simulation. 
Finally, a brief discussion and conclusions are found in Section \\ref{sec:discussion}.\n\n\n\\section{Kinetic and fluid entropy}\\label{sec:entropy}\n\\subsection{Kinetic entropy}\n\nKinetic entropy or Boltzmann entropy is discussed in most textbooks of statistical physics \\citep[e.g.,][]{Reif2009}. It is defined through the number of microstates $\\Omega$,\n\\begin{equation}\\label{eq:entro-boltz}\n S = k \\log \\Omega,\n\\end{equation}\nwhere $k$ is the Boltzmann constant. Equation \\eqref{eq:entro-boltz} is the most general and precise definition of entropy. The second law of thermodynamics states that the entropy thus defined is a nondecreasing quantity for an isolated system.\n\nIn kinetic theory, another useful quantity is the ``Boltzmann H'' function:\n\\begin{equation}\\label{eq:boltz-H}\n H = \\int f \\log f d^3x d^3v,\n\\end{equation}\nwhere $f$ is the particle distribution function. The Boltzmann H-theorem states that the H function of a system is nonincreasing, i.e.,\n\\[ \\frac{d}{dt} H \\le 0. \\]\nThus the function $-k H$ can be interpreted as the entropy. Indeed, one can show that the decrease of the H function is due to collisions \\citep[e.g.,][]{Zank2014}. A recent study shows that the Boltzmann entropy or H function, when carefully evaluated in collisionless PIC simulations, is approximately conserved \\citep{Liang2019}.\n\n\\subsection{Entropy of ideal gas}\n\nIn fluid dynamics and MHD, the entropy is frequently defined as\n\\begin{equation}\\label{eq:entro-iso}\n S = \\frac{3}{2} \\log\\frac{p}{\\rho^{\\gamma}},\n\\end{equation}\nwhere, as usual, $p$ is the pressure, $\\rho$ is the mass density, and $\\gamma$ is the adiabatic index. What is often overlooked is that this definition of entropy stems from the ideal gas equation of state, and may not be valid for magnetized plasmas. For completeness, we show a brief derivation of the expression \\eqref{eq:entro-iso}. 
We start with the thermodynamic relation\n\\begin{equation}\\label{eq:thermo}\n dS = \\frac{dQ}{T} = \\frac{dE}{T} + \\frac{dW}{T},\n\\end{equation}\nwhere $T$ is the temperature, $dQ$ is the heat, $dE$ is the change in energy, and $dW$ is the work. For an ideal gas, we have the equation of state\n\\begin{equation}\\label{eq:eos-ideal}\n pV = NkT,~ \\textrm{or}~~ p = nkT,\n\\end{equation}\nwhere $N$ is the number of particles, and $n$ is the number density. The important property of an ideal gas is that the internal energy $E$ is a function of temperature only (not of the volume), with the relation\n\\begin{equation}\\label{eq:dE-ideal}\n E(T) = \\frac{3}{2}NkT = C_v T \\quad\\Rightarrow\\quad \\frac{dE}{T} = \\frac{C_v dT}{T} = \\frac{3}{2}Nk \\frac{dT}{T},\n\\end{equation}\nwhere $C_v$ is the specific heat at constant volume. Now we use $p$ and $n$ as independent variables and assume a fixed number of particles, so that\n\\[ dT = \\frac{1}{nk} dp - \\frac{p}{n^2 k} dn;\\quad dV = -\\frac{N}{n^2} dn. \\]\nUsing the above relations and $dW = pdV$, we find\n\\[ dS = C_v \\frac{dp}{p} - (C_v + Nk) \\frac{dn}{n}. \\]\nNotice that $C_v + Nk = C_p$, the specific heat at constant pressure, and $\\gamma = C_p\/C_v$ is the adiabatic index, so the above relation becomes\n\\begin{equation}\\label{eq:dS}\n dS = C_v \\kl{\\frac{dp}{p} - \\frac{C_p}{C_v} \\frac{dn}{n}} = C_v d\\log \\frac{p}{n^{\\gamma}} = \\frac{3}{2} Nk d\\log \\frac{p}{n^{\\gamma}}.\n\\end{equation}\nThus, Equation \\eqref{eq:entro-iso} is recovered up to constant factors.\n\n\\subsection{Entropy of an anisotropic fluid}\n\nOne of the key assumptions made in the derivation of the fluid entropy \\eqref{eq:entro-iso} is the ideal gas equation of state \\eqref{eq:eos-ideal} and \\eqref{eq:dE-ideal}. Now we go one step further and consider an anisotropic fluid. 
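Before doing so, the isotropic result \eqref{eq:dS} lends itself to a quick numerical sanity check. The following Python sketch (not part of the derivation; arbitrary units, monatomic $\gamma = 5/3$) confirms that $S = (3/2)\log(p/n^{5/3})$ is constant along an adiabatic path $p \propto n^{\gamma}$ and increases when heat is added:

```python
import numpy as np

# Sanity check of the fluid entropy: along an adiabatic path p ~ n^gamma
# the entropy S = (3/2) log(p / n^gamma) stays constant, while heating at
# any fixed density raises it.  All quantities are in arbitrary units.
gamma = 5.0 / 3.0

def entropy(p, n):
    return 1.5 * np.log(p / n**gamma)

n = np.linspace(1.0, 4.0, 200)           # compress the gas by a factor of 4
p_adiabatic = 2.0 * n**gamma             # p V^gamma = const along the path
S = entropy(p_adiabatic, n)
assert np.allclose(S, S[0])              # adiabatic: no entropy change

p_heated = 1.5 * p_adiabatic             # add heat at every density
assert np.all(entropy(p_heated, n) > S)  # entropy increases
print("ideal-gas entropy check passed")
```

The same check fails, as expected, if a wrong adiabatic index is used in either the path or the entropy expression.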
The simplest plasma model with anisotropic pressure is the CGL model due to Chew, Goldberger \\& Low \\citep{Chew1956}, where the pressure tensor is assumed to be of the form\n\\begin{equation}\\label{eq:pressure-CGL}\n \\boldsymbol{P} = p_{\\parallel}\\boldsymbol{bb} + p_{\\perp}(\\boldsymbol{I} - \\boldsymbol{bb}).\n\\end{equation}\nHere, $\\boldsymbol{P}$ is the pressure tensor, $\\boldsymbol{I}$ represents the identity tensor, and $\\boldsymbol{b}$ represents the magnetic field unit vector. The scalar pressure is related to the parallel and perpendicular pressure according to $p = (p_{\\parallel} + 2p_{\\perp}) \/ 3$. When the parallel and perpendicular pressures are equal, $p_{\\parallel} = p_{\\perp} = p$, the pressure tensor reduces to the isotropic form $\\boldsymbol{P} = p\\boldsymbol{I}$. A comprehensive review of the CGL model is found in Ref \\cite{Hunana2019}. On neglecting the heat flux, the CGL pressure follows the evolution equations\n\\begin{equation}\\label{eq:ppar}\n \\frac{dp_{\\parallel}}{dt} + p_{\\parallel}\\nabla\\cdot\\boldsymbol{u} + 2p_{\\parallel}\\boldsymbol{bb}:\\nabla\\boldsymbol{u} = 0;\n\\end{equation}\n\\begin{equation}\\label{eq:pper}\n \\frac{dp_{\\perp}}{dt} + 2p_{\\perp}\\nabla\\cdot\\boldsymbol{u} - p_{\\perp}\\boldsymbol{bb}:\\nabla\\boldsymbol{u} = 0.\n\\end{equation}\nSee also equations \\eqref{eq:pressure-para} and \\eqref{eq:pressure-perp} below for more general cases. Here, $d\/dt = \\partial\/\\partial t + \\boldsymbol{u}\\cdot\\nabla$ denotes the convective derivative. The classical CGL model imposes the same induction equation as ideal MHD, with the motional electric field only,\n\\[ \\partt{\\boldsymbol{B}} = \\nabla\\times(\\boldsymbol{u}\\times\\boldsymbol{B}). \\]\nNote that $\\boldsymbol{u}$ here is the flow velocity, which can be approximated by the ion flow velocity $\\boldsymbol{u}_i$. 
This simplifies the term $\\boldsymbol{bb}:\\nabla\\boldsymbol{u}$ to\n\\[ \\boldsymbol{bb}:\\nabla\\boldsymbol{u} = \\frac{1}{B}\\boldsymbol{b}\\cdot(\\boldsymbol{B}\\cdot\\nabla\\boldsymbol{u}) = \\frac{1}{B}\\frac{dB}{dt} + \\nabla\\cdot\\boldsymbol{u}. \\]\nFrom here, assuming cold electrons, it is straightforward to derive the classical CGL double adiabatic equations\n\\begin{equation}\\label{eq:double-adiabatic}\n \\frac{d}{dt} \\kl{\\frac{p_{\\parallel} B^2}{\\rho^3}} = 0;\\quad \\frac{d}{dt} \\kl{\\frac{p_{\\perp}}{\\rho B}} = 0.\n\\end{equation}\nIt should be emphasized that the pressure here consists only of ion contributions, and $\\rho = \\rho_i + \\rho_e$ is the total mass density, although the pressure equations \\eqref{eq:ppar} and \\eqref{eq:pper} are valid for all species (after assuming CGL pressure and neglecting heat flux).\n\nTo find the entropy for a CGL fluid, we again use the thermodynamic relation \\eqref{eq:thermo}. The equation of state is similar to the ideal gas case, but we need to consider the parallel and perpendicular pressures\/temperatures separately:\n\\[ p_{\\parallel} = nk T_{\\parallel};\\quad p_{\\perp} = nk T_{\\perp}. \\]\nThe internal energy now relates to both parallel and perpendicular temperature as\n\\begin{equation}\\label{eq:Eparper}\n E = E_{\\parallel} + E_{\\perp} = \\frac{1}{2}Nk T_{\\parallel} + Nk T_{\\perp} = \\frac{3}{2} NkT.\n\\end{equation}\nAlthough the relation is the same as the ideal gas case when expressed in terms of the isotropic temperature $T = (T_{\\parallel} + 2T_{\\perp})\/3$, we argue that a single temperature is no longer sufficient to characterize the macrostate of thermodynamic equilibrium---both $T_{\\parallel}$ and $T_{\\perp}$ are needed. 
Therefore, we separate the entropy equation into parallel and perpendicular components:\n\\[ dS_{\\parallel} = \\frac{dE_{\\parallel}}{T_{\\parallel}} + \\frac{dW_{\\parallel}}{T_{\\parallel}};\\quad dS_{\\perp} = \\frac{dE_{\\perp}}{T_{\\perp}} + \\frac{dW_{\\perp}}{T_{\\perp}}. \\]\nIn calculating the parallel and perpendicular work, one should take into account the exchange between parallel and perpendicular energy in addition to the $pdV$ contribution. Using the CGL pressure equations \\eqref{eq:ppar} and \\eqref{eq:pper} and the induction equation, the work is modified as\n\\begin{equation}\\label{eq:work-parper}\n dW_{\\parallel} = p_{\\parallel} dV + p_{\\parallel} V\\frac{dB}{B};\\quad dW_{\\perp} = -p_{\\perp} V\\frac{dB}{B},\n\\end{equation}\nso that the combination of the two yields $dW = pdV$ in the limit $p_{\\parallel} = p_{\\perp}$. The relation between Equations \\eqref{eq:ppar}, \\eqref{eq:pper} and \\eqref{eq:work-parper} is shown explicitly in the Appendix. Physically, Equation \\eqref{eq:work-parper} stems from the conservation of magnetic flux and particles (see Ref \\cite{Guo2017} for a more physical derivation). Then, following the same procedure as in the ideal gas case, we obtain the parallel and perpendicular entropies as\n\\begin{equation}\\label{eq:dS-parallel}\n dS_{\\parallel} = C_{v\\parallel} \\kl{\\frac{dp_{\\parallel}}{p_{\\parallel}} - \\frac{C_{p\\parallel}}{C_{v\\parallel}}\\frac{dn}{n} + 2\\frac{dB}{B}} = C_{v\\parallel} d \\log \\frac{p_{\\parallel} B^2}{n^{\\gamma_{\\parallel}}};\n\\end{equation}\n\\begin{equation}\\label{eq:dS-perp}\n dS_{\\perp} = C_{v\\perp} \\kl{\\frac{dp_{\\perp}}{p_{\\perp}} - \\frac{dn}{n} - \\frac{dB}{B}} = C_{v\\perp} d \\log \\frac{p_{\\perp}}{n B}.\n\\end{equation}\nHere, we have $C_{v\\parallel} = Nk\/2$; $C_{v\\perp} = Nk$, and thus $\\gamma_{\\parallel} = 3$; $\\gamma_{\\perp} = 2$ following Equation \\eqref{eq:Eparper}. 
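These expressions can be sanity-checked numerically. The Python sketch below (arbitrary units; the histories $n(t)$ and $B(t)$ are made up for illustration and are not taken from this paper) confirms that when $p_{\parallel}$ and $p_{\perp}$ obey the double-adiabatic invariants \eqref{eq:double-adiabatic}, both $S_{\parallel}$ and $S_{\perp}$ stay constant, whereas the isotropic fluid entropy \eqref{eq:entro-iso} generally does not:

```python
import numpy as np

# Numerical sketch: enforce the double-adiabatic invariants
# p_par B^2 / n^3 = const and p_perp / (n B) = const along arbitrary
# (positive) density and field-strength histories, then evaluate the
# parallel, perpendicular, and isotropic entropies (k = N = 1).
t = np.linspace(0.0, 1.0, 300)
n = 1.0 + 0.8 * np.sin(3.0 * t)          # arbitrary compression history
B = 1.0 + 0.5 * np.cos(2.0 * t)          # arbitrary field-strength history

p_par = 0.7 * n**3 / B**2                # from p_par B^2 / n^3 = const
p_perp = 1.3 * n * B                     # from p_perp / (n B) = const

S_par = 0.5 * np.log(p_par * B**2 / n**3)      # C_v_par = 1/2
S_perp = 1.0 * np.log(p_perp / (n * B))        # C_v_perp = 1
S_cgl = np.log(p_par**(1/3) * p_perp**(2/3) / n**(5/3))

assert np.allclose(S_par, S_par[0])      # parallel entropy conserved
assert np.allclose(S_perp, S_perp[0])    # perpendicular entropy conserved
assert np.allclose(S_cgl, S_cgl[0])      # total CGL entropy conserved

# The isotropic entropy is NOT conserved under the same evolution:
p = (p_par + 2.0 * p_perp) / 3.0
S_iso = 1.5 * np.log(p / n**(5/3))
assert not np.allclose(S_iso, S_iso[0])
print("CGL entropy check passed")
```

The last assertion illustrates that a double-adiabatic change, though reversible, shows up as a spurious entropy change when only the isotropic expression is monitored.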
Note that because the $pdV$ term only appears in the parallel work in Equation \\eqref{eq:work-parper}, the perpendicular adiabatic index $\\gamma_{\\perp}$ is not used in the entropy equation. The sum of Equations \\eqref{eq:dS-parallel} and \\eqref{eq:dS-perp} gives the total entropy:\n\\begin{equation}\\label{eq:dS-total}\n dS = dS_{\\parallel} + dS_{\\perp} = C_v d\\log \\frac{p_{\\parallel}^{1\/3}p_{\\perp}^{2\/3}}{n^{5\/3}},\n\\end{equation}\nor\n\\begin{equation}\\label{eq:entro-CGL}\n S = C_v \\log \\frac{p_{\\parallel}^{1\/3}p_{\\perp}^{2\/3}}{n^{5\/3}}.\n\\end{equation}\nThus we recover the early result obtained by Abraham-Shrauner (see Equations (9)--(11) of Ref \\cite{Abraham-Shrauner1967}). Under classical CGL assumptions, the parallel, perpendicular, and total entropy are all conserved quantities following the double adiabatic equations \\eqref{eq:double-adiabatic}. However, the regular isotropic fluid entropy \\eqref{eq:entro-iso} may not be conserved in this scenario using the normal definition $p = (p_{\\parallel} + 2p_{\\perp})\/3$. Reference \\cite{Abraham-Shrauner1967} also points out that the CGL fluid entropy \\eqref{eq:entro-CGL} is consistent with the Boltzmann H-function \\eqref{eq:boltz-H} evaluated with a bi-Maxwellian distribution\n\\[ f = n\\kl{\\frac{m}{2\\pi kT_{\\parallel}}}^{1\/2}\\kl{\\frac{m}{2\\pi kT_{\\perp}}} \\exp\\klm{-\\frac{mv_{\\parallel}^2}{2kT_{\\parallel}} - \\frac{mv_{\\perp}^2}{2kT_{\\perp}}}, \\]\nwhich can be easily verified.\n\n\n\\section{Plasma energization, heating, and the entropy equations}\\label{sec:equation}\n\\subsection{Plasma energization and heating}\n\nWe now use the fluid equations to study plasma energization and heating. The derivation of moment equations from the Vlasov equation is standard and can be found in most plasma textbooks (see e.g., Ref \\cite{Zank2014, Gurnett2005}), so it is not shown here. 
The relevant results are the equations of bulk kinetic energy and thermal energy, as shown in Ref \\cite{Yang2017},\n\\begin{equation}\\label{eq:bulk}\n \\partt{\\varepsilon^{k}} + \\nabla\\cdot (\\varepsilon^{k} \\boldsymbol{u}) = -\\nabla\\cdot(\\boldsymbol{P}\\cdot \\boldsymbol{u}) + \\boldsymbol{P}:\\nabla\\boldsymbol{u} + nq\\boldsymbol{E}\\cdot \\boldsymbol{u};\n\\end{equation}\n\\begin{equation}\\label{eq:thermal}\n \\partt{\\varepsilon^{th}} + \\nabla\\cdot (\\varepsilon^{th} \\boldsymbol{u}) = -\\boldsymbol{P}:\\nabla\\boldsymbol{u} - \\nabla\\cdot \\boldsymbol{q};\n\\end{equation}\n\\begin{equation}\\label{eq:total}\n \\partt{\\varepsilon} + \\nabla\\cdot (\\varepsilon \\boldsymbol{u}) = -\\nabla\\cdot(\\boldsymbol{P}\\cdot \\boldsymbol{u}) - \\nabla\\cdot \\boldsymbol{q} + nq\\boldsymbol{E}\\cdot \\boldsymbol{u}.\n\\end{equation}\nHere, $\\boldsymbol{E}$ is the electric field and $q$ is the charge of the considered particle species (we drop the species subscript for simplicity). Other quantities in the equations are defined as moments: $n$ and $\\boldsymbol{u}$ are the number density and bulk flow velocity as usual; $\\varepsilon = (1\/2)\\int f mv^2 d^3v$ is the total energy density of a plasma species with $f(v)$ the velocity distribution function; $\\varepsilon^{k} = (1\/2)nmu^2$ is the bulk flow energy density; and $\\varepsilon^{th} = (1\/2)\\int fm(\\boldsymbol{v}-\\boldsymbol{u})^2 d^3v$ is the thermal energy density. The pressure tensor is defined as the second moment $\\boldsymbol{P} = \\int fm(\\boldsymbol{v} - \\boldsymbol{u})(\\boldsymbol{v} - \\boldsymbol{u}) d^3v$, and the heat flux vector is $\\boldsymbol{q} = (1\/2)\\int fm|\\boldsymbol{v} - \\boldsymbol{u}|^2(\\boldsymbol{v} - \\boldsymbol{u}) d^3v$ (not to be confused with the charge $q$).\n\nFor an isolated system, one may argue that the divergence terms in Equations \\eqref{eq:bulk}--\\eqref{eq:total}, when integrated over space, vanish because of Gauss' theorem. 
Therefore, one arrives at the conclusion that the total plasma energization is due to the work done by the electric field, and the pressure tensor term $\\boldsymbol{P}:\\nabla\\boldsymbol{u}$ acts as a channel that connects the bulk flow energy to the thermal energy \\cite{Yang2017}. In other words, it is the work done by the pressure tensor that contributes to the plasma heating.\n\nTo further illustrate the physical meaning of the term $\\boldsymbol{P}:\\nabla\\boldsymbol{u}$, the full pressure tensor is decomposed as\n\\begin{equation}\\label{eq:pressure-full}\n \\boldsymbol{P} = p_{\\parallel}\\boldsymbol{bb} + p_{\\perp}(\\boldsymbol{I} - \\boldsymbol{bb}) + \\boldsymbol{P}^{n} = \\boldsymbol{P}^{c} + \\boldsymbol{P}^{n}.\n\\end{equation}\nHere, we recognize the familiar CGL pressure tensor $\\boldsymbol{P}^{c}$ as in Equation \\eqref{eq:pressure-CGL} and the remaining part may be called nongyrotropic pressure $\\boldsymbol{P}^{n}$ (see Ref \\cite{Hunana2019} for a detailed discussion). Upon decomposing the pressure tensor, it is easily verified that\n\\begin{equation}\\label{eq:pressure-work}\n \\boldsymbol{P}:\\nabla\\boldsymbol{u} = p\\nabla\\cdot \\boldsymbol{u} + \\frac{1}{2}(p_{\\parallel} - p_{\\perp})\\boldsymbol{bb}:\\boldsymbol{\\sigma} + \\boldsymbol{P}^{n}:\\nabla\\boldsymbol{u},\n\\end{equation}\nwhere we define the shear tensor $\\boldsymbol{\\sigma}$ as\n\\begin{equation}\\label{eq:shear}\n \\sigma_{ij} = \\partpart{u_i}{x_j} + \\partpart{u_j}{x_i} - \\frac{2}{3}\\delta_{ij}\\nabla\\cdot \\boldsymbol{u}.\n\\end{equation}\nThe shear tensor is equivalent to the traceless strain-rate tensor ($D_{ij}$ of Ref \\cite{Yang2017}) within a factor of 2. The first term of Equation \\eqref{eq:pressure-work} represents the energization due to fluid compression, the second term represents the shear or viscous energization, and the third term is due to nongyrotropic effects \\cite{Li2018, Du2018}. 
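The decomposition \eqref{eq:pressure-work} is a purely algebraic identity, so it can be checked directly with random tensors. A minimal Python sketch (all input values arbitrary; not part of the paper's analysis):

```python
import numpy as np

# Check P:grad(u) = p div(u) + (1/2)(p_par - p_perp) bb:sigma + P^n:grad(u)
# for random values of every ingredient (Eq. pressure-work).
rng = np.random.default_rng(0)

b = rng.normal(size=3)
b /= np.linalg.norm(b)                   # magnetic-field unit vector
bb = np.outer(b, b)
I = np.eye(3)

p_par, p_perp = 2.0, 1.2
A = rng.normal(size=(3, 3))
Pn = A + A.T
Pn -= I * np.trace(Pn) / 3.0             # nongyrotropic part: symmetric, traceless

G = rng.normal(size=(3, 3))              # velocity gradient, G_ij = du_i/dx_j
div_u = np.trace(G)
sigma = G + G.T - (2.0 / 3.0) * I * div_u    # shear tensor, Eq. (shear)

P = p_par * bb + p_perp * (I - bb) + Pn  # full pressure tensor
p = (p_par + 2.0 * p_perp) / 3.0

lhs = np.sum(P * G)                      # P : grad(u)
rhs = p * div_u + 0.5 * (p_par - p_perp) * np.sum(bb * sigma) + np.sum(Pn * G)
assert np.isclose(lhs, rhs)

# Symmetry and tracelessness of P^n also give P^n:grad(u) = (1/2) P^n:sigma:
assert np.isclose(np.sum(Pn * G), 0.5 * np.sum(Pn * sigma))
print("pressure-work decomposition verified")
```

Because both sides are linear in the velocity gradient, passing for one random $\nabla\boldsymbol{u}$ with nonzero trace and antisymmetric part already exercises every term.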
Since the nongyrotropic pressure $\\boldsymbol{P}^{n}$ is traceless, the last term can also be written as\n\\[ \\boldsymbol{P}^{n}:\\nabla\\boldsymbol{u} = \\frac{1}{2}\\boldsymbol{P}^{n}:\\boldsymbol{\\sigma}. \\]\nAlternatively, if one considers an isotropic pressure instead, the pressure tensor can be decomposed as\n\\begin{equation}\n \\boldsymbol{P} = p\\boldsymbol{I} + \\boldsymbol{\\Pi},\n\\end{equation}\nwhere $\\boldsymbol{\\Pi}$ is the anisotropic pressure tensor (or deviatoric pressure tensor). This decomposition is adopted by, e.g., Yang et al. \\cite{Yang2017}. Under this decomposition, the pressure tensor work becomes\n\\begin{equation}\n \\boldsymbol{P}:\\nabla\\boldsymbol{u} = p(\\nabla\\cdot \\boldsymbol{u}) + \\frac{1}{2}\\boldsymbol{\\Pi}:\\boldsymbol{\\sigma},\n\\end{equation}\nwhich is Equation (12) in Ref \\cite{Yang2017}. Here, the fluid compression effect is the same as in Equation \\eqref{eq:pressure-work}, and we find that the pressure-strain interaction (sometimes called ``Pi-D'') term is equivalent to the sum of the shear and nongyrotropic energization.\n\n\\subsection{The entropy evolution equations}\n\nThe discussion above suggests that the plasma heating in an isolated collisionless system is due to the pressure tensor work, which includes compression, shear, and nongyrotropic energization. Another related question is how the heating process relates to actual dissipation and entropy change. In general, an isolated system should conserve entropy when collisions are completely ignored. This is indeed the case for the Boltzmann entropy, as illustrated nicely in Ref \\cite{Liang2019}. On the other hand, the fluid entropy \\eqref{eq:entro-iso} or \\eqref{eq:entro-CGL} need not be a conserved quantity. Similar to the derivation of the energy equations, it is straightforward to obtain an entropy evolution equation from moments of the Vlasov equation. For an isotropic fluid, we need to use the continuity equation and the equation for the scalar pressure. 
The scalar pressure is related to the thermal energy as $\\varepsilon^{th} = (3\/2)p$, which is easy to see from their definitions (note also that this is consistent with the ideal gas equation of state \\eqref{eq:eos-ideal} and \\eqref{eq:dE-ideal}). The result is\n\\begin{eqnarray}\n \\frac{dS}{dt} = \\partt{S} + \\boldsymbol{u}\\cdot\\nabla S &=& -\\frac{\\boldsymbol{\\Pi}:\\boldsymbol{\\sigma}}{2p} - \\frac{\\nabla\\cdot\\boldsymbol{q}}{p} \\nonumber\\\\\n &=& -\\frac{p_{\\parallel} - p_{\\perp}}{2p}\\boldsymbol{bb}:\\boldsymbol{\\sigma} - \\frac{\\boldsymbol{P}^{n}:\\boldsymbol{\\sigma}}{2p} - \\frac{\\nabla\\cdot\\boldsymbol{q}}{p}, \\label{eq:dSdt-iso}\n\\end{eqnarray}\nwhere we let\n\\[ S = \\frac{3}{2}\\log \\frac{p}{n^{5\/3}}. \\]\nNote that we have introduced the convective derivative $d\/dt$ on the left side of the equation. Equation \\eqref{eq:dSdt-iso} suggests that the entropy change of a fluid element is due to the pressure-strain interaction (or shear\/nongyrotropic energization) and the heat flux. Comparing Equation \\eqref{eq:dSdt-iso} with \\eqref{eq:thermal}, one notices that the compression energization $p\\nabla\\cdot\\boldsymbol{u}$ is absent in the entropy equation, which indicates that the adiabatic compression effect does not contribute to the change of entropy. This is because the compression term is absorbed into the convective derivative. However, compression could be important in plasma heating, as it is part of the pressure tensor work, and this has been verified by previous simulation studies \\cite{Li2018, Du2018}. Another key difference is that there are no divergence terms in the entropy equation. 
This means that in principle, all terms in Equation \\eqref{eq:dSdt-iso} could contribute to the entropy change, even if integrated over the volume of an isolated system.\n\nFor cases of anisotropic pressure, we need the equations for both parallel and perpendicular pressure, and they can be derived from the Vlasov equation in a similar way as the scalar pressure equation. Here we take the result from Ref \\cite{Hunana2019} (after slight rearrangements),\n\\begin{equation}\\label{eq:pressure-para}\n \\partt{p_{\\parallel}} + \\nabla\\cdot(p_{\\parallel}\\boldsymbol{u}) + 2p_{\\parallel}\\boldsymbol{bb}:\\nabla\\boldsymbol{u} + 2\\boldsymbol{bb}:(\\boldsymbol{P}^{n}\\cdot\\nabla\\boldsymbol{u}) - \\frac{d}{dt}(\\boldsymbol{bb}):\\boldsymbol{P}^{n} + \\boldsymbol{bb}:(\\nabla\\cdot\\boldsymbol{Q}) = 0;\n\\end{equation}\n\\begin{eqnarray}\n \\partt{p_{\\perp}} + \\nabla\\cdot(p_{\\perp}\\boldsymbol{u}) + p_{\\perp}(\\boldsymbol{I} - \\boldsymbol{bb}):\\nabla\\boldsymbol{u} + \\boldsymbol{P}^{n}:\\nabla\\boldsymbol{u} - \\boldsymbol{bb}:(\\boldsymbol{P}^{n}\\cdot\\nabla\\boldsymbol{u}) + \\frac{1}{2}\\frac{d}{dt}(\\boldsymbol{bb}):\\boldsymbol{P}^{n} \\nonumber\\\\\n + \\nabla\\cdot\\boldsymbol{q} - \\frac{1}{2}\\boldsymbol{bb}:(\\nabla\\cdot\\boldsymbol{Q}) = 0. \\label{eq:pressure-perp}\n\\end{eqnarray}\nWe have introduced the symmetric third rank heat flux tensor $\\boldsymbol{Q} = \\int fm(\\boldsymbol{v}-\\boldsymbol{u})(\\boldsymbol{v}-\\boldsymbol{u})(\\boldsymbol{v}-\\boldsymbol{u}) d^3v$, and it is related to the heat flux vector via a trace operation\n\\[ \\boldsymbol{q} = \\frac{1}{2}Tr(\\boldsymbol{Q}),\\,\\, \\textrm{or}\\,\\, q_i = \\frac{1}{2}Q_{ijj} = \\frac{1}{2}Q_{jij} = \\frac{1}{2}Q_{jji}. \\]\nIt can be easily verified that the sum of Equations \\eqref{eq:pressure-para} and \\eqref{eq:pressure-perp} yields the thermal energy equation \\eqref{eq:thermal}. 
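The trace relation between $\boldsymbol{q}$ and $\boldsymbol{Q}$ just stated can be illustrated with a small Monte Carlo moment calculation (a sketch with an arbitrary anisotropic velocity distribution and unit particle mass, unrelated to the simulation presented below):

```python
import numpy as np

# Build q and Q as moments of sampled particle velocities and check
# q_i = (1/2) Q_ijj for every index contraction (m = 1; the distribution
# is an arbitrary anisotropic, drifting Gaussian).
rng = np.random.default_rng(1)
v = rng.normal(loc=[0.5, 0.0, -0.2], scale=[1.0, 2.0, 1.5], size=(100_000, 3))

u = v.mean(axis=0)                       # bulk flow velocity
w = v - u                                # peculiar velocities
Q = np.einsum('pi,pj,pk->ijk', w, w, w) / len(w)  # third-rank moment <w w w>
q = 0.5 * np.mean(np.sum(w * w, axis=1)[:, None] * w, axis=0)  # (1/2)<w^2 w>

# Q is symmetric, so all three contractions return the same vector, 2q:
assert np.allclose(q, 0.5 * np.einsum('ijj->i', Q))
assert np.allclose(q, 0.5 * np.einsum('jij->i', Q))
assert np.allclose(q, 0.5 * np.einsum('jji->i', Q))
print("heat-flux trace relation verified")
```

The agreement is exact (to rounding) because both sides are computed from the same samples; the relation is an identity of the moments, not a statistical statement.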
Using the parallel and perpendicular pressure equations, we can derive the evolution equation for the CGL entropy \\eqref{eq:entro-CGL},\n\\begin{equation}\n \\frac{dS^c}{dt} = -\\frac{\\boldsymbol{P}^{n}:\\boldsymbol{\\sigma}}{2p_{\\perp}} - \\frac{\\nabla\\cdot\\boldsymbol{q}}{p_{\\perp}} - \\kl{\\frac{1}{p_{\\parallel}} - \\frac{1}{p_{\\perp}}}\\klm{\\boldsymbol{bb}:(\\boldsymbol{P}^{n}\\cdot\\nabla\\boldsymbol{u}) - \\frac{1}{2}\\frac{d}{dt}(\\boldsymbol{bb}):\\boldsymbol{P}^{n} + \\frac{1}{2}\\boldsymbol{bb}:(\\nabla\\cdot\\boldsymbol{Q})}, \\label{eq:dSdt-CGL}\n\\end{equation}\nwhere\n\\[ S^c = \\frac{3}{2}\\log \\frac{p_{\\parallel}^{1\/3}p_{\\perp}^{2\/3}}{n^{5\/3}}. \\]\nOn comparing to Equation \\eqref{eq:dSdt-iso}, it is clear that the increase of the CGL entropy is determined by higher-order moments, namely the nongyrotropic pressure and the full heat flux tensor, while the increase of the normal fluid entropy depends also on the gyrotropic pressure and the heat flux vector. In the limit $p_{\\parallel} = p_{\\perp}$, \\eqref{eq:dSdt-CGL} and \\eqref{eq:dSdt-iso} reduce to the same equation. Equation \\eqref{eq:dSdt-CGL} is fully general for nonrelativistic collisionless plasmas regardless of the closure.\n\nAs is evident from Equation \\eqref{eq:dSdt-iso}, for ideal MHD, where all third-order or higher moments are cut off and only the isotropic scalar pressure is included, the entropy of a fluid element is conserved. Similarly, for a CGL plasma that satisfies the double adiabatic equations, the CGL fluid entropy is conserved according to Equation \\eqref{eq:dSdt-CGL}. We caution that, strictly speaking, these conclusions may not be valid in a weak sense. For example, it is well known that shock waves provide dissipation and increase the fluid entropy across discontinuous surfaces even within the framework of ideal-gas hydrodynamics or ideal MHD (e.g., Ref \\cite{Kennel1985}). 
However, these simple fluid models cannot provide the physical mechanisms that explain the entropy increase, and kinetic theory has to be used.\n\n\n\\section{Simulation results}\\label{sec:simulation}\n\nIn this section, we evaluate the fluid entropy in fully kinetic collisionless particle-in-cell (PIC) simulations, using the code VPIC. It has been recently reported that the kinetic Boltzmann entropy is conserved in collisionless PIC simulations \\cite{Liang2019}. The fluid entropy \\eqref{eq:entro-iso} has also been evaluated in previous PIC simulations such as Ref \\cite{Birn2006}, but the detailed mechanisms underlying the increase in fluid entropy remain obscure. We will analyze the change of fluid entropy based on the results presented in the previous section.\n\nWe set up a 2D PIC simulation of reconnecting current sheets in a force-free configuration with magnetic field\n\\[ B_x = B_0 \\tanh\\klm{\\frac{d}{\\pi L}\\sin\\kl{\\frac{\\pi z}{d}}} = B_0 \\tanh\\klm{\\frac{\\alpha}{\\pi}\\sin\\kl{\\frac{\\pi z}{\\alpha L}}}; \\]\n\\[ B_y = B_0 \\sqrt{1 + \\kl{\\frac{B_g}{B_0}}^2 - \\tanh^2\\klm{\\frac{d}{\\pi L}\\sin\\kl{\\frac{\\pi z}{d}}}};\\quad B_z = 0. \\]\nHere, $B_0$ is the asymptotic in-plane magnetic field, $B_g$ is the out-of-plane guide field, and $L$ is the half thickness of the current sheet. Another parameter, $d$, is introduced as the distance between two adjacent current sheets, and $\\alpha$ represents the ratio $d \/ L$. The electric current is calculated according to the MHD force-free condition $(\\nabla\\times\\boldsymbol{B})\\times\\boldsymbol{B} = 0$. Both electrons and ions follow drifting Maxwellian distributions initially, and we assume uniform initial density and temperature profiles for both species. The current is carried by the electron drift along the magnetic field. 
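The force-free property of this initial field can be verified directly: $B_x^2 + B_y^2$ is constant across the sheet, so $(\nabla\times\boldsymbol{B})\times\boldsymbol{B}$ vanishes identically. A Python sketch of the check (representative values $B_0 = 1$, $B_g = 0$, $L = 0.5$, $d = 12.5$ in units of $d_i$, as quoted for this run; the field depends only on $z$, so the only surviving force component is $-\tfrac{1}{2}\,d(B_x^2+B_y^2)/dz\,\hat{z}$):

```python
import numpy as np

# Verify that the initial condition has uniform |B| and hence satisfies
# the force-free condition (curl B) x B = 0 for a 1D profile B(z).
B0, Bg, L, d = 1.0, 0.0, 0.5, 12.5       # assumed run parameters (units of d_i)
z = np.linspace(-d / 2, d / 2, 4001)

arg = (d / (np.pi * L)) * np.sin(np.pi * z / d)
Bx = B0 * np.tanh(arg)
By = B0 * np.sqrt(1.0 + (Bg / B0)**2 - np.tanh(arg)**2)

# |B|^2 is uniform across the sheet:
assert np.allclose(Bx**2 + By**2, B0**2 * (1.0 + (Bg / B0)**2))

# For B = (Bx(z), By(z), 0), (curl B) x B = -(1/2) d(Bx^2 + By^2)/dz z_hat:
force_z = -0.5 * np.gradient(Bx**2 + By**2, z[1] - z[0])
assert np.allclose(force_z, 0.0, atol=1e-8)
print("force-free initial condition verified")
```

Uniform $|B|$ also means the initial magnetic pressure is uniform, so no additional plasma pressure gradient is needed for equilibrium.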
A similar setup has been used in numerous previous simulation studies involving the interaction of multiple current sheets, e.g., Ref \\cite{Drake2010, Sironi2011, Hoshino2012}. We set the simulation box $L_x = L_z = 50 d_i$, sheet thickness $L = 0.5 d_i$, guide field $B_g = 0$, and distance $d = 12.5 d_i$, so there are 4 current sheets initially. The simulation is run for $\\Omega_{ci}t = 200$, which is about 4 Alfv\\'{e}n crossing times $t_{A} = L_x \/ v_{A}$. 200 macro-particles per cell per species are used in the simulation. A double periodic boundary condition is employed in both the $x$ and $z$ directions, so the simulation represents a closed system.\n\nFigure \\ref{fig:entro} shows snapshots of the isotropic fluid entropy \\eqref{eq:entro-iso} in the simulation at different times $\\Omega_{ci}t = 50$, 100, and 200. The top panels plot the electron entropy and the bottom panels the ion entropy. Magnetic field lines are overplotted as black contours. Note that the absolute value of the entropy is not important, as it depends on how we normalize the pressure and density, but the difference in entropy is independent of the normalization. At the early stage of the simulation, each current sheet evolves in a relatively independent manner, as illustrated in the left panels ($\\Omega_{ci}t = 50$). Numerous small magnetic island structures are formed along each current layer. The current sheets are distorted and interact with each other at later times, as illustrated by the middle panels ($\\Omega_{ci}t = 100$). The figures suggest that the fluid entropy is produced mostly within the magnetic islands. However, for ions, the entropy seems to be more concentrated near the reconnection X-lines. Approaching the end of the simulation, the magnetic islands have experienced multiple merging and coalescence interactions. 
The simulation domain becomes rather uniform, with a few big island structures.\n\n\\begin{figure*}[!ht]\n \\includegraphics[width=1.0\\linewidth]{Fig_1.pdf}\n \\caption{Snapshots of the isotropic fluid entropy for electrons (top) and ions (bottom) at different simulation times $\\Omega_{ci} t = 50$, 100, and 200. Black contour lines represent magnetic field lines. \\label{fig:entro}}\n\\end{figure*}\n\nFigure \\ref{fig:energy} shows the energy budget of the simulation, normalized to the initial magnetic energy. The top panel shows the evolution of the magnetic energy (dot-dashed black curve) and the plasma energy (solid blue curve for ions and dashed red curve for electrons). Toward the end of the simulation, about 70\\% of the initial magnetic energy has been released via magnetic reconnection. Of the released magnetic energy, the majority goes to the ion kinetic energy. The second and third panels show the partition of thermal and bulk flow energy, respectively, calculated as\n\\[ E_t = Tr(\\boldsymbol{P}) \/ 2;\\quad E_b = \\frac{1}{2}\\rho u^2. \\]\nAgain, the solid blue curves represent ions and the dashed red curves electrons. The plasma energization is dominated by the thermal energy, as more than 90\\% of the plasma energy resides in the thermal energy at the end of the simulation. The bulk flow energy of the electrons decreases during most of the simulation because the current is carried by the electrons initially. The ion bulk flow energy increases at the beginning and is later converted to thermal energy. In the bottom panel, we plot the rate of change of the ion and electron thermal energy.\n\n\\begin{figure}[!ht]\n \\includegraphics[width=3.375in]{Fig_2.pdf}\n \\caption{Energy budget of the simulation. The top panel shows the evolution of the magnetic energy (dot-dashed black), ion kinetic energy (solid blue), and electron kinetic energy (dashed red). The second panel from the top shows the ion (solid blue) and electron (dashed red) thermal energy. 
The third panel shows the ion (solid blue) and electron (dashed red) bulk flow energy. All energies in these panels are normalized to the initial magnetic energy. In the bottom panel, the rates of change of the ion (solid blue) and electron (dashed red) thermal energy are shown. \\label{fig:energy}}\n\\end{figure}\n\nWe then evaluate the rate of change of the fluid entropy using Equation \\eqref{eq:dSdt-iso}, and the result is shown in Figure \\ref{fig:dS-iso}. The rate of change of the fluid entropy $\\partial S\/\\partial t$ is integrated over the whole simulation box and plotted in the top panels as solid black lines. The change of the fluid entropy is separated into three parts according to Equation \\eqref{eq:dSdt-iso}: convection ($-\\boldsymbol{u}\\cdot\\nabla S$), Pi-D ($-\\boldsymbol{\\Pi}:\\sigma\/2p$), and heat flux ($-\\nabla\\cdot\\boldsymbol{q}\/p$). They are evaluated for the entire simulation box and are plotted as dot-dashed blue, solid green, and dashed red lines. The sum of the three yields the dashed blue curves in the top panels, which follow the entropy change curves closely; this verifies Equation \\eqref{eq:dSdt-iso}.\n\n\\begin{figure*}[!ht]\n \\includegraphics[width=0.5\\linewidth]{Fig_3a.pdf}%\n \\includegraphics[width=0.5\\linewidth]{Fig_3b.pdf}\n \\caption{Rate of change of the fluid entropy for electrons (left panels) and ions (right panels). The solid black lines in the top panels represent the rate of change of the fluid entropy. The convection, Pi-D, and heat flux terms are evaluated separately and shown in the bottom panels. The sum of the three terms is displayed as the dashed blue curves in the top panels. \\label{fig:dS-iso}}\n\\end{figure*}\n\nIt is clear from Figure \\ref{fig:dS-iso} that the total fluid entropy of the system is always increasing in our simulation, since its rate of change remains positive. 
This is similar to the thermal energy as illustrated in the bottom panel of Figure \\ref{fig:energy}, though the two curves do not trace each other exactly. The rate of entropy change is large between $\\Omega_{ci}t \\sim$ 30--120 when magnetic reconnection and island merging are apparent in the simulation. The overall behaviors of entropy are similar for both electrons and ions. The convection effect remains negative for both species. An interesting result is that the Pi-D and heat flux contributions appear to be comparable with each other. For ions, the heat flux contribution is slightly larger than that of Pi-D. However, at later times of the simulation, the heat flux contribution becomes small and Pi-D dominates the entropy increase for both electrons and ions.\n\nSimilarly, we evaluate the CGL fluid entropy using Equation \\eqref{eq:dSdt-CGL}, as shown in Figure \\ref{fig:dS-cgl}. Similar to Figure \\ref{fig:dS-iso}, the rate of change of the CGL entropy $S^c$ is displayed in the top panels as solid black lines. As a comparison, the rate of change of the isotropic fluid entropy $S$ is shown as dot-dashed lines (which is the same as the solid black curves in Figure \\ref{fig:dS-iso}). The result shows that the two quantities $S$ and $S^c$ differ only slightly, probably because the anisotropy is not very strong \\citep{Birn2006}. In the bottom panels of Figure \\ref{fig:dS-cgl}, we separate Equation \\eqref{eq:dSdt-CGL} into three parts. The first part is the convection, which is very close to the convection term in Figure \\ref{fig:dS-iso}. 
The other two parts are denoted by $A$ and $B$, where:\n\\[ A = -\\frac{\\boldsymbol{P}^{n}:\\boldsymbol{\\sigma}}{2p_{\\perp}} - \\frac{\\nabla\\cdot\\boldsymbol{q}}{p_{\\perp}}; \\]\n\\[ B = - \\kl{\\frac{1}{p_{\\parallel}} - \\frac{1}{p_{\\perp}}}\\klm{\\boldsymbol{bb}:(\\boldsymbol{P}^{n}\\cdot\\nabla\\boldsymbol{u}) - \\frac{1}{2}\\frac{d}{dt}(\\boldsymbol{bb}):\\boldsymbol{P}^{n} + \\frac{1}{2}\\boldsymbol{bb}:(\\nabla\\cdot\\boldsymbol{Q})}. \\]\nTerm $A$ corresponds to the sum of Pi-D (with nongyrotropic pressure only) and heat flux, and $B$ is the additional term unique to the CGL entropy Equation \\eqref{eq:dSdt-CGL}. Note that in calculating $d\\boldsymbol{b}\/dt$, we use Faraday's law\n\\[ \\partt{\\boldsymbol{B}} = -c\\nabla\\times\\boldsymbol{E} \\]\nfor convenience, since this replaces the time derivative of the magnetic field with spatial derivatives of the electric field. The sum of the three parts is shown as dashed blue lines in the top panels, and they agree reasonably well with the solid black curves, though not as well as in Figure \\ref{fig:dS-iso}. This may be caused by the use of the electric field and higher moments, which tend to be more noisy. One interesting difference between electrons and ions is that terms $A$ and $B$ are comparable in size for electrons while term $A$ dominates for ions. This may be understood by recognizing that Pi-D in Equation \\eqref{eq:dSdt-iso} consists of both gyrotropic and nongyrotropic pressure while Pi-D in term $A$ consists of the nongyrotropic pressure only. Since electrons are strongly magnetized, the electron pressure is almost gyrotropic and thus term $A$ is dominated by the heat flux. As a result, for a weakly anisotropic pressure ($p_{\\parallel} \\simeq p_{\\perp}$), the reduction of Pi-D needs to be compensated by term $B$, which is dominated by the heat flux tensor $\\boldsymbol{Q}$. 
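The pressure decomposition underlying term $A$ can be made concrete: given the full pressure tensor and the magnetic unit vector $\boldsymbol{b}$, the gyrotropic part $p_\parallel \boldsymbol{bb} + p_\perp(\boldsymbol{I}-\boldsymbol{bb})$ and the nongyrotropic remainder $\boldsymbol{P}^n$ follow directly. A minimal sketch (the function name is ours):

```python
import numpy as np

def gyrotropic_split(P, b):
    """Split a 3x3 pressure tensor into its gyrotropic part
    p_par*bb + p_perp*(I - bb) and the nongyrotropic remainder P^n.
    `b` must be the unit vector along the magnetic field."""
    bb = np.outer(b, b)
    p_par = b @ P @ b                     # parallel pressure, bb:P
    p_perp = 0.5 * (np.trace(P) - p_par)  # perpendicular pressure
    P_gyro = p_par * bb + p_perp * (np.eye(3) - bb)
    return p_par, p_perp, P - P_gyro
```

For an isotropic tensor $P = p\,\boldsymbol{I}$ the remainder vanishes and $p_\parallel = p_\perp = p$, which is the limit in which term $A$ reduces to the pure heat-flux contribution discussed above.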
This result demonstrates that even though the isotropic and CGL fluid entropy have similar values, their production mechanisms could be different.\n\n\\begin{figure*}[!ht]\n \\includegraphics[width=0.5\\linewidth]{Fig_4a.pdf}%\n \\includegraphics[width=0.5\\linewidth]{Fig_4b.pdf}\n \\caption{Rate of change of the CGL fluid entropy for electrons (left panels) and ions (right panels). The solid black lines in the top panels represent the rate of change of the CGL entropy $S^c$ while the dot-dashed black lines represent the regular fluid entropy $S$. The convection term and two other terms (see text for details) are evaluated separately and shown in the bottom panels. The sum of the three terms is displayed as the dashed blue curves in the top panels. \\label{fig:dS-cgl}}\n\\end{figure*}\n\n\n\\section{Discussion and conclusions}\\label{sec:discussion}\n\nIn this paper, we discuss the evolution of the commonly used fluid entropy in both isotropic and gyrotropic (or CGL) forms. The isotropic fluid entropy is conserved for ideal gas or ideal MHD, and the CGL fluid entropy is conserved within the CGL plasma model. By simply taking moments of the Vlasov equation, we show that the fluid entropy of either form is not necessarily conserved even for an isolated collisionless system. This result is confirmed by a collisionless PIC simulation of multiple reconnecting current sheets. As pointed out by Liang et al.\\cite{Liang2019}, kinetic entropy in a collisionless PIC simulation is approximately conserved. Clearly, the true Boltzmann entropy cannot be fully represented by a finite number of fluid moments. The increasing entropy in our simulation suggests that the fluid entropy is insufficient to capture the physical dissipation process. 
Instead, the change in fluid entropy may simply imply the breakdown of the ideal gas equation of state or the CGL double adiabatic equations.\n\nThe pressure-strain interaction or Pi-D has been proposed to capture the dissipation processes in space plasmas \\cite{Yang2017}. We illustrate in this paper that Pi-D does contribute to the change of fluid entropy in its isotropic form. However, our result shows that only the nongyrotropic part of Pi-D contributes to the CGL-form fluid entropy. Although the heat flux does not appear in the thermal energy equation, it plays an important role in the production of fluid entropy. Indeed, our simulation results suggest that the heat flux is almost as important as Pi-D in terms of increasing fluid entropy. When the CGL entropy is considered, the role of Pi-D is further reduced and the heat flux vector and tensor dominate the entropy increase, especially for electrons. Therefore, it may be expected that whether Pi-D, heat flux, or other higher moments contribute to fluid entropy change depends on how the entropy itself is constructed.\n\nFinally, we note that we do not completely rule out the possibility that the increase of fluid entropy is due, at least partly, to numerical issues. A good way to clarify this would be to combine our analysis with the evaluation of kinetic entropy. This will be the goal of a future study.\n\n\n\\begin{acknowledgments}\nSD and GPZ acknowledge the partial support of the NSF EPSCoR RII-Track-1 Cooperative Agreement OIA-1655280, and partial support from an NSF\/DOE Partnership in Basic Plasma Science and Engineering via NSF grant PHY-1707247. XL acknowledges the support by NASA under grant NNH16AC60I, DOE OFES, and the support by the DOE through the LDRD program at Los Alamos National Laboratory (LANL). FG's contributions are in part based upon work supported by the U.S. Department of Energy, Office of Fusion Energy Science, under Award Number DE-SC0018240 and by NSF grant AST-1735414. 
The simulation was performed at LANL Institutional Computing.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{the flow equation of the effective potential within the LPA approximation}\n\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=0.45]{FigS1.eps}\n\\vspace*{0.1cm}\n\\caption{\nThe rescaled FP potential at $d=d_c(N)$ with $N=200,480$ and $1600$. We can see that the FP potential almost converges to a limiting function, which is different from the GFP.\n}\n\\label{fig:rescaled-pot-T=C3}\n\\end{figure} \n\nWe define the dimensionless field $\\tilde{\\rho}$ and potential $\\tilde{U}_{k}$ as\n\\begin{equation}\n\\begin{array}{l}\n\\tilde{\\rho}=v_d^{-1} k^{2-d}\\rho\\\\\n\\tilde{U}_{k}(\\tilde{\\rho})=v_d^{-1} k^{-d}U_{k}\\left(\\rho\\right)\n\\end{array}\n\\end{equation} with \n\\begin{equation}\nv_{d}=\\frac{1}{2^{d-1} d \\pi^{d\/2}\\Gamma\\left(\\frac{d}{2}\\right)}.\n\\end{equation}\nUsing $t=\\log(k\/\\Lambda)$ and the dimensionless and renormalized quantities, denoted by a tilde, the flow of the effective potential $\\tilde{U}_{k}$ reads at the LPA:\n\\begin{align}\n    \\begin{split}\n        &\\partial_t \\tilde{U}_t= -d \\tilde{U}_t +(d-2) \\tilde\\rho \\tilde{U}_t' -\\frac{d}{2} \\int_{0}^{\\infty} dy y^{d\/2+1} r'(y) \\left(\\frac{1}{y(1+r(y))+ \\tilde{U}_t'+2\\tilde\\rho \\tilde{U}_t''}+\\frac{N-1}{y(1+r(y))+ \\tilde{U}_t'} \\right)\n    \\end{split}\n    \\label{flow-potential}\n\\end{align}\n with $y=q^2\/k^2$, $R_k(q)=q^2 r(y)$, $\\tilde{U}_t'=\\partial_\\rho \\tilde{U}_t$, $\\tilde{U}_t''=\\partial^2_\\rho \\tilde{U}_t$. \n We employed the cutoff $R_k(q)=(k^2-q^2) \\theta(k^2-q^2)$ [44].\n This leads to $r(y)=(1\/y-1) \\theta(1-y)$, which is convenient for analytical treatments in LPA calculations. 
With this cutoff, the flow equation becomes\n \\begin{align}\n    \\begin{split}\n        &\\partial_t \\tilde{U}_t= -d \\tilde{U}_t +(d-2) \\tilde\\rho \\tilde{U}_t' + \\frac{1}{1+ \\tilde{U}_t'+2\\tilde\\rho \\tilde{U}_t''}+\\frac{N-1}{1+ \\tilde{U}_t'}. \n    \\end{split}\n    \\label{flow-potential-cutoff}\n \\end{align}\n\n\\section{the $T_2=C_3$ FP potential shape on $N=N_c(d)$}\nIt is also interesting to notice that by rescaling the potential and the field: $U\\to \\bar{U} \\equiv U\/N$ and $\\rho\\to \\bar{\\rho} \\equiv \\rho\/N$, the explicit factor $N$ in the LPA flow of the potential, Eq.(\\ref{flow-potential}), disappears in the large $N$ limit, if we assume no singularities of $U$, $U'$ and $U''$. This implies that for large enough values of $N$, the shape of the rescaled FP potential is almost independent of $N$. Using this rescaling, we find that the limit shape of $\\bar {U}^*$ on the line $N_c(d)$ when $d\\to3$ (or equivalently when $N\\to\\infty$) is clearly regular and not Gaussian even though $T_2$ is closer and closer to the GFP at large $N$. This limit FP is therefore also different from the BMB FP, which shows a cusp, as shown in Fig. S1.\n \n \n This means that, for a fixed and large value of $N$, the shape of the rescaled potential $\\bar {U}^*$ changes very rapidly between $d=3^-$, where $T_2$ coincides with the GFP, and $d=d_c(N)$ where it collapses with $C_3$. \n\n\n\n\\section{A toy model of the double-valued structure of $T_2$ and $C_2$}\nIn this section, we give a toy model in terms of the roots of a cubic equation $f(x,\\theta)=0$ that shows a double-valued structure similar to that of the tricritical FPs $T_2$ and $C_2$. These three roots, real or complex, depend on a parameter $\\theta\\in [0,2\\pi]$ in a cyclic way because we assume that $f(x,0)=f(x,2\\pi)$.\nBy analogy with our initial problem, we call $t_2, c_3$ and $c_2$ the three roots. When $\\theta=0$ the three roots are assumed real and $t_2$ is the rightmost root as shown in Fig. 
\\ref{fig:toymodel-S} (A). When \n$\\theta$ is increased, the two roots $c_3$ and $c_2$ are assumed to approach each other and eventually coincide as in Fig. \\ref{fig:toymodel-S} (B). This corresponds to what happens in Fig. \\ref{fig:closed-path-S} where B is on the line $N_c'(d)$, that is, where the FPs $c_3$ and $c_2$ coincide. When $\\theta$ is further increased, the roots $c_3$ and $c_2$ are assumed to become complex as in Figs. \\ref{fig:toymodel-S} (C) and (D). In our toy model, the inflexion point, which is in the lower half-plane in Fig. \\ref{fig:toymodel-S} (C), is in the upper half-plane in Fig. \\ref{fig:toymodel-S} (D). In Fig. \\ref{fig:toymodel-S} (E) two roots become real again as in Fig. \\ref{fig:closed-path-S} where \n$t_2=c_3$. When $\\theta=2\\pi$, we are back at point A. The root $t_2$ that has been followed by continuity has therefore become $c_2$ after a full rotation, which exactly corresponds to the situation indicated in Fig. 3 (a) of the main text. Notice that by further increasing $\\theta$ from $2\\pi$ to $4\\pi$, that is, by making a second full rotation in Fig. \\ref{fig:closed-path-S}, $c_2$ would become $t_2$. \n\n \\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.25]{FigS2-3.eps}\n\\vspace*{0.1cm}\n\\caption{\nA closed clockwise path around the singular point $S$.}\n\\label{fig:closed-path-S}\n\\end{figure}\n\n \\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.3]{FigS3-3.eps}\n\\vspace*{0.1cm}\n\\caption{\nBehavior of the cubic function $f(x,\\theta)$ as well as its roots when $\\theta$ is varied between 0 and $2\\pi$. Starting from $t_2$ at (A), we follow this root by continuity all along the path, as indicated with black dots. At $\\theta=2\\pi$, $t_2$ has become $c_2$. This mimics the behavior of the FP $T_2$ along the path ABCDEA in Fig. 
\\ref{fig:closed-path-S}.}\n\\label{fig:toymodel-S}\n\\end{figure}\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMean field game (MFG) is a limit model for a stochastic differential game with a large number of players, symmetric cost functions, and interactions of mean-field type. Specifically, each player optimizes a control problem whose state process and cost functions depend not only on their own state and control but also on the other players' decisions through the empirical distribution of their states. Under certain independence assumptions, considering the control problem in the asymptotic regime can reduce this high-dimensional, complex interacting system to a fully-coupled system of forward-backward partial differential equations (PDEs). The solution of this system can then be used to approximate the Nash equilibrium solution of the finite-player games. This novel idea was first proposed by Lasry and Lions \\cite{lasry2006, lasry2006_2,lasry2007} and independently from an engineering community by Caines, Huang, and Malham\\'e \\cite{huang2007}. \n\nThe MFG problem, with the linear-convex setting considered in this paper, is defined as follows;\n\\begin{equation}\\label{mfg_nc}\\begin{cases}\n  \\alpha^* \\in \\arg\\min_{\\alpha \\in \\mathcal{A}} \\mathbb{E} \\l[ \\int_0^T \\frac{\\alpha^2}{2}dt + g(X^\\alpha_T,m_T)\\r]\\\\\n  dX^\\alpha_t = \\alpha_t dt + \\sigma dW_t \\\\\n  m_t = \\mathcal{L}(X^{\\alpha^*}_t) \n  \\end{cases}\n\\end{equation}\n\nIn the past decades, active research has been done on the MFG model with tremendous progress in many directions. See \\cite{gomes2013survey} for a brief survey and \\cite{bensoussan2013} for a more extensive reference. 
Many extensions of \\eqref{mfg_nc} have been considered, including a model with major\/minor players \\cite{bensoussan2014mean,carmona2014majorminor,huang2010large,nguyen2012linear,nourian2013} and the convergence from finite player games to MFG \\cite{bardi2014,fischer2014,gomes2012cont,lacker2014general}. An important extension that has gained a lot of attention is the model with common noise, a common Brownian motion that enters the state process of every player, relaxing the independence assumption of the original model. This type of model comes up frequently and naturally in many applications, particularly in finance or economics \\cite{moll2013,carmona2013mean,gueant2011,lasry2008application}, where each player is subject to some sort of common market factor. MFG with common noise can be formulated as follows; \n\\begin{equation}\\label{mfg_c}\\begin{cases}\n  \\alpha^* \\in \\arg\\min_{\\alpha \\in \\mathcal{A}} \\mathbb{E} \\l[ \\int_0^T \\frac{\\alpha^2}{2}dt + g(X^\\alpha_T,m_T)\\r]\\\\\n  dX^\\alpha_t = \\alpha_t dt + \\sigma dW_t + \\varepsilon d\\tilde{W}_t \\\\\n  m_t = \\mathcal{L}(X^{\\alpha^*}_t | \\tilde{\\mathscr{F}}_t), \\quad \\tilde{\\mathscr{F}}_t = \\sigma(\\tilde{W}_s; 0 \\leq s \\leq t) \n  \\end{cases}\n\\end{equation}\n\n\nThe common noise model \\eqref{mfg_c} is more complicated than the original MFG \\eqref{mfg_nc}. In \\eqref{mfg_nc}, the law $m^{0,\\alpha}_t$ is expected to be deterministic, so it suffices to seek the optimal strategy along the path $(m^{0,\\alpha}_t)_{0 \\leq t \\leq T}$. This reduces the problem to a finite-dimensional system of forward-backward PDEs. The common noise model, on the other hand, is more complex as the flow of the players' distribution is stochastic. This means that we need to specify the optimal action for all possible trajectories of the players' distribution, which is infinite-dimensional. One way to solve this model is to add $m_t$ as an argument in the value function. 
This approach leads to the study of the \\textit{master} equation, which is an infinite-dimensional Hamilton-Jacobi-Bellman (HJB) equation that encapsulates all the information of the MFG. See \\cite{bensoussan2014master,carmona2014master,gomes2013survey} for some discussions on this approach. Alternatively, one could follow the same methodology as done by Lasry and Lions. In that case, instead of a forward-backward PDE, the presence of common noise gives a forward-backward stochastic partial differential equation (SPDE). Lastly, we can also use the probabilistic approach proposed by Carmona and Delarue \\cite{carmona2013probabilistic}, which formulates the MFG as a mean-field forward-backward stochastic differential equation (FBSDE). The difference between the two approaches to MFG stems from the two different approaches to stochastic control problems, namely the Stochastic Maximum Principle (SMP) and the Dynamic Programming Principle (DPP).\n\nRecently, there has been progress in the study of MFG with common noise, concerning mostly well-posedness results. In \\cite{ahuja2014,ahuja2016forward}, the existence and uniqueness result of MFG with common noise is proved when the state process is linear and the cost functions satisfy a certain convexity and monotonicity condition. Carmona et al. \\cite{carmona2016} give the existence and uniqueness result of a \\textit{weak} solution under a more general setting. In \\cite{bensoussan2014master,carmona2014master}, the master equation was discussed from the perspective of both the HJB and probabilistic approaches. Under special circumstances, the common noise model might be explicitly solvable through a transformation which turns the problem into the original MFG without common noise \\cite{carmona2013mean,gueant2011,lacker2014translation}. 
Despite these results, a general common noise model is difficult and impractical to solve numerically or explicitly, as it does not enjoy the dimension-reduction property of the MFG without common noise. \n\n\nThe goal of this paper is to consider a MFG problem when the common noise is \\textit{small} as denoted by the parameter $\\varepsilon$ in \\rref{mfg_c}. We will refer to this game as $\\varepsilon$-MFG. In this setup, we seek an approximate solution using only the information from the $\\varepsilon=0$ problem, i.e. the original MFG with no common noise or $0$-MFG. If we denote by $(\\alpha^\\varepsilon_t,X^\\varepsilon_t,m^\\varepsilon_t)_{0 \\leq t \\leq T}$ a solution to the $\\varepsilon$-MFG described above, then we are essentially interested in the following $\\varepsilon$-expansion \n\\begin{equation}\\label{expansion}\n \\alpha_t^\\varepsilon = \\alpha_t^0 + \\varepsilon \\delta^{\\alpha}_t + o(\\varepsilon), \\qquad X^\\varepsilon_t = X^0_t + \\varepsilon \\delta^{X}_t + o(\\varepsilon)\n \\end{equation}\nEquivalently, we would like to study the limit as $\\varepsilon \\to 0$ of\n\\begin{equation}\\label{limits} \\frac{X^\\varepsilon_t - X^0_t}{\\varepsilon}, \\quad \\frac{\\alpha_t^\\varepsilon-\\alpha_t^0}{\\varepsilon}\n\\end{equation}\n\n\n\nThis paper contributes mainly to the study of the limit \\rref{limits} and the corresponding first order approximate strategy. Our main result is to prove that \\rref{limits} converges to a solution of a system of linear mean-field FBSDE whose solution is a centered Gaussian process. While recently there has been much work on the MFG model with common noise or the convergence of $N$-player games to MFG, the asymptotic analysis of the small common noise model is new to the best of our knowledge. Our setting and assumptions are similar to those in \\cite{ahuja2014}, where we assume a linear state process, and convex and weakly monotone cost functions, with some additional regularity assumptions. 
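The limit \rref{limits} can be checked numerically in a toy setting. The sketch below uses a hypothetical linear feedback control $\alpha(t,x) = -x$ with additive noise; because the dynamics are then linear in the state, the difference quotient $(X^\varepsilon_T - X^0_T)/\varepsilon$ coincides exactly with the first-order process solving $d\delta_t = -\delta_t\,dt + d\tilde W_t$, driven by the common noise alone:

```python
import numpy as np

def difference_quotient(eps, sigma=1.0, T=1.0, n=1000, seed=0):
    """Simulate X^eps and X^0 with the SAME noise paths under the
    (hypothetical) feedback control alpha(t, x) = -x, and return the
    difference quotient (X^eps_T - X^0_T)/eps together with a direct
    Euler solution of the first-order SDE d(delta) = -delta dt + dWtilde."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x_eps = x0 = delta = 0.0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))    # idiosyncratic increment
        dWt = rng.normal(0.0, np.sqrt(dt))   # common-noise increment
        x_eps += -x_eps * dt + sigma * dW + eps * dWt
        x0 += -x0 * dt + sigma * dW
        delta += -delta * dt + dWt
    return (x_eps - x0) / eps, delta
```

For this linear example the two returned values agree for every $\varepsilon$, not only as $\varepsilon \to 0$; with a nonlinear feedback the agreement would hold only in the limit, which is the content of the convergence result.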
\n\nIn addition to the convergence result, we show that the first order approximate strategy (see \\rref{expansion}) gives an approximate Nash equilibrium of order $\\varepsilon^2$. More interestingly, we show that the game-theoretic correction in the optimal strategy due to the existence of (small) common noise is not present at first order, and we can simply use the 0-MFG optimal strategy along the trajectory of the stochastic flow $(m^{\\eps}_t)_{0 \\leq t \\leq T}$. That is, at time $t \\in [0,T]$, we solve the sub-game of 0-MFG over $[t,T]$ with the initial being the current distribution $m^{\\eps}_t$. Note that this is different from the 0-MFG solution itself since we use $m^{\\eps}_t$ as the initial at $t$ as opposed to $m^{0}_t$. \n\nOur main technical tool is the Stochastic Maximum Principle, which turns a MFG problem into a mean-field FBSDE. The linear, convex, and monotone assumptions on the MFG lead to a mean-field FBSDE with a monotonicity property. Systems of monotone FBSDEs are well-studied both in the classical setting \\cite{hu1995,peng1999,yong1997continuation} and, more recently, with mean-field terms \\cite{ahuja2014,ahuja2016forward,Bensoussan2015}, where probabilistic techniques and standard SDE estimates can be applied. Under this setting, we are able to obtain all the results, the limits and the estimates, in a \\textit{strong} sense, namely in $\\mathcal{L}^2$, using similar tools.\n\nThe paper is organized as follows. In section \\ref{mfg_commonnoise}, we consider a general MFG with common noise through the Stochastic Maximum Principle and discuss the well-posedness result as well as the existence of the decoupling function, all of which will be used in subsequent sections. The main results, namely the asymptotic analysis of $\\varepsilon$-MFG, are given in section \\ref{asymptotic}. We then discuss a connection between the SMP approach and the DPP approach in section \\ref{dpp2}. 
The Appendices contain the proofs of the main theorems and lemmas as well as discussions on the existence and uniqueness of FBSDE with monotone functionals and the notion of differentiation with respect to a probability measure.\n\n\n\n\n\n\\section{Mean field game with common noise}\\label{mfg_commonnoise}\n\n\\subsection{Notations and general set up}\\label{setup}\nFix a terminal time $T > 0$. Let $(W_t)_{0 \\leq t \\leq T}, (\\tilde{W}_t)_{0 \\leq t \\leq T}$ denote two independent Brownian motions on $\\mathbb{R}$ defined on a complete filtered probability space $(\\Omega, \\mathscr{F}, \\mathbb{F}=\\{\\mathscr{F}_t\\}_{0 \\leq t \\leq T}, \\mathbb{P})$ augmented by a $\\mathbb{P}$-null set. We call $(W_t)_{0 \\leq t \\leq T}$ the \\textit{individual noise} and $(\\tilde{W}_t)_{0 \\leq t \\leq T}$ the \\textit{common noise}. We assume that $(\\Omega,\\mathscr{F},\\mathbb{P})$ is in the form $ (\\Omega^0 \\times \\t{\\Omega},\\mathscr{F}^0 \\otimes \\tilde{\\mathscr{F}},\\mathbb{P}^0 \\otimes \\tilde{\\mathbb{P}})$ where $(\\t{\\Omega},\\tilde{\\mathscr{F}},\\tilde{\\mathbb{P}})$ is the canonical sample space of the common noise $(\\tilde{W}_t)_{0 \\leq t \\leq T}$ and that all other randomness, the individual noise and initial, are supported in $(\\Omega^0,\\mathscr{F}^0,\\mathbb{P}^0)$.\n\nLet $\\mathscr{P}_{2}(\\mathbb{R})$ denote the space of Borel probability measure on $\\mathbb{R}$ with finite second moment, i.e. all probability measure $\\mu$ such that\n\t$$ \\int_\\mathbb{R} x^2 d\\mu(x) < \\infty $$ \nIt is a complete separable metric space equipped with a Wasserstein metric defined as\n\\begin{equation}\\label{wass}\n \\mathscr{W}_2(m_1,m_2) = \\l( \\inf_{\\gamma \\in \\Gamma(m_1,m_2)} \\int_{\\mathbb{R}^2} |x-y|^2 \\gamma(dx,dy) \\r)^{\\frac{1}{2}}\n \\end{equation}\nwhere $\\Gamma(m_1,m_2)$ denotes the collection of all probability measures on $\\mathbb{R}^2$ with marginals $m_1$ and $m_2$. 
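On the real line, the infimum in \rref{wass} is attained by the monotone coupling, so for two empirical measures with equally many atoms $\mathscr{W}_2$ reduces to the root-mean-square distance between sorted samples. A short sketch of this computation (the equal-sample-size restriction is ours, for simplicity):

```python
import numpy as np

def wasserstein2_1d(x, y):
    """Empirical 2-Wasserstein distance between two equal-size samples
    on the real line: the optimal coupling pairs order statistics, so
    W2 is the L2 distance of the sorted samples."""
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    if xs.shape != ys.shape:
        raise ValueError("equal sample sizes assumed in this sketch")
    return float(np.sqrt(np.mean((xs - ys) ** 2)))
```

As a sanity check, translating a sample by a constant $c$ moves it by exactly $\mathscr{W}_2 = |c|$, since the monotone coupling matches each atom with its translate.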
While we assume one-dimensional Euclidean space for simplicity, all the results in this paper still hold for $\\mathbb{R}^d$. Let $\\tilde{\\mathscr{F}}^{s}_t$ denote the filtration generated by $\\tilde{W}_r-\\tilde{W}_s, s \\leq r \\leq t$ and we write $\\tilde{\\mathscr{F}}_t = \\tilde{\\mathscr{F}}^0_t$. Suppose $\\mathscr{G}$ is a sub $\\sigma$-algebra of $\\mathscr{F}$ and $\\mathbb{G} = \\{ \\mathscr{G}_t \\}_{0 \\leq t \\leq T}$ is a sub filtration of $\\mathbb{F}$, then let $\\mathscr{L}^{2}_{\\mathscr{G}}$ denote the set of $\\mathscr{G}$-measurable real-valued square integrable random variable, $\\Mt{\\mathscr{G}}$ denote the set of $\\mathscr{G}$-measurable random probability measure $\\mu$ on $\\mathbb{R}$ with finite second moment, and $\\H^2_\\mathbb{G}([0,T];\\mathbb{R})$ denote the set of $\\mathscr{G}_t$-progressively-measurable process $\\beta = (\\beta_{t})_{0 \\leq t \\leq T}$ such that\n$$ \\mathbb{E}\\l[ \\int_{0}^{T} \\beta^{2}_{t} dt \\r] < \\infty$$\nWe define similarly the space $\\H_\\mathbb{G}^2([s,t];\\mathbb{R})$ for any $0 \\leq s < t \\leq T$ and we will often omit the subscript and write $\\H^2([0,T];\\mathbb{R})$ for $\\H^2_\\mathbb{F}([0,T];\\mathbb{R})$. We also let $\\mathcal{M}([0,T];\\Ptwo)$ denote the space of continuous $\\tilde{\\mathscr{F}}_t$-adapted stochastic flow of probability measure $\\mu = (\\mu_t)_{0 \\leq t \\leq T}$ with value in $\\mathscr{P}_{2}(\\mathbb{R})$ and define similarly $\\MM{s}{t}$.\n\nFor a control $\\alpha \\in \\H^{2}([0,T];\\mathbb{R})$, let $(X^{\\varepsilon,\\alpha}_t)_{0 \\leq t \\leq T}$ be the corresponding state process\n\\begin{equation}\\label{state} \nX^{\\varepsilon,\\alpha}_t = \\xi_0 + \\int_0^t \\alpha_s ds + \\sigma W_t + \\varepsilon \\tilde{W}_t\n\\end{equation}\nwhere $\\xi_0 \\in \\mathscr{L}^{2}_{\\mathscr{F}_0}$ is an initial state. 
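The conditional law $\mathcal{L}(X^{\varepsilon,\alpha}_t \mid \tilde{\mathscr{F}}_t)$ can be approximated by a particle cloud in which each particle carries an independent copy of $W$ while all particles share a single path of $\tilde W$. A minimal Euler sketch; the feedback interface `alpha(t, x)` and all names are our illustrative assumptions:

```python
import numpy as np

def simulate_conditional_cloud(alpha, xi0, sigma, eps, T, n_steps, seed=0):
    """Euler scheme for X_t = xi0 + int_0^t alpha_s ds + sigma*W_t
    + eps*Wtilde_t over a cloud of particles.  Each particle gets an
    independent increment of W, but all share one common-noise
    increment of Wtilde, so the empirical measure of the returned
    array approximates the conditional law L(X_T | F^tilde_T)."""
    rng = np.random.default_rng(seed)
    x = np.array(xi0, dtype=float)
    dt = T / n_steps
    t = 0.0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # one W per particle
        dWt = rng.normal(0.0, np.sqrt(dt))               # shared Wtilde
        x = x + alpha(t, x) * dt + sigma * dW + eps * dWt
        t += dt
    return x
```

Averaging over the cloud for a fixed seed averages out only the idiosyncratic noise, which is exactly the conditioning on $\tilde{\mathscr{F}}_t$ used throughout this section.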
Let $m^{\\varepsilon,\\alpha}_t$ denote the law of $X^{\\varepsilon,\\alpha}_t$ conditional on $\\tilde{\\mathscr{F}}_t$, i.e.\n$$ m^{\\varepsilon,\\alpha}_t = \\mathcal{L}(X^{\\varepsilon,\\alpha}_t | \\tilde{\\mathscr{F}}_t) $$\nIt is easy to check that $m^{\\varepsilon,\\alpha}=(m^{\\varepsilon,\\alpha}_t)_{0 \\leq t \\leq T} \\in \\mathcal{M}([0,T];\\Ptwo)$ when $\\alpha \\in \\H^{2}([0,T];\\mathbb{R})$. Given any $m \\in \\mathcal{M}([0,T];\\Ptwo) $, we define the expected cost corresponding to a control $\\alpha$ as \n\\begin{equation}\\label{cost}\n\\mathcal{J}^\\varepsilon(\\alpha | m) \\triangleq \\mathbb{E}\\l[ \\int_0^T \\frac{\\alpha_t^2}{2}dt + g(X^{\\varepsilon,\\alpha}_T,m_T)\\r]\n\\end{equation}\nwhere $g : \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R}) \\to \\mathbb{R} $ is a terminal cost. \n\n\\begin{remark} While we assume a quadratic running cost to simplify the notation, the result is expected to hold for a more general running cost satisfying assumptions similar to those that will be imposed on the terminal cost function $g$, namely convexity and weak monotonicity. \n\\end{remark} \n\n\nNow we fix $m \\in \\mathcal{M}([0,T];\\Ptwo)$ and consider a stochastic control problem with the state process \\rref{state} and the cost function $\\mathcal{J}^\\varepsilon(\\alpha | m)$. We will refer to this problem as \\textit{an individual control problem corresponding to $m$}. In the context of a stochastic differential game, $m_t$ here represents the distribution of all the players in the game at time $t$. We would like to consider how each individual optimally chooses his\/her control given such information. The MFG solution then represents a Nash equilibrium where every player is optimal given the other players' decisions. 
\n\n\nThe mean field game problem is defined as follows; Find a control $\\hat{\\alpha} \\in \\H^{2}([0,T];\\mathbb{R})$ such that given $m^{\\varepsilon,\\hat{\\alpha}} = (m^{\\varepsilon,\\hat{\\alpha}}_t)_{0 \\leq t \\leq T}$, the optimal control for the state process (\\ref{state}) with cost $\\mathcal{J}^\\varepsilon(\\alpha | m^{\\varepsilon,\\hat{\\alpha}})$ defined by (\\ref{cost}) is again $\\hat{\\alpha}$. In other words, $\\hat{\\alpha} \\in \\H^{2}([0,T];\\mathbb{R})$ satisfies\n$$ \\mathcal{J}^\\varepsilon(\\hat{\\alpha} | m^{\\varepsilon,\\hat{\\alpha}}) \\leq \\mathcal{J}^\\varepsilon(\\alpha | m^{\\varepsilon,\\hat{\\alpha}}) ,\\quad \\forall\\alpha \\in \\H^{2}([0,T];\\mathbb{R}).$$ \nIt can be stated succinctly as \n\\begin{equation}\\label{mfg_c_2}\\begin{cases}\n  \\alpha^* \\in \\arg\\min_{\\alpha \\in \\mathcal{A}} \\mathbb{E} \\l[ \\int_0^T \\frac{\\alpha^2}{2}dt + g(X^\\alpha_T,m_T)\\r]\\\\\n  dX^\\alpha_t = \\alpha_t dt + \\sigma dW_t + \\varepsilon d\\tilde{W}_t \\\\\n  m_t = \\mathcal{L}(X^{\\alpha^*}_t | \\tilde{\\mathscr{F}}_t), \\quad \\tilde{\\mathscr{F}}_t = \\sigma(\\tilde{W}_s; 0 \\leq s \\leq t) \n  \\end{cases}\n\\end{equation}\nWe will often refer to this game as $\\varepsilon$-MFG to emphasize the level of the common noise term and call $\\hat{\\alpha}$ a \\textit{solution to the $\\varepsilon$-MFG problem with initial $\\xi_0$}. Observe that the $\\varepsilon$-MFG described above can be viewed as a standard control problem with an additional fixed-point feature.\n\n\\subsection{Existence and uniqueness of MFG with common noise}\\label{eu_mfg} Let us first state the assumptions on the cost function $g$.\n\\begin{assump}(Lipschitz in $x$)\\label{lip_x} For each $x \\in \\mathbb{R}, m \\in \\mathscr{P}_{2}(\\mathbb{R})$, $\\d{x}g(x,m)$ exists and is Lipschitz continuous in $x$. 
There exists a constant $K$ such that for any $x,x' \\in \\mathbb{R}$\n\t$$ |\\d{x}g(x,m)-\\d{x}g(x',m)| \\leq K |x-x'| $$\n\t\n\\end{assump}\n\\begin{assump}(Convexity) \\label{convex} For any $x,x' \\in \\mathbb{R}, m \\in \\mathscr{P}_{2}(\\mathbb{R})$, \n\t\\begin{equation}\\label{convexity}\t\n\t\t(\\d{x}g(x,m)-\\d{x}g(x',m))(x-x') \\geq 0\n\t\\end{equation}\n\\end{assump}\n\nUnder these assumptions, we can apply the SMP to the individual control problem for a given $m \\in \\mathcal{M}([0,T];\\Ptwo)$, resulting in the following FBSDE \n\\begin{equation}\\label{smp_fbsde}\n\t\\begin{gathered}\n        dX_{t} = -Y_{t} dt + \\sigma dW_{t} + \\varepsilon d\\tilde{W}_{t} \\\\\n        dY_{t} = Z_{t} dW_{t} + \\tilde{Z}_{t}d\\tilde{W}_{t} \\\\\n        X_{0} = \\xi, \\quad Y_{T} = \\d{x}g(X_T,m_T) \n\t\\end{gathered}\n\\end{equation}\nSolving this FBSDE yields the optimal control $\\hat{\\alpha}^\\varepsilon_t = - Y_t $.\nThe definition of $\\varepsilon$-MFG says that $(m_t)_{0 \\leq t \\leq T}$ must satisfy the following \\textit{consistency} condition.\n$$ m_t = m^{\\varepsilon,\\hat{\\alpha}^\\varepsilon}_t = \\mathcal{L}(X^{\\varepsilon,\\hat{\\alpha}^\\varepsilon}_t | \\tilde{\\mathscr{F}}_t) $$\nAdding this condition to \\eqref{smp_fbsde}, we have the mean-field FBSDE corresponding to MFG with common noise ($\\varepsilon$-MFG)\n\\begin{equation}\\label{smp_mfg}\n\t\\begin{gathered}\n        dX_{t} = -Y_{t} dt + \\sigma dW_{t} + \\varepsilon d\\tilde{W}_{t} \\\\\n        dY_{t} = Z_{t} dW_{t} + \\tilde{Z}_{t}d\\tilde{W}_{t} \\\\\n        X_{0} = \\xi, \\quad Y_{T} = \\d{x}g(X_T,\\mathcal{L}(X_T|\\tilde{\\mathscr{F}}_T)) \n\t\\end{gathered}\n\\end{equation}\nNote that the two systems, \\eqref{smp_fbsde} and \\eqref{smp_mfg}, are different. The FBSDE \\eqref{smp_fbsde} is a classical FBSDE with random coefficients from an \\textit{exogenous} $m$. 
The system \\eqref{smp_mfg}, on the other hand, is a mean-field FBSDE, as it depends on the law of the solution.\n\n\nWe now discuss the solvability of this FBSDE under what we call a weak monotonicity condition on the cost function $g$. The result below is mainly taken from \\cite{ahuja2014}, so we will state the main assumptions and results without proof and refer the reader to \\cite{ahuja2014} and references therein for more detail. We also discuss a slightly more general result, the existence and uniqueness of an FBSDE with monotone functionals, in Appendix \\ref{fbsde_monotone_functionals}, as we will be using such results in our subsequent analysis. We now state additional assumptions on $g$.\n\n\n\n\n\n\\begin{assump}(Lipschitz in $m$)\\label{lip_m} $\\d{x}g$ is Lipschitz continuous in $m$ uniformly in $x$, i.e. there exists a constant $K$ such that\n\t$$ |\\d{x}g(x,m)-\\d{x}g(x,m')| \\leq K \\mathscr{W}_2(m,m') $$\n\tfor all $x \\in \\mathbb{R}, m,m' \\in \\mathscr{P}_{2}(\\mathbb{R})$, where $\\mathscr{W}_2(m,m')$ is the second order Wasserstein metric defined by (\\ref{wass}). 
This is equivalent to the following: for any $x \\in \\mathbb{R}, \\xi,\\xi' \\in \\mathscr{L}^{2}_\\mathscr{F}$\n\t$$ |\\d{x}g(x,\\mathcal{L}(\\xi))-\\d{x}g(x,\\mathcal{L}(\\xi'))| \\leq K (\\mathbb{E}|\\xi-\\xi'|^2)^{\\frac{1}{2}} $$\n\\end{assump}\n\\begin{assump}(Weak monotonicity)\\label{mon} For any $m,m' \\in \\mathscr{P}_{2}(\\mathbb{R})$ and any $\\gamma \\in \\mathscr{P}_{2}(\\mathbb{R}^2)$ with marginals $m$ and $m'$ respectively,\n\t$$ \\int_{\\mathbb{R}^2} \\l[ (\\d{x}g(x,m)-\\d{x}g(y,m'))(x-y) \\r] \\gamma(dx,dy) \\geq 0 $$\n\tEquivalently, for any $\\xi,\\xi' \\in \\mathscr{L}^{2}_\\mathscr{F}$, \n\t\\begin{equation}\\label{monotone}\n\t\t\\mathbb{E}[ (\\d{x}g(\\xi,\\mathcal{L}(\\xi)) - \\d{x}g(\\xi',\\mathcal{L}(\\xi')))(\\xi-\\xi')] \\geq 0\n\t\\end{equation}\n\\end{assump}\n\n\n\nWith the assumptions above, the existence and uniqueness for the FBSDE \\rref{smp_mfg} and the $\\varepsilon$-MFG follow. We refer to \\cite{ahuja2014,ahuja2016forward} for more details.\n\n\\begin{thm}[Well-posedness of $\\varepsilon$-MFG] Under \\ref{lip_x}-\\ref{mon}, there exists a unique solution $(X_t,Y_t,Z_t,\\tilde{Z}_t)_{0\\leq t \\leq T}$ to the FBSDE \\rref{smp_mfg}. In particular, there exists a unique $\\varepsilon$-MFG solution for any initial $\\xi_0 \\in \\mathscr{L}^{2}_{\\mathscr{F}_0}$.\n\\end{thm}\n\n\n\\subsection{Decoupling function, Markov property, and the master equation}\\label{decoupling_markov}\n\n\nA decoupling field of an FBSDE is a possibly random function which expresses the backward process $Y_t$ as a function of the forward process $X_t$. When the coefficients of the FBSDE are deterministic, this function is also deterministic and satisfies a quasilinear PDE. In that case, the FBSDE is said to be Markovian and we call the function a \\textit{decoupling function}.
A decoupling function is useful not only for solving an FBSDE, via the method called the \\textit{four-step scheme} \\cite{ma1994solving}, but also for understanding the connection between the SMP and the HJB approach to stochastic control problems.\n\nFor the $\\varepsilon$-MFG, the existence of a (deterministic) decoupling function is not obvious a priori, particularly in the case of common noise, since for a fixed $m \\in \\mathcal{M}([0,T];\\Ptwo)$ we are in fact dealing with an FBSDE with random coefficients. \nHowever, as the cost function is still a deterministic function of $m$, it is plausible to have such a property if we include, as an additional input, the current distribution of players, or, in the FBSDE context, the conditional distribution of the state process. We state here such a result for the $\\varepsilon$-MFG, which is proven in \\cite{ahuja2016forward}. We also refer to \\cite{delarue2014classical} for a more detailed analysis of a decoupling function for the 0-MFG.\n \n\\begin{thm}\\label{ue_existence} Under \\ref{lip_x}-\\ref{mon}, there exists a deterministic function $\\mathcal{U}^{\\eps}: [0,T] \\times \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R}) \\to \\mathbb{R}$ such that\n \\begin{equation}\\label{decouple_smp} \n \tY^{\\eps}_t = \\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t))\n\\end{equation}\nMoreover, $\\mathcal{U}^{\\eps}$ is uniformly Lipschitz: for all $t \\in [0,T], x,x' \\in \\mathbb{R}, m,m' \\in \\mathscr{P}_{2}(\\mathbb{R})$,\n$$| \\mathcal{U}^{\\eps}(t,x,m) - \\mathcal{U}^{\\eps}(t,x',m')| \\leq C_{K,T} \\l( | x-x'| + \\mathscr{W}_2(m,m') \\r)$$\nwhere $C_{K,T}$ is a constant depending only on $K,T$.\n\\end{thm}\nAs a consequence of the Markov property, the $\\varepsilon$-MFG solution is in feedback form; that is,\n \\begin{equation}\\label{feedback optimal control} \n \\hat{\\alpha}^\\eps_t = -Y^\\varepsilon_t = -\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t)) \n 
\\end{equation}\n \nThe decoupling function $\\mathcal{U}^{\\eps}$ in \\eqref{decouple_smp} can be defined through an FBSDE as follows. For $(s,x,m) \\in [0,T] \\times \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R}) $, we solve the following FBSDE\n\\begin{equation}\\label{mkfbsde_sub}\n\t\\begin{gathered}\n dX_t = -Y_t dt + \\sigma dW_{t} + \\varepsilon d\\tilde{W}_t \\\\\n dY_t = Z_t dW_{t} + \\tilde{Z}_td\\tilde{W}_{t} \\\\\n X_s = x, \\quad Y_T = \\d{x}g(X_T,m^{s,m}_T) \n\t\\end{gathered}\n\\end{equation}\nwhere $(m^{s,m}_t)_{s \\leq t \\leq T}$ is the stochastic flow of the $\\varepsilon$-MFG over $[s,T]$ with initial condition $m$ at time $s$. Denote the solution of \\rref{mkfbsde_sub} by $(X^{s,x,m}_t,Y^{s,x,m}_t,Z^{s,x,m}_t,\\tilde{Z}^{s,x,m}_t)_{s \\leq t \\leq T}$; then $Y^{s,x,m}_s$ is deterministic and we define $\\mathcal{U}^{\\eps}$ as \n$$ \\mathcal{U}^{\\eps}(s,x,m) \\triangleq Y^{s,x,m}_s $$\nWe refer to \\cite{ahuja2016forward} for more details. As with a classical FBSDE, it is natural to ask whether $\\mathcal{U}^{\\eps}$ is a solution to a certain PDE.
It turns out that, under certain conditions, $\\mathcal{U}^{\\eps}:[0,T]\\times \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R}) \\to \\mathbb{R}$ can be characterized as a solution to the following \\textit{master} equation\n\\begin{equation}\\label{master_ue}\n\\begin{split} \\d{t}\\mathcal{U}^{\\eps}(t,x,m)-\\mathcal{U}^{\\eps}(t,x,m)\\d{x}\\mathcal{U}^{\\eps}(t,x,m)+\\frac{\\sigma^{2}+\\varepsilon^2}{2}\\dd{xx}\\mathcal{U}^{\\eps}(t,x,m)- \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{U}^{\\eps}(t,x,m)(\\hat{X})\\,\\mathcal{U}^{\\eps}(t,\\hat{X},m) \\r] \\\\ \\quad + \\frac{\\sigma^2}{2}\\dd{mm}\\mathcal{U}^{\\eps}(t,x,m)(\\hat{X})[\\zeta,\\zeta] + \\frac{\\varepsilon^2}{2}\\dd{mm}\\mathcal{U}^{\\eps}(t,x,m)(\\hat{X})[1,1] +\\varepsilon^2\\hat{\\mb{E}}^0\\l[\\dd{xm}\\mathcal{U}^{\\eps}(t,x,m)(\\hat{X})1 \\r]= 0\n\\end{split}\n\\end{equation}\nwith terminal condition\n$$ \\mathcal{U}^{\\eps}(T,x,m) = \\d{x}g(x,m) $$\nNote that the measure argument is transported by $\\mathcal{U}^{\\eps}(t,\\hat{X},m)$ itself, since the optimal feedback drift of the state is $-\\mathcal{U}^{\\eps}$. We refer to Proposition 4.1 in \\cite{carmona2014master} for a related result. Here $\\hat{X}$ is a lifting random variable, i.e. $\\mathcal{L}(\\hat{X})=m$, and $\\zeta$ is a $\\mathcal{N}(0,1)$-random variable independent of $\\hat{X}$, both of which are related to the notion of differentiation with respect to $m$ as described in Appendix \\ref{derivative}.\n\nThis master equation is an infinite-dimensional HJB-type equation involving derivatives with respect to a probability measure. It was first introduced by Lasry and Lions and was discussed more extensively in \\cite{bensoussan2014master,carmona2014master,delarue2014classical}\\footnote{In particular, see equation (47) in \\cite{carmona2014master}.}.
For the case $\\varepsilon=0$, namely MFG without common noise, Lasry and Lions describe the solution in \\cite{cardaliaguet2010} through the following forward-backward PDE system\n\\begin{equation}\\label{uomo}\n\\begin{aligned}\n \t&\\d{t}u^{0} = \\frac{(\\d{x}u^{0})^{2}}{2}-\\frac{\\sigma^{2}}{2}\\dd{xx}u^{0}, &u^{0}(T,x) = g(x,m^{0}_T) \\\\\n\t&\\d{t}m^{0} = \\d{x}(\\d{x}u^{0}m^{0}) + \\frac{\\sigma^{2}}{2}\\dd{xx}m^{0}, & m^{0}(0,x) = m_0(x) = \\mathcal{L}(\\xi_0)\n\\end{aligned}\n\\end{equation}\nwhere $m^0_t(\\cdot) = m^0(t,\\cdot)$. The first equation is the backward HJB equation for the value function of each player given the flow of distributions $(m^0_t)_{0 \\leq t \\leq T}$. The second equation is the forward Fokker-Planck equation describing the distribution of the players' states given that all the players adopt the strategy \n$$ \\bar{\\alpha}(t,x,m^0_t) = -\\d{x}u^0(t,x) $$\n\nUnder sufficient regularity assumptions on $u^0$, it can be connected with $\\mathcal{U}^0(t,x,m)$ via\n\\begin{equation}\\label{uouox}\n \\mathcal{U}^0(t,x,m^0_t) = \\d{x}u^0(t,x)\n\\end{equation}\n\nWe would like to emphasize the relation \\rref{uouox}, as the terms $\\mathcal{U}^0(t,x,m^0_t), \\d{x}\\mathcal{U}^0(t,x,m^0_t), \\d{m}\\mathcal{U}^0(t,x,m^0_t)$ are the main terms that will appear in our subsequent asymptotic analysis. The relation \\rref{uouox} means that the first two terms can be obtained from the PDE system \\eqref{uomo} describing the $0$-MFG. The last term, which represents the sensitivity of the solution around the optimal path $(m^0_t)_{0 \\leq t \\leq T}$, is new and will be of crucial importance in the asymptotic analysis.
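The relation \\rref{uouox} and the measure derivative appearing alongside it can be made explicit in a simple linear-quadratic computation. The following worked example is our own illustration (the quadratic cost and the closed forms below are assumptions that can be checked by direct substitution; they are not taken from the cited works).

```latex
\\begin{remark}[A linear-quadratic illustration]
Take the quadratic terminal cost $g(x,m) = \\frac{1}{2}(x-\\bar{m})^2$ with $\\bar{m} = \\int_{\\mathbb{R}} y \\, dm(y)$, so that $\\d{x}g(x,m) = x - \\bar{m}$, and write $\\bar{m}_0 = \\int_{\\mathbb{R}} y \\, dm_0(y)$. Direct substitution shows that
$$ u^0(t,x) = \\frac{(x-\\bar{m}_0)^2}{2(1+T-t)} + \\frac{\\sigma^2}{2}\\log(1+T-t) $$
solves the backward HJB equation in \\eqref{uomo} with $u^0(T,x) = g(x,m^0_T)$, while under the feedback $-\\d{x}u^0(t,x) = -(x-\\bar{m}_0)/(1+T-t)$ the mean of $m^0_t$ stays at $\\bar{m}_0$. Solving the Riccati equation $\\eta'_t = \\eta_t^2$, $\\eta_T = 1$, for the ansatz $Y_t = \\eta_t (X_t - \\bar{m}_t)$ gives, for every $\\varepsilon \\geq 0$,
$$ \\mathcal{U}^{\\eps}(t,x,m) = \\frac{x-\\bar{m}}{1+T-t}, \\qquad \\d{x}\\mathcal{U}^0(t,x,m) = \\frac{1}{1+T-t}, \\qquad \\d{m}\\mathcal{U}^0(t,x,m)(z) = -\\frac{1}{1+T-t}, $$
consistent with $\\mathcal{U}^0(t,x,m^0_t) = \\d{x}u^0(t,x)$.
\\end{remark}
```

Even in this simplest case the measure derivative of the decoupling function is a nonzero constant, so the sensitivity term is genuinely present.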
\n\n\\section{Asymptotic analysis}\\label{asymptotic}\n\n\n\\subsection{Linear variational FBSDE} In the previous section, we discussed that, under a linear-convex framework, finding a solution of the $\\varepsilon$-MFG is equivalent to solving the corresponding mean-field FBSDE \\rref{smp_mfg}, and that this system is in fact uniquely solvable under \\ref{lip_x}-\\ref{mon}. Denote its solution by $(X^\\varepsilon_t, Y^\\varepsilon_t, Z^\\varepsilon_t, \\tilde{Z}^\\varepsilon_t)_{0 \\leq t \\leq T}$; that is, it satisfies\n\\begin{equation}\\label{smp_mfg_2}\n\t\\begin{gathered}\n dX^{\\eps}_{t} = -Y^{\\eps}_{t} dt + \\sigma dW_{t} + \\varepsilon d\\tilde{W}_{t} \\\\\n dY^{\\eps}_{t} = Z^{\\eps}_{t} dW_{t} + \\tilde{Z}^{\\eps}_{t}d\\tilde{W}_{t} \\\\\n X^{\\eps}_{0} = \\xi, \\quad Y^{\\eps}_{T} = \\d{x}g(X^{\\eps}_T,\\mathcal{L}(X^{\\eps}_T|\\tilde{\\mathscr{F}}_T)) \n\t\\end{gathered}\n\\end{equation}\n\nSolving this FBSDE yields the $\\varepsilon$-MFG solution by setting $\\hat{\\alpha}^\\varepsilon_t = -Y^{\\eps}_t$. From the discussion in section \\ref{decoupling_markov}, we see that solving the $0$-MFG problem for $(X^0_t,Y^0_t, Z^0_t, \\tilde{Z}^0_t)_{0 \\leq t \\leq T}$ requires us to find $\\mathcal{U}^0$, which by \\rref{uouox} reduces to solving a system of PDEs. However, when adding common noise, we need to solve for $\\mathcal{U}^{\\eps}$, a solution to the master equation \\rref{master_ue}. Instead of solving this infinite-dimensional equation, our goal here is to approximate $(X^{\\eps}_t,Y^{\\eps}_t, Z^{\\eps}_t, \\tilde{Z}^{\\eps}_t)_{0 \\leq t \\leq T}$ by an expansion around $(X^0_t,Y^0_t, Z^0_t, \\tilde{Z}^0_t)_{0 \\leq t \\leq T}$ when the common noise is small.
Equivalently, we would like to consider the limit as $\\varepsilon \\to 0$ of\n$$ \\frac{X^{\\eps}_t-X^{0}_t}{\\varepsilon}, \\quad \\frac{Y^{\\eps}_t-Y^{0}_t}{\\varepsilon}$$\nFirst, we need an additional regularity assumption on $g$:\n\n\\begin{assump}\\label{second} \n $\\d{x}g$ is differentiable in $(x,m)$ with Lipschitz continuous and bounded derivatives. Denote the bound and the Lipschitz constant by the same $K$. Specifically, $\\dd{mx}g$ satisfies, for all $x \\in \\mathbb{R}$, $m,m' \\in \\mathscr{P}_{2}(\\mathbb{R})$, and $\\xi,\\xi' \\in \\mathscr{L}^{2}_\\mathscr{F}$ with laws $m,m'$,\n\\begin{equation}\\label{lip_second}\n\\begin{gathered}\n\\mathbb{E}[ (\\dd{mx}g(x,m)(\\xi))^2]^{\\frac{1}{2}} \\leq K \\\\\n\\mathbb{E}[ (\\dd{mx}g(x,m)(\\xi) - \\dd{mx}g(x,m')(\\xi'))^2]^{\\frac{1}{2}} \\leq K \\|\\xi-\\xi'\\|_2\n\\end{gathered}\n\\end{equation}\n\\end{assump}\n\n\\begin{remark} $\\dd{xm}g$ involves a derivative with respect to a probability measure. We follow the framework introduced by Lasry and Lions in \\cite{cardaliaguet2010}, which is based on the Fr\\'echet derivative of a lifting function defined on a space of random variables.
See Appendix \\ref{derivative} for more details.\n\\end{remark}\n\nLet $\\Delta X^\\varepsilon_t = \\frac{X^{\\eps}_t-X^{0}_t}{\\varepsilon} $ and define $\\Delta Y^{\\eps}_t,\\Delta Z^{\\eps}_t,\\Delta \\tilde{Z}^{\\eps}_t$ similarly; then they satisfy\n \\begin{equation}\n \\label{fbsde_delta}\n \\begin{gathered}\n \t d\\Delta X^{\\eps}_t = - \\Delta Y^{\\eps}_t dt + d\\tilde{W}_t \\\\\n\t d\\Delta Y^{\\eps}_t = \\Delta Z^{\\eps}_t dW_t +\\Delta \\tilde{Z}^{\\eps}_t d\\tilde{W}_t \\\\\n \\Delta X^{\\eps}_0 = 0, \\quad \\Delta Y^{\\eps}_T= \\frac{\\d{x}g(X^\\varepsilon_T, \\mathcal{L}(X^\\varepsilon_T | \\tilde{\\mathscr{F}}_T)) - \\d{x}g(X^0_T, \\mathcal{L}(X^0_T | \\tilde{\\mathscr{F}}_T))}{\\varepsilon}\n\\end{gathered}\n\\end{equation}\n Formally taking $\\varepsilon \\to 0$, we obtain the following \\textit{linear variational FBSDE}\n \\begin{equation}\n \\label{var}\n \\begin{gathered}\n \t dU_t = - V_tdt + d\\tilde{W}_t \\\\\n\t dV_t = Q_tdW_t + \\tilde{Q}_td\\tilde{W}_t \\\\\n U_0 = 0, \\quad V_T= \\dd{xx}g(X^0_T, m^0_T)U_T + \\hat{\\mathbb{E}}^0[ \\dd{xm}g(X^{0}_{T},m^0_{T})(\\hat{X}^0_T)\\hat{U}_{T}] \n\\end{gathered}\n\\end{equation}\n where \n $$m^0_t = \\mathcal{L}(X^0_t | \\tilde{\\mathscr{F}}_t) = \\mathcal{L}(X^0_t) $$\n and $\\hat{X}^0$ and $\\hat{U}$ are identical copies of $X^0$ and $U$ in the copied space $(\\hat{\\Omega}^0 \\times \\t{\\Omega},\\hat{\\mathscr{F}} \\otimes \\tilde{\\mathscr{F}},\\hat{\\mathbb{P}} \\otimes \\tilde{\\mathbb{P}})$ and $\\hat{\\mathbb{E}}^0[\\cdot]$ is the expectation with respect to $\\hat{\\omega}^0$ only.
The copied space is used simply to distinguish a random variable used to represent a law in the \\textit{lifting} functionals from the original random variable (see Appendix \\ref{derivative} for more details).\n We can write this term explicitly as\n \\begin{equation}\\label{expand_integral}\n \\begin{aligned}\n \\hat{\\mathbb{E}}^0[ \\dd{xm}g(X^{0}_{T},m^0_{T})(\\hat{X}^0_T)\\hat{U}_{T}] \n &= \\int_{\\hat{\\Omega}^0} \\dd{xm}g(X^{0}_{T}(\\omega^0),m^0_{T})(\\hat{X}^0_T(\\hat{\\omega}^0))\\hat{U}_{T}(\\hat{\\omega}^0,\\t{\\omega}) d\\hat{\\mathbb{P}}(\\hat{\\omega}^0) \\\\\n & = \\int_{\\Omega^0} \\dd{xm}g(X^{0}_{T}(\\omega^0),m^0_{T})(X^0_T(\\hat{\\omega}^0))U_{T}(\\hat{\\omega}^0,\\t{\\omega}) d\\mathbb{P}^0(\\hat{\\omega}^0) \n \\end{aligned}\n \\end{equation}\nwhere we suppress the $\\t{\\omega}$ in $X^0_T,\\hat{X}^0_T$ as they do not depend on it. One can see that the term $\\hat{\\mathbb{E}}^0[ \\dd{xm}g(X^{0}_{T},m^0_{T})(\\hat{X}^0_T)\\hat{U}_{T}] $ is a mean field term that couples $\\l\\{ (\\hat{X}^0_T,U_T)(\\hat{\\omega}^0,\\cdot); \\hat{\\omega}^0 \\in \\Omega^0 \\r\\}$ together. Note that each path $ \\hat{\\omega}^0 \\in \\Omega^0$ corresponds to a path of an individual player; in other words, the mean field term gives the sensitivity to the first-order change coming from all other players. Our first result in this section shows that this linear variational FBSDE is well-posed.
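To make the mean field coupling concrete, the quadratic cost of our running toy example may help (again an illustration of ours, not taken from the cited works).

```latex
\\begin{remark}[The mean field term in the quadratic case]
For $g(x,m) = \\frac{1}{2}(x-\\bar{m})^2$ with $\\bar{m} = \\int_{\\mathbb{R}} y \\, dm(y)$, we have $\\dd{xx}g \\equiv 1$ and $\\dd{xm}g(x,m)(z) \\equiv -1$, so the terminal condition of \\rref{var} becomes
$$ V_T = U_T - \\hat{\\mathbb{E}}^0[\\hat{U}_{T}] = U_T - \\mathbb{E}[U_T \\,|\\, \\tilde{\\mathscr{F}}_T]. $$
The mean field term thus subtracts the common (conditional-mean) component of the perturbation, and only the idiosyncratic part of $U_T$ enters the terminal condition. In this case one checks directly that $U_t = \\tilde{W}_t$ and $V_t \\equiv 0$ (with $Q_t = \\tilde{Q}_t \\equiv 0$) solve \\rref{var}: the common noise shifts every player's state equally, so the first-order correction to the control vanishes.
\\end{remark}
```

This degenerate correction is special to the quadratic cost; for a general $g$ the well-posedness result is needed.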
\n\n\n\\begin{thm} Assume \\ref{lip_x}-\\ref{second} hold. Then there exists a unique adapted solution $(U_t,V_t,Q_t,\\tilde{Q}_t)_{0 \\leq t \\leq T}$ to the FBSDE \\rref{var} satisfying\n\\begin{equation}\\label{estimate_var}\n\t\\mathbb{E}\\l[ \\sup_{0 \\leq t \\leq T} [ U_t^2 + V_t^2 ] + \\int_0^T [Q_t^2 + \\tilde{Q}_t^2] dt \\r] \\leq C_{K,T}\n\\end{equation}\nwhere $C_{K,T}$ is a constant depending only on $K,T$.\n\\end{thm}\n \\begin{proof} We define a functional $G: \\mathscr{L}^{2}_\\mathscr{F} \\to \\mathscr{L}^{2}_\\mathscr{F}$ by \n $$G(\\xi) = \\dd{xx}g(X^0_T, m^0_T)\\xi + \\hat{\\mathbb{E}}^0\\l[ \\dd{xm}g(X^{0}_{T},m^0_T)(\\hat{X}^0_T)\\hat{\\xi} \\r]$$\nwhere $\\hat{X}^0_T, \\hat{\\xi}$ are identical copies of $X^0_T,\\xi$ in the copied space $(\\hat{\\Omega}^0 \\times \\t{\\Omega},\\hat{\\mathscr{F}} \\otimes \\tilde{\\mathscr{F}},\\hat{\\mathbb{P}} \\otimes \\tilde{\\mathbb{P}})$ and $\\hat{\\mathbb{E}}^0[\\cdot]$ is the expectation with respect to $\\hat{\\omega}^0$ only. Then, by \\ref{lip_x}-\\ref{second}, it follows that $G$ satisfies the functional Lipschitz and monotonicity properties: for any $\\xi_1,\\xi_2 \\in \\mathscr{L}^{2}_{\\mathscr{F}}$ and $A_T \\in \\tilde{\\mathscr{F}}_T$,\n\t\\begin{gather*}\n\t\t \\mathbb{E}[\\mathds{1}_{A_T}(G(\\xi_1)-G(\\xi_2))^2] \\leq K^2\\mathbb{E}[\\mathds{1}_{A_T}(\\xi_1-\\xi_2)^2] \\\\\n\t\t\\mathbb{E}[\\mathds{1}_{A_T}(G(\\xi_1)-G(\\xi_2))(\\xi_1-\\xi_2)] \\geq 0\n\t\\end{gather*}\nThe existence and uniqueness then follow from Theorem 1 in \\cite{ahuja2016forward} (the statement is also provided in Appendix \\ref{fbsde_monotone_functionals}).
\n \\end{proof}\n\nWe are now ready to state our first main result, which establishes the differentiability of the $\\varepsilon$-MFG solution with respect to $\\varepsilon$.\n\n\\begin{thm}\\label{diff} Assume \\ref{lip_x}-\\ref{second} hold, let $(X^{\\eps}_t,Y^{\\eps}_t,Z^{\\eps}_t,\\tilde{Z}^{\\eps}_t)_{0 \\leq t \\leq T}$ denote the solution to \\rref{smp_mfg_2}, and let $(U_t,V_t,Q_t,\\tilde{Q}_t)_{0 \\leq t \\leq T}$ denote the solution to \\rref{var}. Then there exists a constant $C_{K,T}$ depending only on $K,T$ such that\n \\begin{equation}\\label{converge_xy}\n \\mathbb{E} \\sup_{0\\leq t \\leq T} \\l[\\l(\\frac{X^\\varepsilon_t-X^0_t}{\\varepsilon} - U_t\\r)^2 + \\l(\\frac{Y^\\varepsilon_t-Y^0_t}{\\varepsilon} - V_t\\r)^2 \\r] \\leq C_{K,T}\\varepsilon^2\n \\end{equation}\n\\end{thm}\n\n\\begin{proof} See Appendix \\ref{proof_convergence}.\n\\end{proof}\n\nWe are able to obtain the convergence result above in a strong sense (in $\\mathscr{L}^{2}$) mainly due to the monotonicity property of our setting. As seen in \\cite{peng1999,hu1995,ahuja2016forward}, the monotonicity property of an FBSDE enables a proof of existence and uniqueness that relies on standard SDE estimates and probabilistic tools. Similarly, here we are able to apply such tools to prove the convergence in $\\mathscr{L}^{2}$.\n\n\n\\subsection{Approximate Nash equilibrium for $\\varepsilon$-MFG}\\label{approx_sec}\n\nWe have shown that\n$$ \\frac{\\hat{\\alpha}^\\eps_t - \\hat{\\alpha}^0_t}{\\varepsilon} = \\frac{-Y^{\\eps}_t +Y^0_t}{\\varepsilon}\\to -V_t \\qquad\\text{as }\\varepsilon \\to 0 $$\nwhere the limit is in $\\H^{2}([0,T];\\mathbb{R})$.
Using this result, we construct the first-order approximate strategy\n\\begin{equation}\n\\beta^\\varepsilon_t \\triangleq \\hat{\\alpha}^0_t - \\varepsilon V_t, \\quad \\forall t \\in [0,T]\n\\end{equation}\nSince this is a game, an appropriate notion of approximation is required to assess whether $(\\beta^\\varepsilon_t)_{0 \\leq t \\leq T}$ serves as a good approximation. In this case, it is reasonable to assume that each player adopts this approximate strategy and to analyze the gap between the expected cost under this set of strategies and the optimal cost. For an exact Nash equilibrium, this gap is precisely zero by definition, as every player is optimal given the other players' strategies. This notion of approximate optimality is called a $\\delta$-Nash equilibrium.\\footnote{It is conventionally called an $\\varepsilon$-Nash equilibrium. We use the parameter $\\delta$ here to avoid confusion with the parameter $\\varepsilon$ denoting the level of common noise.} In a finite-player game, it is defined as follows. \n\n\\begin{definition} Under the same notation as in section \\ref{mfg_commonnoise}, for the $N$-player game, a set of admissible strategies $(\\alpha^i_t)_{0 \\leq t \\leq T, 1 \\leq i \\leq N}$ is called a $\\delta$-Nash equilibrium if for each $i=1,2,\\dots,N$,\n$$ \\mathcal{J}^i\\l(\\alpha^i | (\\alpha^j)_{j \\neq i} \\r) \\leq \\mathcal{J}^i\\l(\\beta| (\\alpha^j)_{j \\neq i} \\r) + \\delta $$\nfor all $\\beta =(\\beta_t)_{0 \\leq t \\leq T} \\in \\H^{2}([0,T];\\mathbb{R})$, where $\\mathcal{J}^i(\\cdot)$ denotes the cost function of player $i$.\n\\end{definition} \n\nTo go from a finite-player symmetric game to its limit, we formally take $N \\to \\infty$, assume that each player adopts the same strategy, and use a single player as a representative player.
As a result, we can define an approximate Nash equilibrium for MFG analogously:\n\n\\begin{definition}\\label{approx_nash_def} Under the same notation as in section \\ref{mfg_commonnoise}, an admissible strategy $\\alpha = (\\alpha_t)_{0 \\leq t \\leq T} \\in \\H^{2}([0,T];\\mathbb{R})$ is called a $\\delta$-Nash equilibrium for the $\\varepsilon$-MFG problem if\n$$ \\mathcal{J}^\\varepsilon(\\alpha | m^{\\alpha}) \\leq \\mathcal{J}^\\varepsilon(\\beta | m^{\\alpha}) + \\delta $$\nfor all admissible $\\beta = (\\beta_t)_{0 \\leq t \\leq T}$, where $\\mathcal{J}^\\varepsilon(\\cdot)$ denotes the cost function and $m^\\alpha_t$ denotes the conditional law of $X^\\alpha_t$, with $(X^\\alpha_t)_{0 \\leq t \\leq T}$ being the state process corresponding to $\\alpha$.\n\\end{definition} \n\n\\begin{remark} By definition, an $\\varepsilon$-MFG solution is a $0$-Nash equilibrium for the $\\varepsilon$-MFG problem.\n\\end{remark}\n\n\n\nThe notion of an approximate Nash equilibrium is important in the theory of stochastic games with infinite horizon, where for many problems there is no exact Nash equilibrium while a $\\delta$-Nash equilibrium exists for any $\\delta > 0$. It is also a widely used notion in \\textit{algorithmic game theory}, where the main interest is in finding a polynomial-time algorithm that yields an approximate Nash equilibrium when finding an exact Nash equilibrium is computationally expensive. \n\nIn MFG, this notion is used mainly in the study of the relation between an MFG and a symmetric $N$-player stochastic differential game. Recall that the motivation for considering an MFG model lies in its application to finding a good approximate strategy for an $N$-player game when $N$ is large.
In \\cite{carmona2013probabilistic}, Carmona and Delarue show that, under a linear-convex MFG model without common noise, the $0$-MFG strategy is an $\\varepsilon_N$-Nash equilibrium for the corresponding $N$-player game with $\\varepsilon_N \\sim O(N^{-1/(d+4)})$, where $d$ is the dimension of the underlying Euclidean space. See also \\cite{carmona2013weak, huang2007, li2011} for other similar results. The converse question, namely whether the Nash equilibria of the $N$-player games converge to the corresponding MFG solution, is also of interest and is more challenging. We refer interested readers to \\cite{fischer2014} and the references therein for results in this direction, all of which concern MFG models without common noise.\n \nIn this paper, we are only concerned with the model at the limit, with a continuum of players. We are particularly interested in an approximate solution for the $\\varepsilon$-MFG using information from the $0$-MFG solution. Our main result for this section is the following theorem.\n\n\\begin{thm}\\label{approx_nash} Assume \\ref{lip_x}-\\ref{second} hold. For $\\varepsilon > 0$, let $\\hat{\\alpha}^\\eps=(\\hat{\\alpha}^\\eps_t)_{0 \\leq t \\leq T}$ denote the solution to the $\\varepsilon$-MFG and $(U_t,V_t,Q_t,\\tilde{Q}_t)_{0 \\leq t \\leq T}$ denote the solution to the linear variational FBSDE \\rref{var}. Define the first-order approximate strategy $\\beta^\\varepsilon = (\\beta^\\varepsilon_t)_{0 \\leq t \\leq T}$ by\n\\begin{equation}\n\\beta^\\varepsilon_t \\triangleq \\hat{\\alpha}^0_t - \\varepsilon V_t\n\\end{equation}\nThen $\\beta^\\varepsilon$ is an $\\varepsilon^2$-Nash equilibrium for the $\\varepsilon$-MFG.\n\\end{thm}\n\n\\begin{proof} See Appendix \\ref{proof_approx_nash}.\n\\end{proof}\n\n\n\\subsection{Decoupling function of the linear variational FBSDE} Despite being linear, the FBSDE \\rref{var} is not trivial to solve due to the mean field term $\\hat{\\mb{E}}^0\\l[ \\dd{xm}g(X^0_T,m_T^0)(\\hat{X}^0_T)\\hat{U}_T \\r]$.
Nonetheless, we proceed in a similar way as for a classical FBSDE, attempting to find a decoupling function describing $V_t$ as a function of $U_t$. Recall that we have a decoupling function $\\mathcal{U}^{\\eps}$ satisfying the relation\n $$ Y_t^{\\varepsilon} = \\mathcal{U}^{\\eps}(t,X^{\\varepsilon}_t,\\mathcal{L}(X^{\\varepsilon}_t|\\tilde{\\mathscr{F}}_t)) $$\nTherefore, we can write\n\\begin{equation}\\label{ue_decompose}\n\\begin{aligned}\nV_t \t&= \\lim_{\\varepsilon \\to 0} \\frac{Y^{\\eps}_t-Y^0_t}{\\varepsilon} \\\\\n\t&= \\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))- \\mathcal{U}^0(t,X^0_t,\\mathcal{L}(X^0_t))}{\\varepsilon} \\\\\n \t&= \\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))-\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))+\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))-\\mathcal{U}^0(t,X^0_t,\\mathcal{L}(X^0_t))}{\\varepsilon} \\\\\n\t&= \\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))-\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))}{\\varepsilon}+ \\lim_{\\varepsilon \\to 0}\\frac{\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))-\\mathcal{U}^0(t,X^0_t,\\mathcal{L}(X^0_t))}{\\varepsilon} \n\\end{aligned}\n\\end{equation}\nwhere the limit is in $\\mathscr{L}^{2}_{\\mathscr{F}}$. The proposition below shows that the first limit is in fact zero.\n\n\n\\begin{prop}\\label{meanzero_smp} Assume \\ref{lip_x}-\\ref{mon} hold. 
Let $\\mathcal{U}^{\\eps}$ denote the decoupling function of the FBSDE \\rref{smp_mfg} as defined in \\rref{decouple_smp}. Then the following holds:\n\\begin{equation}\\label{ueuo} \n\t\\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^{\\eps}(t,x,m) - \\mathcal{U}^0(t,x,m)}{\\varepsilon} = 0 \n\\end{equation}\nuniformly in $(t,x,m) \\in [0,T] \\times \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R})$.\n\\end{prop}\n\n\\begin{proof} See Appendix \\ref{proof_ueuo}.\n\\end{proof}\n\nThe result above implies that, to first order, the decoupling functions for the $\\varepsilon$-MFG and the $0$-MFG coincide. Combining this with the recent result by Chassagneux et al.~\\cite{delarue2014classical}, which proves the existence of a classical solution $\\mathcal{U}^0$ of equation \\rref{master_ue} with $\\varepsilon=0$, we obtain the decoupling \\textit{functional} for the FBSDE \\rref{var}. To apply such a result, an extra regularity assumption on $g$ is needed.\n\n\\begin{assump}\\label{third} For all $m \\in \\mathscr{P}_{2}(\\mathbb{R})$, the map $(x,z) \\mapsto\\dd{mx}g(x,m)(z)$ is continuously differentiable and satisfies, for all $x,x' \\in \\mathbb{R}$, $m,m' \\in \\mathscr{P}_{2}(\\mathbb{R})$, and $\\xi,\\xi' \\in \\mathscr{L}^{2}_\\mathscr{F}$ with laws $m,m'$,\n\\begin{equation}\\label{lip_second_m}\n\\begin{gathered}\n\\mathbb{E}\\l[\\l( \\d{z}\\dd{xm}g(x,m)(\\xi) - \\d{z}\\dd{xm}g(x',m')(\\xi')\\r)^2\\r]^{\\frac{1}{2}} \\leq K \\l( |x-x'| + \\|\\xi-\\xi'\\|_2 \\r)\n\\end{gathered}\n\\end{equation}\n\\end{assump}\n\n\\begin{remark} The map $(x,z) \\mapsto\\dd{xm}g(x,m)(z)$ is related to the notion of derivative with respect to a probability measure as described in Appendix \\ref{derivative}. For instance, if $f(x,m) = (x-\\int_{\\mathbb{R}} y dm(y))^2$, then the lifting functional is $\\tilde{f}(x,\\xi) = (x- \\mathbb{E}[\\xi])^2$, and its Fr\\'echet derivative in $\\xi$ is represented by the constant random variable $D\\tilde{f}(x,\\xi)= -2(x-\\mathbb{E}[\\xi])$. Thus, $\\d{m}f(x,m)(z) = -2(x-\\int_{\\mathbb{R}} y dm(y))$, which is constant in $z$. 
\n\\end{remark} \n\nThe following theorem gives the decoupling functional for the linear variational FBSDE \\rref{var}.\n\n\\begin{thm}\\label{decouple_uv} Assume \\ref{lip_x}-\\ref{third} hold. Let $(U_t,V_t,Q_t,\\tilde{Q}_t)_{0 \\leq t \\leq T}$ denote the solution to \\rref{var} and let $\\mathcal{U}^{0}$ denote the decoupling function for the $0$-MFG defined in section \\ref{decoupling_markov}. Then \n\\begin{equation}\\label{vt}\nV_t = \\d{x}\\mathcal{U}^0(t,X_t^0, m^0_t)U_t + \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,X_t^0, m^0_t)(\\hat{X}^0_t)\\hat{U}_t] \n\\end{equation}\n\\end{thm}\n\\begin{proof} From \\eqref{ue_decompose} and Proposition \\ref{meanzero_smp}, we have\n$$ V_t = \\lim_{\\varepsilon \\to 0}\\frac{\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t))-\\mathcal{U}^0(t,X^0_t,\\mathcal{L}(X^0_t))}{\\varepsilon} $$\nFrom Theorem 5.5 in \\cite{delarue2014classical}, we have that $\\d{x}\\mathcal{U}^{0}, \\d{m}\\mathcal{U}^{0}$ exist, and they are bounded by Theorem \\ref{ue_existence}. The result then follows from Theorem \\ref{diff} above.\n\\end{proof}\n\nFrom Theorem \\ref{decouple_uv} above, we have decoupled the FBSDE \\rref{var} and obtain the following forward mean-field SDE\n\\begin{equation}\\label{dU}\n dU_t = - \\l[ \\d{x}\\mathcal{U}^0(t,X_t^0, m^0_t)U_t + \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,X_t^0, m^0_t)(\\hat{X}^0_t)\\hat{U}_t] \\r] dt + d\\tilde{W}_t, \\quad U_0 = 0 \n \\end{equation}\n\n\nProposition \\ref{meanzero_smp} has a simple yet interesting implication.
It says that to approximate the $\\varepsilon$-MFG solution to first order, we simply need to apply the $0$-MFG decoupling function along the trajectory $(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t))$, i.e.\n$$ \\hat{\\alpha}^\\varepsilon_t = -Y^{\\eps}_t = -\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t)) \\approx -\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t))$$\nHowever, we usually do not know $\\mathcal{U}^0(t,x,m)$ for all $(t,x,m)$, but only $\\mathcal{U}^0(t,x,m^0_t)$ where $m^0_t = \\mathcal{L}(X^0_t)$ corresponds to the $0$-MFG solution. Full knowledge of $\\mathcal{U}^0$ at every point $(t,x,m)$ would require solving the master equation \\rref{master_ue}, which is an infinite-dimensional problem and is non-trivial. On the other hand, $\\mathcal{U}^0(t,x,m^0_t)$ is simply the gradient of a solution of the forward-backward PDE system \\eqref{uomo} of Lasry and Lions. So unless we know the function $\\mathcal{U}^0(t,x,m)$, obtaining the optimal control at every time $t$ would require re-solving the $0$-MFG problem over $[t,T]$ with initial condition $m_t = \\mathcal{L}(X^\\varepsilon_t| \\tilde{\\mathscr{F}}_t)$, which is computationally expensive. As a result, we need to approximate $\\mathcal{U}^0$ at the current state $(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t))$ by expanding around $(t,X^0_t,m^0_t)$. In fact, it is not necessary to expand around $(t,X^0_t,m^0_t)$ if we can observe $X^\\varepsilon_t$.
In other words, making use of \\rref{ueuo}, we can expand around $(t,X^{\\eps}_t,m^0_t)$ instead and obtain a slightly simpler approximation of $\\hat{\\alpha}^\\varepsilon_t$ as follows:\n\\begin{equation}\\label{vt2}\n\\begin{aligned}\n \\hat{\\alpha}^\\varepsilon_t = -Y^{\\eps}_t &= -\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t)) \\\\\n \t\t\t\t\t&= -\\mathcal{U}^0(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t)) + o(\\varepsilon) \\\\\n\t\t\t\t\t&= -\\mathcal{U}^0(t,X^{\\eps}_t,m^0_t) - \\varepsilon \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,X^{\\eps}_t, m^0_t)(\\hat{X}^0_t)\\hat{U}_t] + o(\\varepsilon)\n \\end{aligned}\n \\end{equation}\nFrom both \\rref{vt} and \\rref{vt2}, we see that the derivative of $\\mathcal{U}^0$ with respect to the $m$-argument at $(t,x,m^0_t)$, in the direction $\\hat{U}_t$, is essential in our asymptotic analysis.\n\n\n\n\n\nHaving established the first-order approximation of the $\\varepsilon$-MFG solution in the form of a solution to a linear variational FBSDE, we now proceed to analyze properties of the solution $(U_t,V_t)_{0 \\leq t \\leq T}$. While the FBSDE \\rref{var} describing them may seem complicated, as it involves both the individual noise and the common noise, this is simply in the nature of the SMP approach, which describes the control in open-loop form (a function of the path) instead of closed-loop feedback form (a function of the state). However, if we only analyze the effect of the perturbation by the common noise, or equivalently if we look at the distribution of $(U_t(\\omega^0,\\cdot),V_t(\\omega^0,\\cdot))_{0 \\leq t \\leq T}$ for a fixed $\\omega^0 \\in \\Omega^0$, then they are simply a pair of centered Gaussian processes.
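The structure above can be sanity-checked numerically in the linear-quadratic toy example used earlier. The sketch below is our own illustration: the closed-form decoupling function $(x-\\bar{m})/(1+T-t)$ and the conditional-mean flow $\\bar{m}_0 + \\varepsilon\\tilde{W}_t$ are assumptions of this example, not results quoted from the references. In this case the common noise shifts every player equally, so the identity $X^{\\eps}_t - X^0_t = \\varepsilon\\tilde{W}_t$ holds exactly, even in an Euler discretization, and the first-order correction vanishes.

```python
import numpy as np

# Monte Carlo sketch for the quadratic terminal cost g(x, m) = (x - mean(m))^2 / 2,
# where (an assumption of this illustration) the decoupling function is
# U^eps(t, x, m) = (x - mean(m)) / (1 + T - t) for every eps, and the conditional
# mean of the eps-MFG flow is mean(m_0) + eps * Wtilde_t.

rng = np.random.default_rng(0)
T, n_steps = 1.0, 200
dt = T / n_steps
sigma, eps = 0.5, 0.1
n_samples = 1000                       # samples of the idiosyncratic scenario

X_eps = np.zeros(n_samples)            # states of the eps-MFG
X_0 = np.zeros(n_samples)              # states of the 0-MFG (same individual noises)
mbar_eps, mbar_0 = 0.0, 0.0            # conditional means of the two flows
Wtilde = 0.0                           # common Brownian path

for n in range(n_steps):
    t = n * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_samples)   # idiosyncratic increments
    dWt = rng.normal(0.0, np.sqrt(dt))             # common increment
    # optimal feedback: alpha = -U^eps(t, X, m) = -(X - mbar) / (1 + T - t)
    X_eps += -(X_eps - mbar_eps) / (1 + T - t) * dt + sigma * dW + eps * dWt
    X_0   += -(X_0   - mbar_0)   / (1 + T - t) * dt + sigma * dW
    mbar_eps += eps * dWt
    Wtilde += dWt

# The perturbation is a pure common-noise shift: U_t = Wtilde_t and V_t = 0.
gap = np.max(np.abs(X_eps - X_0 - eps * Wtilde))
print("max |X^eps - X^0 - eps*Wtilde| =", gap)
```

Up to floating-point rounding, `gap` is zero; a cost with genuine interaction between $x$ and $m$ would instead produce a nonzero correction $V_t$.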
\n\n\\begin{thm}\\label{meanzero} Let $(U_t,V_t,Q_t,\\tilde{Q}_t)_{0 \\leq t \\leq T}$ denote the solution to the FBSDE \\rref{var}. Then, for $\\mathbb{P}^0$-a.e. $\\omega^0$, $(U_t(\\omega^0,\\cdot), V_t(\\omega^0,\\cdot))_{0 \\leq t \\leq T}$ is a pair of mean-zero Gaussian processes on $(\\t{\\Omega},\\{\\tilde{\\mathscr{F}}_t\\}_{0 \\leq t \\leq T}, \\tilde{\\mathbb{P}})$.\n\\end{thm} \n\n\n\\begin{proof}\nRecall that $\\mathscr{F}^0$ denotes the $\\sigma$-algebra on the first component of the product sample space, which is independent of the common noise filtration. Let $A: [0,T] \\times \\mathscr{L}^{2}_{\\mathscr{F}^0} \\to \\mathscr{L}^{2}_{\\mathscr{F}^0}$ denote the linear operator \n$$ A(t,\\xi) = \\d{x}\\mathcal{U}^0(t,X_t^0, m^0_t)\\xi + \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,X_t^0, m^0_t)(\\hat{X}^0_t)\\hat{\\xi}] $$\nThen we can view \\eqref{dU} as a stochastic evolution equation in $\\mathscr{L}^{2}_{\\mathscr{F}^0}$\n$$ dU_t = A(t,U_t)dt + d\\tilde{W}^H_t, \\quad U_0=0$$\nwhere $(\\tilde{W}^H_t)_{0 \\leq t \\leq T}$ denotes the natural lifting of the common noise $(\\tilde{W}_t)_{0 \\leq t \\leq T}$ to a Gaussian process in $\\mathscr{L}^{2}_\\mathscr{F}$ along the constant direction. That is, if $e_1=1$ denotes the constant random variable, then for any $\\xi \\in \\mathscr{L}^{2}_{\\mathscr{F}}$, \n$$ \\l\\langle \\tilde{W}^H_t, \\xi \\r\\rangle = \\l\\langle e_1, \\xi \\r\\rangle \\tilde{W}_t = \\mathbb{E}[\\xi] \\tilde{W}_t $$\nBy the uniform Lipschitz property of $\\mathcal{U}^{0}$ (see Theorem \\ref{ue_existence}), we have that $\\| \\d{x}\\mathcal{U}^0(t,X^0_t,m^0_t) \\|_\\infty$ and $\\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,X^0_t,m^0_t)(\\hat{X}^0_t)^2]^{\\frac{1}{2}}$ are bounded. Thus, $A(t,\\cdot)$ is a bounded linear operator, uniformly in $t$, and hence generates a strongly continuous evolution family $(S(s,t))_{0 \\leq s \\leq t \\leq T}: \\mathscr{L}^{2}_{\\mathscr{F}^0} \\to \\mathscr{L}^{2}_{\\mathscr{F}^0}$. 
Therefore, by the variation of constants formula (see Theorem 5.4 in \\cite{da2014stochastic}) and $U_0=0$, we have \n$$ U_t = S(0,t)U_0 + \\int_0^t S(s,t)e_1 d\\tilde{W}_s = \\int_0^t S(s,t)e_1 d\\tilde{W}_s $$\nThus, $U_t$ is a mean-zero Gaussian process with respect to the common noise and the result follows since $V_t$ is a linear function of $U_t$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\iffalse\n \\subsubsection{SPDE for $\\eta(t,x,\\t{\\omega})$} Notice that $\\eta$ involves the law of $U_t$, and $U_t$ depends on $\\eta$ in its dynamics \\rref{dU}. To derive an equation for $\\eta$, we need to resort to the master equation. Having shown \\rref{converge_xy}, we write\n$$ \\eta(t,x,\\t{\\omega}) = \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,x,m_t^0)(\\hat{X}^0_t)\\hat{U}_t]= \\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^0(t,x,m^{\\eps}_t(\\t{\\omega})) - \\mathcal{U}^0(t,x,m^0_t)}{\\varepsilon} $$\nwhere $m^{\\eps}_t = \\mathcal{L}(X^{\\eps}_t| \\tilde{\\mathscr{F}}_t), m^0_t = \\mathcal{L}(X^{0}_t)$. 
Now we apply It\\=o's lemma on each term and use the master equation \\rref{master_ue};\n\\begin{align*}\n d\\mathcal{U}^{0}(t,x,m^{\\eps}_t) &= \\l[ \\d{t}\\mathcal{U}^0(t,x,m^{\\eps}_t)- \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{U}^0(t,x,m^{\\eps}_t)(\\hat{X}^\\varepsilon_t)(\\mathcal{U}^{\\eps}_x(t,\\hat{X}^\\varepsilon_t,m^{\\eps}_t)) \\r]+ \\frac{\\sigma^2}{2}\\dd{mm}\\mathcal{U}^0(t,x,m^{\\eps}_t)(\\hat{X}^\\varepsilon_t)[\\zeta,\\zeta] \\r] dt \\\\\n &\\quad + \\varepsilon \\hat{\\mb{E}}^0[ \\d{m}\\mathcal{U}^0(t,x,m^{\\eps}_t)] d\\tilde{W}_t \\\\\n &=\\l[ \\mathcal{U}^{0}(t,x,m^{\\eps}_t)\\d{x}\\mathcal{U}^0(t,x,m^{\\eps}_t)-\\frac{\\sigma^{2}}{2}\\dd{xx}\\mathcal{U}^0(t,x,m^{\\eps}_t) \\r] dt +\\varepsilon \\hat{\\mb{E}}^0\\l[ \\d{m}\\mathcal{U}^0(t,x,m^{\\eps}_t) \\r] d\\tilde{W}_t \n\\end{align*}\nand\n$$d\\mathcal{U}^{0}(t,x,m^{0}_t) = \\l[ \\mathcal{U}^{0}(t,x,m^{0}_t)\\d{x}\\mathcal{U}^0(t,x,m^{0}_t)-\\frac{\\sigma^{2}}{2}\\dd{xx}\\mathcal{U}^0(t,x,m^{0}_t) \\r] dt$$\nThus, taking the difference, we arrive at the SPDE for $\\eta$;\n\\begin{equation}\\label{spde_eta}\n d\\eta(t,x) = \\l[ \\d{x}\\eta(t,x) \\mathcal{U}^0(t,x,m^0_t) + \\eta(t,x)\\d{x}\\mathcal{U}^0(t,x,m^0_t)-\\frac{\\sigma^2}{2}\\dd{xx}\\eta(t,x) \\r] dt - \\hat{\\mb{E}}^0\\l[ \\d{m}\\mathcal{U}^0(t,x,m^0_t)(\\hat{X}^0_t) \\r] d\\tilde{W}_t \n \\end{equation}\nDue to the fact that $m^{\\eps}(0,x) = m^{0}(0,x) = m_0(x)$, we have the initial condition\n $$ \\eta(0,x) = 0 $$\n Notice that the diffusion term in the dynamic of $\\eta$ is deterministic due to the absence of a common noise. We define $w:[0,T] \\times \\mathbb{R} \\to \\mathbb{R}$ by\n $$ w(t,x) \\triangleq \\hat{\\mb{E}}^0\\l[ \\d{m}\\mathcal{U}^0(t,x,m^0_t)(\\hat{X}^0_t) \\r]$$\n \n \\subsubsection{PDE for $w(t,x)$} To fully describe the dynamic for $\\eta$, we need to find a PDE describing $w(t,x)$. 
First, notice that \n $$w(t,x) = \\hat{\\mb{E}}^0\\l[ \\d{m}\\mathcal{U}^0(t,x,m^0_t)(\\hat{X}^0_t)\\cdot 1 \\r] = \\lim_{\\varepsilon \\to 0} \\frac{\\mathcal{U}^0(t,x,m^{0}_t(\\cdot-\\varepsilon))(\\hat{X}^0_t+\\varepsilon) - \\mathcal{U}^0(t,x,m^{0}_t)(\\hat{X}^0_t)}{\\varepsilon}$$ \nThat is, $w(t,\\cdot)$ measures the sensitivity of the solution of $0$-MFG PDE system at time $t$ with respect to the spatial shift of the equilibrium distribution $m^0_t$. To find the PDE of $w$, we follow the same step as we did for $\\eta$ by making use of the master equation \\rref{master_ue} and the limit description above. In fact, both of them satisfies the same PDE; \n\\begin{align*}\n \\d{t}\\mathcal{U}^0(t,x,m^{0}_t(\\cdot-\\varepsilon)) &= \\mathcal{U}^{0}(t,x,m^{0}_t(\\cdot-\\varepsilon))\\d{x}\\mathcal{U}^0(t,x,m^{0}_t(\\cdot-\\varepsilon))-\\frac{\\sigma^{2}}{2}\\dd{xx}\\mathcal{U}^0(t,x,m^{0}_t(\\cdot-\\varepsilon)) \\\\\n \\d{t}\\mathcal{U}^0(t,x,m^{0}_t) &= \\mathcal{U}^{0}(t,x,m^{0}_t)\\d{x}\\mathcal{U}^0(t,x,m^{0}_t)-\\frac{\\sigma^{2}}{2}\\dd{xx}\\mathcal{U}^0(t,x,m^{0}_t)\n \\end{align*}\n Therefore, we have\n $$ \\d{t}w(t,x) = \\d{x}\\mathcal{U}^{0}(t,x,m^0_t)w(t,x)+\\mathcal{U}^{0}(t,x,m^0_t)\\d{x}w(t,x)-\\frac{\\sigma^2}{2}\\dd{xx}w(t,x) $$\nwith terminal condition derived from that of $\\mathcal{U}^{0}$;\n$$ w(T,x) = \\hat{\\mb{E}}^0[ \\dd{xm}g(x,m^0_T)(\\hat{X}^0_T) ] $$\n\n \n\\subsubsection{Explicit solution}\n \n Being a linear SPDE, we can easily solve \\rref{spde_eta} by mean of Duhamel's Principle. 
Let $P^sf$ denote a solution to the problem;\n \\begin{equation}\\label{pde}\n \\begin{aligned}\n &\\d{t}\\phi(t,x) = \\d{x}\\phi(t,x) \\mathcal{U}^0(t,x,m^0_t) + \\phi(t,x)\\d{x}\\mathcal{U}^0(t,x,m^0_t)-\\frac{\\sigma^2}{2}\\dd{xx}\\phi(t,x), \\quad (t,x) \\in [s,T] \\times \\mathbb{R} \\\\\n &\\phi(s,x) = f(s,x), \\quad x \\in \\mathbb{R}\n \\end{aligned}\n \\end{equation}\n Then, from the initial condition $\\eta(0,x)=0$, it follows that\n \\begin{equation}\\label{eta_explicit}\n \\eta(t,x) = (P^0\\eta)(t,x) + \\int_0^t (P^sw)(t,x) d\\tilde{W}_s = \\int_0^t (P^sw)(t,x) d\\tilde{W}_s \n \\end{equation}\nWe let\n$$ \\psi^s(t,x) \\triangleq (P^sw)(t,x), \\quad (t,x) \\in [0,T] \\times \\mathbb{R} $$\nCombining everything together, we have an explicit solution of FBSDE \\rref{var}\n$$ U_t = - \\int_0^t \\int_0^s \\beta_{s,t} \\psi^r(s,X^0_s) d\\tilde{W}_r ds - \\int_0^t \\beta_{s,t} d\\tilde{W}_s $$\n$$V_t = \\d{x}\\mathcal{U}^0(t,X_t^0, m^0_t)U_t +\\int_0^t \\psi^s(t,X^0_t) d\\tilde{W}_s $$\nEquivalently, by relation \\rref{uouox}, we can write these in terms of the solution $(u^0,m^0)$ of the system of PDE \\rref{uomo} from $0$-MFG plus a family of functions $\\{\\psi^s\\}_{0 \\leq s \\leq T}$.\n\\begin{equation}\\label{explicit_sol}\n\\begin{gathered}\nU_t = - \\int_0^t \\int_0^s e^{-\\int_s^t \\dd{xx}u^0(k,X_k^0)dk} \\psi^r(s,X^0_s) d\\tilde{W}_r ds - \\int_0^t e^{-\\int_s^t \\dd{xx}u^0(k,X_k^0)dk} d\\tilde{W}_s \\\\\nV_t = \\dd{xx}u^0(t,X^0_t)U_t +\\int_0^t \\psi^s(t,X^0_t) d\\tilde{W}_s \n\\end{gathered}\n\\end{equation}\n\n\n\\subsection{Covariance function}\\label{covariance}\n\nHaving shown that the solution to \\rref{var} is a centered Gaussian process with respect to the common noise, we now compute its covariance function making use of the explicit solution we just derived. 
Recall our notation defined earlier;\n$$ \\gamma_t \\triangleq \\d{x}\\mathcal{U}^0(t,X_t^0, m^0_t),\\quad \\beta_{s,t} \\triangleq e^{- \\int_s^t \\gamma_r dr }, \\quad \\eta(t,x,\\t{\\omega}) \\triangleq \\hat{\\mb{E}}^0[\\d{m}\\mathcal{U}^0(t,x,m_t^0)(\\hat{X}^0_t)\\hat{U}_t],\\quad \\eta_t \\triangleq \\eta(t,X^0_t,\\t{\\omega}) $$\nWe define the following covariance functions\n$$ \\varphi^U_t \\triangleq \\tilde{\\mb{E}}\\l[ U_t^2 \\r], \\quad \\varphi^{V}_t \\triangleq \\tilde{\\mb{E}}\\l[ V_t^2 \\r], \\quad \\varphi^{UV}_t \\triangleq \\tilde{\\mb{E}}\\l[ U_tV_t \\r] $$\n$$\\varphi^{U\\eta}_t \\triangleq \\tilde{\\mb{E}}\\l[ U_t\\eta_t \\r], \\quad \\varphi^{V\\eta}_t \\triangleq \\tilde{\\mb{E}}\\l[ V_t\\eta_t \\r], \\quad \\varphi^{\\eta}_t \\triangleq \\tilde{\\mb{E}}\\l[ \\eta^2_t \\r] $$\nwhere $\\tilde{\\mb{E}}[\\cdot]$ is the expectation with respect to the common Brownian motion only. Observe that the full covariance functions can be derived easily in term of these three functions. Using It\\=o's lemma and a decoupling relation \\rref{vt}, it is easy to check that \n\\begin{equation}\\label{full_cov}\n\\begin{aligned}\n \\d{t}\\varphi^U_t &= -\\gamma_t\\varphi^U_t -\\varphi^{U\\eta}_t + 1, \\quad \\varphi^U_0 = 0 \\\\\n\\varphi^{UV}_t &=\\gamma_t\\varphi^U_t + \\varphi^{V\\eta}_t \\\\\n\\varphi^{V}_t &= \\gamma^2_t \\varphi^U_t + 2\\gamma_t\\varphi^{U\\eta}_t + \\varphi^{\\eta}_t\\\\\n\\varphi^{V\\eta}_t &= \\gamma_t\\varphi^{U\\eta}_t + \\varphi^{\\eta}_t\n\\end{aligned}\n\\end{equation}\nThus, every terms can be expressed in terms of $\\varphi^{U\\eta}_t$ and $\\varphi^{\\eta}_t$. 
Using the explicit form of $U_t, \\eta_t$ in \\rref{uv_explicit} and \\rref{eta_explicit};\n$$ U_t = - \\int_0^t \\beta_{s,t} \\eta_s ds - \\int_0^t \\beta_{s,t} d\\tilde{W}_s, \\qquad \\eta_t = \\int_0^t \\psi^s(t,X_t^0) d\\tilde{W}_s $$\nwe have\n\\begin{align*}\n \\varphi^{U\\eta}_t &= - \\int_0^t \\beta_{s,t} \\tilde{\\mb{E}}[\\eta_s \\eta_t] ds - \\tilde{\\mb{E}}\\l[ \\eta_t \\int_0^t \\beta_{s,t} d\\tilde{W}_s \\r] \\\\\n &= - \\int_0^t \\beta_{s,t} \\int^s_0 \\psi^r(t,X^0_t) \\psi^r(s,X^0_s) dr ds - \\int_0^t \\beta_{s,t}\\psi^s(t,X^0_t) ds \\\\\n &= - \\int_0^t e^{-\\int_s^t \\dd{xx}u^0(k,X_k^0)dk} \\int^s_0 \\psi^r(t,X^0_t) \\psi^r(s,X^0_s) dr ds - \\int_0^t e^{-\\int_s^t \\dd{xx}u^0(k,X_k^0)dk} \\psi^s(t,X^0_t) ds\n\\end{align*}\nand \n$$ \\varphi^\\eta_t = \\int^t_0 (\\psi^s(t,X^0_t))^2 ds $$\nThus, the covariance structure of $(U_t(\\omega,\\cdot),V_t(\\omega,\\cdot))_{0 \\leq t \\leq T}$ can be fully described in terms of $u^0_{xx}$, where $u^0$ is a solution of the $0$-MFG system \\rref{uomo}, and $\\psi^s$, a solution to PDE \\rref{pde}. \n\n\\fi\n\n\n\n\n\\section{Connection to Dynamic Programming Principle}\\label{dpp2}\n\nIn this section, we discuss the connection between the SMP and DPP approaches and present asymptotic results obtained from the DPP side. Please note that the results here are largely formal and are intended to relate our analysis to the more familiar DPP framework. \n\n\\subsection{FBSPDE for $\\varepsilon$-MFG} We follow the same method used to derive the system of PDEs \\rref{uomo} for the $0$-MFG. We write a forward-backward system in which the forward equation describes the evolution of the equilibrium distribution through the Fokker-Planck equation, while the backward equation is the HJB equation for the value function.\n\n\n\n\n\nRecall from equation \\eqref{feedback optimal control} that the optimal control of the $\\varepsilon$-MFG in feedback form is given by $-\\mathcal{U}^{\\eps}(s,x,m)$. 
We can then define the value function $\\mathcal{V}^{\\eps}: [0,T]\\times \\mathbb{R} \\times \\mathscr{P}_{2}(\\mathbb{R}) \\to \\mathbb{R}$ as \n\\begin{align*}\n \\mathcal{V}^{\\eps}(t,x,m) &= \\inf_{(\\alpha_{s})_{t \\leq s \\leq T} \\in \\H^{2}([t,T];\\mathbb{R})} \\mathbb{E}\\l[ \\int_{t}^{T} \\alpha_{s}^{2} ds + g(X^{\\eps}_{T},m^{\\eps}_{T}) \\Big| X^{\\eps}_{t} = x, m^{\\eps}_{t} = m\\r] \\\\\n \t&= \\mathbb{E}\\l[ \\int_{t}^{T} \\mathcal{U}^{\\eps}(s,X^\\varepsilon_s,m^\\varepsilon_s)^{2} ds + g(X^{\\eps}_{T},m^{\\eps}_{T}) \\Big| X^{\\eps}_{t} = x, m^{\\eps}_{t} = m\\r]\n\\end{align*}\nwhere $m^{\\eps}_t = \\mathcal{L}(X^{\\eps}_t | \\tilde{\\mathscr{F}}_t)$. The value function above represents the minimum expected cost from $t$ to $T$ given the state of the game at time $t$ (the current state of a player and the current distribution of all players). Suppose that $\\mathcal{V}^{\\eps}$ is sufficiently regular; then, by the Dynamic Programming Principle, it can be shown to satisfy\n\n\\begin{equation}\\label{master}\n\\begin{split} \\d{t}\\mathcal{V}^{\\eps}(t,x,m)-\\frac{(\\d{x}\\mathcal{V}^{\\eps}(t,x,m))^{2}}{2}+\\frac{\\sigma^{2}+\\varepsilon^2}{2}\\dd{xx}\\mathcal{V}^{\\eps}(t,x,m)- \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m)(\\hat{X})(\\d{x}\\mathcal{V}^{\\eps}(t,\\hat{X},m)) \\r] \\\\ \\quad + \\frac{\\sigma^2}{2}\\dd{mm}\\mathcal{V}^{\\eps}(t,x,m)(\\hat{X})[\\zeta,\\zeta] + \\frac{\\varepsilon^2}{2}\\dd{mm}\\mathcal{V}^{\\eps}(t,x,m)(\\hat{X})[1,1] +\\varepsilon^2\\hat{\\mb{E}}^0\\l[\\dd{xm}\\mathcal{V}^{\\eps}(t,x,m)(\\hat{X})1 \\r]= 0\n\\end{split}\n\\end{equation}\nwith terminal condition\n$$ \\mathcal{V}^{\\eps}(T,x,m) = g(x,m) $$\nwhere $\\hat{X}$ is a lifting random variable, i.e. $\\mathcal{L}(\\hat{X})=m$, and $\\zeta$ is a $\\mathcal{N}(0,1)$-random variable independent of $\\hat{X}$. 
The derivative with respect to the $m$-argument is based on the framework proposed by Lasry and Lions in \\cite{cardaliaguet2010} as introduced in the previous section. We refer the reader to Appendix \\ref{derivative} and references therein for more details.\n\nThe connection between the SMP and the HJB approach for a general stochastic control problem is well understood. That is, the backward process is the gradient of the value function, at least when the value function is sufficiently regular. A more general statement can be made in terms of sub/supergradients and viscosity solutions (see Ch.~5 in \\cite{young1999} for instance). Similarly for the $\\varepsilon$-MFG, we have\n\\begin{equation}\\label{ueve}\n\\mathcal{U}^{\\eps}(t,x,m) = \\d{x}\\mathcal{V}^{\\eps}(t,x,m)\n\\end{equation}\nThis relation can be proved in a similar way by applying It\\=o's lemma to $\\d{x}\\mathcal{V}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t | \\tilde{\\mathscr{F}}_t))$ and using \\rref{master} to show that it satisfies \\rref{smp_mfg}. 
See Theorem 6.4.7 in \\cite{pham2009} for the proof when there is no argument $m$, and section 6 in \\cite{carmona2014master} for a generalized It\\=o's lemma with a probability measure argument.\n\n\nFrom \\eqref{feedback optimal control} and \\eqref{ueve}, we have that the $\\varepsilon$-MFG Nash equilibrium strategy is \n\\begin{equation}\\label{control}\n \\hat{\\alpha}^\\varepsilon_t = -Y^{\\eps}_t = -\\mathcal{U}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t)) = -\\d{x}\\mathcal{V}^{\\eps}(t,X^{\\eps}_t,\\mathcal{L}(X^{\\eps}_t|\\tilde{\\mathscr{F}}_t))\n \\end{equation}\nLet $m^{\\eps}_t = m^{\\hat{\\alpha}^\\eps}_t$ denote the corresponding conditional law; then it satisfies the following stochastic Fokker-Planck equation \n\\begin{equation}\\label{FP1}\n\t dm^{\\eps}(t,x) = \\l(\\d{x}(\\d{x}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)m^{\\eps}_t) + \\frac{\\sigma^{2}+\\varepsilon^{2}}{2}\\dd{xx}m^{\\eps}_t\\r)dt -\\varepsilon \\d{x}m^{\\eps}_t\\;d\\tilde{W}_{t}\n\\end{equation}\nNow we define a random value function along $(m^{\\eps}_t)_{0 \\leq t \\leq T}$ by letting\n$$ u^{\\eps}(t,x,\\t{\\omega}) = \\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t(\\t{\\omega})) $$\nUsing the master equation \\rref{master}, it follows that \n\\begin{align*}\n du^{\\eps}(t,x) &= \\Bigg( \\d{t}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t) + \\frac{\\sigma^2}{2}\\dd{mm}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})[\\zeta,\\zeta] + \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})(\\d{x}\\mathcal{V}^{\\eps}(t,\\hat{X},m^{\\eps}_t)) \\r] \\\\\n &\\qquad \\qquad +\\frac{\\varepsilon^2}{2}\\dd{mm}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})[1,1] \\Bigg) dt - \\varepsilon \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] d\\tilde{W}_{t} \\\\\n & = \\l( \\frac{(\\d{x}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t))^{2}}{2}-\\frac{\\sigma^{2}}{2}\\dd{xx}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t) - \\frac{\\varepsilon^{2}}{2}\\Big( 
\\dd{xx}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)- 2 \\hat{\\mb{E}}^0\\l[\\dd{xm}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r]\\Big)\\r)dt \\\\\n &\\qquad\\qquad - \\varepsilon \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] d\\tilde{W}_{t} \\\\\n & = \\l( \\frac{(\\d{x}u^{\\eps}(t,x))^{2}}{2}-\\frac{\\sigma^{2}}{2}\\dd{xx}u^{\\eps}(t,x) - \\frac{\\varepsilon^{2}}{2}\\Big( \\dd{xx}u^{\\eps}(t,x)- 2 \\hat{\\mb{E}}^0\\l[\\dd{xm}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r]\\Big)\\r)dt \\\\\n &\\qquad\\qquad- \\varepsilon \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] d\\tilde{W}_{t} \\\\\n &= \\l( \\frac{(\\d{x}u^{\\eps}(t,x))^{2}}{2}-\\frac{\\sigma^{2}}{2}\\dd{xx}u^{\\eps}(t,x) - \\frac{\\varepsilon^{2}}{2}\\Big( \\dd{xx}u^{\\eps}(t,x)- 2 \\d{x}v^{\\eps}(t,x)\\Big)\\r)dt - \\varepsilon v^{\\eps}(t,x) d\\tilde{W}_{t}\n \\end{align*}\nwhere\n$$ v^{\\eps}(t,x) \\triangleq \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] $$\nCombining with (\\ref{control}) and (\\ref{FP1}), we arrive at a system of FBSPDEs\n\\begin{equation}\\label{ueme}\n\\begin{aligned}\n\tdm^{\\eps}(t,x) &= \\l(\\d{x}(\\d{x}u^{\\eps}m^{\\eps}) + \\frac{\\sigma^{2}+\\varepsilon^2}{2}\\dd{xx}m^{\\eps}\\r)dt -\\varepsilon \\d{x}m^{\\eps}\\;d\\tilde{W}_{t} \\\\\n \tdu^{\\eps}(t,x) & = \\l( \\frac{(\\d{x}u^{\\eps})^{2}}{2}-\\frac{\\sigma^{2}}{2}\\dd{xx}u^{\\eps} - \\frac{\\varepsilon^{2}}{2}\\Big( \\dd{xx}u^{\\eps}- 2\\d{x}v^{\\eps}\\Big)\\r)dt - \\varepsilon v^{\\eps} d\\tilde{W}_{t}\n\\end{aligned}\n\\end{equation}\nwith boundary conditions\n$$ m^{\\eps}(0,x) = m_0(x) = \\mathcal{L}(\\xi_0) , \\qquad u^{\\eps}(T,x) = g(x,m^{\\eps}_T) $$\n\nSimilarly to equation \\eqref{master_ue}, we have a verification theorem for \\rref{ueme} which states that if we have a sufficiently regular solution $(u^{\\eps},m^{\\eps},v^{\\eps})$ to the FBSPDE \\rref{ueme} 
above, then the $\\varepsilon$-MFG solution is given by\n$$ \\hat{\\alpha}_t^\\varepsilon = -\\d{x}u^{\\eps}(t,X^{\\eps}_t). $$ \nWe refer the reader to section 4.2 in \\cite{bensoussan2014master} for such a result. The tuple $(u^{\\eps},m^{\\eps},v^{\\eps})$ then gives, respectively, the value function, the distribution of the optimal state process, and the sensitivity of the valuation with respect to a spatial shift of the distribution process. Consequently, despite the fact that the derivation of \\rref{ueme} above requires the regularity of a solution of the master equation, it actually contains the same information as the master equation. One represents the value function at time $t$ as a function of $(x,m)$, while the other represents it as a function of $(x,\\t{\\omega})$, where $\\t{\\omega}$ is a common Brownian motion path. Note also that when $\\varepsilon=0$, as expected, we recover the system of PDEs \\rref{uomo} of Lasry and Lions. \n\n\n\\subsection{Asymptotic analysis} We now consider the case when $\\varepsilon$ is small and the approximation of ($u^{\\eps},m^{\\eps}$) around ($u^{0},m^{0}$). In particular, we want to find a pair of random functions $( \\delta^{m}, \\delta^{u})$ from the following expansion\n\\begin{align*}\nm^{\\eps}(t,x,\\t{\\omega}) &= m^{0}(t,x) + \\varepsilon \\delta^{m}(t,x,\\t{\\omega}) + o(\\varepsilon) \\\\\nu^{\\eps}(t,x,\\t{\\omega}) &= u^{0}(t,x) + \\varepsilon \\delta^{u}(t,x,\\t{\\omega}) + o(\\varepsilon)\n\\end{align*}\nLet us proceed formally. 
We write\n$$ \\delta^{u,\\eps}(t,x) = \\frac{u^{\\eps}(t,x) - u^{0}(t,x)}{\\varepsilon}, \\quad \\delta^{m,\\eps}(t,x) = \\frac{m^{\\eps}(t,x) - m^{0}(t,x)}{\\varepsilon} $$\nThen from the dynamics of ($u^{\\eps},m^{\\eps}$) and ($u^{0},m^{0}$) in \\rref{ueme}, it follows that\n\\begin{align*}\n\td \\delta^{m,\\eps}(t,x)\t&= \\l[ \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{m,\\eps}+\\frac{(m^{\\eps} \\d{x}u^{\\eps} - m^{0}\\d{x}u^{0} )_{x}}{\\varepsilon} + O(\\varepsilon) \\r] dt - \\d{x}m^{\\eps} d\\tilde{W}_t \\\\\n\t\t\t\t&= \\l[ \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{m,\\eps}+\\d{x}( \\delta^{m,\\eps}\\d{x}u^{0}+m^{0}\\d{x} \\delta^{u,\\eps}) + O(\\varepsilon) \\r] dt - \\d{x}m^{\\eps} d\\tilde{W}_t\n\\end{align*}\nand \n\\begin{align*}\n\td \\delta^{u,\\eps}(t,x) \t&= \\l[ \\frac{(\\d{x}u^{\\eps})^{2}-(\\d{x}u^{0})^{2}}{2\\varepsilon} - \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{u,\\eps} + O(\\varepsilon) \\r] dt - \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] d\\tilde{W}_t \\\\\n\t\t\t\t&= \\l[ \\d{x} \\delta^{u,\\eps}\\d{x}u^{0} - \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{u,\\eps} + O(\\varepsilon) \\r] dt - \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)(\\hat{X})1 \\r] d\\tilde{W}_t \n\\end{align*}\nwith boundary conditions\n$$ \\delta^{m,\\eps}(0,x) = 0, \\qquad \\delta^{u,\\eps}(T,x) = \\frac{g(x,m^{\\eps}_T)-g(x,m^{0}_T)}{\\varepsilon} $$\nFormally taking the limit as $\\varepsilon \\to 0$, we obtain the system of SPDEs describing the $\\varepsilon$-correction terms,\n\\begin{equation}\\label{fbspde1}\n\\begin{aligned}\n\td \\delta^{m}(t,x)\t&= \\l[ \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{m}+\\d{x}( \\delta^{m}\\d{x}u^{0}+m^{0}\\d{x} \\delta^{u}) \\r] dt - \\d{x}m^{0} d\\tilde{W}_t \\\\\n\td \\delta^{u}(t,x) \t&= \\l[ \\d{x} \\delta^{u}\\d{x}u^{0} - \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{u} \\r] dt - \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{V}^{0}(t,x,m^{0}_t)(\\hat{X})1 \\r] d\\tilde{W}_t 
\n\\end{aligned}\n\\end{equation}\nwith boundary conditions given by\n\\begin{equation}\\label{boundary}\n \\delta^{m}(0,x) = 0,\\qquad \\delta^{u}(T,x) = \\l\\langle \\d{m}g(x,m^{0}_T)(\\cdot), \\delta^{m}_T \\r\\rangle\n \\end{equation}\nwhere $\\d{m}g$ denotes the derivative with respect to the probability measure and $\\l\\langle \\cdot, \\cdot \\r\\rangle$ denotes the inner product in $\\mathscr{L}^{2}$. In this case, we are using a different notion of derivative, namely the G\\^{a}teaux directional derivative, as it is more appropriate for this approach. We refrain from discussing it in detail here and instead refer the reader to \\cite{bensoussan2014master} for more details.\n\nNormally, in the BSPDE or BSDE setting, the diffusion part of the backward process is not specified and is part of the solution, so as to ensure its adaptedness. In other words, the FBSPDE above should be written as\n\\begin{equation}\\label{fbspde2}\n\\begin{aligned}\n\td \\delta^{m}(t,x)\t&= \\l[ \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{m}+\\d{x}( \\delta^{m}\\d{x}u^{0}+m^{0}\\d{x} \\delta^{u}) \\r] dt - \\d{x}m^{0} d\\tilde{W}_t \\\\\n\td \\delta^{u}(t,x) \t&= \\l[ \\d{x} \\delta^{u}\\d{x}u^{0} - \\frac{\\sigma^{2}}{2}\\dd{xx} \\delta^{u} \\r] dt - \\delta^{v} d\\tilde{W}_t \n\\end{aligned}\n\\end{equation}\nand its solution would be a tuple of adapted processes $( \\delta^{m}, \\delta^{u}, \\delta^{v})$. The FBSPDE \\rref{fbspde2} is essentially the first-order correction term of the $\\varepsilon$-MFG solution from the DPP approach. \n\nTo see the connection between the two approaches, we express the stochastic functions $( \\delta^{u}, \\delta^{m})$ in terms of $(U_t,V_t)$ and the derivatives of $\\mathcal{U}^{0}$, the main objects from the SMP approach. The relation between $ \\delta^{u}, \\delta^{v}$ and those from the SMP is clear from the relation $\\d{x}\\mathcal{V}^{\\eps}(t,x,m) = \\mathcal{U}^{\\eps}(t,x,m)$. 
That is,\n$$ \\d{x} \\delta^{u}(t,x) = \\lim_{\\varepsilon \\to 0} \\frac{\\d{x}\\mathcal{V}^{\\eps}(t,x,m^{\\eps}_t)-\\d{x}\\mathcal{V}^{0}(t,x,m^{0}_t)}{\\varepsilon} = \\hat{\\mb{E}}^0\\l[\\d{m}\\mathcal{U}^{0}(t,x,m^{0}_t)(\\hat{X})(\\hat{U}_t) \\r] $$\nThe relation between $ \\delta^{m}$ and those from the SMP is less straightforward, mainly due to a difference in the notion of derivatives with respect to the law. For a test function $\\phi$, we write\n\\begin{align*}\n \\l\\langle \\phi, \\delta^{m}(t,\\cdot) \\r\\rangle &= \\lim_{\\varepsilon\\to0} \\frac{\\l\\langle \\phi, m^{\\eps}_t \\r\\rangle - \\l\\langle \\phi, m^{0}_t \\r\\rangle }{\\varepsilon} \\\\\n &= \\lim_{\\varepsilon\\to 0} \\frac{ \\mathbb{E}[\\phi(X^{\\eps}_t) | \\tilde{\\mathscr{F}}_t] - \\mathbb{E}[\\phi(X^{0}_t)] }{\\varepsilon} \\\\\n &= \\mathbb{E}[ \\d{x}\\phi(X^{0}_t)U_t | \\tilde{\\mathscr{F}}_t] \\\\\n &= \\int \\d{x}\\phi(x)u m^{X^{0},U}_t(x,u) dxdu \\\\\n &= \\l\\langle \\d{x}\\phi, \\int_\\mathbb{R} u m^{X^{0},U}_t(\\cdot,u) du \\r\\rangle \\\\\n &= \\l\\langle \\phi, - \\d{x}\\int_\\mathbb{R} um^{X^{0},U}_t(\\cdot,u) du \\r\\rangle\n \\end{align*} \n where $m_t^{X^{0},U}$ denotes the joint law of $(X^{0}_t,U_t)$ conditional on $\\tilde{\\mathscr{F}}_t$. That is, \n $$ \\delta^{m}(t,x) = - \\d{x}\\int_\\mathbb{R} um^{X^{0},U}_t(x,u) du. $$ \n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nHow do we estimate the effective constitutive properties of\ncomposite materials? This question has long been considered in the\ncontext of acoustics, elastodynamics and electromagnetics\n\\c{Akhlesh_book,Milton_book}. An upsurge in interest in this topic\nhas been prompted by the recent proliferation of\n\\emph{metamaterials}, both as theoretical concepts and as physical\nentities. 
An operational definition of a metamaterial is an artificial composite material which exhibits properties that are not exhibited by its component materials, or at least not exhibited to the same extent \\c{Walser}. Metamaterials are often exemplified by\nhomogenized composite materials (HCMs). Typically, metamaterials are\nassociated with constitutive parameter regimes which have not been\naccessible conventionally. For example, in relation to\nelastodynamics, metamaterials with negative mass density \\c{Mei} and\nnegative stiffness \\c{Lakes_01,Fang} have recently been described,\nwhereas negatively--refracting metamaterials have been the subject\nof intense research activity lately in electromagnetics \\c{Rama}.\n\nWe focus here on the effective elastodynamic properties of a\ncomposite material. The HCM considered arises in the\nlong--wavelength regime from component materials which are generally\northotropic, viscoelastic and randomly distributed as oriented\nellipsoidal particles.\n Our study is based on the\nstrong--property--fluctuation theory (SPFT) which~---~by allowing\nfor higher--order characterizations of the distributional statistics\nof the component materials~---~provides a multi--scattering approach\nto homogenization \\c{Ryzhov}. This distinguishes the SPFT from\ncertain well--known self--consistent approaches to homogenization\n\\cite{Hill_1963,Hill_1965,Bud,Sabina_Willis}, although we note that\nmore sophisticated self--consistent theories have been proposed in\nrecent years \\c{Kim,Kanaun,Wang_Qin}. While the general character of\nthe SPFT approach to homogenization is reminiscent of\nmulti--scattering theories \\c{Twersky,Linton,Avila}, the SPFT\nprovides an estimate of the HCM's constitutive parameters whereas\nmulti--scattering approaches generally provide effective wavenumbers\n\\c{Datta,Varadan,Maurel}. 
A distinctive feature of\n the SPFT is that it incorporates a\nrenormalized formulation which can accommodate relatively strong\nvariations in the constitutive parameters of the component\nmaterials. This is because the perturbative scheme for averaging the\nrenormalized equations in the SPFT is based on parameters which\nremain small even when there are strong fluctuations in the\nconstitutive parameters describing the component materials. In\ncontrast, conventional variational methods of homogenization\n\\c{HS_1962,Kroner,Willis,Talbot_Willis,Hashin_83} yield bounds which\nare widely separated when there are large differences between the\nconstitutive parameters of the component materials.\n\n\nThe SPFT has been widely utilized to estimate the electromagnetic\nconstitutive parameters of HCMs \\c{TK81,Genchev,ML95,spft_form,Cui}.\n Acoustic\n\\c{Zhuck_acoustics} and elastodynamic \\cite{spft_zhuck1} versions of\nthe theory have also been developed. The general framework for the\nelastodynamic SPFT, applicable to linear anisotropic HCMs, was\nestablished in 1999 \\cite{spft_zhuck1}, but no numerical studies\nhave been reported hitherto. In the following we apply this theory\nto examine numerically the case wherein the component materials are\ngenerally orthotropic materials which are distributed as oriented\nellipsoidal particles. 
Prior to undertaking our numerical study, we\nderive new theoretical results in two areas:\n\\begin{itemize}\n\\item[(i)] in the\nimplementation of a\n two--point covariance function which characterizes the\n distributions of the component materials, and\n \\item[(ii)] in the\n simplification of certain integrals in order to make them\n amenable to numerical computation.\n \\end{itemize}\nThe SPFT estimates of the HCM constitutive parameters are\nillustrated by means of numerical examples, and results are\ncompared to those provided by the Mori--Tanaka mean--field approach\n\\cite{Mori-Tanaka,Benveniste}.\n\n\n\n\n\\section{Theory} \\l{theory_section}\n\n\n\\subsection{Preliminaries}\n\n\nIn applying the elastodynamic SPFT formalism, it is expedient to\nadopt both matrix and tensor representations \\c{Lakhjcm}. The\ncorrespondence between the two representations is described in\nAppendix~A.\n Matrixes are denoted by double underlining and bold\nfont, while vectors are in bold font with no underlining. Tensors\nare represented in normal font with their components indicated by\nsubscripts (for $n$th--order tensors, with $n \\leq 4$) or subscripts\nand superscripts (for eighth--order tensors). All tensor indexes\nrange from $1$ to $3$. The $pq$th component of a matrix $\\*A$ is\nwritten as $\\left[ \\, \\*A \\, \\right]_{pq}$, while the $p$th component of\na vector $\\#b$ is written as $\\left[ \\, \\#b \\, \\right]_p$.\n A repeated index\nimplies summation. Thus, we have the matrix component $\\big[ \\, \\*A\n\\cdot \\*B \\,\\big]_{pr}=\n \\big[\\, \\*A \\, \\big]_{pq}\\big[ \\, \\*B \\,\n\\big]_{qr}$, vector component $\\big[ \\, \\*A \\cdot \\#b\\big]_{p}=\n \\big[ \\, \\*A \\, \\big]_{pq} \\big[\\#b\\big]_{q}$,\nand scalar $\\#a \\cdot \\#b = \\left[ \\#a \\right]_p \\left[ \\#b \\right]_p$. 
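The index conventions just stated lend themselves to a quick numerical check. The following sketch is our own illustration (NumPy and the random test values are assumptions, not part of the paper); each `einsum` subscript string spells out exactly the repeated-index summation used in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))  # 3x3 matrices, playing the roles of A and B
B = rng.normal(size=(3, 3))
a = rng.normal(size=3)       # vectors a and b
b = rng.normal(size=3)

# [A . B]_{pr} = [A]_{pq} [B]_{qr}: summation over the repeated index q
AB = np.einsum('pq,qr->pr', A, B)
assert np.allclose(AB, A @ B)

# [A . b]_{p} = [A]_{pq} [b]_{q}
Ab = np.einsum('pq,q->p', A, b)
assert np.allclose(Ab, A @ b)

# a . b = [a]_p [b]_p
s = np.einsum('p,p->', a, b)
assert np.isclose(s, a @ b)
```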
The\nadjoint, determinant and trace of a matrix $\\*A$ are denoted by\n$\\mbox{adj} \\left( \\, \\*A\\, \\right)$, $\\mbox{det} \\left( \\, \\*A \\, \\right)$ and $\\mbox{tr} \\left( \\,\n\\*A \\, \\right)$, respectively.\nThe prefixes $\\mbox{Re}$ and $\\mbox{Im}$ are used to signify\nreal and imaginary parts, respectively, while $i = \\sqrt{-1}$.\n\nThe SPFT is developed in the frequency domain wherein the stress,\nstrain, and displacement have an implicit $\\exp \\left( - i \\omega t\n\\right)$ dependency on time $t$, $\\omega$ being the angular\nfrequency. Thus,\n these are generally complex--valued quantities.\nIn order to retrieve the corresponding time--domain quantities, the\ninverse temporal Fourier transform operation must be performed,\nalthough one must bear in mind that homogenization is essentially a\nlong--wavelength procedure \\c{Akhlesh_book_intro,M08}.\n The possibility of\nviscoelastic behaviour is accommodated through complex--valued\nconstitutive parameters. Stiffness tensors\nare taken to exhibit the usual symmetries\n\\begin{equation}\\label{eq_symmetry}\nC_{lmpq} = C_{mlpq} = C_{lmqp} = C_{pqlm},\n\\end{equation}\nwhilst noting that the symmetry $\\mbox{Im}\\, C_{lmpq} = \\mbox{Im} \\,\nC_{pqlm}$\n has not been proved generally \\cite{Cerveny_2006}.\nOn account of the symmetries \\r{eq_symmetry},\n the matrix counterpart of tensor\n$C_{lmpq}$~---~namely,\n the 9$\\times$9\nstiffness matrix $\\*C$~---~is symmetric.\\footnote{Alternatively, in\nlight of \\r{eq_symmetry}, the stiffness tensor may be represented\nby a symmetric 6$\\times$6 matrix \\c{Ting}, but the following\npresentation of the SPFT is more straightforwardly presented in\nterms of the 9$\\times$9 matrix representation.}\n\n\n\\subsection{Component materials}\n\nWe consider the homogenization of a two--component composite material. 
The\ncomponent materials, which are themselves homogeneous, are randomly\ndistributed throughout the mixture as identically--oriented,\nconformal, ellipsoidal particles. For convenience, the principal\naxes of the ellipsoidal particles are taken to be aligned with the\nCartesian axes. Thus, the surface of each ellipsoidal particle\n may be parameterized by the vector\n\\begin{equation} \\l{r_shape}\n\\#r^{(e)} = \\eta \\*U \\cdot\\hat{\\#r},\n\\end{equation}\nwhere $\\eta$ is a linear measure of size, $\\hat{\\#r}$ is the radial\nunit vector and the diagonal shape matrix\n\\begin{equation}\\label{U_shape}\n\\*U=\\frac{1}{\\sqrt[3]{abc}}\\left(\n\\begin{array}{ccc}\na & 0 & 0 \\\\\n0 & b & 0 \\\\\n0 & 0 & c \\\\\n\\end{array}\n\\right),\\qquad \\qquad (a,b,c \\in\\mathbb{R}^{+}).\n\\end{equation}\n\n\n\n Let the space\n occupied by the composite material be\n denoted by $V$. It is partitioned into parts $V^{(1)}$ and\n$V^{(2)}$ containing the two component materials labelled as `1' and\n`2', respectively. The distributional statistics of the component\nmaterials are described in terms of moments of the characteristic\nfunctions\n\\begin{equation}\n\\Phi^{(\\ell)} (\\#r) = \\left\\{ \\begin{array}{ll} 1, & \\qquad \\#r \\in\nV^{(\\ell)},\\\\ & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad (\\ell=1,2) . 
\\\n 0, & \\qquad \\#r \\not\\in V^{(\\ell)}, \\end{array} \\right.\n\\end{equation}\n The volume fraction of component material $\\ell$, namely $f^{(\\ell)}$, is given by\nthe first statistical moment of\n $\\Phi^{(\\ell)}$;\n i.e., \\begin{equation} \\l{vf}\n \\langle \\, \\Phi^{(\\ell)}(\\#r) \\, \\rangle = f^{(\\ell)}, \\qquad \\qquad \\left( \\ell = 1,2 \\right),\n \\end{equation}\nwhere the angular brackets denote the ensemble average of the\nquantity enclosed.\n Notice that\n $f^{(1)} + f^{(2)} = 1$.\nThe second statistical moment of $\\Phi^{(\\ell)}$\n constitutes a two--point covariance function.\nThe physically--motivated form \\c{TKN82}\n\\begin{equation}\n\\langle \\, \\Phi^{(\\ell)} (\\#r) \\, \\Phi^{(\\ell)} (\\#r')\\,\\rangle =\n\\left\\{\n\\begin{array}{lll}\n\\langle \\, \\Phi^{(\\ell)} (\\#r) \\, \\rangle \\langle \\Phi^{(\\ell)}\n(\\#r')\\,\\rangle\\,, & & \\hspace{10mm} | \\, \\*U^{-1}\\cdot \\left( \\#r - \\#r' \\right) | > L \\,,\\\\ && \\hspace{25mm} \\\\\n\\langle \\, \\Phi^{(\\ell)} (\\#r) \\, \\rangle \\,, && \\hspace{10mm}\n | \\, \\*U^{-1} \\cdot \\left( \\#r -\n\\#r' \\right) | \\leq L\\,,\n\\end{array}\n\\right.\n \\l{cov}\n\\end{equation}\nis adopted, where $L>0$ is the correlation length, which is taken to\nbe much smaller than the elastodynamic wavelengths but larger than\nthe sizes of the component particles. In the context of the\nelectromagnetic SPFT, the specific form of the covariance function\nhas only a secondary influence on estimates of HCM constitutive\nparameters,\n for a range of\nphysically--plausible covariance functions \\c{MLW01b}.\n\n\n The elastodynamic properties of the\n component materials `1' and `2' are characterized by\ntheir stiffness tensors $C_{lmpq}^{(1)}$ and $C_{lmpq}^{(2)}$ (or,\nequivalently, their 9$\\times$9 stiffness matrixes $\\*C^{(\\ell)}$,\n$\\ell \\in \\left\\{ 1,2 \\right\\}$), and their densities $\\rho^{(1)}$\nand $\\rho^{(2)}$. 
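The shape matrix \r{U_shape} and the step--function covariance \r{cov} are simple to encode numerically. The following Python sketch is illustrative only (the function name, shape parameters and sample volume fraction are our own choices); it checks that $\mbox{det}\,\*U = 1$ and that the covariance reduces to $f^{(\ell)}$ at coincident points and to $(f^{(\ell)})^2$ at separations well beyond the correlation length:

```python
import numpy as np

# Sketch of the two-point covariance (cov) with the shape matrix (U_shape).
def covariance(r, r_prime, f, U_inv, L):
    """<Phi(r) Phi(r')> for one component material with volume fraction f."""
    d = np.linalg.norm(U_inv @ (np.asarray(r, float) - np.asarray(r_prime, float)))
    return f if d <= L else f * f

a, b, c = 5.0, 1.5, 1.0                               # illustrative ellipsoid shape parameters
U = np.diag([a, b, c]) / (a * b * c) ** (1.0 / 3.0)   # eq. (U_shape)
U_inv = np.linalg.inv(U)

assert np.isclose(np.linalg.det(U), 1.0)              # U is volume-normalized
assert covariance([0, 0, 0], [0, 0, 0], 0.3, U_inv, 1.0) == 0.3
assert covariance([0, 0, 0], [10, 0, 0], 0.3, U_inv, 1.0) == 0.3 ** 2
```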
The stiffness tensors exhibit the symmetries\nrepresented in \\r{eq_symmetry}. The component materials are generally\northotropic \\c{Ting} in the following developments;\ni.e., the stiffness matrix\nfor each component material may be expressed as\n\\begin{equation} \\l{ortho_form2}\n\\*C^{(\\ell)} = \\left( \\begin{array}{lll} \\*{\\mathcal{M}}^{(\\ell)} & \\*0\n& \\*0 \\\\ \\*0 & \\*{\\mathcal{D}}^{(\\ell)} & \\*{\\mathcal{D}}^{(\\ell)}\n\\\\\n\\*0 & \\*{\\mathcal{D}}^{(\\ell)} & \\*{\\mathcal{D}}^{(\\ell)}\n\\end{array} \\right), \\qquad \\qquad (\\ell=1,2),\n\\end{equation}\nwhere $ \\*{\\mathcal{M}}^{(\\ell)}$ and $\\*{\\mathcal{D}}^{(\\ell)}$\nare symmetric and diagonal 3$\\times$3 matrixes, respectively, and\n$\\*0$ is the 3$\\times$3 null matrix. For the degenerate case in\nwhich the component material `$\\ell$' is isotropic, we have\n\\begin{equation} \\l{ortho_form}\n\\left.\n\\begin{array}{l}\n \\left[ \\*C^{(\\ell)} \\right]_{11} = \\left[ \\*C^{(\\ell)} \\right]_{22} =\n\\left[ \\*C^{(\\ell)} \\right]_{33} = \\lambda^{(\\ell)} + 2 \\mu^{(\\ell)} \\vspace{4pt} \\\\\n \\left[ \\*C^{(\\ell)} \\right]_{12} = \\left[ \\*C^{(\\ell)} \\right]_{13} =\n\\left[ \\*C^{(\\ell)} \\right]_{23} = \\lambda^{(\\ell)} \\vspace{4pt} \\\\\n \\left[ \\*C^{(\\ell)} \\right]_{44} = \\left[ \\*C^{(\\ell)} \\right]_{55} =\n\\left[ \\*C^{(\\ell)} \\right]_{66} = \\mu^{(\\ell)}\n\\end{array}\n\\right\\}, \\qquad \\qquad (\\ell=1,2),\n\\end{equation}\nwhere $\\lambda^{(\\ell)}$ and $\\mu^{(\\ell)}$ are the Lam\\'{e}\nconstants \\c{MNLS}.\n\n\\subsection{Comparison material}\n\n A central concept in the SPFT is that of a homogeneous\n \\emph{comparison material}. 
This provides the initial ansatz for an iterative\nprocedure that\n delivers a succession of SPFT estimates of the\n constitutive properties of the HCM.\nAs such, the comparison material represents the lowest--order SPFT\nestimate of the HCM.\n Since we have taken the component materials\n to be generally orthotropic and\ndistributed as ellipsoidal particles, the comparison material is\ngenerally orthotropic\\footnote{In fact, the comparison material\nwould also be orthotropic if (i) the component materials were\nisotropic but distributed as aligned ellipsoidal particles; or (ii)\nthe component materials were orthotropic but distributed as\nspherical particles.}. While this is a physically--reasonable\nassumption here, we remark that the form of the HCM stiffness tensor\nmay be derived via certain asymptotic approaches to homogenization\n\\c{Parnell}.\n The orthotropic comparison material (OCM) is characterized by its\nstiffness tensor $C^{(ocm)}_{lmpq}$ and density $\\rho^{(ocm)}$, with\n$C^{(ocm)}_{lmpq}$ exhibiting the symmetries \\r{eq_symmetry}.\n\nThe SPFT formulation exploits the spectral Green function of the\nOCM, which may be expressed in 3$\\times$3 matrix form as\n\\begin{equation}\\label{G_eq}\n\\*G^{(ocm)}(\\#k) = \\Big[k^2\n\\*a(\\#{\\hat{k}})-\\omega^2\\rho^{(ocm)}{\\*I} \\Big]^{-1},\n\\end{equation}\nwith $\\*I$ being the 3$\\times $3 identity matrix and\n$\\*a(\\#{\\hat{k}})$ the $3 \\times 3$ matrix with entries\n\\begin{equation}\\label{aform}\n\\big[\\,\\*a (\\#{\\hat{k}}) \\,\\big]_{mp}=\\frac{k_l C_{lmpq}^{(ocm)}\nk_q}{k^2}.\n\\end{equation}\nHerein, $\\#k = k \\#{\\hat{k}}$ $ \\equiv \\left( k_1, k_2, k_3 \\right)$ with\n$ \\#{\\hat{k}} = ( \\sin \\theta \\cos \\phi$, $\\sin \\theta \\sin \\phi$,\n$\\cos \\theta )$. 
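A numerical sketch of \r{G_eq} and \r{aform} may be helpful. Here an isotropic material stands in for the OCM (the Lam\'e values, density and frequency are merely illustrative, not SPFT outputs); for $\#{\hat{k}}$ along the $x_3$ axis, $\*a(\#{\hat{k}})$ must then reduce to $\mbox{diag}\left( \mu, \mu, \lambda + 2\mu \right)$:

```python
import numpy as np

# Illustrative isotropic stand-in for the OCM (assumed values).
lam, mu, rho, omega = 21.73e9, 29.2e9, 2.23e3, 2 * np.pi * 1e6

def C_iso(l, m, p, q):
    # isotropic stiffness tensor, obeying the symmetries of eq. (eq_symmetry)
    return (lam * (l == m) * (p == q)
            + mu * ((l == p) * (m == q) + (l == q) * (m == p)))

def a_matrix(khat):
    # [a(khat)]_mp = k_l C_lmpq k_q / k^2, cf. eq. (aform)
    return np.array([[sum(khat[l] * C_iso(l, m, p, q) * khat[q]
                          for l in range(3) for q in range(3))
                      for p in range(3)] for m in range(3)])

def G_ocm(k, khat):
    # spectral Green function, cf. eq. (G_eq)
    return np.linalg.inv(k**2 * a_matrix(khat) - omega**2 * rho * np.eye(3))

khat = np.array([0.0, 0.0, 1.0])
assert np.allclose(a_matrix(khat), np.diag([mu, mu, lam + 2 * mu]))
```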
For use later on in \\S\\ref{sec_SPFT}, we remark\nthat $\\*G^{(ocm)} (\\#k)$ may be conveniently expressed as\n\\begin{equation}\n\\*G^{(ocm)} (\\#k) = \\frac{1}{ \\Delta(\\#k)} \\, \\*N(\\#k),\n\\end{equation}\nwith the 3$\\times$3 matrix function\n\\begin{equation}\n\\*N(\\#k) =\n k^4 \\mbox{adj} \\left[ \\, \\*a( \\#{\\hat{k}} ) \\, \\right] +\\omega^2 \\rho^{(ocm)}\nk^2 \\left\\{ \\, \\*a( \\#{\\hat{k}} )-\\mbox{tr} \\left[ \\, \\*a( \\#{\\hat{k}} ) \\right]\n\\, \\*I \\, \\right\\}+ \\left( \\omega^2 \\rho^{(ocm)} \\right)^2 \\*I\n\\end{equation}\nand the scalar function\n\\begin{equation} \\l{Delta_def}\n\\Delta(\\#k) = k^6 \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] - \\omega^2\n\\rho^{(ocm)}\n k^4 \\mbox{tr}\n\\left\\{ \\mbox{adj} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\right\\} + \\left( \\omega^2\n\\rho^{(ocm)} \\right)^2 k^2 \\, \\mbox{tr} \\left[ \\, \\*a(\\#{\\hat{k}}) \\right] - \\left(\n\\omega^2 \\rho^{(ocm)} \\right)^3.\n\\end{equation}\n\n\nA key step in the SPFT~---~one which facilitates the calculation of\n $C^{(ocm)}_{lmpq}$ and $\\rho^{(ocm)}$~---~\nis the imposition of the conditions \\cite[eqs.\n(2.72),(2.73)]{spft_zhuck1}\n\\begin{eqnarray}\n&&\\left\\langle \\Phi^{(1)} (\\#r) \\,\\xi_{lmpq}^{(1)} + \\Phi^{(2)}\n(\\#r)\\, \\xi_{lmpq}^{(2)} \\, \\right\\rangle\n =0, \\l{s1}\\\\\n&& \\left\\langle \\Phi^{(1)} (\\#r) \\left( \\rho^{(1)}-\\rho^{(ocm)} \\right) +\n\\Phi^{(2)} (\\#r) \\left( \\rho^{(2)}-\\rho^{(ocm)} \\right) \\right\\rangle\n =0, \\l{s2}\n\\end{eqnarray}\nin order to remove certain secular terms.\nIn \\r{s1}, the quantities\n\\begin{equation}\\label{xi_eq}\n\\xi_{lmpq}^{(\\ell)}=\\left( C_{lmst}^{(\\ell)} - C_{lmst}^{(ocm)} \\right)\n\\eta^{(\\ell)}_{stpq}, \\qquad \\quad (\\ell = 1,2),\n\\end{equation}\nwhere $\\eta^{(\\ell)}_{stpq}$ is given implicitly via\n\\begin{eqnarray}\n&&e^{(\\ell)}_{pq} = \\eta^{(\\ell)}_{pqst} f^{(\\ell)}_{st}, \\l{edef}\\\\\n&&f^{(\\ell)}_{ij} = e^{(\\ell)}_{ij}+S_{ijlm} \\left( 
C_{lmpq}^{(\\ell)} -\nC_{lmpq}^{(ocm)} \\right) e^{(\\ell)}_{pq}, \\l{fdef}\n\\end{eqnarray}\nand the renormalization tensor\n\\begin{eqnarray}\\label{ellipse_Sint2}\nS_{rstu} &=& \\frac{1}{8\\pi} \\int_{0}^{2\\pi}d\\phi\n\\int_{0}^{\\pi}d\\theta \\sin \\theta \\times \\nonumber \\\\ &&\n\\frac{(\\*U^{-1}\\cdot \\#{\\hat{k}})_t \\left\\{ (\\*U^{-1}\\cdot\n\\#{\\hat{k}})_s \\big[\\, \\*a^{-1} (\\*U^{-1}\\cdot \\#{\\hat{k}}) \\,\n\\big]_{ru}+(\\*U^{-1}\\cdot \\#{\\hat{k}})_r \\big[ \\, \\*a^{-1}\n(\\*U^{-1}\\cdot \\#{\\hat{k}})\\, \\big]_{su}\n\\right\\} }{(\\*U^{-1}\\cdot\n\\#{\\hat{k}})\\cdot(\\*U^{-1}\\cdot \\#{\\hat{k}} )}.\n\\end{eqnarray}\n\n\nUpon substituting \\r{xi_eq}--\\r{fdef} into \\r{s1}, exploiting\n\\r{vf}, and after some algebraic manipulations, we obtain\n\\begin{equation}\\label{fullxi}\nf^{(1)}\\left[ \\left( \\*C^{(1)}-\\*C^{(ocm)} \\right)^{\\dagger}+\\*S\\,\n\\right]^{\\dagger} = - f^{(2)}\\left[ \\left( \\*C^{(2)}-\\*C^{(ocm)}\n\\right)^{\\dagger}+\\*S\\, \\right]^{\\dagger},\n\\end{equation}\nwherein the 9$\\times$9 matrix equivalents\n of the tensors $C_{lmpq}^{(ocm)}$ and $S_{rstu}$ (namely, $\\*C^{(ocm)}$ and $\\*S$) have been\n introduced and $^{\\dagger}$ denotes the matrix operation defined\nin Appendix~A. The\n OCM stiffness matrix may be extracted from \\r{fullxi} as\n\\begin{equation}\\label{1stnewit}\n\\*C^{(ocm)}=\\*C^{(1)}+f^{(2)} \\left[\\,\n\\mbox{\\boldmath$\\*\\tau$}+(\\*C^{(2)}-\\*C^{(ocm)})\\cdot \\*S \\,\n\\right]^{\\dagger} \\cdot\\left( \\*C^{(1)}-\\*C^{(2)}\\right),\n\\end{equation}\nwhere ${\\mbox{\\boldmath$\\*\\tau$}}$ is the $9\\times 9$ matrix\nrepresentation of the identity tensor $\\tau_{rstu}$, as described in\nAppendix~A. 
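Relation \r{1stnewit} is implicit, since $\*C^{(ocm)}$ appears on both sides (and also inside $\*S$). A scalar caricature of a Jacobi--type fixed--point iteration conveys the idea; the update map $F$ below is invented purely for illustration and does not reproduce the actual $9\times 9$ matrix relation:

```python
import numpy as np

# Scalar caricature of solving an implicit relation C = F(C) by fixed-point
# iteration, echoing eq. (1stnewit). C1, C2 and the map F are stand-ins.
f1, f2 = 0.5, 0.5
C1, C2 = 4.98e9, 80.13e9            # scalar stand-ins for C^(1), C^(2) (Pa)

def F(C_ocm):
    # invented update map: the C^(ocm)-dependent "S" makes the relation implicit
    S = 0.3 / C_ocm
    return C1 + f2 * (C2 - C1) / (1.0 + S * (C2 - C_ocm))

C_ocm = C1                           # initial ansatz: component material 1
for _ in range(200):
    C_new = F(C_ocm)
    if abs(C_new - C_ocm) < 1e-9 * abs(C_new):
        break
    C_ocm = C_new

assert C1 < C_ocm < C2               # the converged estimate lies between the components
```

The same stopping criterion (successive iterates agreeing to a prescribed tolerance) applies to the matrix-valued iteration, with a matrix norm replacing the absolute value.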
This nonlinear relation \\r{1stnewit} can be readily\nsolved for $\\*C^{(ocm)}$ by numerical procedures, such as the Jacobi\nmethod \\c{Bagnara}.\n\nBy combining \\r{vf} with \\r{s2}, it follows immediately that the OCM\ndensity is the volume average of the densities of the component\nmaterials `1' and `2'; i.e.,\n\\begin{equation}\n\\rho^{(ocm)} = f^{(1)}\\rho^{(1)}+f^{(2)}\\rho^{(2)}.\n\\end{equation}\n\n\n\n\n\\subsection{Second--order SPFT } \\label{sec_SPFT}\n\nThe expressions for the second--order\\footnote{The first--order\nSPFT estimate is identical to the zeroth--order SPFT estimate which\nis represented by the comparison material.} estimates of the HCM\nstiffness and density tensors, as derived elsewhere \\cite[eqs.\n(2.77),(2.78)]{spft_zhuck1}, are\n\\begin{equation}\\label{spft_int}\nC^{(spft)}_{lmpq} = C^{(ocm)}_{lmpq}-\\frac{\\omega^2\n\\rho^{(ocm)}}{2}\\int d^3k \\; \\frac{k_t}{k^2}\\, B^{lmrs}_{tupq}(\\#k)\n\\, \\left[ \\, \\*G^{(ocm)} (\\#k)\\, \\right]_{vu} \\left\\{ k_s \\left[ \\, \\*a^{-1}\n(\\#{\\hat{k}})\\, \\right]_{rv}+k_r \\left[ \\, \\*a^{-1} (\\#{\\hat{k}})\n\\,\\right]_{sv} \\right\\}\n\\end{equation}\nand\n\\begin{equation}\\label{rho_int}\n\\rho^{(spft)}_{mp}= \\rho^{(ocm)}\\delta_{mp}+\\omega^2\\int d^3k \\;\nB(\\#k) \\left[ \\*G^{(ocm)} (\\#k) \\right]_{mp},\n\\end{equation}\nrespectively,\nwherein $\\delta_{mp}$ is the Kronecker delta function. 
The\neighth--order tensor $B^{lmrs}_{tupq}(\\#k)$ and scalar $B(\\#k)$\nrepresent the spectral covariance functions given as\n\\begin{equation}\\label{cov_fn}\n\\left.\n\\begin{array}{l}\n\\displaystyle{\n B^{lmrs}_{tupq} (\\#k) = \\frac{ \\left(\n\\xi^{(2)}_{lmrs} - \\xi^{(1)}_{lmrs} \\right) \\left( \\xi^{(2)}_{tupq} -\n\\xi^{(1)}_{tupq} \\right)}{8 \\pi^3} \\int d^3 R \\;\n \\, \\Gamma (\\#R) \\, \\exp \\left( -i \\#k \\cdot \\#R\n\\right)} \\vspace{4pt}\\\\\n\\displaystyle{ B(\\#k)= \\frac{\\left( \\rho^{(2)}-\\rho^{(1)}\\right)^2 }{8\n\\pi^3} \\int d^3R \\; \\Gamma (\\#R) \\, \\exp \\left( -i \\#k \\cdot \\#R \\right)}\n\\end{array}\n\\right\\},\n\\end{equation}\nwith\n\\begin{equation}\n\\Gamma (\\#r - \\#r') = \\langle \\, \\Phi^{(1)} (\\#r) \\, \\Phi^{(1)}\n(\\#r')\\,\\rangle - \\langle \\, \\Phi^{(1)} (\\#r) \\,\\rangle \\, \\langle\n\\, \\Phi^{(1)} (\\#r')\\,\\rangle \\equiv \\langle \\, \\Phi^{(2)} (\\#r) \\,\n\\Phi^{(2)} (\\#r')\\,\\rangle - \\langle \\, \\Phi^{(2)} (\\#r) \\,\\rangle\n\\, \\langle \\, \\Phi^{(2)} (\\#r')\\,\\rangle.\n\\end{equation}\n\n\nWe now proceed to simplify the expressions for $C^{(spft)}_{lmpq}$\nand $\\rho^{(spft)}_{mp}$ presented in \\r{spft_int} and \\r{rho_int},\nin order to make them numerically tractable. 
We\nbegin with the integral on the right sides of \\r{cov_fn} which, upon\nimplementing\n the step\nfunction--shaped covariance function \\r{cov}, may be expressed as\n\\begin{equation}\n\\int d^3 R \\;\n \\, \\Gamma (\\#R) \\, \\exp \\left( -i \\#k \\cdot \\#R\n\\right) = \\int_{|\\#R | \\leq L} d^3R \\; \\exp \\left[ -i \\left( \\*U \\cdot \\#k\n\\right) \\cdot \\#R \\right].\n\\end{equation}\nThus, we find that $B^{lmrs}_{tupq}(\\#k)$ and $B (\\#k)$ are given by\n\\begin{equation}\\label{covrfn}\n\\left.\n\\begin{array}{l}\n\\displaystyle{\n B^{lmrs}_{tupq} (\\#k) = \\frac{f^{(1)}f^{(2)}\n \\left( \\xi^{(2)}_{lmrs} - \\xi^{(1)}_{lmrs} \\right) \\left(\n\\xi^{(2)}_{tupq} - \\xi^{(1)}_{tupq} \\right) }{2 \\left( \\pi k \\sigma \\right)^2}\n\\left[ \\frac{\\sin \\left( k\\sigma L \\right)}{k\\sigma} -L \\cos \\left( k\\sigma L\n\\right)\n\\right]} \\vspace{4pt} \\\\\n \\displaystyle{ B (\\#k) = \\frac{f^{(1)}f^{(2)}\n\\left( \\rho^{(2)}-\\rho^{(1)}\\right)^2 }{2 \\left( \\pi k \\sigma \\right)^2} \\left[\n\\frac{\\sin \\left( k\\sigma L \\right)}{k\\sigma} -L \\cos \\left( k\\sigma L \\right)\n\\right]}\n\\end{array}\n\\right\\},\n\\end{equation}\nwherein the scalar function\n\\begin{equation}\n\\sigma\\equiv \\sigma(\\theta,\\phi)=\\sqrt{a^2 \\sin^2\\theta\n\\cos^2\\phi+b^2 \\sin^2\\theta \\sin^2\\phi +c^2\\cos^2\\theta}.\n\\end{equation}\n\n\nUpon substituting \\r{covrfn} into \\r{spft_int} and \\r{rho_int}, the\nintegrals therein with respect to $k$ can be evaluated by means of\ncalculus of residues: The roots of $\\Delta(\\#k) = 0$ give rise to\nsix poles in the complex--$k$ plane, located at $k = \\pm p_1$, $ \\pm\np_2$\n and $ \\pm p_3$, chosen such that $\\mbox{Re} \\; p_i \\geq 0\n \\;\\; (i=1,2,3)$. 
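Since $\Delta(\#k)$ of \r{Delta_def} is a cubic in $x = k^2$, the poles may equally be located with a standard polynomial root finder instead of the closed--form expressions below. For an isotropic stand--in material (illustrative values, not the actual OCM) they must reduce to the shear and longitudinal wavenumbers, which provides a convenient check:

```python
import numpy as np

# Poles of the spectral Green function: Delta(k) = 0, eq. (Delta_def),
# viewed as a cubic in x = k^2. Illustrative isotropic stand-in values.
lam, mu, rho, omega = 2.68e9, 1.15e9, 1.43e3, 2 * np.pi * 1e6

a = np.diag([mu, mu, lam + 2 * mu])           # a(khat) for khat along x3, isotropic case
adj_a = np.linalg.det(a) * np.linalg.inv(a)   # adjugate of a nonsingular matrix
w2r = omega**2 * rho

# det(a) x^3 - w2r tr(adj a) x^2 + w2r^2 tr(a) x - w2r^3 = 0
x = np.roots([np.linalg.det(a), -w2r * np.trace(adj_a),
              w2r**2 * np.trace(a), -w2r**3])
p = np.sqrt(x.astype(complex))
p = np.where(p.real >= 0, p, -p)              # enforce Re p_i >= 0

# Two shear roots and one longitudinal root are recovered.
expected = sorted([omega * np.sqrt(rho / mu)] * 2
                  + [omega * np.sqrt(rho / (lam + 2 * mu))])
assert np.allclose(sorted(p.real), expected, rtol=1e-3)
```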
From \\r{Delta_def}, we find that\n\\begin{eqnarray}\n p_1^2 &=& P_A-\n\\frac{1}{3} \\left(\n \\frac{2^{1\/3} P_B}{ P_C \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] }- \\frac{P_C}{ 2^{1\/3} \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]\n }\\right),\\\\\n p_2^2 &=& P_A+ \\frac{1}{3} \\left( \\frac{(1+i\\sqrt{3}) P_B}{ 2^{2\/3} P_C \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]\n }-\\frac{(1-i\\sqrt{3})P_C}{ 2^{4\/3}\\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] }\\right), \\\\\n p_3^2 &=& P_A+\n\\frac{1}{3} \\left(\n \\frac{(1-i\\sqrt{3})P_B}{ 2^{2\/3} P_C \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]\n }-\\frac{(1+i\\sqrt{3})P_C}{2^{4\/3} \\, \\mbox{det}\n \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] } \\right),\n\\end{eqnarray}\nwherein\n\\begin{eqnarray}\n P_A &=& \\frac{\\omega^2\\rho^{(ocm)} \\mbox{tr} \\left\\{ \\mbox{adj} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\right\\} }{3 \\,\n \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] }, \\\\\n P_B &=& \\left( \\omega^2 \\rho^{(ocm)} \\right)^2\\left(\n 3 \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\, \\mbox{tr} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]\n -\\mbox{tr}\\left\\{\n \\mbox{adj} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\right\\}^2\\right) ,\\\\\n P_C^3 &=& P_D+\\sqrt{4 P_B^3+ P_D^2}, \\\\\n P_D &=&\n\\left( \\omega^2 \\rho^{(ocm)} \\right)^3\n \\Big( 2 \\, \\mbox{tr} \\left\\{ \\mbox{adj} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\right\\}^3\n \\nonumber \\\\\n & & -9 \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\, \\mbox{tr}\n \\left\\{\n \\mbox{adj} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] \\right\\} \\, \\mbox{tr} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]\n +27 \\, \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]^2\\Big).\n\\end{eqnarray}\nThus, by application of the\n Cauchy residue theorem \\c{Kwok}, the SPFT estimates are delivered 
as\n\\begin{eqnarray}\\label{final_full_int}\nC^{(spft)}_{lmpq} &= & C^{(ocm)}_{lmpq}+\\frac{\\omega^2 \\rho^{(ocm)}\nf^{(1)}f^{(2)}\\left( \\xi^{(2)}_{lmrs} - \\xi^{(1)}_{lmrs} \\right) \\left(\n\\xi^{(2)}_{tupq} - \\xi^{(1)}_{tupq}\\right)}{4\\pi i }\n\\times\\nonumber\\\\\n& & \\int_{\\phi=0}^{2 \\pi} \\int_{\\theta=0}^{\\pi} d\\phi \\, d\\theta \\,\n\\frac{k_t \\sin\\theta \\left\\{ k_s \\left[ \\, \\*a^{-1}(\\#{\\hat{k}}) \\,\n\\right]_{rv}+ k_r \\left[ \\, \\*a^{-1}(\\#{\\hat{k}}) \\, \\right]_{sv}\\right\\}}{ \\left(\nk \\sigma \\right)^2 \\; \\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right] } \\left[ \\,\n\\*b(\\#{\\hat{k}}) \\, \\right]_{vu} ,\n\\end{eqnarray}\nand\n\\begin{eqnarray} \\l{rfinal_full_int}\n\\rho^{(spft)}_{mp}& = & \\rho^{(ocm)}\\delta_{mp}-\\frac{\\omega^2\n f^{(1)}f^{(2)}\\left(\n \\rho^{(2)}-\\rho^{(1)}\\right)^2}{2\\pi i }\\int_{\\phi=0}^{2 \\pi}\n \\int_{\\theta=0}^{\\pi} d\\phi \\, d\\theta\n\\, \\frac{\\sin\\theta}{\\mbox{det} \\left[ \\, \\*a(\\#{\\hat{k}}) \\, \\right]}\\, \\left[\n\\, \\*b(\\#{\\hat{k}}) \\, \\right]_{mp} ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\\label{Residue}\n\\*b(\\#{\\hat{k}})&=&\\frac{1}{2i}\\bigg[ \\frac{e^{iL\\sigma p_1}\\*N(p_1\n\\*U \\cdot \\#{\\hat{k}} )}{\\sigma\np_1^2(p_1^2-p_2^2)(p_1^2-p_3^2)}\\bigg(1-iL\\sigma p_1\\bigg) -\n\\frac{e^{iL\\sigma p_2}\\*N(p_2 \\*U \\cdot \\#{\\hat{k}})}{\\sigma\np_2^2(p_1^2-p_2^2)(p_2^2-p_3^2)}\\bigg(1-iL\\sigma p_2\\bigg)\\nonumber\\\\\n& &+\\frac{e^{iL\\sigma p_3}\\*N(p_3 \\*U \\cdot \\#{\\hat{k}})}{\\sigma\np_3^2(p_2^2-p_3^2)(p_1^2-p_3^2)}\\bigg(1-iL\\sigma p_3\\bigg)-\n\\frac{\\*N(\\#0)}{\\sigma p_1^2p_2^2p_3^2}\\bigg].\n\\end{eqnarray}\nThe integrals in \\r{final_full_int} and \\r{rfinal_full_int} are\nreadily evaluated by standard numerical methods \\c{num_methods}.\n\nSignificantly, the second--order SPFT estimates $C^{(spft)}_{lmpq}$\nand $\\rho^{(spft)}_{mp}$\n are complex--valued even when the corresponding quantities for the\n component materials, 
i.e.,\n$C^{(\\ell)}_{lmpq}$ and $\\rho^{(\\ell)}$ ($\\ell = 1,2$), are\nreal--valued. This reflects the fact that the SPFT effectively\ntakes into account losses due to scattering. This feature is not\nunique to the SPFT: it arises generally in multi--scattering\napproaches to homogenization \\c{Twersky,Varadan,Biwa}. We note that\n\\cite{Cerveny_2006}, for\n\\begin{itemize}\n\\item[(i)] the time--averaged strain energy density to be\npositive--valued,\n $\\mbox{Re}\\, \\*{\\breve{C}}^{(spft)}$ is required to be positive--definite; and\n\\item[(ii)] the time--averaged dissipated energy density to be\npositive--valued,\n $ - \\mbox{Im}\\, \\*{\\breve{C}}^{(spft)}$ is required to be positive--semi--definite,\n\\end{itemize}\nwhere $\\*{\\breve{C}}^{(spft)}$ is the 6$\\times$6 matrix with\ncomponents $\\left[ \\, \\*{\\breve{C}}^{(spft)} \\, \\right]_{st} = \\left[ \\,\n\\*C^{(spft)} \\, \\right]_{st}$ $(s, t \\in \\left\\{ 1,2,\\ldots,6 \\right\\})$ and\n$\\*C^{(spft)}$ is the 9$\\times$9 matrix equivalent to the SPFT\nstiffness tensor $C^{(spft)}_{lmpq}$.\n\nIt is notable too that the second--order SPFT estimates\n$C^{(spft)}_{lmpq}$ and $\\rho^{(spft)}_{mp}$ are explicitly\ndependent on frequency, whereas the corresponding zeroth--order SPFT\nestimates exhibit only an implicit dependency on frequency via the\nfrequency--dependent constitutive parameters of the component\nmaterials. Accordingly, the second--order SPFT estimates may be\nviewed as low--frequency corrections to the quasi--static estimates\nprovided by the zeroth--order SPFT.\n\n\nA complex--valued anisotropic density, as delivered\nby \\r{rfinal_full_int}, is not without precedent \\c{Willis85}; see Milton\n\\c{Milton_NJP2007} for a discussion on this issue.\n\n\n\n\\section{Numerical results} \\l{Num_Results}\n\nLet us now illustrate the theory presented in \\S\\ref{theory_section}\nby means of some representative numerical examples. 
We consider\nhomogenizations wherein the component materials are acetal and glass\n(or orthotropic perturbations of these in \\S\\ref{ortho_perturb}).\nThe corresponding results are qualitatively similar to those we\nfound from homogenizations involving a wide range of different\ncomponent materials, characterized by widely different constitutive\nparameters, which are not presented here.\n In order to\nprovide a baseline for the SPFT estimate of the HCM stiffness\ntensor, the corresponding results provided by the Mori--Tanaka\nmean--field formalism \\cite{Mori-Tanaka,Benveniste} were also\ncomputed. The Mori--Tanaka formalism was chosen as a comparison for\nthe SPFT because it is well--established and straightforwardly\nimplemented \\c{Lakhjcm,Mura_book}. Comparative studies involving the\nMori--Tanaka and other homogenization formalisms are reported\nelsewhere; see \\c{Ferrari,Hu_Weng,Mercier}, for example. The\nMori--Tanaka estimate of the 9$\\times$9 stiffness matrix of the HCM\nmay be written as \\c{Mura_book}\n\\begin{equation}\\label{MTestimate}\n\\*C^{(MT)}=\\left[ \\, f^{(1)}\\*C^{(1)}+f^{(2)}\\*C^{(2)}\\cdot\\*A \\,\n\\right]\\cdot\\left[ \\, f^{(1)}{\\mbox{\\boldmath$\\*\\tau$}}+f^{(2)}\\*A\\,\n\\right]^{\\dagger},\n\\end{equation}\nwhere\n\\begin{equation}\n\\*A=\\left[ \\, {\\mbox{\\boldmath$\\*\\tau$}}+\\*S^{(esh)}\\cdot \\left(\n\\*C^{(1)}\\right)^{\\dagger} \\cdot \\left( \\*C^{(2)}-\\*C^{(1)} \\right) \\,\n\\right]^{\\dagger},\n\\end{equation}\nand $\\*S^{(esh)}$ is the 9$\\times$9 Eshelby matrix \\c{Eshelby}. The\nevaluation of this matrix is described in Appendix~B.\n\nIn the remainder of this section, we present the 9$\\times$9 stiffness matrix of the\nHCM, namely $\\*C^{(hcm)}$, as estimated by the lowest--order SPFT\n(i.e., $hcm =ocm$), the second--order SPFT (i.e., $hcm =spft$) and\nthe Mori--Tanaka mean--field formalism (i.e., $hcm =MT$). 
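For spherical inclusions in an isotropic matrix, the Mori--Tanaka estimate of the bulk modulus coincides with a Hashin--Shtrikman--type expression, which permits a compact scalar illustration. This is the textbook isotropic special case, not an implementation of the matrix formula \r{MTestimate}; bulk moduli follow from $K = \lambda + 2\mu/3$ with the acetal and glass data of \r{glass_acetal}:

```python
import numpy as np

# Scalar Mori-Tanaka (lower Hashin-Shtrikman-type) bulk-modulus estimate for
# spherical particles of material 2 (glass) in a matrix of material 1 (acetal).
lam1, mu1 = 2.68e9, 1.15e9      # acetal
lam2, mu2 = 21.73e9, 29.2e9     # glass
K1, K2 = lam1 + 2 * mu1 / 3, lam2 + 2 * mu2 / 3

def K_mt(f2):
    # textbook isotropic special case (assumed form, cf. Hashin & Shtrikman)
    return K1 + f2 / (1.0 / (K2 - K1) + 3.0 * (1.0 - f2) / (3.0 * K1 + 4.0 * mu1))

# Endpoint limits and monotonicity between the component bulk moduli.
assert np.isclose(K_mt(0.0), K1) and np.isclose(K_mt(1.0), K2)
vals = np.array([K_mt(f) for f in np.linspace(0.0, 1.0, 11)])
assert np.all(np.diff(vals) > 0)
```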
The matrix\n$\\*C^{(hcm)}$ generally has the orthotropic form represented in\n\\r{ortho_form2} with $\\ell = hcm$. We also present the\nsecond--order SPFT density tensor $\\rho^{(spft)}_{mp}$; numerical\nresults for the lowest--order SPFT density $\\rho^{(ocm)}$ need not\nbe presented here as that quantity is simply the volume average of\nthe densities of the component materials.\n For all second--order SPFT\ncomputations, we selected $\\omega=2\\pi\\times 10^6$ $\\mbox{s}^{-1}$.\n\n\n\\subsection{Isotropic component materials distributed as oriented\nellipsoidal particles} \\l{iso_ell}\n\nLet us begin by considering the scenario in which the component materials\nare both isotropic. The component material `1' is taken to be acetal\n(i.e., $\\lambda^{(1)} = \\lambda^{(ace)}$, $\\mu^{(1)} = \\mu^{(ace)}$ and\n$\\rho^{(1)} = \\rho^{(ace)}$), and component material `2' to be glass\n(i.e., $\\lambda^{(2)} = \\lambda^{(gla)}$, $\\mu^{(2)} = \\mu^{(gla)}$ and\n$\\rho^{(2)} = \\rho^{(gla)}$). The\n Lam\\'e constants and densities for these two materials are\n as follows \\cite{MacMillan,Shackelford}:\n \\begin{equation}\\left. \\l{glass_acetal}\n \\begin{array} {lll}\n \\lambda^{(ace)}=2.68 \\; \\mbox{GPa}, & \\mu^{(ace)}=1.15 \\; \\mbox{GPa}, &\n\\rho^{(ace)} = 1.43 \\times 10^{3}\\; \\mbox{kg} \\, \\mbox{m}^{-3} \\vspace{4pt} \\\\\n\\lambda^{(gla)}=21.73 \\; \\mbox{GPa}, & \\mu^{(gla)}=29.2 \\; \\mbox{GPa},\n& \\rho^{(gla)} = 2.23 \\times 10^{3}\\; \\mbox{kg} \\, \\mbox{m}^{-3}\n\\end{array}\n\\right\\}.\n\\end{equation}\nThe eccentricities of the ellipsoidal component\nparticles are specified by the parameters $\\left\\{ a, b, c \\right\\}$,\nper \\r{r_shape} and \\r{U_shape}.\n\nIn Fig.~\\ref{glaaceplot_spher1} the components of the HCM stiffness\nmatrix $\\*C^{(hcm)}$, as computed using the lowest--order SPFT and\nthe Mori--Tanaka formalism, are plotted as functions of volume\nfraction $f^{(2)}$ for the case $a=b=c$. 
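The identifications $[\,\*C\,]_{11} = \lambda + 2\mu$, $[\,\*C\,]_{12} = \lambda$ and $[\,\*C\,]_{44} = \mu$ of \r{ortho_form}, together with the data \r{glass_acetal}, fix the endpoint values of these plots. A quick sketch (the dict layout and helper name are our own convenience) also confirms the isotropic identity $C_{11} - C_{12} = 2\,C_{44}$:

```python
import numpy as np

# Endpoint stiffness entries implied by eq. (ortho_form) with the
# acetal/glass Lame data of eq. (glass_acetal).
data = {
    "acetal": dict(lam=2.68e9, mu=1.15e9),
    "glass":  dict(lam=21.73e9, mu=29.2e9),
}

def iso_entries(lam, mu):
    return {"C11": lam + 2 * mu, "C12": lam, "C44": mu}

ace = iso_entries(**data["acetal"])
gla = iso_entries(**data["glass"])

# For an isotropic material, C11 - C12 = 2 C44.
for c in (ace, gla):
    assert np.isclose(c["C11"] - c["C12"], 2 * c["C44"])
assert np.isclose(ace["C11"], 4.98e9)       # acetal: lam + 2 mu
```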
Since the HCM is isotropic\nin this case, only the components $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{11}\n\\equiv \\lambda^{(hcm)} + 2 \\mu^{(hcm)}$, $\\left[ \\, \\*C^{(hcm)} \\,\n\\right]_{12} \\equiv \\lambda^{(hcm)}$ and $\\left[ \\, \\*C^{(hcm)} \\,\n\\right]_{44} \\equiv \\mu^{(hcm)}$ are presented, per \\r{ortho_form}\nwith $\\ell = hcm$. Notice that the following limits necessarily\napply for both the SPFT and Mori--Tanaka estimates:\n\\begin{equation}\n\\lim_{f^{(2)} \\to 0} \\*C^{(hcm)}\n = \\*C^{(1)},\\qquad\n\\lim_{f^{(2)} \\to 1} \\*C^{(hcm)} =\n\\*C^{(2)}.\n\\end{equation}\nIt is apparent from Fig.~\\ref{glaaceplot_spher1} that, while the\nlowest--order SPFT and the Mori--Tanaka estimates are qualitatively\nsimilar, the Mori--Tanaka estimates display a greater deviation from\nthe naive HCM estimate $f^{(1)} \\left[ \\, \\*C^{(1)} \\, \\right]_{pq} +\nf^{(2)} \\left[ \\, \\*C^{(2)} \\, \\right]_{pq}$ for mid--range values of\n$f^{(2)}$. For further comparison in this isotropic scenario, the\nfamiliar variational bounds on $ \\left[ \\, \\*C^{(hcm)} \\, \\right]_{11}$\nand $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{44}$ established by Hashin and\nShtrikman \\c{Milton_book,HS_1962} are also presented in\nFig.~\\ref{glaaceplot_spher1}: the lower Hashin--Shtrikman bound\ncoincides with the Mori--Tanaka estimate and the lowest--order SPFT\nestimate lies within the upper and lower Hashin--Shtrikman bounds\nfor all values of $f^{(2)}$. Parenthetically, we note that for\nisotropic HCMs the lowest--order SPFT estimates are the same as\nthose provided by the well--known formalisms of Hill \\c{Hill_1963}\nand Budiansky \\c{Bud}, as demonstrated elsewhere \\c{spft_zhuck1}.\n\n\nThe corresponding lowest--order SPFT and Mori--Tanaka estimates for\nthe orthotropic HCM arising from the distribution of component\nmaterial as ellipsoids described by $\\left\\{ a\/c = 5, \\, b\/c = 1.5\n\\right\\}$ are presented in Fig.~\\ref{glaaceplot_ellips1}. 
The matrix\nentries $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{pq}$ are plotted against\n$f^{(2)}$ for $pq \\in \\left\\{ 11, 12, 44 \\right\\}$. The graphs for $pq =\n13$ and $23$ are qualitatively similar to those for $pq = 12$; and\nthose for $pq = 55$ and $66$ are qualitatively similar to those for\n$pq = 44$.\n The degree of orthotropy\nexhibited by the HCM can be gauged by relative differences in the\nvalues of $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{pq}$ for $pq \\in \\left\\{ 11, \\,\n22,\\, 33 \\right\\}$ (and similarly by relative differences in $\\left[ \\,\n\\*C^{(hcm)} \\, \\right]_{pq}$ for $pq \\in \\left\\{ 44, \\, 55,\\, 66 \\right\\}$ and\nby relative differences in $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{pq}$ for\n$pq \\in \\left\\{ 12, \\, 13,\\, 23 \\right\\}$). These relative differences are\ngreatest for mid--range values of the volume fraction $f^{(2)}$.\n\nThe orthotropic nature of the HCM is accentuated by using\ncomponent materials with more eccentrically--shaped particles. This\nis illustrated by Fig.~\\ref{glaaceplot_ellips2}, which shows results\ncomputed for the same scenario as for Fig.~\\ref{glaaceplot_ellips1}\nbut with ellipsoidal particles described by $\\left\\{ a\/c = 10, \\, b\/c\n= 2 \\right\\}$. A comparison of\nFigs.~\\ref{glaaceplot_spher1}--\\ref{glaaceplot_ellips2} reveals that\ndifferences between the estimates of the lowest--order SPFT and the\nMori--Tanaka mean--field formalism vary slightly as the orthotropic\nnature of the HCM is accentuated. For example, the difference\nbetween the lowest--order SPFT and the Mori--Tanaka estimates of\nthe $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{44}$ increases as the HCM becomes\nmore orthotropic.\n\nNow let us turn to the second--order SPFT estimates of the HCM\nconstitutive parameters. 
We consider these quantities as functions\nof $\\bar{k} L$, where\n\\begin{equation}\n\\bar{k} = \\frac{\\omega}{4} \\left(\n\\sqrt{\\frac{\\rho^{(1)}}{\\lambda^{(1)}+2\\mu^{(1)}}} +\n\\sqrt{\\frac{\\rho^{(1)}}{\\mu^{(1)}}} +\n\\sqrt{\\frac{\\rho^{(2)}}{\\lambda^{(2)}+2\\mu^{(2)}}} +\n\\sqrt{\\frac{\\rho^{(2)}}{\\mu^{(2)}}} \\right)\n\\end{equation}\nis an approximate wavenumber calculated as the average of the shear\nand longitudinal wavenumbers in the component materials, and $L$ is\nthe correlation length associated with the two--point covariance\nfunction \\r{cov}. Since $L$ is required to be smaller than\ncharacteristic wavelengths in the HCM (but larger than the sizes of\nthe component particles), we restrict our attention to $0 < \\bar{k}\nL < 0.6$. Fig.~\\ref{Re_Cspftplot_ellipse} shows the real and\nimaginary parts of the components of\n$\\*{\\tilde{C}}^{(spft)}=\\*C^{(spft)}-\\*C^{(ocm)}$ plotted against\n$\\bar{k} L$ for $f^{(2)} = 0.5$. The values of the shape parameters\n$\\left\\{ a, \\, b, \\, c \\right\\}$ correspond to those used in the\ncalculations for\nFigs.~\\ref{glaaceplot_spher1}--\\ref{glaaceplot_ellips2}. As\npreviously, only the matrix entries $\\left[ \\, \\*{\\tilde{C}}^{(spft)}\n\\, \\right]_{pq}$ are presented for $pq \\in \\left\\{ 11, 12, 44 \\right\\}$. The\ngraphs for $pq = 13$ and $23$ are qualitatively similar to those for\n$pq = 12$; and those for $pq = 55$ and $66$ are qualitatively\nsimilar to those for $pq = 44$. Notice that\n\\begin{equation} \\lim_{L \\to 0} \\*C^{(spft)} =\n \\*C^{(ocm)}\n \\end{equation}\nand\n\\begin{equation} \\Big\\vert \\left[\n\\, \\*{\\tilde{C}}^{(spft)} \\, \\right]_{pq} \\Big\\vert\n\\ll \\Big\\vert \\left[ \\, \\*C^{(ocm)} \\, \\right]_{pq} \\Big\\vert\n\\end{equation}\n for all nonzero matrix entries. 
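For reference, the approximate wavenumber $\bar{k}$ defined above is readily evaluated for the acetal--glass pair at the selected frequency, and the restriction $\bar{k} L < 0.6$ then bounds the admissible correlation lengths:

```python
import numpy as np

# The average wavenumber k-bar, for acetal (1) and glass (2) at
# omega = 2 pi x 10^6 s^-1, using the data of eq. (glass_acetal).
omega = 2 * np.pi * 1e6
lam1, mu1, rho1 = 2.68e9, 1.15e9, 1.43e3
lam2, mu2, rho2 = 21.73e9, 29.2e9, 2.23e3

kbar = (omega / 4) * (np.sqrt(rho1 / (lam1 + 2 * mu1)) + np.sqrt(rho1 / mu1)
                      + np.sqrt(rho2 / (lam2 + 2 * mu2)) + np.sqrt(rho2 / mu2))
L_max = 0.6 / kbar                  # upper end of the range 0 < kbar L < 0.6

assert 3.0e3 < kbar < 3.6e3         # roughly 3.3 x 10^3 per metre
```

With these values the admissible correlation lengths are of the order of a tenth of a millimetre or less.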
Furthermore, for the particular example considered here, the\nmagnitude of $\\left[ \\, \\*{\\tilde{C}}^{(spft)} \\, \\right]_{pq}$ generally\ndecreases as the particles of the component materials become more\neccentric in shape.\n\nA very striking feature of the second--order SPFT estimates\npresented in Fig.~\\ref{Re_Cspftplot_ellipse} is that\n\\begin{equation}\n\\mbox{Im }\n \\left[ \\, \\*C^{(spft)} \\, \\right]_{pq} \\neq 0,\n\\end{equation}\n whereas $\\mbox{Im } \\,\n\\left[ \\, \\*C^{(\\ell)} \\, \\right]_{pq} = \\mbox{Im } \\, \\left[ \\, \\*C^{(ocm)}\n\\, \\right]_{pq} = 0$ $(\\ell = 1,2)$. Furthermore, the magnitude of $\\mbox{Im }\\left[\n\\, \\*C^{(spft)} \\, \\right]_{pq}$ grows steadily as the correlation\nlength increases from zero. These observations may be interpreted in\nterms of effective losses due to scattering as follows. For all\nreported calculations, $\\mbox{Re}\\, \\*{\\breve{C}}^{(spft)}$ is\npositive--definite and\n $ - \\mbox{Im}\\, \\*{\\breve{C}}^{(spft)}$ is positive--semi--definite,\nwhich together imply that the associated time--averaged strain\nenergy and dissipated energy densities are positive--valued\n\\cite{Cerveny_2006}, as discussed in \\S\\ref{sec_SPFT}. Accordingly,\nthe emergence of nonzero imaginary parts of $\\left[ \\, \\*C^{(spft)} \\,\n\\right]_{pq}$ indicates that the HCM has acquired an effectively\ndissipative nature, despite the component materials being\nnondissipative. This effective dissipation must be a scattering\nloss, because the second--order SPFT accommodates interactions\nbetween spatially--distinct scattering centres via the two--point\ncovariance function \\r{cov}. 
As the correlation length increases,\nthe number of scattering centres that can mutually interact also\nincreases, thereby increasing the scattering loss per unit volume.\n\nLastly in this subsection, the real and imaginary parts of the\nsecond--order SPFT density tensor $\\tilde{\\rho}^{(spft)}_{pq} =\n\\rho^{(spft)}_{pq} - \\rho^{(ocm)} \\delta_{pq}$ are plotted as functions of\n$\\bar{k} L$ in Fig.~\\ref{Re_rho_ellipse}. Only the $p=q$ components\nare presented, as the $p \\neq q$ components are negligibly small.\nThe density tensor exhibits characteristics similar to those of the\ncorresponding stiffness tensor insofar as\n\\begin{equation}\n \\lim_{L\n\\to 0} \\rho^{(spft)}_{pq} = \\rho^{(ocm)} \\delta_{pq}\n\\end{equation}\nand\n\\begin{equation}\n\\Big\\vert \\tilde{\\rho}^{(spft)}_{pq} \\Big\\vert\n \\ll \\Big\\vert \\rho^{(ocm)} \\Big\\vert\n\\end{equation}\nfor all values of\nthe indexes $p$ and $q$. Also, $ \\displaystyle{ \\vert\n\\tilde{\\rho}^{(spft)} _{pq} \\vert} $ generally decreases as\nthe shape of the particles of the component materials deviates further from\nspherical.\n\n\n\\subsection{Orthotropic component materials distributed as spheres}\n\\l{ortho_perturb}\n\n\nLet us now explore the scenario wherein the component materials are\northotropic perturbations of the isotropic\ncomponent materials considered in \\S\\ref{iso_ell}. 
In the notation of\n\\r{ortho_form2}, we choose\n\\begin{equation}\n\\left.\n\\begin{array}{l} \\l{om1}\n \\*{\\mathcal{M}}^{(1)} =\n \\left(\n\\begin{array}{lll}\n\\left( \\lambda^{(ace)} + 2 \\mu^{(ace)}\\right) \\left( 1+ \\varsigma \\right) &\n\\lambda^{(ace)} \\left( 1-\\varsigma \\right) & \\lambda^{(ace)} \\left( 1 + 2 \\varsigma\n\\right)\\\\\n\\lambda^{(ace)} \\left( 1-\\varsigma \\right) & \\left( \\lambda^{(ace)} + 2\n\\mu^{(ace)}\\right) \\left( 1 - \\frac{1}{4}\\varsigma \\right) & \\lambda^{(ace)} \\left(\n1+ \\frac{1}{4} \\varsigma \\right)\n\\\\\n\\lambda^{(ace)} \\left( 1+2\\varsigma \\right) & \\lambda^{(ace)} \\left( 1 +\n\\frac{1}{4}\\varsigma \\right) & \\left( \\lambda^{(ace)} + 2 \\mu^{(ace)}\n\\right)\n \\left(\n1- 2 \\varsigma \\right)\n\\end{array}\n\\right) \\vspace{8pt}\n\\\\\n\\*{\\mathcal{D}}^{(1)} =\n \\left(\n\\begin{array}{lll}\n\\left( \\mu^{(ace)}\\right) \\left( 1- \\varsigma \\right) & 0 &\n0 \\\\\n0 & \\mu^{(ace)} \\left( 1 - \\frac{1}{2}\\varsigma \\right) & 0\n\\\\\n0 & 0 & \\mu^{(ace)}\n \\left(\n1- \\frac{2}{3} \\varsigma \\right)\n\\end{array}\n\\right)\n\\end{array}\n\\right\\}\n\\end{equation}\nand\n\\begin{equation}\n\\left.\n\\begin{array}{l} \\l{om2}\n \\*{\\mathcal{M}}^{(2)} =\n \\left(\n\\begin{array}{lll}\n\\left( \\lambda^{(gla)} + 2 \\mu^{(gla)}\\right) \\left( 1+ 2\\varsigma \\right) &\n\\lambda^{(gla)} \\left( 1-2\\varsigma \\right) & \\lambda^{(gla)} \\left( 1 + \\frac{1}{2}\n\\varsigma\n\\right)\\\\\n\\lambda^{(gla)} \\left( 1-2 \\varsigma \\right) & \\left( \\lambda^{(gla)} + 2\n\\mu^{(gla)}\\right) \\left( 1 + \\frac{1}{3}\\varsigma \\right) & \\lambda^{(gla)} \\left(\n1- \\frac{1}{3} \\varsigma \\right)\n\\\\\n\\lambda^{(gla)} \\left( 1+ \\frac{1}{2}\\varsigma \\right) & \\lambda^{(gla)} \\left( 1 -\n\\frac{1}{3}\\varsigma \\right) & \\left( \\lambda^{(gla)} + 2 \\mu^{(gla)} \\right)\n \\left(\n1- \\frac{1}{2} \\varsigma \\right)\n\\end{array}\n\\right) \\vspace{8pt}\n\\\\\n\\*{\\mathcal{D}}^{(2)} =\n 
\\left(\n\\begin{array}{lll}\n\\left( \\mu^{(gla)}\\right) \\left( 1- \\frac{3}{2}\n\\varsigma \\right) & 0 &\n0 \\\\\n0 & \\mu^{(gla)} \\left( 1 - \\frac{4}{5}\\varsigma \\right) & 0\n\\\\\n0 & 0 & \\mu^{(gla)}\n \\left(\n1- \\frac{2}{3} \\varsigma \\right)\n\\end{array}\n\\right)\n\\end{array}\n\\right\\},\n\\end{equation}\nwhere the real--valued scalar $\\varsigma$ controls the degree of\northotropy. Thus, at fixed values of $\\varsigma$ the component\nmaterials may be viewed as being locally orthotropic. As in\n\\S\\ref{iso_ell}, the densities of the component materials are taken\nto be $\\rho^{(1)} = \\rho^{(ace)}$ and $\\rho^{(2)} = \\rho^{(gla)}$.\n The component materials are distributed as spherical\n particles (i.e., $a = b= c $).\n\n\nThe lowest--order SPFT and Mori--Tanaka estimates for the HCM\narising from orthotropic component materials characterized by\n$\\varsigma = 0.05$ and $\\varsigma = 0.1$ are presented in\nFig.~\\ref{glaaceplot_delta1} and \\ref{glaaceplot_delta2},\nrespectively. As previously, only a representative selection of the\nentries of $\\*C^{(hcm)}$ are provided here. The plots for $\\varsigma\n= 0$, for which case the HCM is isotropic, are the ones displayed in\nFig.~\\ref{glaaceplot_spher1}.\n In a\n manner resembling the scenario considered in\n \\S\\ref{iso_ell}, the lowest--order SPFT and the Mori--Tanaka\nestimates are qualitatively similar, but the Mori--Tanaka estimates\ndisplay a greater deviation from the naive HCM estimate $f^{(1)}\n\\left[ \\, \\*C^{(1)} \\, \\right]_{pq} + f^{(2)} \\left[ \\, \\*C^{(2)} \\,\n\\right]_{pq}$ for mid--range values of $f^{(2)}$, at all values of\n$\\varsigma$.\n\nThe degree of orthotropy exhibited by the HCM\n clearly increases as the value of $\\varsigma$ increases, and\ndifferences between the estimates of the lowest--order SPFT and the\nMori--Tanaka mean--field formalism also vary as $\\varsigma$\nincreases. 
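As a quick consistency check on \r{om1}, the following Python sketch (our own illustration; the Lam\'e constants are arbitrary placeholders, not the acetal values used in the computations) builds $\*{\mathcal{M}}^{(1)}$ and $\*{\mathcal{D}}^{(1)}$ and confirms that they are symmetric for any $\varsigma$ and reduce to the isotropic stiffness blocks at $\varsigma = 0$:

```python
# Sketch: build the orthotropically perturbed blocks M^(1) and D^(1) of
# Eq. (om1). lam and mu are placeholder Lame constants, chosen only for
# illustration (not the acetal values used in the paper).

def M1(lam, mu, s):
    """Upper-left 3x3 block M^(1) for orthotropy parameter s (varsigma)."""
    return [
        [(lam + 2*mu)*(1 + s),  lam*(1 - s),             lam*(1 + 2*s)],
        [lam*(1 - s),           (lam + 2*mu)*(1 - s/4),  lam*(1 + s/4)],
        [lam*(1 + 2*s),         lam*(1 + s/4),           (lam + 2*mu)*(1 - 2*s)],
    ]

def D1(mu, s):
    """Diagonal shear block D^(1)."""
    return [
        [mu*(1 - s), 0, 0],
        [0, mu*(1 - s/2), 0],
        [0, 0, mu*(1 - 2*s/3)],
    ]

lam, mu = 2.0, 1.0          # placeholder values
M0, D0 = M1(lam, mu, 0.0), D1(mu, 0.0)

# At varsigma = 0 the material is isotropic: diagonal entries lam + 2 mu,
# off-diagonal entries lam, and shear block mu times the identity.
assert all(abs(M0[i][i] - (lam + 2*mu)) < 1e-12 for i in range(3))
assert all(abs(M0[i][j] - lam) < 1e-12
           for i in range(3) for j in range(3) if i != j)
assert all(abs(D0[i][i] - mu) < 1e-12 for i in range(3))

# M^(1) stays symmetric for any varsigma, as a stiffness block must.
Ms = M1(lam, mu, 0.1)
assert all(abs(Ms[i][j] - Ms[j][i]) < 1e-12
           for i in range(3) for j in range(3))
```

The analogous check for $\*{\mathcal{M}}^{(2)}$ and $\*{\mathcal{D}}^{(2)}$ of \r{om2} is identical in form.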
To explore this matter further, in Fig.~\\ref{C11_C33_fig}\nthe associated ratios $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{11} \/ \\left[ \\,\n\\*C^{(hcm)} \\, \\right]_{33}$, $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{12} \/ \\left[\n\\, \\*C^{(hcm)} \\, \\right]_{23}$ and $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{44} \/\n\\left[ \\, \\*C^{(hcm)} \\, \\right]_{66}$ are plotted against $f^{(2)}$ for\n$\\varsigma = 0.05$ and $0.1$. Three different patterns are\nportrayed in the three plots: for $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{11}\n\/ \\left[ \\, \\*C^{(hcm)} \\, \\right]_{33}$ differences between the\nlowest--order SPFT and the Mori--Tanaka estimates are larger\nwhen the HCM is more orthotropic; the reverse is the case for $\\left[\n\\, \\*C^{(hcm)} \\, \\right]_{12} \/ \\left[ \\, \\*C^{(hcm)} \\, \\right]_{23}$,\nwhile for $\\left[ \\, \\*C^{(hcm)} \\, \\right]_{44} \/ \\left[ \\, \\*C^{(hcm)} \\,\n\\right]_{66}$ there is no noticeable difference between the\nlowest--order SPFT and Mori--Tanaka estimates as the degree of HCM\northotropy is increased.\n\n\nNext we focus on the second--order SPFT estimate of the HCM\nstiffness tensor. The real and imaginary parts of a representative\nselection of entries of\n$\\*{\\tilde{C}}^{(spft)}=\\*C^{(spft)}-\\*C^{(ocm)}$ are graphed\nagainst $\\bar{k} L$ in Fig.~\\ref{Re_Cspftplot}. The volume fraction\nis fixed at $f^{(2)} = 0.5$. The values of the orthotropy parameter\n$\\varsigma$ are $0$, $0.05$ and $0.1$, in correspondence with the\ncalculations of Figs.~\\ref{glaaceplot_spher1},\n\\ref{glaaceplot_delta1} and \\ref{glaaceplot_delta2}. As we observed\nin \\S\\ref{iso_ell}, the magnitude of the components of $\n\\*{\\tilde{C}}^{(spft)} $ generally decreases as the HCM becomes more\northotropic. Also, the second--order SPFT estimate $\\*C^{(spft)}$\nhas components with nonzero imaginary parts, which implies that the\nHCM is effectively dissipative even though the component materials\nare nondissipative. 
Furthermore, the HCM becomes increasingly\ndissipative as the correlation length increases, this effective\ndissipation being attributable to scattering losses.\n\n\nFinally, the real and imaginary parts of the second--order SPFT\ndensity tensor $\\tilde{\\rho}^{(spft)}_{pq} = \\rho^{(spft)}_{pq} -\n\\rho^{(ocm)}$ are plotted as functions of $\\bar{k} L$ in\nFig.~\\ref{rhoplot}. As previously in \\S\\ref{iso_ell}, the components\nfor $p \\neq q$ are negligibly small so only the $p=q$ components\nare provided here. The density plots resemble those of the\ncorresponding stiffness tensor; i.e., the components\n$\\tilde{\\rho}^{(spft)}_{pp}$ are much smaller than $\\rho^{(ocm)}$\nand they increase rapidly from zero as $L$ increases from zero. The\nmagnitudes of $\\tilde{\\rho}^{(spft)}_{pp}$ are smallest when the\northotropy parameter describing the component materials is greatest.\n\n\\section{Closing remarks}\n\nThe elastodynamic SPFT has been further developed, in order to\nundertake numerical studies based on a specific choice of two--point\ncovariance function. From our theoretical considerations in\n\\S\\ref{theory_section} and our representative numerical studies in\n\\S\\ref{Num_Results}, involving generally orthotropic component\nmaterials which are distributed as oriented ellipsoids, the\nfollowing conclusions were drawn:\n\\begin{itemize}\n\n\\item The lowest--order SPFT estimate of the HCM stiffness tensor\nis qualitatively similar to that provided by the Mori--Tanaka\nmean--field formalism.\n\n\\item Differences between the estimates of\nthe lowest--order SPFT and the Mori--Tanaka mean--field formalism\nare greatest for mid--range values of the volume fraction.\n\n\\item Differences between the estimates of\nthe lowest--order SPFT and the Mori--Tanaka mean--field formalism\nvary as the HCM becomes more orthotropic. 
The degree of orthotropy\nof the HCM may be increased by increasing either the degree of\northotropy of the component materials or\n the degree of eccentricity (nonsphericity) of the component particles.\n\n\\item The second--order SPFT provides a low--frequency correction to the\nquasi--static lowest--order estimates of the HCM stiffness tensor\nand density. The correction vanishes as the correlation length tends\nto zero.\n\n\\item The correction provided\nby second--order SPFT, though relatively small in magnitude, is\nhighly significant as it indicates effective dissipation due to\nscattering loss.\n\n\\item Differences between the second--order and\nlowest--order SPFT estimates of the HCM constitutive parameters\ndiminish as the HCM becomes more orthotropic.\n\n\\end{itemize}\n\nThe ability to accommodate higher--order descriptions of the\ndistributional statistics of the component materials bodes well for\nthe future deployment of the SPFT in exploring the complex behaviour\nof metamaterials as HCMs. Additionally, since the SPFT has now been\nestablished for acoustic, electromagnetic and elastodynamic\nhomogenization scenarios, the prospect of considering HCMs with\nmixed acoustic\/elastodynamic\/electromagnetic constitutive relations\nbeckons.\n\n\n\\vspace{15mm} \\noindent {\\bf Acknowledgements:} AJD is supported by\nan \\emph{Engineering and Physical Sciences Research Council} (UK)\n studentship. AL thanks the Charles Godfrey Binder Professorship Endowment\n for partial support.\n\\vspace{15mm}\n\n\n\\section*{Appendix A}\n\n\\subsection*{Matrix\/tensor algebra}\n\nA fourth--order tensor $A_{rstu}$ ($r,s,t,u \\in \\left\\{ 1,2,3\n\\right\\}$) has $81$ components. If it obeys the symmetries\n$A_{rstu} = A_{srtu} = A_{rsut} = A_{turs}$,\n it can be represented by a 9$\\times$9 matrix $\\*A $\nwith components $\\left[ \\,\\*A \\, \\right]_{RS}$ ($R,S \\in \\left\\{ 1,\\ldots,9\n\\right\\}$). 
Similarly, the nine entries of a second--order tensor\n$B_{rs}$ ($r,s \\in \\left\\{ 1,2,3 \\right\\}$) may be expressed as a column\n9--vector $\\#B$ with components $\\left[ \\, \\#B \\, \\right]_R$ ($R \\in \\left\\{ 1,\n\\ldots, 9 \\right\\}$). The scheme for converting between the tensor\nsubscript pairs $rs$ and $tu$ and the matrix indexes $RS$ or vector\nindex $R$ is provided in Table~\\ref{MatrixConv}.\n\n\\begin{table}[h!]\n \\centering\n\\begin{tabular}{|c|c||c|c||c|c|}\n\\hline\n \n $R,S$ & $rs,tu$ & $R,S$ & $rs,tu$ & $R,S$ & $rs,tu$ \\\\\n \\hline\n \\hline\n $1$ & $11$ & $4$ & $23$ or $32$ & $7$ & $23$ or $32$ \\\\\n $2$ & $22$ & $5$ & $13$ or $31$ & $8$ & $13$ or $31$ \\\\\n $3$ & $33$ & $6$ & $12$ or $21$ & $9$ & $12$ or $21$ \\\\\n \\hline\n\\end{tabular}\n \\caption{Conversion between tensor and matrix\/vector subscripts.}\\label{MatrixConv}\n\\end{table}\n\nThe most general 9$\\times$9 matrix $\\*A$ considered in this\npaper has the form\n\\begin{equation} \\l{A_form2}\n\\*A = \\left( \\begin{array}{lll} \\*{\\mbox{\\boldmath$\\alpha$}} & \\*0 &\n\\*0\n\\\\ \\*0 & \\*{\\mbox{\\boldmath$\\beta$}} & \\*{\\mbox{\\boldmath$\\beta$}}\n\\\\\n\\*0 & \\*{\\mbox{\\boldmath$\\beta$}} & \\*{\\mbox{\\boldmath$\\beta$}}\n\\end{array} \\right),\n\\end{equation}\nwhere $ \\*{\\mbox{\\boldmath$\\alpha$}}$ is a general 3$\\times$3\nmatrix, $\\*{\\mbox{\\boldmath$\\beta$}}$ is a diagonal 3$\\times$3\nmatrix, and $\\*0$ is the null 3$\\times$3 matrix. 
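A property worth noting (our own remark, easily verified) is that matrices of the block form \r{A_form2} are closed under multiplication, with $\*{\mbox{\boldmath$\alpha$}} \to \*{\mbox{\boldmath$\alpha$}}_1 \*{\mbox{\boldmath$\alpha$}}_2$ and $\*{\mbox{\boldmath$\beta$}} \to 2\,\*{\mbox{\boldmath$\beta$}}_1 \*{\mbox{\boldmath$\beta$}}_2$; this is consistent with the generalized inverse defined next having the same shape. An exact-arithmetic sketch in Python, with arbitrarily chosen blocks:

```python
# Sketch (our own check, not from the paper): the block form of (A_form2),
# [[alpha, 0, 0], [0, beta, beta], [0, beta, beta]] with beta diagonal,
# is closed under matrix multiplication. Exact rational arithmetic is used
# so the equality test is exact.
from fractions import Fraction as F

def block9(alpha, beta):
    """Assemble the 9x9 matrix [[alpha,0,0],[0,beta,beta],[0,beta,beta]]."""
    zero = [[F(0)] * 3 for _ in range(3)]
    blocks = [[alpha, zero, zero], [zero, beta, beta], [zero, beta, beta]]
    return [[blocks[R // 3][C // 3][R % 3][C % 3] for C in range(9)]
            for R in range(9)]

def matmul(x, y):
    n = len(x)
    return [[sum(x[r][k] * y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# arbitrary general alpha blocks and diagonal beta blocks
a1 = [[F(2), F(1), F(0)], [F(1), F(3), F(1)], [F(0), F(1), F(2)]]
a2 = [[F(1), F(2), F(3)], [F(0), F(1), F(4)], [F(5), F(0), F(1)]]
b1 = [[F(5), F(0), F(0)], [F(0), F(3), F(0)], [F(0), F(0), F(7)]]
b2 = [[F(2), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(4)]]

prod = matmul(block9(a1, b1), block9(a2, b2))
# same block form, with alpha -> a1.a2 and beta -> 2 b1.b2
b12 = matmul(b1, b2)
expected = block9(matmul(a1, a2), [[2 * x for x in row] for row in b12])
assert prod == expected
```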
If we define a\n9$\\times$9 matrix $\\*A^\\dagger$ as \\c{Lakhjcm}\n\\begin{equation} \\l{A_form22}\n\\*A^\\dagger = \\left( \\begin{array}{lll}\n\\*{\\mbox{\\boldmath$\\alpha$}}^{-1} & \\*0 & \\*0 \\vspace{4pt}\n\\\\ \\*0 & \\frac{1}{4} \\, \\*{\\mbox{\\boldmath$\\beta$}}^{-1} & \\frac{1}{4} \\, \\*{\\mbox{\\boldmath$\\beta$}}^{-1}\n\\vspace{4pt}\n\\\\\n\\*0 & \\frac{1}{4} \\, \\*{\\mbox{\\boldmath$\\beta$}}^{-1} & \\frac{1}{4}\n\\, \\*{\\mbox{\\boldmath$\\beta$}}^{-1}\n\\end{array} \\right),\n\\end{equation}\nthen\n $\\*A^{\\dagger}\\cdot \\*A = \\*A\\cdot \\*A^{\\dagger}=\n\\*{\\mbox{\\boldmath$\\tau$}}$, where $\\*{\\mbox{\\boldmath$\\tau$}}$ is\nthe 9$\\times $9 matrix counterpart of the identity tensor\n\\begin{equation}\n\\tau_{rstu}=\\frac{1}{2} \\left(\n\\delta_{rt}\\delta_{su}+\\delta_{ru}\\delta_{st} \\right).\n\\end{equation}\n\n\n\\section*{Appendix B}\n\n\n\\subsection*{Eshelby matrix\/tensor}\n\n\nIf the component materials are orthotropic and distributed as\nspherical particles (i.e., $a = b = c $), then the tensor\ncounterpart of the 9$\\times$9 Eshelby matrix is given as\n \\cite{Numerical_Eshelby}\n\\begin{equation}\\label{EshelbySint}\nS^{(esh)}_{ijkl}=\\frac{1}{8\\pi}C^{(1)}_{mnkl}\\int_{-1}^{+1}d\\zeta_3\n\\int_{0}^{2\\pi} d\\omega \\; \\left[ \\,\nF_{imjn}(\\overline{\\vartheta})+F_{jmin}(\\overline{\\vartheta})\\, \\right]\n,\n\\end{equation}\nwherein\n\\begin{equation}\n\\left.\n\\begin{array}{l}\n\\displaystyle{\n F_{ijkl}(\\overline{\\vartheta}) = \\frac{\\overline{\\vartheta}_k \\overline{\\vartheta}_l\n N_{ij}}{D}, \\qquad\n N_{ij}(\\overline{\\vartheta}) = \\frac{1}{2}\\epsilon_{ikl}\\epsilon_{jmn} K_{km}K_{ln}} \\vspace{4pt}\\\\\n \\displaystyle{ D = \\epsilon_{mnl}K_{m1}K_{n2}K_{l3}, \\qquad\n K_{ik}= C^{(1)}_{ijkl} \\overline{\\vartheta}_j \\overline{\\vartheta}_l} \\qquad \\\\\n\\displaystyle{ \\overline{\\vartheta}_1 = \\frac{\\zeta_1}{a}, \\qquad\n \\overline{\\vartheta}_2 = \\frac{\\zeta_2}{b}, \\qquad \\overline{\\vartheta}_3 = 
\\frac{\\zeta_3}{c}} \\vspace{4pt}\n \\\\\n \\displaystyle{ \\zeta_1 = (1-\\zeta_3^2)^{1\/2}\\cos(\\omega), \\qquad\n \\zeta_2=(1-\\zeta_3^2)^{1\/2}\\sin(\\omega), \\qquad\n \\zeta_3=\\zeta_3}\n \\end{array} \\right\\},\n\\end{equation}\nwith $\\epsilon_{ijk}$ being the Levi--Civita symbol. The integrals\nin \\r{EshelbySint} can be evaluated using standard numerical methods\n\\c{num_methods}.\n\nIf the component materials are isotropic and distributed as\nellipsoidal particles described by the shape matrix $\\*U$, then the\n Eshelby matrix has the form represented in \\r{A_form2} with distinct components\n given as \\cite{Mura_book}\n\\begin{eqnarray}\n\\left[ \\, \\*S^{(esh)} \\,\\right]_{11} &=& \\frac{3a^2I_{\\alpha \\alpha}+\\left(\n1-2\\nu^{(1)} \\right) I_\\alpha}{8\\pi\\left( 1-\\nu^{(1)} \\right)}, \\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{12} &=& \\frac{3b^2I_{\\alpha \\beta}- \\left(\n1-2\\nu^{(1)} \\right) I_\\alpha}{8\\pi \\left( 1-\\nu^{(1)} \\right)}, \\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{13} &=& \\frac{3c^2I_{\\alpha \\gamma}-\\left(\n1-2\\nu^{(1)} \\right) I_\\alpha}{8\\pi\\left( 1-\\nu^{(1)} \\right)}, \\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{21} &=& \\frac{3a^2I_{\\alpha \\beta }-\\left(\n1-2\\nu^{(1)} \\right) I_\\beta}{8\\pi \\left( 1-\\nu^{(1)} \\right)} ,\\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{22} &=& \\frac{3b^2I_{\\beta \\beta}+\\left(\n1-2\\nu^{(1)} \\right) I_\\beta}{8\\pi \\left( 1-\\nu^{(1)} \\right) }, \\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{23} &=& \\frac{3c^2I_{\\beta \\gamma}+ \\left(\n1-2\\nu^{(1)} \\right) I_\\beta}{8\\pi \\left( 1-\\nu^{(1)} \\right) },\n\\end{eqnarray}\n\\begin{eqnarray}\n\\left[ \\, \\*S^{(esh)} \\,\\right]_{31} &=& \\frac{3a^2I_{\\alpha \\gamma }-\n\\left( 1-2\\nu^{(1)} \\right) I_\\gamma}{8\\pi \\left( 1-\\nu^{(1)} \\right) 
},\n\\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{32} &=& \\frac{3b^2I_{\\beta \\gamma}-\\left(\n1-2\\nu^{(1)} \\right) I_\\gamma}{8\\pi \\left( 1-\\nu^{(1)} \\right) },\n\\end{eqnarray}\n\\begin{eqnarray} \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{33} &=& \\frac{3c^2I_{\\gamma \\gamma}+ \\left(\n1-2\\nu^{(1)} \\right) I_\\gamma}{8\\pi \\left( 1-\\nu^{(1)} \\right) } ,\n\\end{eqnarray}\n\\begin{eqnarray}\n \\left[ \\,\n\\*S^{(esh)} \\,\\right]_{44} &=& \\frac{3(b^2+c^2)I_{\\beta \\gamma}+ \\left( 1\n-2\\nu^{(1)} \\right) (I_\\beta +I_\\gamma)}{16\\pi \\left( 1-\\nu^{(1)} \\right)}\n,\\end{eqnarray}\n\\begin{eqnarray}\n\\left[ \\, \\*S^{(esh)} \\,\\right]_{55} &=& \\frac{3(a^2+c^2)I_{\\alpha\n\\gamma}+\\left( 1-2\\nu^{(1)} \\right) (I_\\alpha +I_\\gamma)}{16\\pi \\left(\n1-\\nu^{(1)} \\right)} ,\\end{eqnarray}\n\\begin{eqnarray} \\left[ \\, \\*S^{(esh)} \\,\\right]_{66} &=&\n \\frac{3(a^2+b^2)I_{\\alpha \\beta}+\\left( 1-2\\nu^{(1)} \\right)(I_\\alpha+I_\\beta)}{16\\pi \\left( 1-\\nu^{(1)} \\right)},\n\\end{eqnarray}\nwhere $\\nu^{(1)} = \\displaystyle{\\frac{\\lambda^{(1)}}{2\\left(\n\\lambda^{(1)}+\\mu^{(1)}\\right)}}$ is the Poisson ratio of component material\n`1'. For the case $a>b>c$ we have\n\\begin{equation}\n\\left. 
\\begin{array}{l}\n I_{\\alpha} = \\displaystyle{\\frac{4\\pi\n abc}{(a^2-b^2)(a^2-c^2)^{1\/2}}\\left[\n \\, F(\\tilde{\\theta},\\tilde{k})-E(\\tilde{\\theta},\\tilde{k})\\right]}\n \\vspace{4pt} \\\\\n I_{\\gamma} = \\displaystyle{\\frac{4\\pi\n abc}{(b^2-c^2)(a^2-c^2)^{1\/2}}\\left[\n \\, \\frac{b}{ac}(a^2-c^2)^{1\/2}-E(\\tilde{\\theta},\\tilde{k})\\right] }\n \\vspace{4pt} \\\\\n I_{\\beta} = \\displaystyle{4\\pi-(I_\\alpha +I_\\gamma)}, \\qquad\n I_{\\alpha \\beta} = \\displaystyle{\\frac{I_\\alpha -I_\\beta}{3(b^2-a^2)}}, \\qquad\n I_{\\alpha \\gamma} = \\displaystyle{\\frac{I_\\alpha -I_\\gamma}{3(c^2-a^2)}} , \\qquad\n I_{\\beta \\gamma} = \\displaystyle{\\frac{I_\\beta -I_\\gamma}{3(c^2-b^2)}} \\vspace{4pt} \\\\\n I_{\\alpha \\alpha} = \\displaystyle{\\frac{4\\pi}{3a^2}-(I_{\\alpha \\beta}+I_{\\alpha \\gamma})}, \\qquad\n I_{\\beta \\beta} = \\frac{4\\pi}{3b^2}-(I_{\\alpha \\beta}+I_{\\beta \\gamma}), \\qquad\n I_{\\gamma \\gamma} = \\frac{4\\pi}{3c^2}-(I_{\\alpha \\gamma}+I_{\\beta \\gamma}) \\end{array} \\right\\},\n\\end{equation}\nwith the elliptic integrals given by\n\\begin{equation}\n\\left.\n\\begin{array}{l}\n\\displaystyle{E(\\tilde{\\theta},\\tilde{k}) = \\int_0^{\\tilde{\\theta}}\nd\\phi \\;(1-\\tilde{k}^2\\sin^2\\phi)^{1\/2}} \\vspace{4pt}\n\\\\\n\\displaystyle{F(\\tilde{\\theta},\\tilde{k}) = \\int_0^{\\tilde{\\theta}}\nd\\phi \\;(1-\\tilde{k}^2\\sin^2\\phi)^{-1\/2}} \\end{array} \\right\\},\n\\end{equation}\nwherein\n\\begin{equation}\n\\tilde{\\theta}=\\sin^{-1}\\frac{(a^2-c^2)^{1\/2}}{a}, \\qquad\n\\tilde{k}=\\frac{(a^2-b^2)^{1\/2}}{(a^2-c^2)^{1\/2}}.\n\\end{equation}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nReliability in communication has always been a major concern when dealing with digital data. Especially in today's information-dependent society, it is vital to design efficient ways of preventing data corruption when transmitting information. 
Error correcting codes have been developed for this purpose since the birth of the information theory field following the work of Shannon \\cite{Shannon1948}. \n\nIn 1989, Sourlas derived a set of error correcting codes, the so called Sourlas codes, which theoretically saturate the Shannon bound \\cite{Sourlas1989}. Although these codes turned out to be impractical, the main point of interest of this paper was the parallel made between physical spin glass systems and information theory.\n\nFollowing this paper, the tools of statistical mechanics have been successfully applied to a wide range of problems of information theory in recent years. In the field of error correcting codes itself \\cite{Kabashima2000,Nishimori1999,Montanari2000},\nas well as in spreading codes \\cite{Tanaka2001,Tanaka2005},\nand compression codes \\cite{Murayama2003,Murayama2004,Hosaka2002,Hosaka2005,Mimura2006,Cousseau2008}, statistical mechanical techniques have shown great potential.\n\nThe present paper uses similar techniques to investigate an error correcting code scheme where the codeword is encoded using tree-like multilayer perceptron neural networks. It is known that there exists a natural duality between lossy compression codes and error correcting codes. Indeed, a lossy compression code can be regarded as a standard error correcting code, but one where the codeword is generated using the original decoder of the error correcting code scheme and where the decompressed message is obtained using the original encoder of the scheme (Cf. \\cite{MacKay2003} for details).\n\nRecently, a lossy compression scheme based on a simple perceptron decoder was investigated by Hosaka et al. \\cite{Hosaka2002}. In their paper, they used statistical mechanical techniques to investigate the theoretical performance of their scheme at the infinite codeword length limit. The perceptron they defined in their model uses a special hat-shaped non-monotonic transfer function. 
This rather uncommon feature enables the scheme to deal with biased messages and it is known that this type of function maximizes the storage capacity of the simple perceptron \\cite{Monasson1994,Bex1995}. They found that their scheme can theoretically yield Shannon optimal performance. Subsequently, Shinzato et al. \\cite{Shinzato2006} investigated the same model but in the framework of error correcting codes. They found that this model can also theoretically yield Shannon optimal performance.\n\nBased on these studies, Mimura et al. \\cite{Mimura2006} proposed a tree-like multilayer perceptron network for lossy compression purposes, but used only the standard sign function as the transfer function of their model. They showed that the parity tree model can theoretically yield Shannon optimal performance, but only when considering unbiased messages. In contrast, they showed that the committee tree model cannot yield optimal performance, even for unbiased messages. However, the advantage of using a multilayer structure is improved replica symmetric solution stability, and an increased number of codewords sharing the same distortion properties \\cite{Almeida1978}.\nIn a recent study, Cousseau et al. \\cite{Cousseau2008} investigated the same tree-like multilayer perceptron model but used the hat-shaped non-monotonic transfer function introduced by Hosaka et al. \\cite{Hosaka2002}, thus combining both advantages of \\cite{Hosaka2002,Mimura2006}. By doing so, they were able to show that both parity tree and committee tree structures can then theoretically yield Shannon optimal performance even for biased messages under some specific conditions.
Indeed, the use of the non-monotonic hat-shaped transfer function introduced by Hosaka et al. \\cite{Hosaka2002} enables us to control the bias of the codeword sequence, and enables the relevant schemes to deal with such an asymmetric channel (the BAC was also used by Shinzato et al. \\cite{Shinzato2006}). On the other hand, we expect the schemes which use the standard monotonic sign function to be able to deal only with the BSC channel, which corresponds to a particular case of the BAC. The majority of popular error correcting codes like turbo codes \\cite{Berrou1993} and low density parity check codes (LDPC) \\cite{Gallager1962,MacKay1995}, which provide near Shannon performance in practical time frames, have been widely studied but this was generally restricted to symmetric channels. In contrast, apart from a few studies \\cite{Wang2005,Neri2008}, little is known when dealing with asymmetric channels.\n\nMultilayer perceptrons have been widely studied over the years by the machine learning community and a wide range of problems have been considered (storage capacity, learning rules, etc). These works revealed non-trivial behaviors of even simple models like the simple perceptron network for example. Many of these previous results are summarized in reference \\cite{Engel2001}. The present analysis gives us an opportunity to discuss the difficulty of decoding for densely connected systems (or dense systems as opposed to sparsely connected systems like LDPC codes for example) in a systematic manner in the context of multilayer networks. There has been relatively little discussion of dense systems, mainly because of the computational cost which is obviously higher than for sparse systems. However, because of their rich randomness, dense systems can possibly be regarded as pseudo-random codes like the dense limit of LDPC codes.\n\nIn this paper we mainly focus on the necessary conditions to get Shannon optimal performance. 
To discuss practical decoders, it is first necessary to investigate the optimality of our schemes. This includes discussion of the optimal parameters for the transfer function since we need to know these parameters to discuss the optimal decoder. In other words, we need a theoretical analysis of the performance before we can study the decoding problem. \n\nThe paper is organized as follows. Section \\ref{section2} introduces the framework of\nerror correcting codes. Section \\ref{section3} describes our model. Section \\ref{section4} deals with the BAC capacity.\nSection \\ref{section5} presents the mathematical tools used to evaluate the performance of the present scheme. Section \\ref{section6} states the results and elucidates the location of the phase transition, which characterizes the best achievable performance of the model. Section \\ref{section7} is devoted to the conclusion and discussion.\n\n\n\n\n\n\n\n\\section{Error correcting codes} \\label{section2}\n\nIn a general scheme, an original message $\\mbi{s}^0$ of size $N$ is encoded into a codeword $\\mbi{y}_0$ of size $M$ by some encoding device. The aim of this stage is to add redundancy to the original data. Therefore, we necessarily have $M>N$. Based on this redundancy, a proper decoder device should be able to recover the original data even if it were corrupted by noise in the transmission channel. The quantity $R=N\/M$ is called the code rate and evaluates the trade-off between redundancy and codeword size. The codeword $\\mbi{y}_0$ is then fed into a channel where the bits are subject to noise. The received noisy message $\\mbi{y}$ (which is also $M$ dimensional) is then decoded using its redundancy to infer the original $N$ dimensional message $\\mbi{s}^0$. 
In other words, in a Bayesian framework, one tries to maximize the following posterior probability,\n\\begin{eqnarray}\nP(\\mbi{s}|\\mbi{y}) \\propto P(\\mbi{y}|\\mbi{s}) P(\\mbi{s}).\n\\end{eqnarray}\n\nAs data transmission is costly, generally one wants to be able to ensure error-free transmission while transmitting the fewest possible bits. In other words, one wants to ensure error-free transmission while keeping the code rate as large as possible. For this purpose, the well-known Shannon bound \\cite{Shannon1948} gives a way to compute the best achievable code rate which allows error-free recovery. However, while this gives us the value of such an optimal code rate, it does not give any clue as to how to construct such an optimal code. Therefore, several codes have been proposed over the years in an ongoing quest to find a code which can reach this theoretical bound.\n\n\n\n\n\n\n\\section{Error correcting codes using monotonic and non-monotonic multilayer perceptrons} \\label{section3}\n\nIn this paper, since we make use of techniques derived from statistical mechanics, we will use Ising variables rather than Boolean ones. The Boolean $0$ is mapped onto $1$ in the Ising framework while the Boolean $1$ is mapped to $-1$. This mapping can be used without any loss of generality.\n\nWe assume that the original message $\\mbi{s}^0$ is generated from the uniform distribution and that all the bits are independently generated so that we have \n\\begin{eqnarray}\nP(\\mbi{s}^0)= \\frac{1}{2^N}.\n\\end{eqnarray}\nThe channel considered in this study is the Binary Asymmetric Channel (BAC) where each bit is flipped independently of the others with asymmetric probabilities. If the original bit fed into the channel is $1$, then it is flipped with probability $p$. Conversely, if the original bit is $-1$, it is flipped with probability $r$. 
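In Ising ($\pm 1$) notation, these two flip probabilities can be folded into a single transition probability, as used below. A quick sketch (our own illustration, with arbitrary values of $p$ and $r$) checks the bookkeeping:

```python
# Sketch: the single-bit BAC transition probability in Ising convention,
# P(y | y0) = 1/2 + (y/2) [ (1-r-p) y0 + (r-p) ],
# reproduces the flip probabilities: +1 flips with prob. p, -1 with prob. r.
def p_trans(y, y0, p, r):
    return 0.5 + 0.5 * y * ((1 - r - p) * y0 + (r - p))

p, r = 0.1, 0.25                                 # arbitrary illustrative values
assert abs(p_trans(-1, +1, p, r) - p) < 1e-12    # +1 flipped with prob. p
assert abs(p_trans(+1, -1, p, r) - r) < 1e-12    # -1 flipped with prob. r
# each column of the transition matrix is normalized
for y0 in (+1, -1):
    assert abs(p_trans(+1, y0, p, r) + p_trans(-1, y0, p, r) - 1.0) < 1e-12
```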
Figure \\ref{fig:BAC} shows the BAC properties in detail.\n\\begin{figure}[ht!]\n \\vspace{0mm}\n \\begin{center}\n \\includegraphics[width=0.3\\linewidth,keepaspectratio]{figure1.eps}\n \\end{center}\n \\caption{The Binary Asymmetric Channel (BAC)}\n \\label{fig:BAC}\n \\vspace{0mm}\n\\end{figure}\nThe well-known Binary Symmetric Channel (BSC) corresponds to the particular case $r=p$.\n\nWhen the corrupted message $\\mbi{y}$ is received at the output of the channel, the goal is then to recover $\\mbi{s}^0$ using $\\mbi{y}$. The state of the estimated message is denoted by the vector $\\mbi{s}$. The general outline of the scheme is shown in Figure \\ref{fig:scheme}.\n\\begin{figure}[hb!]\n \\vspace{0mm}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth,keepaspectratio]{figure2.eps}\n \\end{center}\n \\caption{Layout of the scheme}\n \\label{fig:scheme}\n \\vspace{0mm}\n\\end{figure}\nFrom Figure \\ref{fig:BAC} we can easily derive the following conditional probability,\n\\begin{eqnarray}\nP(y^{\\mu}|y^{\\mu}_0) = \\frac 1 2 + \\frac {y^{\\mu}} 2 [(1-r-p) y^{\\mu}_0 + (r-p)],\n\\end{eqnarray}\nwhere we make use of the notations $\\mbi{y}_0 = (y_0^1,\\ldots,y_0^{\\mu},\\ldots,y_0^M)$, $\\mbi{y} = (y^1,\\ldots,y^{\\mu},\\ldots,y^M)$. Since we assume that the bits are flipped independently, we deduce\n\\begin{eqnarray}\nP(\\mbi{y}|\\mbi{y}_0) = \\prod_{\\mu=1}^M \\left\\{ \\frac 1 2 + \\frac {y^{\\mu}} 2 [(1-r-p) y^{\\mu}_0 + (r-p)] \\right\\}.\n\\end{eqnarray}\n\nTo encode the original message $\\mbi{s}^0$ into a codeword $\\mbi{y}_0$, we use three non-monotonic tree-like parity machine or committee machine neural networks ((I), (II) and (III)). 
In the same way, we also investigate the standard monotonic parity tree and committee tree neural networks ((IV) and (V)).\n\n(I) Multilayer parity tree with non-monotonic hidden units (PTH).\n\\begin{equation}\ny^{\\mu}_0 (\\mbi{s}^0) \\equiv\n\\prod_{l=1}^K f_k \\left( \\sqrt{\\frac {K} {N}} \\, \\mbi{s}^0_l \\cdot \\mbi{x}_l^{\\mu} \\right) .\n\\label{parity}\n\\end{equation}\n\n(II) Multilayer committee tree with non-monotonic hidden units (CTH).\n\\begin{equation}\ny^{\\mu}_0 (\\mbi{s}^0) \\equiv\n\\mbox{sgn} \\left( \\sum_{l=1}^K f_k \\left[ \\sqrt{\\frac {K} {N}} \\, \\mbi{s}^0_l \\cdot \\mbi{x}_l^{\\mu} \\right] \\right) .\n\\label{committee1}\n\\end{equation}\nNote that in this case, if the number of hidden units $K$ is even, it is possible to get $0$ as the argument of the sign\nfunction. We avoid this uncertainty by considering only an odd\nnumber of hidden units for the committee tree with non-monotonic\nhidden units in the sequel.\n\n(III) Multilayer committee tree with a non-monotonic output unit (CTO).\n\\begin{equation}\ny^{\\mu}_0 (\\mbi{s}^0) \\equiv\nf_k \\left( \\sqrt {\\frac 1 K}\\sum_{l=1}^K\n\\mbox{sgn} \\left[ \\sqrt{\\frac {K} {N}} \\, \\mbi{s}^0_l \\cdot \\mbi{x}_l^{\\mu}\n\\right] \\right) .\n\\label{committee2}\n\\end{equation}\n\n(IV) Multilayer parity tree (PT).\n\\begin{equation}\ny^{\\mu}_0 (\\mbi{s}^0) \\equiv\n\\prod_{l=1}^K \\mbox{sgn} \\left( \\sqrt{\\frac {K} {N}} \\, \\mbi{s}^0_l \\cdot \\mbi{x}_l^{\\mu} \\right) .\n\\label{MONOparity}\n\\end{equation}\n\n(V) Multilayer committee tree (CT).\n\\begin{equation}\ny^{\\mu}_0 (\\mbi{s}^0) \\equiv\n\\mbox{sgn} \\left( \\sqrt {\\frac 1 K} \\sum_{l=1}^K \\mbox{sgn} \\left[ \\sqrt{\\frac {K} {N}} \\, \\mbi{s}^0_l \\cdot \\mbi{x}_l^{\\mu} \\right] \\right) .\n\\label{MONOcommittee}\n\\end{equation}\nIn this case also, if the number of hidden units $K$ is even,\nit is possible to get $0$ as the argument of the sign\nfunction. 
We again avoid this uncertainty by considering only an odd\nnumber of hidden units for the committee tree in the sequel.\n\nThe original message $\\mbi{s}^0$ is split into $K$ disjoint $N \/ K$-dimensional vectors so that $\\mbi{s}^0$ can be written $\\mbi{s}^0=(\\mbi{s}^0_1, \\ldots , \\mbi{s}^0_K)$. In schemes (I), (II), and (III), $f_k$ is a non-monotonic function of a real parameter $k$ of the\nform\n\\begin{equation}\nf_k (x) =\n\\begin{cases}\n1 \\quad \\text{if} \\quad |x| \\leq k\n\\\\\n-1 \\quad \\text{if} \\quad |x| > k ,\n\\end{cases}\n\\end{equation}\nand the vectors $\\mbi{x}^{\\mu}_l$ are fixed $N\/K$-dimensional independent vectors\nuniformly distributed on $\\{-1,1\\}$. The use of random input vectors is known to maximize the storage capacity of perceptron networks, making such a scheme promising for error correcting tasks. The $\\mbox{sgn}$ function denotes the sign function taking $1$\nfor $x \\geq 0$ and $-1$ for $x <0$. Each of these architectures applies a\ndifferent non-linear transformation to the original data $\\mbi{s}^0$.\nThe general architecture of these perceptron-based encoders and the non-monotonic function $f_k$ are displayed in Figure \\ref{fig:net}.\n\\begin{figure}[ht!]\n \\vspace{0mm}\n \\begin{center}\n \\includegraphics[width=0.6\\linewidth,keepaspectratio]{figure3.eps}\n \\end{center}\n \\caption{Left: General architecture of the tree-like multilayer perceptrons with $N$ input units and $K$ hidden units. Right: The non-monotonic function $f_k$.}\n \\label{fig:net}\n \\vspace{0mm}\n\\end{figure}\nNote that we can also consider an encoder based on a committee tree where both the hidden units\nand the output unit are non-monotonic. However, this introduces an extra parameter (we will\nhave one threshold parameter for the hidden units and one for the output unit) to tune,\nand the performance should not change drastically. 
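The transfer function $f_k$ and the parity-tree encoder \r{parity} can be sketched directly. The following Python fragment (our own illustration; the sizes and the random input vectors are arbitrary, not taken from the paper) computes one PTH codeword bit:

```python
# Sketch: the hat-shaped transfer function f_k and one PTH codeword bit
# of Eq. (parity). Sizes and input vectors are illustrative only.
import random

def f_k(x, k):
    """Non-monotonic transfer function: +1 for |x| <= k, -1 otherwise."""
    return 1 if abs(x) <= k else -1

def pth_bit(s_parts, x_parts, k):
    """One PTH codeword bit: product over the K branches of f_k applied
    to the local field sqrt(K/N) s_l . x_l, with N = K * (N/K)."""
    K = len(s_parts)
    N = K * len(s_parts[0])
    y = 1
    for s_l, x_l in zip(s_parts, x_parts):
        field = (K / N) ** 0.5 * sum(si * xi for si, xi in zip(s_l, x_l))
        y *= f_k(field, k)
    return y

rng = random.Random(0)
K, n = 3, 5            # K branches of N/K = 5 inputs each (illustrative)
s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(K)]
x = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(K)]

assert f_k(0.3, 1.0) == 1 and f_k(-1.7, 1.0) == -1
assert pth_bit(s, x, 1.0) in (-1, 1)   # codeword bits are Ising spins
```

The other encoders, (II)-(V), differ only in how the $K$ branch outputs are combined.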
For simplicity, we restrict our study\nto the above three cases.\n\nTo keep the notation as general as possible, as long as explicit use of the encoder is not necessary in computations, we will denote the transformation performed on vector $\\mbi{s}$ by the respective encoders using the following notation:\n\\begin{equation}\n\\mathcal{F}_k \\left( \\left\\{ \\sqrt{\\frac K N} \\mbi{s}_l \\cdot \\mbi{x}_l^{\\mu} \\right\\} \\right).\n\\end{equation}\n$\\mathcal{F}_k$ takes a different expression for the five different types of network and $k$ denotes the fact that all the encoders depend on a real threshold parameter $k$ (except for schemes (IV) and (V), where this function does not depend on $k$. However for consistency, we will keep this notation for these schemes). Furthermore, note that $\\mathcal{F}_k$ contains all the terms depending on index $l$ (i.e.: $\\mathcal{F}_k ( \\{ u_l \\} )$ contains all the terms $u_1, \\ldots , u_l, \\ldots , u_K$).\n\n\n\\section{Binary Asymmetric Channel (BAC) capacity} \\label{section4}\n\nIn this section, we compute the capacity of the BAC. According to Shannon's channel coding theorem, the optimal code rate is given by the capacity of the channel. Any code rate bigger than the channel capacity will inevitably lead to information loss. The definition of the channel capacity $C$ is \n\\begin{eqnarray}\nC = \\max_{{\\rm{input \\ probability}}} \\left\\{ I(X,Y) \\right\\},\n\\end{eqnarray}\nwhere $I$ denotes mutual information, $X$ denotes the channel input distribution, and $Y$ denotes the channel output distribution. 
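This maximization is easy to carry out numerically. The sketch below (our own illustration, with arbitrary $p$ and $r$; the exponent of $\Delta_C$ is read as $1/(1-r-p)$) brute-forces $I(X,Y)$ over the input bias and agrees with the closed-form capacity quoted next:

```python
# Sketch: BAC capacity by direct maximization of I(X;Y) over the input
# bias, compared with the closed form of Eqs. (capacity)-(omegC).
from math import log2

def H2(x):
    """Binary entropy function."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

def C_BAC(p, r):
    """Closed-form BAC capacity (exponent of Delta_C read as 1/(1-r-p))."""
    Delta = ((r**r * (1 - r)**(1 - r)) /
             (p**p * (1 - p)**(1 - p))) ** (1 / (1 - r - p))
    gamma = 1 / (1 + Delta)
    Omega = (2 * gamma - 1 - r + p) / (1 - r - p)
    return H2(gamma) - (1 + Omega) / 2 * H2(p) - (1 - Omega) / 2 * H2(r)

def I_mutual(q, p, r):
    """I(X;Y) for input bias P(X=+1)=q; +1 flips w.p. p, -1 w.p. r."""
    py1 = q * (1 - p) + (1 - q) * r          # P(Y=+1)
    return H2(py1) - q * H2(p) - (1 - q) * H2(r)

p, r = 0.1, 0.3                              # illustrative values
brute = max(I_mutual(i / 10000, p, r) for i in range(1, 10000))
assert abs(C_BAC(p, p) - (1 - H2(p))) < 1e-12   # BSC special case
assert abs(C_BAC(p, r) - brute) < 1e-6          # matches direct maximization
```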
Computation of the capacity of such a binary channel requires only simple algebra, giving\n\begin{eqnarray}\nC_{BAC} = H_2 (\gamma_C) - \frac {1+\Omega_C} 2 H_2 (p) - \frac {1-\Omega_C} 2 H_2 (r), \label{capacity}\n\end{eqnarray}\nwhere\n\begin{eqnarray}\nH_2 (x) & = & - x \log_2 (x) - (1-x) \log_2 (1-x), \\\n\gamma_C & = & \frac {1} {1+\Delta_C} = \frac 1 2 \left[ (1-p) (1+\Omega_C) + r (1-\Omega_C) \right], \\\n\Delta_C & = & \left[ \frac {r^r (1-r)^{1-r}} {p^p (1-p)^{1-p}} \right]^{1\/(1-r-p)}, \\\n\Omega_C & = & \frac {2 \gamma_C - 1 -r + p} {1 -r -p}. \label{omegC}\n\end{eqnarray}\nIn the special case $r=p$, the capacity simplifies to\n\begin{eqnarray}\nC_{BSC} = 1 - H_2 (p),\n\end{eqnarray}\nwhich corresponds to the capacity of the BSC.\n\n\n\n\n\n\n\n\n\section{Analytical Evaluation} \label{section5}\n\nAs stated in section II, our goal is to maximize the posterior $P(\mbi{s}|\mbi{y})$. \nLet us define the following Hamiltonian:\n\begin{eqnarray}\n\mathcal{H} (\mbi{y},\mbi{s}) = - \ln [ P(\mbi{y}|\mbi{s}) P(\mbi{s}) ] \n= - \ln P(\mbi{y},\mbi{s}).\n\end{eqnarray}\nThe ground state of the above Hamiltonian trivially corresponds to the {\it maximum a posteriori} (MAP) estimator of the posterior $P(\mbi{s}|\mbi{y})$. Let us then compute the joint probability of $\mbi{y}$ and $\mbi{s}$. 
We have\n\\begin{eqnarray}\nP(\\mbi{y},\\mbi{s}) = P(\\mbi{y}|\\mbi{s}) P(\\mbi{s}).\n\\end{eqnarray}\nSince the relation between an arbitrary message $\\mbi{s}$ and the codeword fed into the channel is deterministic, for any $\\mbi{s}$, we can write\n\\begin{eqnarray}\nP(\\mbi{y}|\\mbi{s}) & = & P \\left( \\mbi{y} \\Big| \\mathcal{F}_k \\left( \\left\\{ \\sqrt{\\frac K N} \\mbi{s}_l \\cdot \\mbi{x}_l^{\\mu} \\right\\} \\right) \\right), \\nonumber\n\\\\\n& = & \\prod_{\\mu=1}^M \\left\\{ \\frac 1 2 + \\frac {y^{\\mu}} 2 [(1-r-p) \\mathcal{F}_k \\left( \\left\\{ \\sqrt{\\frac K N} \\mbi{s}_l \\cdot \\mbi{x}_l^{\\mu} \\right\\} \\right) + (r-p)] \\right\\}.\n\\end{eqnarray}\nWe finally get the explicit expression of the Hamiltonian,\n\\begin{eqnarray}\n\\mathcal{H} (\\mbi{y},\\mbi{s}) & = & - \\ln P(\\mbi{y},\\mbi{s}) \\nonumber\n\\\\\n& = & - \\ln \\left[ \\frac 1 {2^N} \\prod_{\\mu=1}^M \\left\\{ \\frac 1 2 + \\frac {y^{\\mu}} 2 [(1-r-p) \\mathcal{F}_k \\left( \\left\\{ \\sqrt{\\frac K N} \\mbi{s}_l \\cdot \\mbi{x}_l^{\\mu} \\right\\} \\right) + (r-p)] \\right\\} \\right] .\n\\end{eqnarray}\nUsing this Hamiltonian, we can define the following partition function\n\\begin{equation}\nZ(\\beta,\\mbi{y},\\mbi{x}) = \\sum_{\\mbi{s}}\n\\exp \\left[ -\\beta \\mathcal{H} (\\mbi{y},\\mbi{s}) \\right] ,\n\\end{equation}\nwhere the sum over $\\mbi{s}$ represents the sum over all possible states for vector\n$\\mbi{s}$, and $\\beta$ is the inverse temperature parameter. Such a partition function\ncan be identified with the partition function of a spin glass system with dynamical\nvariables $\\mbi{s}$ and quenched variables $\\mbi{x}$. The average of this partition\nfunction over $\\mbi{y}$ and $\\mbi{x}$ naturally contains all the interesting typical\nproperties of the scheme, such as the free energy. However, it is hard to evaluate this average and we need some techniques to investigate it. 
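To see the difficulty concretely, the partition function can be evaluated by brute force for a toy system (a sketch with our own naming, using the scheme (I) encoder). At $\beta=1$ one has $Z(1,\mbi{y},\mbi{x}) = \sum_{\mbi{s}} P(\mbi{y},\mbi{s}) = P(\mbi{y})$, so summing $Z$ over all $2^M$ possible received words must give exactly $1$, a handy correctness check; the $2^N$ terms in the sum over messages (and the subsequent average over $\mbi{y}$ and $\mbi{x}$) are what make direct evaluation hopeless at realistic message lengths.

```python
import itertools
import math
import random

def f_k(x, k):
    return 1 if abs(x) <= k else -1

def codeword(s, xs, K, k):
    """Scheme (I) encoder: one parity-tree output per input vector x^mu."""
    N = len(s)
    L = N // K
    bits = []
    for x in xs:
        prod = 1
        for l in range(K):
            u_l = math.sqrt(K / N) * sum(s[l*L + i] * x[l*L + i] for i in range(L))
            prod *= f_k(u_l, k)
        bits.append(prod)
    return bits

def joint_prob(y, s, xs, K, k, p, r):
    """P(y, s) = P(y | s) / 2**N for a uniform prior over messages s."""
    prob = 2.0 ** (-len(s))
    for y_mu, c_mu in zip(y, codeword(s, xs, K, k)):
        prob *= 0.5 + 0.5 * y_mu * ((1 - r - p) * c_mu + (r - p))
    return prob

def partition_function(y, xs, N, K, k, p, r):
    """Z(beta=1, y, x): exhaustive sum over all 2**N candidate messages."""
    return sum(joint_prob(y, s, xs, K, k, p, r)
               for s in itertools.product([-1, 1], repeat=N))
```

Note that each codeword bit sees the channel correctly: for $c^{\mu}=+1$ the received bit is $+1$ with probability $1-p$, and for $c^{\mu}=-1$ it is $+1$ with probability $r$.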
In this paper, we use the so-called\n\\textit{Replica Method} to calculate the average of the partition function.\nOnce the free energy is obtained, one can compute the critical code rate at which a phase transition occurs between the ferromagnetic phase (error recovery possible) and the paramagnetic phase (decoding impossible). This gives us the best code rate the scheme can achieve. A code rate exceeding this critical value will make decoding impossible.\nThe calculations to obtain the average of the partition function\n$\\left< Z(\\beta,\\mbi{y},\\mbi{x}) \\right>_{\\mbi{y},\\mbi{x}}$ are detailed in Appendix A.\n\nAfter long calculations, the replica symmetric (RS) free energy is obtained,\n\\begin{eqnarray}\n- f_{RS} (q,\\hat{q},m,\\hat{m}) & = & \\underset{q,\\hat{q},m,\\hat{m}}{{\\rm{extr}}} \\left\\{\n\\sum_{y=\\pm1} \\int_{-\\infty}^{\\infty} \\left[ \\prod_{l=1}^{K} D R_l \\right] \\int_{-\\infty}^{\\infty} \\left[ \\prod_{l=1}^{K} D t_l \\right] \\times \\ln \\left[ I(y,R_l,t_l,m,q) \\right] \\right. \\nonumber\n\\\\\n&& \\times \\left( \\frac 1 2 + \\frac y 2 \\left[ (1-r-p) \\mathcal{F}_k \\left( \\left\\{ R_l \\right\\} \\right) + (r-p) \\right] \\right) \\nonumber\n\\\\\n&& + R \\int_{-\\infty}^{\\infty} DU \\ln \\left( 2 \\cosh \\left[ \\sqrt{\\hat{q}} U + \\hat{m} \\right] \\right) - R \\ln 2 \\nonumber\n\\\\\n&& - R m \\hat{m} -R \\frac { \\hat{q} (1-q)} 2 \\Bigg\\} , \\label{freeEnergy}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nI(y,R_l,t_l,m,q) & = & \\int_{-\\infty}^{\\infty} \\left[ \\prod_{l=1}^{K} D z_l \\right] \n\\times \\Bigg[ \\frac 1 2 + \\frac y 2 (r-p) \n\\nonumber \\\\ && \n+\\frac y 2 (1-r-p)\n\\mathcal{F}_k \\left( \\left\\{ \\sqrt{1-q} z_l + \\sqrt{q-m^2} t_l +m R_l \\right\\} \\right) \\Bigg],\n\\end{eqnarray}\n\\begin{eqnarray}\nDx & = & \\frac {e^{- \\frac {x^2} 2}} {\\sqrt{2 \\pi}} dx.\n\\end{eqnarray}\nand where ${\\rm{extr}}$ denotes extremization. 
The sum denotes the sum over all possible states for the variable $y$, that is $\pm 1$. \n\nNote also that we set $\beta=1$. This choice of finite temperature decoding (in contrast to $\beta \to \infty$, which corresponds to the zero temperature limit) corresponds to the {\it maximizer of posterior marginals} (MPM) estimator, while the zero temperature decoding corresponds to the MAP estimator \cite{Rujan1993,Nishimori2001}. The MPM estimator is known to be optimal for the purpose of decoding \cite{Nishimori1993,Sourlas1994,Nishimori2001}. On top of that, in this paper we suppose that all the channel properties (i.e., the true values of $(p,r)$) are known to the decoder, which implies that the system's state we consider is located on the Nishimori line \cite{Nishimori1993,Sourlas1994}.\n\nTo retrieve the free energy, one has to extremize (\ref{freeEnergy}) with respect to the order parameters $q,\hat{q},m,\hat{m}$. This is done by solving the following saddle point equations\n\begin{eqnarray}\n\frac {\partial f_{RS}} {\partial q} & = & 0 \Leftrightarrow \hat{q}=-2 R^{-1}\n\sum_{y=\pm1} \int_{-\infty}^{\infty} \left[ \prod_{l=1}^{K} D R_l \right] \int_{-\infty}^{\infty} \left[ \prod_{l=1}^{K} D t_l \right] \times \frac{I^{\prime}_q (y,R_l,t_l,m,q)} {I(y,R_l,t_l,m,q)} \nonumber\n\\\n&& \makebox[4cm]{} \times \left( \frac 1 2 + \frac y 2 \left[ (1-r-p) \mathcal{F}_k \left( \left\{ R_l \right\} \right) + (r-p) \right] \right), \label{qHat}\n\\\n\frac {\partial f_{RS}} {\partial m} & = & 0 \Leftrightarrow \hat{m}= R^{-1}\n\sum_{y=\pm1} \int_{-\infty}^{\infty} \left[ \prod_{l=1}^{K} D R_l \right] \int_{-\infty}^{\infty} \left[ \prod_{l=1}^{K} D t_l \right] \times \frac{I^{\prime}_m (y,R_l,t_l,m,q)} {I(y,R_l,t_l,m,q)} \nonumber\n\\\n&& \makebox[4cm]{} \times \left( \frac 1 2 + \frac y 2 \left[ (1-r-p) \mathcal{F}_k \left( \left\{ R_l \right\} \right) + (r-p) \right] \right), 
\\label{mHat}\n\\\\\n\\frac {\\partial f_{RS}} {\\partial \\hat{q}} & = & 0 \\Leftrightarrow \nq = \\int_{-\\infty}^{\\infty} D U \\tanh^2 (\\sqrt{\\hat{q}} U + \\hat{m}), \\label{q}\n\\\\\n\\frac {\\partial f_{RS}} {\\partial \\hat{m}} & = & 0 \\Leftrightarrow \\label{m}\nm = \\int_{-\\infty}^{\\infty} D U \\tanh (\\sqrt{\\hat{q}} U + \\hat{m}),\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nI^{\\prime}_q (y,R_l,t_l,m,q) & = & \\frac {\\partial I (y,R_l,t_l,m,q)} {\\partial q},\n\\\\\nI^{\\prime}_m (y,R_l,t_l,m,q) & = & \\frac {\\partial I (y,R_l,t_l,m,q)} {\\partial m}.\n\\end{eqnarray}\nAn error correcting code scheme typically admits two solutions: one where $m=q=1$, called the ferromagnetic solution, and one where $m=q=0$, called the paramagnetic solution. As the names indicate, these solutions come from the physical ferromagnet state and correspond to the case where the spins are all ordered ($m=q=1$) or to the case where the spins take completely random states ($m=q=0$). As we can deduce from equations (\\ref{mOrder}) and (\\ref{RS}), the ferromagnetic solution corresponds to decoding success since $m=1$ implies perfect overlap. Conversely, the paramagnetic phase implies failure in the decoding process (overlap $m$ is $0$).\n\n\n\n\n\n\n\n\\subsection{Replica symmetric solution using a parity tree with non-monotonic hidden units}\n\nUsing a parity tree with non-monotonic hidden units (\\ref{parity}), the encoder function becomes\n\\begin{eqnarray}\n\\mathcal{F}_k ( \\{ u_l \\} ) = \\prod^K_{l=1} f_k (u_l).\n\\end{eqnarray}\nUsing this encoder function and substituting $m=q=0$ in the saddle point equations, one can find a consistent solution where $q=m=\\hat{q}=\\hat{m}=0$. This corresponds to the paramagnetic solution, where decoding of the received message fails. 
Using these conditions in (\\ref{freeEnergy}), one can retrieve the free energy of the paramagnetic phase,\n\\begin{eqnarray}\n- f_{para} & = & - H_2 \\left(\\frac 1 2 \\left[ (1-p) (1+\\Omega_{PTH}) + r(1-\\Omega_{PTH}) \\right] \\right) \\times \\ln 2 , \\label{freePara}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\Omega_{PTH} = \\prod_{l=1}^K \\int_{-\\infty}^{+\\infty} D z_l f_k(z_l). \\label{OmegaPT}\n\\end{eqnarray}\nIn the same way, substituting $m=q=1$ in the saddle point equations, one can find a consistent solution. However, the ferromagnetic solution cannot be computed analytically. So we proceed numerically by simply checking the integrand of equations (\\ref{qHat}) and (\\ref{mHat}). We did that extensively for values of $K=1$, $K=2$, and $K=3$. In each case we found that the integrand diverges so that when $(q,m)\\to(1,1)$, we have both $\\hat{q}\\to\\infty$ and $\\hat{m}\\to\\infty$. Substituting $\\hat{q}\\to\\infty$ and $\\hat{m}\\to\\infty$ into (\\ref{q}) and (\\ref{m}) clearly yields $q=m=1$. So $q=m=1$, $\\hat{q}\\to\\infty$ and $\\hat{m}\\to\\infty$ is a consistent solution of the saddle point equations which corresponds to the ferromagnetic solution, where decoding of the received message succeeds. We also checked higher values of $K$ (up to $K=5$) and did not find any other consistent solution. We conjecture that this result holds for any finite value of $K$. Finally, substituting $m=q=1$, $\\hat{m} \\to \\infty$ and $\\hat{q} \\to \\infty$ into (\\ref{freeEnergy}), one can get the free energy of the ferromagnetic phase,\n\\begin{eqnarray}\n- f_{ferro} & = & - \\frac {\\ln 2} {2} \\left[ (1+\\Omega_{PTH}) H_2 (p) + (1-\\Omega_{PTH}) H_2 (r) \\right] - R \\ln 2. \\label{freeFerro}\n\\end{eqnarray}\nNote that when $K=1$, the present scheme corresponds to the case of Shinzato et al. \\cite{Shinzato2006}. 
The result we obtained when $K=1$ is indeed equivalent to what they found.\n\n\n\n\n\\subsection{Replica symmetric solution using a committee tree with non-monotonic hidden units}\n\nWhen a committee tree with non-monotonic hidden units (\\ref{committee1}) is used, the encoder function becomes\n\\begin{eqnarray}\n\\mathcal{F}_k ( \\{ u_l \\} ) = \\mbox{sgn} \\left[ \\sum^K_{l=1} f_k (u_l) \\right].\n\\end{eqnarray}\nUsing this encoder function and substituting $m=q=0$ in the saddle point equations, one can find a consistent solution where $q=m=\\hat{q}=\\hat{m}=0$. This corresponds to the paramagnetic solution, where decoding of the received message fails. Using these conditions in (\\ref{freeEnergy}), one can retrieve the free energy of the paramagnetic phase,\n\\begin{eqnarray}\n- f_{para} & = & - H_2 \\left(\\frac 1 2 \\left[ (1-p) (1+\\Omega_{CTH}) + r(1-\\Omega_{CTH}) \\right] \\right) \\times \\ln 2 ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\Omega_{CTH} = \\int_{-\\infty}^{+\\infty} \\left[ \\prod_{l=1}^K D z_l \\right] \\times \\mbox{sgn} \\left[ \\sum^K_{l=1} f_k (z_l) \\right]. \\label{OmegaCTH}\n\\end{eqnarray}\nIn the same way, by substituting $m=q=1$ in the saddle point equations one can find a consistent solution. However, the ferromagnetic solution cannot be computed analytically, so we proceed numerically by simply checking the integrand of equations (\\ref{qHat}) and (\\ref{mHat}). We did that extensively for $K=3$ (we consider only odd values of $K$ for this scheme, and when $K=1$ the present scheme is equivalent to the parity tree case). We found that the integrand diverges so that when $(q,m)\\to(1,1)$, we have both $\\hat{q}\\to\\infty$ and $\\hat{m}\\to\\infty$. We also checked higher values of $K$ (up to $K=5$) and did not find any other consistent solution. We conjecture that this result holds for any finite value of $K$. 
Finally, substituting $m=q=1$, $\hat{m} \to \infty$ and $\hat{q} \to \infty$ into (\ref{freeEnergy}), one can get the free energy of the ferromagnetic phase,\n\begin{eqnarray}\n- f_{ferro} & = & - \frac {\ln 2} {2} \left[ (1+\Omega_{CTH}) H_2 (p) + (1-\Omega_{CTH}) H_2 (r) \right] - R \ln 2.\n\end{eqnarray}\n \n\n\n\n\n\n\n\n\n\subsection{Replica symmetric solution using a committee tree with a non-monotonic output unit}\n\nWhen a committee tree with a non-monotonic output unit (\ref{committee2}) is used, the encoder function becomes\n\begin{eqnarray}\n\mathcal{F}_k ( \{ u_l \}) = f_k \left[ \sqrt{\frac {1} {K}} \sum^K_{l=1} \mbox{sgn} (u_l) \right].\n\end{eqnarray}\nWith this encoder function, substituting $m=q=0$ in the saddle point equations does not imply $\hat{m}=\hat{q}=0$; instead, a non-trivial solution is found, which makes the free energy too complex to be investigated. This scheme is likely to give non-optimal performance in such a case and will not be considered in what follows.\n\nNote that the limit where $K \to \infty$ was not studied because the saddle point equations take a non-trivial form that is difficult to investigate (in the lossy compression case, this study is still tractable). The techniques to investigate the free energy in the $K \to \infty$ limit described in reference \cite{Engel2001} cannot be easily applied here. However, based on the previous results of Cousseau et al. 
\\cite{Cousseau2008}, it is probable that in the $K \\to \\infty$ limit, the committee tree with a non-monotonic output unit saturates the Shannon bound in the general BAC case.\n\n\n\n\n\n\n\n\\subsection{Replica symmetric solution using a parity tree}\n\nUsing a parity tree (\\ref{MONOparity}), the encoder function becomes\n\\begin{eqnarray}\n\\mathcal{F}_k ( \\{ u_l \\} ) = \\prod^K_{l=1} \\mbox{sgn} (u_l).\n\\end{eqnarray}\nUsing this encoder function and substituting $m=q=0$ in the saddle point equations, one can find a consistent solution where $q=m=\\hat{q}=\\hat{m}=0$ but only when $K>1$. This corresponds to the paramagnetic solution, where decoding of the received message fails. Using these conditions in (\\ref{freeEnergy}), one can retrieve the free energy of the paramagnetic phase,\n\\begin{eqnarray}\n- f_{para} & = & - H_2 \\left(\\frac 1 2 \\left[ (1-p) (1+\\Omega_{PT}) + r(1-\\Omega_{PT}) \\right] \\right) \\times \\ln 2 ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\Omega_{PT} = \\prod_{l=1}^K \\int_{-\\infty}^{+\\infty} D z_l \\times \\mbox{sgn} (z_l). \\label{OmegaMONOPT}\n\\end{eqnarray}\nWhen $K=1$ is considered, $m=q=0$ does not imply $\\hat{m}=\\hat{q}=0$ and a non-trivial solution is found that makes the free energy too complex to be investigated. The scheme is likely to give non-optimal performance in such a case and will not be considered in what follows.\n\nIn the same way, substituting $m=q=1$ in the saddle point equations, one can find a consistent solution, but only when $K>1$. However, the ferromagnetic solution cannot be computed analytically, so we proceed numerically by simply checking the integrand of equations (\\ref{qHat}) and (\\ref{mHat}). We did that extensively for values of $K=2$ and $K=3$. In each case, we found that the integrand diverges so that when $(q,m)\\to(1,1)$ we have both $\\hat{q}\\to\\infty$ and $\\hat{m}\\to\\infty$. 
We also checked higher values of $K$ (up to $K=5$) and did not find any other consistent solution. We conjecture that this result holds for any finite value of $K>1$. Finally, substituting $m=q=1$, $\hat{m} \to \infty$ and $\hat{q} \to \infty$ into (\ref{freeEnergy}), one can get the free energy of the ferromagnetic phase,\n\begin{eqnarray}\n- f_{ferro} & = & - \frac {\ln 2} {2} \left[ (1+\Omega_{PT}) H_2 (p) + (1-\Omega_{PT}) H_2 (r) \right] - R \ln 2.\n\end{eqnarray}\n\n\n\n\n\n\n\n\n\n\n\n\subsection{Replica symmetric solution using a committee tree}\n\nUsing a committee tree (\ref{MONOcommittee}), the encoder function becomes\n\begin{eqnarray}\n\mathcal{F}_k ( \{ u_l \}) = \mbox{sgn} \left[ \sqrt{\frac {1} {K}} \sum^K_{l=1} \mbox{sgn} (u_l) \right].\n\end{eqnarray}\nWith this encoder function, substituting $m=q=0$ in the saddle point equations does not imply $\hat{m}=\hat{q}=0$; instead, a non-trivial solution is found that makes the free energy too complex to be investigated. This scheme is likely to give non-optimal performance in such a case and will not be considered in what follows. As in the lossy compression case \cite{Mimura2006}, the committee tree is unable to yield Shannon optimal performance. \n\nNote that the limit where $K \to \infty$ was not studied because the saddle point equations take a non-trivial form that is difficult to investigate (in the lossy compression case, this study is still tractable). The techniques to investigate the free energy in the $K \to \infty$ limit described in reference \cite{Engel2001} cannot be easily applied here. However, based on the previous results of Mimura et al. 
\cite{Mimura2006}, it is probable that in the $K \to \infty$ limit the committee tree still fails to saturate the Shannon bound even in the BSC case.\n\n\n\n\n\n\n\n\n\n\n\section{Phase transition} \label{section6}\n\nFor the parity and committee tree with non-monotonic hidden units and for the standard parity tree, we found a paramagnetic and a ferromagnetic solution of the following form:\n\begin{eqnarray}\n- f_{para} & = & - H_2 \left(\frac 1 2 \left[ (1-p) (1+\Omega) + r(1-\Omega) \right] \right) \times \ln 2 ,\n\\\n- f_{ferro} & = & - \frac {\ln 2} {2} \left[ (1+\Omega) H_2 (p) + (1-\Omega) H_2 (r) \right] - R \ln 2,\n\end{eqnarray}\nwhere $\Omega$ is given by $\Omega_{PTH}$, $\Omega_{CTH}$, or $\Omega_{PT}$ depending on the encoder considered.\n\nIt then becomes possible to calculate the critical value of the code rate $R$ at which a sharp phase transition occurs between the ferromagnetic and the paramagnetic phase. This indicates the boundary between possible decoding (ferromagnetic phase) and impossible decoding (paramagnetic phase). In other words, this enables us to calculate the optimal code rate for each scheme. At the phase transition point, we have\n\begin{eqnarray}\nf_{para} = f_{ferro}.\n\end{eqnarray}\nSimple algebra leads to\n\begin{eqnarray}\nR = H_2 (\gamma) - \frac {1+\Omega} {2} H_2 (p) - \frac {1-\Omega} {2} H_2 (r),\n\end{eqnarray}\nwhere\n\begin{eqnarray}\n\gamma = \frac 1 2 \left[ (1-p) (1+\Omega) + r (1-\Omega) \right]\n\end{eqnarray}\nand where $\Omega$ is given by the encoder considered ($\Omega_{PTH}$, $\Omega_{CTH}$, or $\Omega_{PT}$). This equation has exactly the same form as the BAC capacity equation (\ref{capacity}) and in fact is equivalent to the BAC capacity if and only if $\Omega = \Omega_C$. 
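This equivalence is easy to verify numerically (a sketch with our own function names, implementing (\ref{capacity})--(\ref{omegC}) and the critical rate above): viewed as a function of $\Omega$, the critical rate is maximized precisely at $\Omega = \Omega_C$, where it equals $C_{BAC}$.

```python
import math

def H2(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def omega_c(p, r):
    """Optimal Omega_C of Eq. (omegC), via Delta_C and gamma_C."""
    delta = ((r**r * (1 - r)**(1 - r)) / (p**p * (1 - p)**(1 - p))) ** (1.0 / (1 - r - p))
    gamma = 1.0 / (1.0 + delta)
    return (2 * gamma - 1 - r + p) / (1 - r - p)

def critical_rate(omega, p, r):
    """Critical code rate R at the ferromagnetic/paramagnetic transition."""
    gamma = 0.5 * ((1 - p) * (1 + omega) + r * (1 - omega))
    return H2(gamma) - 0.5 * (1 + omega) * H2(p) - 0.5 * (1 - omega) * H2(r)
```

Scanning a grid of $\Omega \in (-1,1)$ confirms that no value of $\Omega$ yields a rate above $C_{BAC}$, and for $r=p$ one recovers $\Omega_C=0$ and $R = 1 - H_2(p)$.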
Since $\\Omega$ depends on the encoder, we will treat each case in the following subsections.\n\n\n\n\n\n\\subsection{Tuning of the parity tree with non-monotonic hidden units}\nIn the parity tree with non-monotonic hidden units case, we have\n\\begin{eqnarray}\n\\Omega \\equiv \\Omega_{PTH} = \\prod_{l=1}^K \\int_{-\\infty}^{+\\infty} D z_l f_k(z_l).\n\\end{eqnarray}\nThe parity tree with non-monotonic hidden units is optimal if and only if \n\\begin{eqnarray}\n\\Omega_{PTH} = \\Omega_{C} \\Leftrightarrow H (k) = \\frac 1 4 \\left( 1 - \\sqrt[K]{\\Omega_C} \\right), \\label{kPT}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nH (x) = \\int_{x}^{+\\infty} Dz.\n\\end{eqnarray}\nThis gives us a condition on the threshold parameter $k$ of the non-monotonic transfer function $f_k$. If the threshold $k$ is tuned to satisfy (\\ref{kPT}), the scheme achieves the Shannon limit. The only remaining issue is whether such an optimal threshold $k$ exists. \n\nWe solved (\\ref{kPT}) numerically with parameters $(p,r) \\in \\{]0,1[\\}^2$ and always found an optimal threshold parameter $k$ up to $K=11$. Note that $\\Omega_C$ can be negative, which causes problems for the $K-$th root when considering an even number of hidden units $K$. However a simple permutation of the probability $p$ and $r$ changes the sign of $\\Omega_C$. Since the original messages are drawn from the uniform distribution, this permutation can be done without any loss of generality. Instead of using $\\mbi{s}_0$, one uses $-\\mbi{s}_0$. We did not check higher values of $K$, but we conjecture that the same result holds. 
This means that the parity tree with non-monotonic hidden units saturates the Shannon bound in the large codeword length limit for any number of hidden units $K$.\n\n\n\n\n\n\subsection{Tuning of the committee tree with non-monotonic hidden units}\nIn the committee tree with non-monotonic hidden units case, we have\n\begin{eqnarray}\n\Omega \equiv \Omega_{CTH} = \int_{-\infty}^{+\infty} \left[ \prod_{l=1}^K D z_l \right] \times \mbox{sgn} \left[ \sum^K_{l=1} f_k (z_l) \right].\n\end{eqnarray}\nThe committee tree with non-monotonic hidden units is optimal if and only if \n\begin{eqnarray}\n\Omega_{CTH} = \Omega_{C} \Leftrightarrow \Omega_C & = & \sum_{l=0}^{\frac {K-1} {2}} \binom{K}{l} \left( [2 H(k)]^l [1-2H(k)]^{K-l} \right.\n\nonumber \\ && \makebox[3cm]{}\n\left. - [2 H(k)]^{K-l} [1-2H(k)]^l \right) \label{kCTH},\n\end{eqnarray}\nwhere $\binom{x}{y}$ denotes the binomial coefficient. This gives us a condition on the threshold parameter $k$ of the non-monotonic transfer function $f_k$. If the threshold $k$ is tuned to satisfy (\ref{kCTH}), the scheme achieves the Shannon limit. Thus, we should check whether such an optimal threshold $k$ exists. \n\nWe solved (\ref{kCTH}) numerically with parameters $(p,r) \in \{]0,1[\}^2$ and always found an optimal threshold parameter $k$ up to $K=11$. We did not check higher values of $K$, but we conjecture that the same result holds. Note that, as mentioned in the definition of this encoder, we considered only an odd number of hidden units $K$. 
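Since each hidden unit independently outputs $-1$ with probability $2H(k)$, the right-hand side of (\ref{kCTH}) is just the probability that a majority of the $K$ units output $+1$ minus the probability of the opposite event. This reading can be cross-checked by direct enumeration of the $2^K$ hidden-unit sign patterns (a sketch with our own naming):

```python
import itertools
import math

def H_tail(x):
    """Gaussian tail probability H(x)."""
    return 0.5 * (1.0 - math.erf(x / math.sqrt(2.0)))

def omega_cth_closed(k, K):
    """Closed form of Eq. (kCTH) for Omega_CTH (odd K)."""
    a = 2.0 * H_tail(k)          # P(f_k(z) = -1) for z ~ N(0, 1)
    return sum(math.comb(K, l) * (a**l * (1 - a)**(K - l) - a**(K - l) * (1 - a)**l)
               for l in range((K - 1) // 2 + 1))

def omega_cth_enum(k, K):
    """E[sgn(sum_l f_k(z_l))] by summing over all 2**K sign patterns."""
    a = 2.0 * H_tail(k)
    total = 0.0
    for pattern in itertools.product([-1, 1], repeat=K):
        prob = 1.0
        for s in pattern:
            prob *= a if s == -1 else (1 - a)
        total += prob * (1 if sum(pattern) >= 0 else -1)
    return total
```

Both routes agree for every odd $K$ and every threshold $k$, which is a useful sanity check before tuning $k$ against $\Omega_C$.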
Therefore, these results mean that the committee tree with non-monotonic hidden units saturates the Shannon bound in the large codeword length limit for any odd number of hidden units $K$.\n\n\n\n\n\n\n\n\n\n\n\n\subsection{Tuning of the parity tree}\nIn the parity tree case, we have\n\begin{eqnarray}\n\Omega \equiv \Omega_{PT} = \prod_{l=1}^K \int_{-\infty}^{+\infty} D z_l \times \mbox{sgn} (z_l).\n\end{eqnarray}\nThe parity tree is optimal if and only if \n\begin{eqnarray}\n\Omega_{PT} = \Omega_{C} \Leftrightarrow \Omega_C = 0. \label{kMONOPT}\n\end{eqnarray}\nThis gives us a strong condition on $\Omega_C$. From the definition (\ref{omegC}), it can be easily seen that $\Omega_C=0$ if and only if $r=p$, that is, when the BAC turns into the particular case of the BSC. This means that the standard monotonic parity tree saturates the Shannon bound in the large codeword length limit, but only in the BSC case and for a number of hidden units $K>1$. This confirms what we expected and parallels the lossy compression case of Mimura et al. \cite{Mimura2006}.\n\n\n\n\n\n\n\n\n\FloatBarrier\n\n\section{Conclusion and Discussion} \label{section7}\n\nWe investigated an error correcting code scheme for uniformly unbiased Boolean messages using parity tree and committee tree multilayer perceptrons. All the schemes which use the non-monotonic transfer function $f_k$ in their hidden layer were shown to saturate the Shannon bound under some specific conditions. The use of $f_k$ enables the relevant schemes to deal with asymmetric channels like the BAC, while monotonic networks using only the standard sign function can deal only with symmetric channels like the BSC.\n\nIndeed, we confirmed that the standard monotonic parity tree saturates the Shannon bound only in the case of the BSC. The standard monotonic committee tree, however, fails to provide optimal performance even in the BSC case. 
\n\nAs a general conclusion, this paper shows that the tree-like multilayer perceptrons introduced in \cite{Hosaka2002,Mimura2006,Cousseau2008} within the framework of lossy compression can also be used efficiently in an error correcting code scheme. For each network considered, we provided a theoretical analysis of the typical performance and gave the necessary conditions for obtaining optimal performance. In each case, we were able to derive results similar to the lossy compression results. \nFinally, in the case of error correcting code, the stability of the replica symmetric solution \cite{Almeida1978} was not checked because no replica symmetry breaking is expected on the Nishimori line \cite{Nishimori2001bis}.\n\nThis paper discusses only the typical performance of the schemes in the infinite codeword length limit, however, and does not provide any explicit decoder. \nBecause the present schemes make use of densely connected systems, a formal decoder cannot be implemented as it would require a decoding time growing exponentially with the size of the original message. One promising alternative is to use the popular belief propagation (BP) algorithm to calculate an approximation of the marginalized posterior probabilities. The BP algorithm is known to give good results when working in the ferromagnetic phase, where no frustration is present in the system.\n\nWith the previous work done on lossy compression \cite{Hosaka2002,Hosaka2006,Mimura2006,Cousseau2008} and on error correcting code \cite{Shinzato2006} using perceptron type networks, there is now a sufficient theoretical background to investigate and compare the practical performance (in the finite codeword length regime) of all the schemes with the theoretical predictions. In the case of lossy compression with a simple perceptron, the study of the BP algorithm performance has already been done by Hosaka et al. \cite{Hosaka2006}. 
Their work provides a solid base from which to begin investigating the more complicated multilayer structure. The influence of the number of hidden units on the practical performance of the scheme is an interesting issue which will be examined in future work.\n\n\n\\section*{Acknowledgements}\n\nThis work was partially supported by a Grant-in-Aid for Encouragement of\nYoung Scientists (B) (Grant No. 18700230), Grant-in-Aid for Scientific\nResearch on Priority Areas (Grant Nos. 18079003, 18020007),\nGrant-in-Aid for Scientific Research (C) (Grant No. 16500093), and a\nGrant-in-Aid for JSPS Fellows (Grant No. 06J06774) from the Ministry of\nEducation, Culture, Sports, Science and Technology of Japan.\n\n\n\n\n\n\\vspace*{5mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzctac b/data_all_eng_slimpj/shuffled/split2/finalzzctac new file mode 100644 index 0000000000000000000000000000000000000000..717b7c9f6801b510d48445bd4f6846d51b6c2431 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzctac @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:introduction}\n\\noindent\nIn this paper, we shall develop a data-driven method to solve the following multiscale elliptic PDEs with random coefficients $a(x,\\omega)$, \n\\begin{align}\n\\mathcal{L}(x,\\omega) u(x,\\omega) \\equiv -\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\label{MsStoEllip_Eq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllip_BC}\n\\end{align}\nwhere $D \\in \\mathbb{R}^d$ is a bounded spatial domain and $\\Omega$ is a sample space. 
The forcing function $f(x)$ is assumed to be in $L^2(D)$.\nWe also assume that the problem is uniformly elliptic almost surely; see Section \\ref{sec:randomproblem} for precise definition of the problem.\n\nIn recent years, there has been an increased interest in quantifying the uncertainty in systems with randomness, i.e., solving stochastic partial differential equations (SPDEs, i.e., PDEs driven by Brownian motion) or partial differential equations with random coefficients (RPDEs). Uncertainty quantification (UQ) is an emerging research \narea to address these issues; see \\cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Wan:06,Babuska:07,Webster:08,Xiu:09,Najm:09,sapsis:09,Zabaras:13,Grahamquasi:2015} and references therein. \nHowever, when SPDEs or RPDEs involving multiscale features and\/or high-dimensional random inputs, the problems become challenging due to high computational cost.\n\nRecently, some progress has been made in developing numerical methods for multiscale PDEs with random coefficients; see \\cite{Kevrekidis:2003,Zabaras:06,Ghanem:08,graham2011quasi,abdulle2013multilevel,hou2015heterogeneous,ZhangCiHouMMS:15,efendiev2015multilevel,chung2018cluster} and references therein. \nFor example, data-driven stochastic methods to solve PDEs with random and\/or multiscale coefficients were proposed in \\cite{ChengHouYanZhang:13,ZhangCiHouMMS:15,ZhangHouLiu:15,hou2019model}. They demonstrated through numerical experiments that those methods were efficient in solving RPDEs with many different force functions. However, the polynomial chaos expansion \\cite{Ghanem:91,Xiu:03} is used to represent the randomness in the solutions. Although the polynomial chaos expansion is general, it is a priori instead of problem specific. Hence many terms may be required in practice for an accurate approximation which induces the curse of dimensionality. 
\n\nWe aim to develop a new data-driven method to solve multiscale elliptic PDEs with random coefficients based on intrinsic dimension reduction. The underlying low-dimensional structure for elliptic problems is implied by the work \\cite{bebendorf2003}, in which high separability of the Green's \nfunction for uniformly elliptic operators with $L^{\\infty}$ coefficients and the structure of blockwise low-rank approximation to the inverses of FEM matrices were established. We show that under the uniform ellipticity assumption, the family of Green's functions parametrized by a random variable $\\omega$ is still highly separable, which reveals the approximate low dimensional structure of the family of solutions to \\eqref{MsStoEllip_Eq} (again parametrized by $\\omega$)\nand motivates our method. \n \nOur method consists of two stages. In the offline stage, a set of data-driven basis \nis constructed from solution samples. For example, the data can be generated by solving \\eqref{MsStoEllip_Eq}-\\eqref{MsStoEllip_BC} corresponding to a sampling of the coefficient $a(x,\\omega)$. \nHere, different sampling methods can be applied, including Monte Carlo (MC) method and quasi-Monte Carlo (qMC) method. The sparse-grid based stochastic collocation method \\cite{Griebel:04,Xiu:05,Webster:08} also works when the dimension of the random variables in $a(x,\\omega)$ is moderate. Or the data come from field measurements directly.\nThen the low-dimensional structure and the corresponding basis will be extracted using model reduction methods, such as the proper orthogonal decomposition (POD) \\cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, a.k.a. principle component analysis (PCA). The basis functions are data driven and problem specific. \nThe key point is that once the dimension reduction is achieved, the online stage of computing the solution corresponding to a new coefficient becomes finding a linear combination of the (few) basis to approximate the solution. 
However, the mapping from the input coefficients of the PDE to the expansion coefficients of the solution in terms of the data-driven basis is highly nonlinear. We propose a few possible online strategies (see Section \\ref{sec:DerivationNewMethod}). For example, if the coefficient is in parametric form, one can approximate the nonlinear map from the parameter domain to the expansion coefficients. Alternatively, one can apply the Galerkin method using the extracted basis to solve \\eqref{MsStoEllip_Eq}-\\eqref{MsStoEllip_BC} for a new coefficient. In practice, the random coefficient of the PDE may not be available, but sensors can be deployed to record the solution at certain locations. In this case, one can compute the expansion coefficients of a new solution by least-squares fitting to those measurements at designed locations.\nWe also provide analysis and guidelines for sampling, dimension reduction, and other implementation aspects of our method. \n\n\n \n\n \nThe rest of the paper is organized as follows. In Section 2, we introduce the high separability of the Green's function of deterministic elliptic PDEs and present its extension to elliptic problems with random coefficients. In Section 3, we describe our new data-driven method and its detailed implementation. In Section 4, we present numerical results to demonstrate the efficiency of our method. Concluding remarks are made in Section 5.\n\n\n\\section{Low-dimensional structures in the solution space} \\label{sec:LowDimStructures}\n\\subsection{High separability of the Green's function of deterministic elliptic operators}\n\\noindent\nLet $\\mathcal{L}(x): V \\to V' $ be a uniformly elliptic operator in divergence form\n\\begin{align}\n\\mathcal{L}(x)u(x) \\equiv -\\nabla\\cdot(a(x)\\nabla u(x))\\label{DeterministicEllipticPDE}\n\\end{align}\nin a bounded Lipschitz domain $D \\subset \\mathbb{R}^d$, where $V = H_0^1(D)$.
The uniform ellipticity assumption means that there exist $a_{\\min}, a_{\\max}>0$ such that $a_{\\min}\\le a(x)\\le a_{\\max}$ for a.e. $x\\in D$; we denote the contrast by $\\kappa_a := a_{\\max}\/a_{\\min}$. A function $u\\in H^1(E)$ is called $\\mathcal{L}$-harmonic on an open subset $E\\subset D$ if it satisfies\n\\[\na(u, \\varphi) = \\int_{E} a(x)\\nabla u(x)\\cdot \\nabla \\varphi(x) dx =0 \\quad \\forall \\varphi \\in C_0^{\\infty} (E).\n\\]\nDenote the space of $\\mathcal{L}$-harmonic functions on $E$ by $X(E)$, which is closed in $L^2(E)$. The following key lemma shows that the space of $\\mathcal{L}$-harmonic functions has an approximate low-dimensional structure. \n\n\\begin{lemma}[Lemma 2.6 of \\cite{BebendorfHackbusch:2003}]\\label{lemma1}\nLet $\\hat{E}\\subset E \\subset D$ in $\\mathbb{R}^d$ and assume that $\\hat{E}$ is convex such that \n\\[\ndist(\\hat{E}, \\partial E)\\ge \\rho~ diam(\\hat{E})>0, \\quad \\mbox{for some constant } \\rho >0.\n\\]\nThen for any $\\epsilon\\in(0,1)$, there is a subspace $W\\subset X(\\hat{E})$ so that for all $u\\in X(E)$,\n\\[\ndist_{L^2(\\hat{E})}(u, W)\\le \\epsilon \\|u\\|_{L^2(E)}\n\\]\nand\n\\[\ndim(W)\\le c^d(\\kappa_a,\\rho) (|\\log \\epsilon |)^{d+1},\n\\]\nwhere $c(\\kappa_a,\\rho) >0 $ is a constant that depends on $\\rho$ and $\\kappa_a$.\n\\end{lemma}\nIn other words, the above lemma says that the Kolmogorov $n$-width of the space of $\\mathcal{L}$-harmonic functions $X(\\hat{E})$ is of order $O(\\exp(-n^{\\frac{1}{d+1}}))$.\nThe key property of $\\mathcal{L}$-harmonic functions used to prove the above result is the Caccioppoli inequality, which provides the estimate $\\|\\nabla u\\|_{L^2(\\hat{E})} \\le C(\\kappa_a, \\rho)\\|u\\|_{L^2(E)}$. Moreover, the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh onto $X(\\hat{E})$ can be constructed as a candidate for $W$ based on Prop. \\ref{FiniteDimensionalApprox}.\n\nIn particular, the Green's function $G(\\cdot,y)$ is $\\mathcal{L}$-harmonic on $E$ if $y\\notin E$.
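This low-rank structure is easy to observe numerically. The sketch below is our illustration (not part of the original text): it assembles a standard 5-point finite-difference discretization of -div(a grad u) on the unit square with a hypothetical rough random coefficient, inverts it to obtain the discrete Green's matrix, and inspects the singular values of the block coupling two separated subdomains, which decay rapidly.

```python
import numpy as np

def stiffness_2d(a, h):
    """5-point FD matrix for -div(a grad u) on an m x m interior grid,
    zero Dirichlet BCs; `a` is sampled on an (m+2) x (m+2) node grid and
    edge coefficients are taken as arithmetic means (a simple choice)."""
    m = a.shape[0] - 2
    A = np.zeros((m * m, m * m))
    idx = lambda i, j: i * m + j
    for i in range(m):
        for j in range(m):
            k = idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ae = 0.5 * (a[i + 1, j + 1] + a[i + 1 + di, j + 1 + dj])
                A[k, k] += ae / h**2
                if 0 <= i + di < m and 0 <= j + dj < m:
                    A[k, idx(i + di, j + dj)] = -ae / h**2
    return A

rng = np.random.default_rng(0)
m, h = 24, 1.0 / 25
a = rng.uniform(1.0, 10.0, size=(m + 2, m + 2))   # rough coefficient, kappa_a = 10
G = np.linalg.inv(stiffness_2d(a, h))             # columns = discrete Green's functions

# two well-separated 6 x 6 squares: D1 (lower left) and D2 (upper right)
D1 = [i * m + j for i in range(2, 8) for j in range(2, 8)]
D2 = [i * m + j for i in range(16, 22) for j in range(16, 22)]
s = np.linalg.svd(G[np.ix_(D1, D2)], compute_uv=False)
print(s[:10] / s[0])   # a few singular values dominate the whole block
```

The same experiment with the two index sets overlapping (so that the singularity at x = y enters the block) destroys the fast decay, consistent with the admissibility condition in the proposition below.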
Moreover, given two disjoint subdomains $D_1, D_2$ of $D$, the Green's function $G(x,y)$ with $x\\in D_1, y\\in D_2$ can be viewed as a family of $\\mathcal{L}$-harmonic functions on $D_1$ parametrized by $y\\in D_2$. From the above lemma one can easily deduce the following result, which shows the high separability of the Green's function for the elliptic operator \\eqref{DeterministicEllipticPDE}.\n\n\n\n\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{tikzpicture}[scale=0.9]\n\n\t\\coordinate [label={[xshift=0.7cm, yshift=0.3cm]$D$}] (a1) at (0,0);\n\t\\coordinate (b1) at (0,4);\n\t\\coordinate (c1) at (8,4);\n\t\\coordinate (d1) at (8,0);\n\t\\draw(a1)--(b1)--(c1)--(d1)--cycle;\n\n\t\\coordinate (a2) at (1,0.8);\n\t\\coordinate (b2) at (1,3.2);\n\t\\coordinate (c2) at (3,3.2);\n\t\\coordinate (d2) at (3,0.8);\n\t\\draw(a2)--(b2)--(c2)--(d2)--cycle;\n\n\t\\coordinate (a3) at (5,0.8);\n\t\\coordinate (b3) at (5,3.2);\n\t\\coordinate (c3) at (7,3.2);\n\t\\coordinate (d3) at (7,0.8);\n\t\\draw(a3)--(b3)--(c3)--(d3)--cycle;\n\n\t\\tikzstyle{textnode} = [thick, fill=white, minimum size = 0.1cm]\n\t\\node[textnode] (D1) at (2,2) {$D_1$};\n\t\\node[textnode] (D2) at (6,2) {$D_2$};\n\t\\node[textnode] (Gf) at (4,3.3) {$G(x,y)$};\n\n\t\\path [->] (Gf) edge node {} (D1);\n\t\\path [->] (Gf) edge node {} (D2);\n\t\\end{tikzpicture}\n\t\\caption{Green's function $G(x,y)$ with dependence on $x\\in D_1$ and $y\\in D_2$.}\n\t\\label{fig:Greenfunction1}\n\\end{figure} \n\n\\begin{proposition}[Theorem 2.8 of \\cite{BebendorfHackbusch:2003}]\\label{GreenFuncSepaApp}\n\tLet $D_1, D_2 \\subset D$ be two subdomains and $D_1$ be convex (see Figure \\ref{fig:Greenfunction1}). Assume that there exists $\\rho>0$ such that \n\t\\begin{align}\n\t0 < \\text{ \\normalfont diam} (D_1) \\leq \\rho\\text{ \\normalfont dist} (D_1, D_2).
\n\t\\label{AdmissiblePairs}\n\n\t\\end{align}\n\tThen for any $\\epsilon \\in (0,1)$ there is a separable approximation\n\t\\begin{align}\n\tG_k(x,y) = \\sum_{i=1}^k u_i(x) v_i(y) \\quad \\text{with } k \\leq \n\tc^d(\\kappa_a, \\rho) |\\log \\epsilon|^{d+1},\n\t\\label{GreenFuncSepaApp1}\n\t\\end{align}\n\tso that for all $y\\in D_2$\n\t\\begin{align}\n\t\\| G(\\cdot,y) - G_k(\\cdot,y) \\|_{L^2 (D_1)} \\leq \\epsilon \\| G(\\cdot,y) \\| _{L^2(\\hat{D}_1)},\n\t\\end{align}\n\twhere $\\hat{D}_1 := \\{ x \\in D : 2\\rho~\\text{\\normalfont dist} (x, D_1) \\leq \\text{\\normalfont diam} (D_1)\\}$.\n\\end{proposition}\n\\begin{remark}\nIn the recent work \\cite{EngquistZhao:2018}, it is shown that the Green's function for the high-frequency Helmholtz equation is not highly separable due to its highly oscillatory phase.\n\\end{remark}\n\n\\subsection{Extension to elliptic PDEs with random coefficients}\\label{sec:randomproblem}\n\\noindent\nLet us consider the following elliptic PDE with random coefficients:\n\\begin{align}\n\\mathcal{L}(x,\\omega) u(x,\\omega) \n\\equiv -\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\label{MsStoEllip_ModelEq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllip_ModelBC}\n\\end{align}\nwhere $D \\subset \\mathbb{R}^d$ is a bounded spatial domain and $\\Omega$ is a sample space. The forcing function $f(x)$ is assumed to be in $L^2(D)$. The above equation can be used to model the flow pressure in porous media such as water aquifers and oil reservoirs, where the permeability field $a(x,\\omega)$ is a random field whose exact values are infeasible to obtain in practice due to the low resolution of seismic data.
We also assume that the problem is uniformly elliptic almost surely, namely, there exist $a_{\\min}, a_{\\max}>0$ such that\n\\begin{align}\nP\\big(\\omega\\in \\Omega: a(x, \\omega)\\in [a_{\\min},a_{\\max}], \\forall x \\in D\\big) = 1.\n\\label{asUniformlyElliptic1}\n\\end{align}\nNote that we do not make any assumption on the regularity of the coefficient $a(x,\\omega)$ in the physical space, which can be arbitrarily rough for each realization. \nFor the problem \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC}, the corresponding Green's function is defined by\n\\begin{align}\n\\mathcal{L}(x,\\omega)G(x,y,\\omega) \\equiv-\\nabla_x\\cdot(a(x,\\omega)\\nabla_x G(x,y,\\omega)) &= \\delta(x,y), \\quad x\\in D,\\quad \\omega\\in\\Omega,\\\\\nG(x,y,\\omega) &= 0, \\quad \\quad x\\in \\partial D, \n\\end{align} \nwhere $y\\in D$ and $\\delta(x,y)$ is the Dirac delta function. \nA key observation for the proof of Lemma \\ref{lemma1} and Prop. \\ref{FiniteDimensionalApprox} is that the projection of the space of piecewise constant functions defined on a multi-resolution rectangular mesh, depending only on the geometry of $D_1, D_2$, $\\kappa_a$, and $\\rho$, onto the space of $\\mathcal{L}$-harmonic functions provides a candidate for the finite-dimensional subspace $W$. Based on this observation, one can easily extend the statement in Prop. \\ref{FiniteDimensionalApprox} to the family of Green's functions $G(x,y,\\omega)$ parametrized by $\\omega$\nunder the uniform ellipticity assumption \\eqref{asUniformlyElliptic1}.\n\\begin{theorem}\\label{ThmRandomGreenFuncSepaApp}\n\tLet $D_1, D_2 \\subset D$ be two subdomains and $D_1$ be convex. Assume that there is $\\rho>0$ such that $0 < \\text{ \\normalfont diam} (D_1) \\leq \\rho\\text{ \\normalfont dist} (D_1, D_2)$.
\n\tThen for any $\\epsilon \\in (0,1)$ there is a separable approximation\n\t\\begin{align}\n\tG_k(x,y,\\omega) = \\sum_{i=1}^k u_i(x) v_i(y,\\omega) \\quad \\text{with } k \\leq \n\tc^d(\\kappa_a, \\rho) |\\log \\epsilon|^{d+1},\n\t\\label{RandomGreenFuncSepaApp}\n\t\\end{align}\n\tso that for all $y\\in D_2$\n\t\\begin{align}\n\t\\| G(\\cdot,y,\\omega) - G_k(\\cdot,y, \\omega) \\|_{L^2 (D_1)} \\leq \\epsilon \\| G(\\cdot,y, \\omega) \\| _{L^2(\\hat{D}_1)} \\quad \\text{a.s. in } \\Omega,\n\t\\end{align}\n\twhere $\\hat{D}_1 := \\{ x \\in D : 2\\rho\\text{\\normalfont dist} (x, D_1) \\leq \\text{\\normalfont diam} (D_1)\\}$.\n\\end{theorem}\n\nThe above theorem shows that there exists a low-dimensional linear subspace, e.g., the one spanned by the $u_i(\\cdot)$, that approximates the family of functions $G(\\cdot,y,\\omega)$ well in $L^2(D_1)$, uniformly with respect to $y\\in D_2$ and a.s. in $\\omega$. Moreover, if $\\mathrm{supp}(f)\\subset D_2$, the same space approximates the solution to \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} well in $L^2(D_1)$, uniformly with respect to $f$ and a.s. in $\\omega$. Let \n\\begin{equation}\nu_f(x,\\omega)=\\int_{D_2} G(x,y,\\omega)f(y) dy\n\\end{equation}\nand\n\\begin{equation}\nu^{\\epsilon}_f(x,\\omega)=\\int_{D_2} G_k(x,y,\\omega)f(y) dy=\\sum_{i=1}^k u_i(x)\\int_{D_2} v_i(y,\\omega) f(y) dy.\n\\end{equation}\nHence\n\\begin{equation}\n\\begin{array}{l}\n\\|u_f(\\cdot,\\omega)-u^{\\epsilon}_f(\\cdot,\\omega)\\|^2_{L^2(D_1)}=\\int_{D_1} \\left[\\int_{D_2} (G(x,y,\\omega)-G_k(x,y,\\omega))f(y) dy\\right]^2 dx \n\\\\ \\\\\n\\le \\|f\\|_{L^2(D_2)}^2 \\int_{D_2}\\| G(\\cdot,y,\\omega) - G_k(\\cdot,y, \\omega) \\|^2_{L^2 (D_1)} dy\\le C(D_1, D_2, \\kappa_a, d)\\epsilon^2\\|f\\|_{L^2(D_2)}^2,\n\\end{array}\n\\end{equation}\na.s. in $\\omega$, since $\\| G(\\cdot,y, \\omega) \\| _{L^2(\\hat{D}_1)}$ is bounded by a positive constant that depends on $D_1, D_2, \\kappa_a, d$ a.s.
in $\\omega$ due to the uniform ellipticity \\eqref{asUniformlyElliptic1}.\nAlthough the proof of the high separability of the Green's function requires $x\\in D_1, y\\in D_2$ for well-separated $D_1$ and $D_2$, i.e., it avoids the singularity of the Green's function at $x=y$, the above approximation of the solution $u$ on a domain disjoint from the support of $f$ appears to remain valid for $u$ on the whole domain, even when $f$ is a globally supported smooth function, as shown in our numerical tests. \n\n\n\\begin{remark}\\label{remark1}\nIt is important to note that both the linear subspace $W$ and the bound for its dimension are independent of the randomness. Moreover, it is often possible to find a problem-specific and data-driven subspace with a dimension much smaller than the theoretical upper bound for $W$ (as demonstrated by our experiments). This key observation motivates our data-driven approach, which can achieve a significant dimension reduction in the solution space. \n\\end{remark}\n\\begin{remark}\nAlthough we present the problem and our data-driven approach for the elliptic problem \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} with a scalar random coefficient $a(x,\\omega)$, all the statements extend directly when the random coefficient is replaced by a symmetric positive definite tensor $a_{i,j}(x,\\omega), i,j=1, \\ldots, d$, with uniform ellipticity.\n\\end{remark}\n\n\\begin{remark}\nIn the recent work \\cite{BrysonZhaoZhong:2019}, it is shown that a random field can have a large intrinsic complexity if it is rough, i.e., if $a(x_1,\\omega)$ and $a(x_2, \\omega)$ decorrelate quickly in terms of $\\|x_1-x_2\\|$. However, when a random field, as rough as it may be, is input as the coefficient of an elliptic PDE, the intrinsic complexity of the resulting solution space, which depends on the coefficient highly nonlinearly and nonlocally, is greatly reduced.
This phenomenon can also be used to explain the severe ill-posedness of the inverse problem in which one tries to recover the coefficient of an elliptic PDE from boundary measurements, such as electrical impedance tomography (EIT).\n\\end{remark}\n\n\nBefore we end this subsection, we give a short review of existing methods for solving the problem \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} involving random coefficients. There are basically two types\nof methods. In intrusive methods, one represents the solution of \\eqref{MsStoEllip_ModelEq} by $u(x,\\omega)= \\sum_{\\alpha \\in J} u_{\\alpha}(x)H_{\\alpha}(\\omega)$, where $J$ is an index set and $H_{\\alpha}(\\omega)$ are certain basis functions (e.g. orthogonal polynomials). Typical examples are the Wiener chaos expansion (WCE) and the polynomial chaos expansion (PCE) method. Then, one uses the Galerkin method to compute the expansion coefficients $u_{\\alpha}(x)$; see \\cite{Ghanem:91,Xiu:03,babuska:04,matthies:05,WuanHou:06,Najm:09} and references therein. These methods have been successfully applied to many UQ problems where the dimension of the random input is small. However, the number of basis functions increases exponentially with the dimension of the random input, i.e., they suffer from the curse of dimensionality in both the input space and the output (solution) space.\n\nIn non-intrusive methods, one can use the MC method or the qMC method to solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC}. However, the convergence rate is slow, and the method becomes more expensive when the coefficient $a(x,\\omega)$ contains multiscale features. Stochastic collocation methods exploit the smoothness of the solutions in the random space and use certain quadrature points and weights to compute the solutions \\cite{Xiu:05,Babuska:07}. Exponential convergence can be achieved for smooth solutions, but the number of quadrature points grows exponentially as the number of random variables increases.
Sparse grids \\cite{Griebel:04,Webster:08} can reduce the number of quadrature points to some extent. However, the sparse grid method still becomes very expensive when the dimension of the randomness is modestly high.\n\nInstead of building random basis functions a priori or \nchoosing collocation quadrature points based on the random coefficient $a(x,\\omega)$ \n(see Eq.\\eqref{ParametrizeRandomCoefficient}), we extract the low-dimensional structure and a set of basis functions in the solution space directly from the data (or sampled solutions). Notice that the dimension of the extracted low-dimensional space depends mainly on $\\kappa_a$ (namely $a_{\\min}$ and $a_{\\max}$), and only very mildly on the dimension of the random input in $a(x,\\omega)$. Therefore, the curse of dimensionality can be alleviated.\n\n \n\\section{Derivation of the new data-driven method} \\label{sec:DerivationNewMethod}\nIn many physical and engineering applications, one needs to obtain the solution of Eq.\\eqref{MsStoEllip_ModelEq} on a subdomain $\\hat{D}\\subseteq D$. For instance, in reservoir simulation one is interested in computing the pressure value $u(x,\\omega)$ on a specific subdomain $\\hat{D}$. \n Our method consists of offline and online stages. In the offline stage, we extract the low-dimensional structure and a set of data-driven\nbasis functions from solution samples. For example, a set of solution samples $\\{u(x,\\omega_i)\\}_{i=1}^{N}$ can be obtained from measurements or generated by solving \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} with coefficient samples $\\{a(x,\\omega_i)\\}_{i=1}^{N}$. \n\nLet $V_l=\\{u|_{\\hat{D}}(x,\\omega_1),...,u|_{\\hat{D}}(x,\\omega_N)\\}$ denote the set of solution samples. We use POD \\cite{HolmesLumleyPOD:1993,Sirovich:1987,Willcox2015PODsurvey}, a.k.a. PCA, to find the optimal subspace and its orthonormal basis to approximate $V_l$ to a certain accuracy.
Define the correlation matrix \n$\\sigma_{ij}=\\langle u(x,\\omega_i), u(x,\\omega_j)\\rangle_{\\hat{D}}, i, j= 1, \\ldots, N$. Let the eigenvalues and corresponding eigenfunctions of the correlation matrix be $\\lambda_1\\ge \\lambda_2 \\ge \\ldots \\ge \\lambda_N \\ge 0$ and $\\phi_{1}(x)$, $\\phi_{2}(x), \\ldots, \\phi_N(x)$, respectively. The space spanned by the leading $K$ eigenfunctions has the following approximation property for $V_l$.\n\\begin{proposition}\\label{POD_proposition}\n\\begin{align}\n\\frac{\\sum_{i=1}^{N}\\Big|\\Big|u(x,\\omega_{i})- \\sum_{j=1}^{K}\\langle u(x,\\omega_{i}), \\phi_j\\rangle_{\\hat{D}}\\phi_j(x)\\Big|\\Big|_{L^2(\\hat{D})}^{2} }{\\sum_{i=1}^{N}\\Big|\\Big|u(x,\\omega_{i})\\Big|\\Big|_{L^2(\\hat{D})}^{2}}=\\frac{\\sum_{s=K+1}^{N} \\lambda_s}{\\sum_{s=1}^{N} \\lambda_s}.\n\\label{Prop_PODError}\n\\end{align}\n\\end{proposition} \nFirst, we expect a fast decay in $\\lambda_s$, so that a small $K\\ll N$ will be enough to approximate the solution samples well in the root mean square sense. \nSecondly, based on the existence of the low-dimensional structure implied by Theorem \\ref{ThmRandomGreenFuncSepaApp}, we expect that the data-driven basis $\\phi_{1}(x), \\phi_{2}(x), \\ldots, \\phi_{K}(x)$ can almost surely approximate the solution $u|_{\\hat{D}}(x,\\omega)$ well too \nunder some sampling condition (see Section \\ref{sec:DetermineNumberOfSamples}) by\n\\begin{align}\nu|_{\\hat{D}}(x,\\omega) \\approx \\sum_{j=1}^{K}c_{j}(\\omega)\\phi_{j}(x), \\quad \\text{a.s. }\n\\omega \\in \\Omega, \\label{RB_expansion}\n\\end{align} \nwhere the data-driven basis functions $\\phi_{j}(x)$, $j=1,...,K$, are defined on $\\hat{D}$.
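The POD construction and the error identity above can be checked with a short numerical sketch. This is our illustration: the snapshots below are synthetic smooth samples standing in for actual solution samples, and the discrete inner product uses a uniform mesh weight.

```python
import numpy as np

rng = np.random.default_rng(1)
J, N, K = 200, 50, 5                       # grid points, samples, kept modes

# synthetic snapshots u(x, omega_i): random combinations of a few smooth modes
x = np.linspace(0, 1, J)
modes = np.sin(np.outer(np.arange(1, 9), np.pi * x))          # 8 x J
U = rng.normal(size=(N, 8)) * (2.0 ** -np.arange(8)) @ modes  # N x J snapshots

h = x[1] - x[0]
C = U @ U.T * h                            # correlation matrix sigma_ij = <u_i, u_j>
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]             # descending eigenvalues

phi = V[:, :K].T @ U                       # method of snapshots: POD basis rows ...
phi /= np.sqrt(lam[:K, None])              # ... normalized to unit discrete L2 norm

# check the POD error identity (relative mean-square truncation error)
coeff = U @ phi.T * h                      # projection coefficients <u_i, phi_j>
resid = U - coeff @ phi
lhs = np.sum(resid**2) / np.sum(U**2)
rhs = lam[K:].sum() / lam.sum()
print(lhs, rhs)                            # the two sides agree to roundoff
```

The identity holds because the projection coefficients onto the $k$-th POD mode have mean square exactly equal to the $k$-th eigenvalue of the correlation matrix, so truncation discards precisely the tail eigenvalue mass.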
\nProp. \\ref{POD_proposition} remains valid in the case $\\hat{D}=D$, where the data-driven basis $\\phi_{j}(x)$, $j=1,...,K$, can be used in a Galerkin approach to solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} on the whole domain $D$ (see Section \\ref{sec:GlobalProblem}).\n\nNow the problem is how to find $c_{j}(\\omega)$ through an efficient online process given a new realization of $a(x,\\omega)$. We describe several strategies for different setups.\n\n\\subsection{Parametrized randomness}\\label{sec:parametrized}\nIn many applications, $a(x,\\omega)$ is parameterized by $r$ independent random variables, i.e., \n\\begin{align}\na(x,\\omega) = a(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega)).\n\\label{ParametrizeRandomCoefficient}\n\\end{align}\nThus, the solution can be represented as a function of these random variables as well, i.e., $u(x,\\omega) = u(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega))$.\nLet $\\gvec{\\xi}(\\omega)=[\\xi_1(\\omega),\\cdots,\\xi_r(\\omega)]^T$ denote the \nrandom input vector and $\\textbf{c}(\\omega)=[c_{1}(\\omega),\\cdots,c_{K}(\\omega)]^T$ denote the vector of solution coefficients in \\eqref{RB_expansion}. Now, the problem can be viewed as constructing \n a map from $\\gvec{\\xi}(\\omega)$ to $\\textbf{c}(\\omega)$, denoted by $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$, which is nonlinear. We approximate this nonlinear map from the set of sample solutions.
\n Given a set of solution samples $\\{u(x,\\omega_i)\\}_{i=1}^{N}$ corresponding to $\\{\\gvec{\\xi}(\\omega_i)\\}_{i=1}^{N}$, e.g., obtained by solving \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} with $a(x,\\xi_{1}(\\omega_i),...,\\xi_{r}(\\omega_i))$,\nfrom which the set of data-driven basis functions $\\phi_{j}(x), j=1,...,K$, is obtained using POD as described above, we can easily compute the projection coefficients $\\{\\textbf{c}(\\omega_i)\\}_{i=1}^{N}$ of $u|_{\\hat{D}}(x,\\omega_i)$ on $\\phi_{j}(x)$, $j=1,...,K$, i.e., $c_j(\\omega_i)=\\langle u(x,\\omega_i), \\phi_j\\rangle_{\\hat{D}}$. From the data set\n$\\textbf{F}(\\gvec{\\xi}(\\omega_i))= \\textbf{c}(\\omega_i)$, $i=1,...,N$, we construct the map $\\textbf{F}$. Note the significant dimension reduction achieved by reducing the map $\\gvec{\\xi}(\\omega)\\mapsto u(x,\\omega)$ to the map $\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$.\nWe provide a few ways to construct $\\textbf{F}$. \n\\begin{itemize}\n\\item Interpolation. \n\\\\\nWhen the dimension of the random input $r$ is small or moderate, one can use interpolation. In particular, if the solution samples correspond to $\\gvec{\\xi}$ located on a (sparse) grid, standard polynomial interpolation can be used to approximate the coefficient $c_j$ at a new point $\\gvec{\\xi}$. If the solution samples correspond to $\\gvec{\\xi}$ at scattered points, or the dimension of the random input $r$ is moderate or high, one can first find a few nearest neighbors of a new point efficiently using a $k$-d tree \\cite{wald2006building} and then use a moving least-squares approximation centered at the new point. \n\\item Neural network.\n\\\\\n When the dimension of the random input $r$ is high, the interpolation approach becomes expensive and less accurate; we show that a neural network seems to provide a satisfactory solution.\n\\end{itemize}\nMore implementation details will be explained in Section \\ref{sec:NumericalExperiments}, where the map $\\textbf{F}$ is plotted based on interpolation.
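The scattered-data branch of the interpolation strategy can be sketched as follows. This is our toy illustration: the smooth map below is a hypothetical stand-in for the true coefficient map, inverse-distance weighting over nearest neighbors stands in for moving least squares, and the neighbor search is brute force (a k-d tree scales better).

```python
import numpy as np

rng = np.random.default_rng(2)
r, K, N = 3, 4, 4000                  # random-input dimension, basis size, samples

W = 0.5 * rng.normal(size=(K, r))

def F_true(xi):
    """Hypothetical smooth map xi -> c, standing in for the true coefficient map."""
    return np.stack([np.sin(xi @ w) for w in W], axis=-1)

Xi = rng.uniform(-1, 1, size=(N, r))  # offline: sampled random inputs ...
C = F_true(Xi)                        # ... and their expansion coefficients

def F_interp(xi, k=6):
    """Approximate F at a new input by inverse-distance weighting of its
    k nearest sampled inputs."""
    d = np.linalg.norm(Xi - xi, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-12)
    return (w[:, None] * C[nn]).sum(axis=0) / w.sum()

xi_new = rng.uniform(-1, 1, size=r)
err = np.abs(F_interp(xi_new) - F_true(xi_new)).max()
print(err)                            # small for a smooth map and dense sampling
```

The accuracy degrades as the spacing of the samples grows, which is why denser sampling (or a trained regressor such as a neural network) is needed when the input dimension is high.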
\n\n\n \nIn the online stage, one can compute the solution $u(x,\\omega)$ of \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} using the constructed map $\\textbf{F}$. \nGiven a new realization of $a(x,\\xi_{1}(\\omega),...,\\xi_{r}(\\omega))$, we plug $\\gvec{\\xi}(\\omega)$ into the constructed map $\\textbf{F}$ and directly obtain $\\textbf{c}(\\omega)=\\textbf{F}(\\gvec{\\xi}(\\omega))$, the projection coefficients of the solution on the data-driven basis. We can then quickly obtain the new solution $u|_{\\hat{D}}(x,\\omega)$ using Eq.\\eqref{RB_expansion}, at negligible computational cost. Once we obtain the numerical solutions, we can use them to compute statistical quantities of interest, such as the mean, the variance, and joint probability distributions. \n\\begin{remark}\nIn Prop.\\ref{POD_proposition} we construct the data-driven basis functions from the eigen-decomposition of the correlation matrix associated with the solution samples. Alternatively, we can subtract the mean from the solution samples, compute the covariance matrix, and construct the basis functions from the eigen-decomposition of the covariance matrix. \n\\end{remark}\n\n\\subsection{Galerkin approach} \\label{sec:GlobalProblem}\n\\noindent\nIn the case $\\hat{D}=D$, we can solve \\eqref{MsStoEllip_ModelEq}-\\eqref{MsStoEllip_ModelBC} on \nthe whole domain $D$ by the standard Galerkin formulation using the data-driven basis for a new realization of $a(x,\\omega)$.\n\nOnce the data-driven basis functions $\\phi_{j}(x)$, $j=1,...,K$, which are defined on the domain $D$, are obtained from solution samples in the offline stage, \n given a new realization of the coefficient $a(x,\\omega)$, we approximate the corresponding solution as \n\\begin{align}\nu(x,\\omega) \\approx \\sum_{j=1}^{K}c_{j}(\\omega)\\phi_{j}(x), \\quad \\text{a.s. 
}\n\\omega \\in \\Omega, \\label{RB_expansion2}\n\\end{align} \nand use the Galerkin projection to determine the coefficients $c_{j}(\\omega)$, $j=1,...,K$, by solving the following linear system in the online stage:\n\\begin{align}\n\\sum_{j=1}^K \\int_{D}a(x,\\omega)c_{j}(\\omega)\\nabla\\phi_{j}(x)\\cdot\\nabla\\phi_{l}(x)dx = \\int_{D}f(x)\\phi_{l}(x)dx, \n \\quad l=1,...,K.\n \\label{GalerkinSystem}\n\\end{align}\n\n\\begin{remark}\nThe computational cost of solving the linear system \\eqref{GalerkinSystem} is small compared to applying a Galerkin method, such as the finite element method, directly to $u(x,\\omega)$, because $K$ is much smaller than the number of degrees of freedom needed to discretize $u(x,\\omega)$. \n\\end{remark}\n\nIf the coefficient $a(x,\\omega)$ has the affine parameter dependence property \\cite{RozzaPatera:2007}, \ni.e., $ a(x,\\omega) = \\sum_{n=1}^{r} a_{n}(x)\\xi_{n}(\\omega) $, we compute the terms that do not depend on the randomness, including $\\int_{D}a_{n}(x)\\nabla\\phi_{j}(x)\\cdot\\nabla\\phi_{l}(x)dx$ and\n$\\int_{D}f(x)\\phi_{l}(x)dx$, $j,l=1,...,K$, and save them in the offline stage. This leads to considerable savings in assembling the stiffness matrix for each new realization of the coefficient $a(x,\\omega)$ in the online stage.\nOf course, the affine form is automatically of the parametric type \\eqref{ParametrizeRandomCoefficient}. Hence, one can also construct the map $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ as described in Section \\ref{sec:parametrized}. \nIf the coefficient $a(x,\\omega)$ does not admit an affine form, we can apply the empirical interpolation method (EIM) \\cite{PateraMaday:2004} to convert $a(x,\\omega)$ into an affine form. \n\n\\subsection{Least-squares fitting from direct measurements at selected locations}\\label{sec:LS}\nIn many applications, only samples (data) or measurements of $u(x,\\omega)$ are available, while the model of $a(x,\\omega)$ or its realization is not known.
In this case, we propose to compute the coefficients $\\textbf{c}$ by least-squares fitting to the measurements (values) of $u(x,\\omega)$ at appropriately selected locations. First, as before, from a set of solution samples $u(x_j, \\omega_i)$, measured on a mesh $x_j \\in \\hat{D}, j=1, \\ldots, J$, one finds a set of data-driven basis functions $\\phi_1(x_j), \\ldots, \\phi_K(x_j)$, e.g. using POD. For a new solution $u(x,\\omega)$ measured at $x_1, x_2, \\ldots, x_M$, one can set up the following least-squares problem to find $\\vec{c}=[c_1, \\ldots, c_K]^T$ such that $u(x,\\omega)\\approx \\sum_{k=1}^K c_k\\phi_k(x)$:\n\\begin{equation}\n\\label{eq:LS}\nB \\vec{c}=\\vec{y}, \\quad \\vec{y}=[u(x_1,\\omega), \\ldots, u(x_M,\\omega)]^T, \\quad B=[\\boldsymbol{\\phi}^M_1, \\ldots, \\boldsymbol{\\phi}^M_K]\\in R^{M\\times K},\n\\end{equation}\nwhere $\\boldsymbol{\\phi}^M_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_M)]^T$. The key issue in practice is the conditioning of the least-squares problem \\eqref{eq:LS}. One way to address it is to select the measurement (sensor) locations $x_1, \\ldots, x_M$ such that the rows of $B$ are as decorrelated as possible. We adopt the approach proposed in \\cite{Kutz2017Sensor}, in which a QR factorization with pivoting of the matrix of data-driven basis functions is used to determine the measurement locations. More specifically, let $\\Phi=[\\boldsymbol{\\phi}_1, \\ldots, \\boldsymbol{\\phi}_K]\\in R^{J\\times K}$, $\\boldsymbol{\\phi}_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\\Phi\\Phi^T$. The first $M$ pivoting indices provide the measurement locations.
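The pivoted-QR sensor selection for the case M = K can be sketched as follows. This is our illustration: a synthetic sine basis stands in for a POD basis, and the greedy column selection below reproduces the pivot order of QR with column pivoting (pick the column of largest residual norm, then deflate the rest).

```python
import numpy as np

def pivot_columns(A, m):
    """Greedy column selection equivalent to QR with column pivoting:
    repeatedly take the column of largest residual norm, then deflate the
    remaining columns against it (modified Gram-Schmidt)."""
    R = A.astype(float).copy()
    piv = []
    for _ in range(m):
        j = int(np.argmax(np.sum(R**2, axis=0)))
        piv.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)
    return piv

rng = np.random.default_rng(3)
J, K = 300, 6                            # mesh points, data-driven basis size
x = np.linspace(0, 1, J)
Phi = np.sin(np.outer(x, np.pi * np.arange(1, K + 1)))   # J x K, columns = basis

# M = K sensors: pivoting on Phi^T picks well-conditioned rows of Phi
sensors = pivot_columns(Phi.T, K)
B = Phi[sensors]                         # K x K measurement matrix
print(np.linalg.cond(B))                 # modest condition number by construction

# recover the coefficients of a new "solution" from its K point values
c_true = rng.normal(size=K)
u = Phi @ c_true
c_rec = np.linalg.solve(B, u[sensors])
print(np.abs(c_rec - c_true).max())
```

Picking K random mesh points instead of the pivoted ones typically yields a much larger condition number for B, which is exactly the failure mode the pivoting is designed to avoid.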
More details can be found in \\cite{Kutz2017Sensor} and Section \\ref{sec:NumericalExperiments}.\n\n\n\\subsection{Extension to problems with parameterized force functions} \\label{sec:ExtensionTOManyFx}\n\\noindent\nIn many applications, we are interested in solving multiscale elliptic PDEs with random coefficients in the multiquery setting. A model problem is given as follows, \n\\begin{align}\n-\\nabla\\cdot\\big(a(x,\\omega)\\nabla u(x,\\omega)\\big) &= f(x,\\theta), \n\\quad x\\in D, \\quad \\omega\\in\\Omega, \\quad \\theta \\in \\Theta, \\label{MsStoEllipMultiquery_Eq}\\\\\nu(x,\\omega)&= 0, \\quad \\quad x\\in \\partial D, \\label{MsStoEllipMultiquery_BC}\n\\end{align}\nwhere the setting of the coefficient $a(x,\\omega)$ is the same as in \\eqref{ParametrizeRandomCoefficient}. \nNotice that the force function $f(x,\\theta)$ is parameterized by $\\theta\\in \\Theta$, where $\\Theta$ is a \nparameter set. In practice, we often need to solve the problem \\eqref{MsStoEllipMultiquery_Eq}-\\eqref{MsStoEllipMultiquery_BC} with multiple force functions $f(x,\\theta)$, which is known as the multiquery problem. It is computationally expensive to solve this kind of problem using traditional methods. \n\nSome attempts have been made in \\cite{ZhangCiHouMMS:15,hou2019model}, where data-driven stochastic methods were proposed to solve PDEs with random and multiscale coefficients. When the number of random variables in the coefficient $a(x,\\omega)$ is small, say less than 10, the methods developed in \\cite{ZhangCiHouMMS:15,hou2019model} can provide considerable savings in solving multiquery problems. However, they suffer from the curse of dimensionality in both the input space and the output (solution) space. \nOur method, which is based on extracting a low-dimensional structure in the output (solution) space via a data-driven basis, can be directly applied in this situation.
Numerical experiments are presented in Section \\ref{sec:NumericalExperiments}.\n\n\n\n \n\\subsection{Determine a set of good learning samples} \\label{sec:DetermineNumberOfSamples}\n\\noindent\nA set of good solution samples is important for the construction of the data-driven basis in the offline stage. \nHere we provide an error analysis based on the finite element formulation. However, the results extend to general Galerkin formulations.\nFirst, we make a few assumptions. \n\\begin{assumption} \\label{assumption2}\n\tSuppose $a(x,\\omega)$ has the following property: given $ \\delta_1 > 0$, there exist an integer $N_{\\delta_1}$ and a choice of snapshots $\\{a(x,\\omega_i)\\}$, $i=1,...,N_{\\delta_1}$, such that\n\t\\begin{align} \n\t\\mathds{E}\\left[\\inf_{1\\le i\\le N_{\\delta_1}} \\big|\\big|a(x,\\omega) - a(x,\\omega_i)\\big|\\big|_{L^\\infty(D)}\\right] \\le \\delta_1. \\label{asd}\n\t\\end{align}\n\\end{assumption}\nLet $\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ denote the samples of the random coefficient. When the coefficient has an affine form, we can verify Asm. \\ref{assumption2} and provide a constructive way to sample the snapshots \n$\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ if we know the distribution of the random variables $\\xi_{i}(\\omega)$, $i=1,...,r$.\n\nLet $V_h\\subset H_{0}^{1}(D)$ denote a finite element space spanned by nodal basis functions on a mesh with size $h$, and let $\\tilde{V}_h \\subset V_h$ denote the space spanned by the data-driven basis $\\{\\phi_{j}(x)\\}_{j=1}^{K}$. We assume the mesh size is fine enough so that the finite element space approximates the solutions of the underlying PDEs well. For each $a(x,\\omega_i)$, let $u_h(x,\\omega_i)\\in V_h$ denote the FEM solution and $\\tilde{u}_h(x,\\omega_i)\\in \\tilde{V}_h$ denote its projection onto the data-driven basis $\\{\\phi_{j}(x)\\}_{j=1}^{K}$.
\n\\begin{assumption} \\label{assumption3}\n\tGiven $\\delta_2 > 0$, we can find a set of data-driven basis functions $\\phi_1, \\ldots, \\phi_{K_{\\delta_2}}$ such that \n\t\\begin{align}\n\t||u_h(x,\\omega_i)-\\tilde{u}_h(x,\\omega_i)||_{L^2(D)} \\le \\delta_2,\\ \\forall 1\\le i \\le N_{\\delta_1}, \\label{equation_asumption2}\n\t\\end{align}\n\twhere $\\tilde{u}_h(x,\\omega_i)$ is the $L^2$ projection of $u_h(x,\\omega_i)$ onto the space spanned by $\\phi_1, \\ldots, \\phi_{K_{\\delta_2}}$.\n\\end{assumption}\nAsm.\\ref{assumption3} can be verified by setting the threshold in the POD method; see Prop.\\ref{POD_proposition}. Now we present the following error estimate. \n\n\\begin{theorem} \\label{error_theorem1}\n\tUnder Assumptions \\ref{assumption2}-\\ref{assumption3}, for any $\\delta_i > 0$, $i=1,2$, we can choose the samples of the random coefficient $\\{a(x,\\omega_i)\\}_{i=1}^{N_{\\delta_1}}$ and the threshold in constructing the data-driven basis accordingly, such that \n\t\\begin{align}\n\t\\mathds{E}\\left[\\big|\\big|u_h(x,\\omega) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)}\\right] \n\t\\leq C\\delta_1 + \\delta_2, \\label{error_theorem}\n\t\\end{align}\n\twhere $C$ depends on $a_{\\min}$, $f(x)$ and the domain $D$.\n\\end{theorem}\n\\begin{proof}\n\tGiven a coefficient $a(x,\\omega)$, let $u_h(x,\\omega)$ and $\\tilde{u}_h(x,\\omega)$ be the corresponding FEM solution and data-driven solution, respectively.
We have\n\t\\begin{align} \\label{proof_basis_error}\n\t&\\big|\\big|u_h(x,\\omega) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)} \\nonumber\\\\\n\t\\le &\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{L^2(D)} + \n\t\\big|\\big|u_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega_i)\\big|\\big|_{L^2(D)} + \\big|\\big|\\tilde{u}_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)} \\nonumber\\\\\n\t=:& I_1 + I_2 + I_3,\n\t\\end{align}\n\twhere $u_h(x,\\omega_i)$ is the solution corresponding to the coefficient $a(x,\\omega_i)$ and $\\tilde{u}_h(x,\\omega_i)$ is its projection. We first estimate the term $I_1$. In weak form, we have\n\t\\begin{align}\n\t\\int_{D}a(x,\\omega)\\nabla u_h(x,\\omega)\\cdot \\nabla v_h(x)dx=\\int_{D}f(x)v_h(x)dx, \\quad \\text{for all} \\quad v_h(x)\\in V_h,\n\t\\label{FEMsolutionWeakForm1} \n\t\\end{align}\n\tand\n\t\\begin{align} \n\t\\int_{D}a(x,\\omega_i)\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x)dx=\\int_{D}f(x)v_h(x)dx, \\quad \\text{for all} \\quad v_h(x)\\in V_h.\n\t\\label{FEMsolutionWeakForm2} \n\t\\end{align}\n\tSubtracting the variational formulations \\eqref{FEMsolutionWeakForm1}-\\eqref{FEMsolutionWeakForm2} for \n\t$u_h(x,\\omega)$ and $u_h(x,\\omega_i)$, we find that for all $v_h(x)\\in V_h$, \n\t\\begin{align}\n\t\\int_{D}a(x,\\omega)\\nabla (u_h(x,\\omega)-u_h(x,\\omega_i))\\cdot\\nabla v_h(x)dx\n\t=-\\int_{D}(a(x,\\omega)-a(x,\\omega_i))\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x)dx. \n\t\\label{FEMsolutionWeakForm3} \n\t\\end{align}\n\tLet $w_h(x)=u_h(x,\\omega)-u_h(x,\\omega_i)$ and let $L(v_h)=-\\int_{D}(a(x,\\omega)-a(x,\\omega_i))\\nabla u_h(x,\\omega_i)\\cdot\\nabla v_h(x)dx$ denote the associated linear form. Eq.~\\eqref{FEMsolutionWeakForm3} means that \n\t$w_h(x)$ is the solution of the weak form $\\int_{D}a(x,\\omega)\\nabla w_h\\cdot\\nabla v_h(x)dx=L(v_h)$. 
Therefore, we have \n\t\\begin{align}\n\t\\big|\\big|w_h(x)\\big|\\big|_{H^1(D)}\\leq \\frac{||L||_{H^{-1}(D)}}{a_{\\min}}.\n\t\\label{EstimateError}\n\t\\end{align}\n\tNotice that \n\t\\begin{align}\n\t||L||_{H^{-1}(D)} =\\max_{||v_h||_{H^1(D)}=1}|L(v_h)|&\\leq ||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\n\t||u_h(x,\\omega_i)||_{H^1(D)} \\nonumber \\\\\n\t&\\leq ||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}}.\n\t\\label{EstimateError2}\n\t\\end{align}\n\tSince $w_h(x)=0$ on $\\partial D$, combining \\eqref{EstimateError}-\\eqref{EstimateError2} and using the Poincar\\'e inequality on $w_h(x)$, we obtain an estimate for the term $I_1$ as \n\t\\begin{align}\n\t\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{L^2(D)} &\\leq C_1\\big|\\big|u_h(x,\\omega) - u_h(x,\\omega_i)\\big|\\big|_{H^1(D)} \\nonumber \\\\\n\t&\\leq C_1||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}^2},\n\t\\label{EstimateError3}\n\t\\end{align}\n\twhere $C_1$ depends only on the domain $D$. For the term $I_3$ in Eq.~\\eqref{proof_basis_error}, we similarly get \n\t\\begin{align}\n\t\\big|\\big|\\tilde{u}_h(x,\\omega_i) - \\tilde{u}_h(x,\\omega)\\big|\\big|_{L^2(D)} \n\t\\leq C_1||a(x,\\omega)-a(x,\\omega_i)||_{L^\\infty(D)}\\frac{||f(x)||_{H^1(D)}}{a_{\\min}^2}.\n\t\\label{I3}\n\t\\end{align}\n\tThe term $I_2$ in Eq.~\\eqref{proof_basis_error} is controlled by Asm.~\\ref{assumption3}. \n\tCombining the estimates for the terms $I_1$, $I_2$ and $I_3$ and taking the expectation \n\tover the random space, we prove the theorem. \n\\end{proof} \n \nTheorem \\ref{error_theorem1} indicates that the error between $u_h(x,\\omega)$ and its approximation $\\tilde{u}_h(x,\\omega)$ using the data-driven basis consists of two parts: the first depends on how well the random coefficient is sampled, while the second depends on the truncation threshold used in constructing the data-driven basis from the solution samples. 
In practice, balancing these two factors and the discretization error (of the numerical method used to solve the PDEs) guides the choice of the solution samples and of the truncation threshold in the POD method so as to achieve optimal accuracy. Again, the key advantage of our data-driven approach for this class of elliptic PDEs is the low-dimensional structure of the solution space, which provides a significant dimension reduction. \n\n\n\\section{Numerical experiments} \\label{sec:NumericalExperiments}\n\\noindent\nIn this section, we present various numerical experiments to demonstrate the accuracy and efficiency of the proposed data-driven method. \n\\subsection{An example with five random variables}\\label{sec:Example1}\n\\noindent\nWe consider a multiscale elliptic PDE with a random coefficient, defined on the square domain $D=[0,1]\\times[0,1]$,\n\\begin{align}\\label{randommultiscaleelliptic}\n\\begin{split}\n-\\nabla\\cdot(a(x,y,\\omega)\\nabla u(x,y,\\omega)) &= f(x,y), \\quad (x,y)\\in D, \\omega\\in\\Omega,\\\\\nu(x,y,\\omega)&=0, \\quad \\quad (x,y)\\in\\partial D.\n\\end{split}\n\\end{align}\nIn this example, the coefficient $a(x,y,\\omega)$ is defined as \n\\begin{align} \na(x,y,\\omega) =& 0.1 + \\frac{2+p_1\\sin(\\frac{2\\pi x}{\\epsilon_1})}{2-p_1\\cos(\\frac{2\\pi y}{\\epsilon_1})} \\xi_1(\\omega)\n+ \\frac{2+p_2\\sin(\\frac{2\\pi (x+y)}{\\sqrt{2}\\epsilon_2})}{2-p_2\\sin(\\frac{2\\pi (x-y)}{\\sqrt{2}\\epsilon_2})}\\xi_2(\\omega)\n+ \\frac{2+p_3\\cos(\\frac{2\\pi (x-0.5)}{\\epsilon_3})}{2-p_3\\cos(\\frac{2\\pi (y-0.5)}{\\epsilon_3})}\\xi_3(\\omega) \\nonumber \\\\\n&+ \\frac{2+p_4\\cos(\\frac{2\\pi (x-y)}{\\sqrt{2}\\epsilon_4})}{2-p_4\\sin(\\frac{2\\pi (x+y)}{\\sqrt{2}\\epsilon_4})}\\xi_4(\\omega)\n+ \\frac{2+p_5\\cos(\\frac{2\\pi (2x-y)}{\\sqrt{5}\\epsilon_5})}{2-p_5\\sin(\\frac{2\\pi (x+2y)}{\\sqrt{5}\\epsilon_5})}\\xi_5(\\omega), \\label{coefficientofexample1}\n\\end{align}\nwhere 
$[\\epsilon_1,\\epsilon_2,\\epsilon_3,\\epsilon_4,\\epsilon_5]=[\\frac{1}{47},\\frac{1}{29},\\frac{1}{53},\\frac{1}{37},\\frac{1}{41}]$, $[p_1,p_2,p_3,p_4,p_5]=[1.98,1.96,1.94,1.92,1.9]$, and $\\xi_i(\\omega)$, $i=1,...,5$ are i.i.d. uniform random variables in $[0,1]$. The contrast ratio in the coefficient \\eqref{coefficientofexample1} is $\\kappa_a\\approx 4.5\\times 10^3$. The force function is $f(x,y) = \\sin(2\\pi x)\\cos(2\\pi y)\\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. \nThe coefficient \\eqref{coefficientofexample1} is highly oscillatory in the physical space. \nTherefore, one needs a fine discretization to resolve the small-scale variations in the problem. \nWe shall show results for the solution to \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample1} in: (1) a restricted subdomain $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$ away from the support $D_2$ of the source term $f(x,y)$; and (2) the full domain $D$.\n\nIn all of our numerical experiments, we use the same uniform triangulation to implement the standard FEM and choose the mesh size $h=\\frac{1}{512}$ in order to resolve the multiscale information. We use $N=2000$ samples in the offline stage to construct the data-driven basis and determine the number of basis functions $K$ according to the decay rate of the eigenvalues of the correlation matrix of the solution samples, i.e., $\\sigma_{ij}=\\langle u_h(\\cdot,\\omega_i),u_h(\\cdot,\\omega_j)\\rangle_{L^2(D)}$, $i,j=1, \\dots, N$.\n\nIn Figure \\ref{fig:Example1localeigenvalues}, we show the decay of the eigenvalues. Specifically, we show the magnitude of the eigenvalues in Figure \\ref{fig:Example1localeigenvalues1a} and the ratio of the accumulated sum of the leading eigenvalues over the total sum in Figure \\ref{fig:Example1localeigenvalues1b}. 
These results and Prop.~\\ref{POD_proposition} imply that a few leading eigenvectors provide a set of data-driven basis functions that can approximate all solution samples well. \n\\begin{figure}[tbph]\n\t\\centering \n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example1localeigenvalues1a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example1localeigenvalues1b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the local problem of Sec.~\\ref{sec:Example1}.}\n\t\\label{fig:Example1localeigenvalues}\n\\end{figure} \n\nAfter we construct the data-driven basis, we use spline interpolation to approximate the mapping $\\textbf{F}:\\gvec{\\xi} \\mapsto \\textbf{c}(\\gvec{\\xi})$. Notice that the coefficient \\eqref{coefficientofexample1} is parameterized by five i.i.d. random variables. We can therefore discretize the random space $[\\xi_1(\\omega),\\xi_2(\\omega),\\cdots,\\xi_5(\\omega)]^T\\in [0,1]^5$ with a uniform grid in order to \nconstruct the mapping $\\textbf{F}$. Here we choose $N_1=9^5$ samples. We remark that other sampling strategies, such as sparse-grid points and Latin hypercube points, can also be used. In Figure \\ref{fig:Example1localbasismapping}, we show the profiles of the first two data-driven basis functions $\\phi_{1}$ and $\\phi_{2}$ and the plots of the mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ with fixed $[\\xi_3,\\xi_4,\\xi_5]^T=[0.25, 0.5, 0.75]^T$. 
One can see that the data-driven basis functions contain multiscale features and that the mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ are smooth with respect to $\\xi_i$, $i=1,2$. The behaviors of the other data-driven basis functions and mappings are similar (not shown here).\n\n\\begin{figure}[tbph]\n\t\\centering\t\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_basis_zeta1-eps-converted-to.pdf}\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_mapping_c1-eps-converted-to.pdf}\\\\\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_basis_zeta2-eps-converted-to.pdf}\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_local_mapping_c2-eps-converted-to.pdf}\\\\\n\t\\end{subfigure}\n\t\\caption{Plots of the data-driven basis functions $\\phi_{1}$ and $\\phi_{2}$ and the mappings $c_1(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ and $c_2(\\xi_1,\\xi_2;\\xi_3,\\xi_4,\\xi_5)$ with fixed $[\\xi_3,\\xi_4,\\xi_5]^T=[0.25, 0.5, 0.75]^T$.}\n\t\\label{fig:Example1localbasismapping}\n\\end{figure} \n\nOnce we obtain the mapping $\\textbf{F}$, the solution corresponding to a new realization $a(x,\\gvec{\\xi}(\\omega))$ can be constructed easily by finding $\\textbf{c}(\\gvec{\\xi})$ and plugging it into the approximation \\eqref{RB_expansion}. In Figure \\ref{fig:Example1locall2err}, we show the mean relative testing and projection errors in the $L^2$ and $H^1$ norms. The testing error is the error between the numerical solution obtained by our mapping method and the reference solution obtained by the FEM on the same fine mesh used to compute the sample solutions. The projection error is the error between the FEM solution and its projection on the space spanned by the data-driven basis, i.e., the best possible approximation error. 
In this experiment, only four data-driven basis functions are needed to achieve a relative error below $1\\%$ in the $L^2$ norm and below $2\\%$ in the $H^1$ norm. Moreover, the numerical solution obtained by our mapping method is close to the projection solution, which is the best approximation of the reference solution by the data-driven basis; this is due to the smoothness of the mapping. Notice that the computational time of the mapping method is almost negligible. In practice, with 10 basis functions, it takes about $0.0022s$ to get a new solution by the mapping method, whereas the standard FEM takes about $0.73s$. \n\\begin{figure}[tbph]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\n\t\\includegraphics[width=1.0\\linewidth]{ex1_local_L2err-eps-converted-to.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\\includegraphics[width=1.0\\linewidth]{ex1_local_H1err-eps-converted-to.pdf}\n\t\\end{subfigure}\n\t\\caption{ Relative $L^2$ and $H^1$ errors with an increasing number of basis functions for the local problem of Sec.~\\ref{sec:Example1}.}\n\t\\label{fig:Example1locall2err}\n\\end{figure} \n\nIn Figure \\ref{fig:Example1localdiffN}, we show the accuracy of the proposed method for different numbers of samples $N$ used in constructing the data-driven basis. Although the numerical error generally decreases as $N$ increases, the difference is mild. 
\n\\begin{figure}[tbph]\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanL2err_diffN.pdf}\n\t\\caption{Testing errors in $L^2$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanL2proj_diffN.pdf}\n\t\\caption{Projection errors in $L^2$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanH1err_diffN.pdf}\n\t\\caption{Testing errors in $H^1$ norm.}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{ex1_meanH1proj_diffN.pdf}\n\t\\caption{Projection errors in $H^1$ norm.}\n\t\\end{subfigure}\n\t\\caption{ The relative testing\/projection errors in the $L^2$ and $H^1$ norms for different numbers of samples $N$ for the local problem of Sec.~\\ref{sec:Example1}.}\n\t\\label{fig:Example1localdiffN}\n\\end{figure} \n\nNext, we test our method on the whole computational domain for \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample1}. Figure \\ref{fig:Example1globaleigenvalues} shows the decay of the eigenvalues. As before, we show the magnitudes of the leading eigenvalues in Figure \\ref{fig:Example1globaleigenvalues3a} and the ratio of the accumulated sum of the eigenvalues over the total sum in Figure \\ref{fig:Example1globaleigenvalues3b}, and we observe similar behavior. Since we approximate the solution in the whole computational domain, we take the Galerkin approach described in Section \\ref{sec:GlobalProblem} using the data-driven basis. \nIn Figure \\ref{fig:Example1globall2engerr}, we show the mean relative error between our numerical solution and the reference solution in the $L^2$ and $H^1$ norms, respectively. 
In practice, with 15 basis functions, it takes about $0.084s$ to compute a new solution by our method, whereas the standard FEM costs about $0.82s$ per solution. \n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of the eigenvalues.}\n\t\t\\label{fig:Example1globaleigenvalues3a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example1globaleigenvalues3b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues for the global problem of Sec.~\\ref{sec:Example1}.}\n\t\\label{fig:Example1globaleigenvalues}\n\\end{figure} \n\n\\begin{figure}[tbph] \n\t\\centering\t\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_L2err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $L^2$ norm.}\n\t\t\\label{fig:Example1globall2err}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex1_global_H1err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $H^1$ norm.} \n\t\t\\label{fig:Example1globalengerr}\n\t\\end{subfigure}\n\t\\caption{The relative errors with an increasing number of basis functions for the global problem of Sec.~\\ref{sec:Example1}.}\n\t\\label{fig:Example1globall2engerr}\n\\end{figure} \n\n\n\\subsection{An example with an exponential-type coefficient}\\label{sec:Example2}\n\\noindent\nWe now solve the problem \\eqref{randommultiscaleelliptic} with an exponential-type coefficient. 
\nThe coefficient is parameterized by eight random variables and has the following form \n\\begin{align}\na(x,y,\\omega) =&\\exp\\Big( \\sum_{i=1}^8 \\sin(\\frac{2\\pi (9-i)x}{9\\epsilon_i})\\cos(\\frac{2\\pi iy}{9\\epsilon_i})\\xi_i(\\omega) \\Big),\n\\label{coefficientofexample2}\n\\end{align}\nwhere the multiscale parameters $[\\epsilon_1,\\epsilon_2,\\cdots,\\epsilon_{8}] =[\\frac{1}{43},\\frac{1}{41},\\frac{1}{47},\\frac{1}{29},\\frac{1}{37},\\frac{1}{31},\\frac{1}{53},\\frac{1}{35}]$ and $\\xi_i(\\omega)$, $i=1,...,8$ are i.i.d. uniform random variables in $[-\\frac{1}{2},\\frac{1}{2}]$. The resulting contrast ratio of the coefficient \\eqref{coefficientofexample2} is $\\kappa_a\\approx 3.0\\times 10^3$. The force function is $f(x,y) = \\cos(2\\pi x)\\sin(2\\pi y)\\cdot I_{D_2}(x,y)$, where $I_{D_2}$ is an indicator function defined on $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. \nIn the local problem, the subdomain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. \n\nIn Figure \\ref{fig:Example2eigenvalues}, we show the decay of the eigenvalues. Specifically, in Figure \\ref{fig:Example2eigenvalues-a} we show the magnitude of the leading eigenvalues and in Figure \\ref{fig:Example2eigenvalues-b} we show the ratio of the accumulated sum of the eigenvalues over the total sum. These results imply that the solution space has a low-dimensional structure, which can be approximated by the data-driven basis functions. 
\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example2eigenvalues-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_acc_eigenvalues-eps-converted-to.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example2eigenvalues-b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the problem of Sec.~\\ref{sec:Example2}.}\n\t\\label{fig:Example2eigenvalues}\n\\end{figure} \nSince the coefficient $a(x,y,\\omega)$ is parameterized by eight random variables, it is expensive to construct the mapping $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ using interpolation on uniform grids. Instead, we use a sparse grid polynomial interpolation approach to approximate the mapping $\\textbf{F}$. Specifically, we use Legendre polynomials with total order less than or equal to 4 to approximate the mapping, where the total number of nodes is $N_1=2177$; see \\cite{Griebel:04}.\n\nFigure \\ref{fig:Example2errors-a} shows the relative testing and projection errors in the $L^2$ norm, and Figure \\ref{fig:Example2errors-b} shows the corresponding relative errors in the $H^1$ norm. The sparse grid polynomial interpolation approach gives errors comparable to the best approximation error. We observe similar convergence results in solving the global problem \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexample2} (not shown here). Therefore, the sparse grid method can be used to construct mappings for problems with a moderate number of random variables. 
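To illustrate the polynomial surrogate of $\mathbf{F}$, the sketch below evaluates the tensor-product Legendre basis of total degree at most 4 in $r=8$ variables and fits one mapping component by least squares on random points. This is a stand-in for the sparse-grid interpolation used above, and the target function is hypothetical; it is itself a polynomial of total degree 4, so the fit reproduces it to machine precision:

```python
import itertools
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
r, p = 8, 4                     # number of variables, total polynomial degree

# Multi-indices of total degree <= p in r variables (495 of them for r=8, p=4).
idx = [k for k in itertools.product(range(p + 1), repeat=r) if sum(k) <= p]

def design(Xi):
    """Evaluate the tensor-product Legendre basis at points Xi in [-1,1]^r."""
    eye = np.eye(p + 1)
    # vals[i][n] = P_n evaluated at the i-th coordinate of every sample.
    vals = [np.stack([legendre.legval(Xi[:, i], eye[n]) for n in range(p + 1)])
            for i in range(r)]
    return np.stack([np.prod([vals[i][k[i]] for i in range(r)], axis=0)
                     for k in idx], axis=1)

# Hypothetical mapping component of total degree <= p: recovered exactly.
target = lambda Xi: (Xi[:, 0] * Xi[:, 1]) ** 2 + Xi[:, 2]

Xi_train = rng.uniform(-1.0, 1.0, size=(4000, r))
coef, *_ = np.linalg.lstsq(design(Xi_train), target(Xi_train), rcond=None)

Xi_test = rng.uniform(-1.0, 1.0, size=(500, r))
err = np.max(np.abs(design(Xi_test) @ coef - target(Xi_test)))
print(f"max error on test points: {err:.2e}")
```

A sparse-grid implementation would replace the random training points by the $N_1=2177$ structured nodes and the least-squares fit by interpolation, but the total-degree basis is the same.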
\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_L2err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $L^2$ norm.}\n\t\t\\label{fig:Example2errors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex2_local_H1err-eps-converted-to.pdf} \n\t\t\\caption{ Relative error in $H^1$ norm.} \n\t\t\\label{fig:Example2errors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with an increasing number of basis functions in the problem of Sec.~\\ref{sec:Example2}.}\n\t\\label{fig:Example2errors}\n\\end{figure} \n\n\\subsection{An example with a discontinuous coefficient}\\label{sec:ExampleInterface}\n\\noindent \nWe solve the problem \\eqref{randommultiscaleelliptic} with a discontinuous coefficient, \nwhich makes it an interface problem. The coefficient is parameterized by twelve random variables and has the following form \n\t\\begin{align}\na(x,y,\\omega) =& \\exp\\Big(\\sum_{i=1}^{6} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{6}) +y\\cos(\\frac{i\\pi}{6}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big)\\cdot I_{D\\setminus D_3}(x,y)\\nonumber\\\\\n&+\\exp\\Big(\\sum_{i=1}^{6} \\sin(2\\pi \\frac{x\\sin(\\frac{(i+0.5)\\pi}{6}) +y\\cos(\\frac{(i+0.5)\\pi}{6}) }{\\epsilon_{i+6}} )\\xi_{i+6}(\\omega) \\Big)\\cdot I_{D_3}(x,y),\n\\label{coefficientofexampleInterface}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1+i}{100}$ for $i=1,\\cdots,6$, $\\epsilon_{i}=\\frac{i+13}{100}$ for $i=7,\\cdots,12$, $\\xi_i(\\omega)$, $i=1,\\cdots,12$ are i.i.d. uniform random variables in $[-\\frac{2}{3},\\frac{2}{3}]$, and $I_{D_3}$ and $I_{D\\setminus D_3}$ are indicator functions. \nThe subdomain $D_3$ consists of three small rectangles of width $10h$ and height $0.8$, with edges parallel to the edges of the domain $D$; their lower-left vertices are located \nat $(0.3,0.1)$, $(0.5,0.1)$ and $(0.7,0.1)$, respectively. 
The contrast ratio in the coefficient \\eqref{coefficientofexampleInterface} is $\\kappa_a\\approx 3\\times 10^3$. In Figure \\ref{fig:ExampleInterfaceRealizations} we show two realizations of the coefficient \\eqref{coefficientofexampleInterface}. \n\n\t\\begin{figure}[htbp]\n\t\t\\centering\n\t\t\\includegraphics[width=0.49\\linewidth]{Ex5_DiffCoef_realization1.pdf}\n\t\t\\includegraphics[width=0.49\\linewidth]{Ex5_DiffCoef_realization5.pdf} \n\t\t\\caption{Two realizations of the coefficient \\eqref{coefficientofexampleInterface} in the interface problem.} \n\t\t\\label{fig:ExampleInterfaceRealizations}\n\t\\end{figure}\n \n\tWe now solve the local problem of \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexampleInterface}, where the domain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. The force function is $f(x,y) = \\cos(2\\pi x)\\sin(2\\pi y)\\cdot I_{D_2}(x,y)$, where $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. In Figure \\ref{fig:ExampleInterfaceeigenvalues-a} and Figure \\ref{fig:ExampleInterfaceeigenvalues-b} we show the magnitude of the dominant eigenvalues and the corresponding approximation accuracy. These results show that only a few data-driven basis functions are enough to approximate all solution samples well. 
\n\t\\begin{figure}[tbph] \n\t\t\\centering\n\t\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_eigenvalues.pdf} \n\t\t\t\\caption{ Decay of eigenvalues.}\n\t\t\t\\label{fig:ExampleInterfaceeigenvalues-a}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_acc_eigenvalues.pdf} \n\t\t\t\\caption{$1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\t\\label{fig:ExampleInterfaceeigenvalues-b}\n\t\t\\end{subfigure}\n\t\t\\caption{The decay properties of the eigenvalues in the problem of Sec.~\\ref{sec:ExampleInterface}.}\n\t\t\\label{fig:ExampleInterfaceeigenvalues}\n\t\\end{figure} \n\n Since the coefficient \\eqref{coefficientofexampleInterface} is parameterized by twelve random variables, constructing the mapping $\\textbf{F}:\\gvec{\\xi}(\\omega)\\mapsto \\textbf{c}(\\omega)$ using the sparse grid polynomial interpolation also becomes very expensive. Here we use the least-squares method combined with the $k$-d tree algorithm for nearest-neighbor search to approximate the mapping $\\textbf{F}$. \n \n In our method, we first generate $N_1=5000$ data pairs $\\{(\\gvec{\\xi}^n(\\omega),\\textbf{c}^n(\\omega))\\}_{n=1}^{N_1}$ that will be used as training data. \n Then, we use $N_2=200$ samples for testing in the online stage. For each new testing data point $\\gvec{\\xi}(\\omega)=[\\xi_1(\\omega),\\cdots,\\xi_r(\\omega)]^T$ (here $r=12$), we run the $k$-d tree algorithm to find its $n$ nearest neighbors in the training data set and apply the least-squares method to compute the corresponding mapped value $\\vec{c}(\\omega)=[c_1(\\omega), \\ldots, c_K(\\omega)]^T$. The complexity of constructing a $k$-d tree is $O(N_1\\log N_1)$. Given the $k$-d tree, for each testing point the complexity of finding its $n$ nearest neighbors is $O(n\\log N_1)$ \\cite{wald2006building}. 
Since the $n$ training data points are close to the testing data point $\\gvec{\\xi}(\\omega)$, for each training pair $(\\gvec{\\xi}^m(\\omega),\\textbf{c}^m(\\omega))$, $m=1,\\ldots,n$, we compute the first-order Taylor expansion of each component $c^m_j(\\omega)$ at $\\gvec{\\xi}(\\omega)$ as \n \\begin{align}\n c^m_j(\\omega)\\approx c_j(\\omega)+\\sum_{i=1}^{r}(\\xi^m_i-\\xi_i)\\frac{\\partial c_j}{\\partial \\xi_i}(\\omega),\\quad j=1,2,\\cdots,K, \n \\label{least-square-system}\n \\end{align}\n where $\\xi^m_i$, $i=1,...,r$, and $c^m_j(\\omega)$, $j=1,...,K$, are given training data, while $c_j(\\omega)$ and \n $\\frac{\\partial c_j}{\\partial \\xi_i}(\\omega)$, $j=1,...,K$, are unknowns associated with the testing data point $\\gvec{\\xi}(\\omega)$. In the $k$-d tree algorithm, we choose $n=20$, which is slightly greater than $r+1=13$. By solving \\eqref{least-square-system} using the least-squares method, we get the mapped value $\\vec{c}(\\omega)=[c_1(\\omega), \\ldots, c_K(\\omega)]^T$. Finally, we use the formula \\eqref{RB_expansion} to get the numerical solution of Eq.~\\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexampleInterface}. \n \n Because of the discontinuity and the high-dimensional random input in the coefficient \\eqref{coefficientofexampleInterface}, the problem \\eqref{randommultiscaleelliptic} is more challenging. The nearest-neighbor-based least-squares method provides an efficient way to construct mappings and achieves relative errors below $3\\%$ in both the $L^2$ and $H^1$ norms;\n see Figure \\ref{fig:ExampleInterfacelocalerrors}. 
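The nearest-neighbor least-squares construction of \eqref{least-square-system} can be sketched with a k-d tree as follows. The mapping $\mathbf{F}$ here is a hypothetical linear map, for which the first-order fit is exact; a smooth nonlinear mapping would incur an additional curvature error that shrinks as the neighborhoods shrink:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
r, K, n = 12, 6, 20           # input dimension, number of coefficients, neighbors

# Hypothetical mapping F: xi -> c(xi); linear here, so the first-order
# (Taylor) least-squares fit recovers it exactly.
W = rng.standard_normal((r, K))
F = lambda Xi: Xi @ W + 1.0

Xi_train = rng.uniform(-2 / 3, 2 / 3, size=(5000, r))
C_train = F(Xi_train)
tree = cKDTree(Xi_train)      # O(N1 log N1) construction

def predict(xi):
    """Local least-squares fit of c_j and dc_j/dxi_i around a test point xi."""
    _, nbr = tree.query(xi, k=n)                  # n nearest neighbors
    # Unknowns per component j: [c_j(xi), dc_j/dxi_1, ..., dc_j/dxi_r].
    A = np.hstack([np.ones((n, 1)), Xi_train[nbr] - xi])    # (n, r+1)
    sol, *_ = np.linalg.lstsq(A, C_train[nbr], rcond=None)  # (r+1, K)
    return sol[0]                                 # the fitted values c_j(xi)

xi_test = rng.uniform(-2 / 3, 2 / 3, size=r)
err = np.max(np.abs(predict(xi_test) - F(xi_test[None])[0]))
print(f"max coefficient error: {err:.2e}")
```

Note that the design matrix solves for all $K$ components at once, and $n=20>r+1=13$ rows keep the overdetermined system well posed.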
Alternatively, one can use the neural network method to construct mappings for this type of challenging problem; see Section \\ref{sec:Example3}.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_L2err.pdf}\n\t\n\t\t\\label{fig:ExampleInterfaceerrors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{Ex5channel_local_H1err.pdf}\n\t\n\t\t\\label{fig:ExampleInterfaceerrors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with an increasing number of basis functions in the local problem of Sec.~\\ref{sec:ExampleInterface}.}\n\t\\label{fig:ExampleInterfacelocalerrors}\n\\end{figure}\n\n\\subsection{An example with high-dimensional random coefficient and force function}\\label{sec:Example3}\n\\noindent\nWe solve the problem \\eqref{randommultiscaleelliptic} with an exponential-type coefficient and a random force function,\nwhere the total number of random variables is twenty. Specifically, the coefficient is parameterized by eighteen i.i.d. random variables, i.e.,\n\\begin{align}\na(x,y,\\omega) = \\exp\\Big(\\sum_{i=1}^{18} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{18}) +y\\cos(\\frac{i\\pi}{18}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big),\n\\label{coefficientofexample3}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1}{2i+9}$, $i=1,2,\\cdots,18$ and $\\xi_i(\\omega)$, $i=1,...,18$ are i.i.d. uniform random variables in $[-\\frac{1}{5},\\frac{1}{5}]$. The force function is a Gaussian density function $f(x,y) = \\frac{1}{2\\pi\\sigma^2}\\exp(-\\frac{(x-\\theta_1)^2+(y-\\theta_2)^2}{2\\sigma^2})$ with $\\sigma=0.01$ and a random center $(\\theta_1,\\theta_2)$ uniformly distributed in the subdomain $D_2=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$. 
When $\\sigma$ is small, the \nGaussian density function $f(x,y)$ can be used to approximate the Dirac-$\\delta$ function, e.g., to model wells in reservoir simulations.\n\nWe first solve the local problem of \\eqref{randommultiscaleelliptic} with the coefficient \\eqref{coefficientofexample3}, where the subdomain of interest is $D_1=[\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{11}{16},\\frac{15}{16}]$. In Figures \\ref{fig:Example3eigenvalues-a} and \\ref{fig:Example3eigenvalues-b}, we show the magnitude of the leading eigenvalues and the ratio of the accumulated sum of the eigenvalues over the total sum, respectively. We observe similar exponential decay of the eigenvalues even though the force function contains randomness. These results show that we can still build a set of data-driven basis functions to solve problem \\eqref{randommultiscaleelliptic} with coefficient \\eqref{coefficientofexample3}.\n\n\\begin{figure}[tbph] \n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_eigenvalues.pdf} \n\t\t\\caption{ Decay of eigenvalues.}\n\t\t\\label{fig:Example3eigenvalues-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_acc_eigenvalues.pdf} \n\t\t\\caption{ $1-\\sqrt{\\sum_{j=n+1}^{N}\\lambda_{j}\/\\sum_{j=1}^{N}\\lambda_{j}}$, $n=1,2,...$.} \n\t\t\\label{fig:Example3eigenvalues-b}\n\t\\end{subfigure}\n\t\\caption{The decay properties of the eigenvalues in the problem of Sec.~\\ref{sec:Example3}.}\n\t\\label{fig:Example3eigenvalues}\n\\end{figure} \n \nNotice that both the coefficient and the force contain randomness here. We put the random variables $\\gvec{\\xi}(\\omega)$ in the coefficient and the random variables $\\gvec{\\theta}(\\omega)$ in the force together when we construct the mapping $\\textbf{F}$. Moreover, the dimension of the randomness, $18+2=20$, is too large even for sparse grids. 
Here we construct the mapping $\\textbf{F}:(\\gvec{\\xi}(\\omega),\\gvec{\\theta}(\\omega))\\mapsto \\textbf{c}(\\omega)$ using a neural network, as depicted in Figure \\ref{fig:DNNstructure2}. The network has 4 hidden layers, each with 50 units; the number of input units is 20 and the number of output units is $K$. The map from the input units to the first hidden layer is an affine transform, as is the map from the last hidden layer to the output units. Consecutive hidden layers are connected by an affine transform, a tanh (hyperbolic tangent) activation and a residual connection, i.e., $\\textbf{h}_{l+1}=\\tanh(\\textbf{A}_l \\textbf{h}_l+\\textbf{b}_l)+\\textbf{h}_l$, $l=1,2,3$, where $\\textbf{h}_l$ is the $l$-th layer of hidden units, $\\textbf{A}_l$ is a 50-by-50 matrix and $\\textbf{b}_l$ is a 50-by-1 vector. Under the same network setting, if the rectified linear unit (ReLU), which is piecewise linear, is used\nas the activation function, we observe a much larger error. Therefore we choose the hyperbolic tangent activation function and implement a residual neural network (ResNet) \\cite{he2016deep}. 
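A minimal forward pass of the residual network just described (4 hidden layers of width 50, tanh activations, and residual connections between consecutive hidden layers) can be sketched as follows; the weights below are random placeholders, whereas in practice they are trained on the data pairs $(\gvec{\xi}^n,\gvec{\theta}^n,\mathbf{c}^n)$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Shapes from the text: 20 inputs, 4 hidden layers of width 50, K outputs.
# The weights are random placeholders; in practice they are trained.
d_in, width, K = 20, 50, 10
W_in = rng.standard_normal((width, d_in)) * 0.1
b_in = np.zeros(width)
A = [rng.standard_normal((width, width)) * 0.1 for _ in range(3)]   # l = 1, 2, 3
b = [np.zeros(width) for _ in range(3)]
W_out = rng.standard_normal((K, width)) * 0.1
b_out = np.zeros(K)

def resnet(x):
    """h_{l+1} = tanh(A_l h_l + b_l) + h_l between consecutive hidden layers."""
    h = W_in @ x + b_in                 # affine map into the first hidden layer
    for A_l, b_l in zip(A, b):
        h = np.tanh(A_l @ h + b_l) + h  # residual block
    return W_out @ h + b_out            # affine map to the K outputs

c = resnet(rng.uniform(-0.2, 0.2, size=d_in))
print(c.shape)
```

The residual connections keep the map close to the identity at initialization, which, together with the smooth tanh activation, matches the smoothness of the coefficient mappings observed in the earlier examples.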
\n\n\\begin{figure}[tbph]\n\\tikzset{global scale\/.style={\n scale=#1,\n every node\/.append style={scale=#1}\n }\n}\n\t\\centering\n\t\\begin{tikzpicture}[global scale=0.6]\n\t\\tikzstyle{inputvariables} = [circle, very thick, fill=yellow,draw=black, minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{hiddenvariables} = [circle, thick, draw =black, fill=blue,minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{outputvariables} = [circle, very thick, draw=black, fill=red,minimum height=0.1cm, text width = 0.2cm]\n\t\\tikzstyle{dottedvariables} = [thick, fill=white, minimum size = 0.2cm]\n\t\\tikzstyle{textnode} = [thick, fill=white, minimum size=0.1cm ]\n\n\t\\node[inputvariables] (x1) at (0,0) {$\\xi_1$};\n\t\\node[inputvariables, below=0.4cm of x1] (x2) {$\\xi_2$};\n\t\\node[dottedvariables, below=0.1cm of x2] (x3) {$\\vdots$};\n\t\\node[inputvariables, below=0.1cm of x3] (x4) {$\\xi_{r_1}$};\n\t\\node[inputvariables, below=0.4cm of x4] (x5) {$\\theta_{1}$};\n\t\\node[inputvariables, below=0.1cm of x5] (x6) {$\\theta_{2}$};\n\t\\node[dottedvariables, below=0.1cm of x6] (x7) {$\\vdots$};\n\t\\node[inputvariables, below=0.4cm of x7] (x8) {$\\theta_{r_2}$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x4.south west)--(x1.north west);\n\t\\node[textnode, above left of=x3,left=0.2cm] {$\\gvec{\\xi}(\\omega)$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (x8.south west)--(x5.north west);\n\t\\node[textnode, above left of=x7,left=0.2cm] {$\\gvec{\\theta}(\\omega)$};\n\n\t\\node[hiddenvariables] (h1) at (3,-1) {};\n\t\\node[hiddenvariables, below=0.4cm of h1] (h2) {};\n\t\\node[dottedvariables, below=0.1cm of h2] (h3) {$\\vdots$};\n\t\\node[dottedvariables, below=0.1cm of h3] (h4) {$\\vdots$};\n\t\\node[hiddenvariables, below=0.4cm of h4] (h5) {};\n\t\\node[hiddenvariables, below=0.4cm of h5] (h6) {};\n\t\n\t\\node[dottedvariables] (h31) at (5.5,-1) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h31] (h32) 
{$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h32] (h33) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h33] (h34) {$\\dots$};\n\t\\node[dottedvariables, below=0.5cm of h34] (h35) {$\\cdots$};\n\t\\node[dottedvariables, below=0.5cm of h35] (h36) {$\\cdots$};\n\t\n\t\\node[hiddenvariables] (h21) at (8,-1) {};\n\t\\node[hiddenvariables, below=0.4cm of h21] (h22) {};\n\t\\node[dottedvariables, below=0.1cm of h22] (h23) {$\\vdots$};\n\t\\node[dottedvariables, below=0.1cm of h23] (h24) {$\\vdots$};\n\t\\node[hiddenvariables, below=0.4cm of h24] (h25) {};\n\t\\node[hiddenvariables, below=0.4cm of h25] (h26) {};\n\n\t\\node[outputvariables] (y1) at (11,0) {$c_1$};\n\t\\node[outputvariables, below=0.4cm of y1] (y2) {$c_2$};\n\t\\node[outputvariables, below=0.4cm of y2] (y3) {$c_3$};\n\t\\node[dottedvariables, below=0.2cm of y3] (y4) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y4] (y5) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y5] (y6) {$\\vdots$};\n\t\\node[dottedvariables, below=0.2cm of y6] (y7) {$\\vdots$};\n\t\\node[outputvariables, below=0.2cm of y7] (y8) {$c_k$};\n\t\\draw[-,thick,decorate, decoration={brace, raise=0.3cm}] (y1.north east)--(y8.south east);\n\t\\node[textnode, above right of=y5, right=0.2cm] {$\\mathbf{c}(\\omega)$};\n\n\t\\path [-] (x1) edge node {} (h1);\n\t\\path [-] (x2) edge node {} (h1);\n\t\\path [-] (x4) edge node {} (h1);\n\t\\path [-] (x5) edge node {} (h1);\n\t\\path [-] (x6) edge node {} (h1);\n\t\\path [-] (x8) edge node {} (h1);\n\t\\path [-] (x1) edge node {} (h2);\n\t\\path [-] (x2) edge node {} (h2);\n\t\\path [-] (x4) edge node {} (h2);\n\t\\path [-] (x5) edge node {} (h2);\n\t\\path [-] (x6) edge node {} (h2);\n\t\\path [-] (x8) edge node {} (h2);\n\t\\path [-] (x1) edge node {} (h5);\n\t\\path [-] (x2) edge node {} (h5);\n\t\\path [-] (x4) edge node {} (h5);\n\t\\path [-] (x5) edge node {} (h5);\n\t\\path [-] (x6) edge node {} (h5);\n\t\\path [-] (x8) edge node {} (h5);\n\t\\path [-] (x1) 
edge node {} (h6);\n\t\\path [-] (x2) edge node {} (h6);\n\t\\path [-] (x4) edge node {} (h6);\n\t\\path [-] (x5) edge node {} (h6);\n\t\\path [-] (x6) edge node {} (h6);\n\t\\path [-] (x8) edge node {} (h6);\n\n\t\\path [-] (h1) edge node {} (h31);\n\t\\path [-] (h2) edge node {} (h31);\n\t\\path [-] (h5) edge node {} (h31);\n\t\\path [-] (h6) edge node {} (h31);\n\t\\path [-] (h1) edge node {} (h32);\n\t\\path [-] (h2) edge node {} (h32);\n\t\\path [-] (h5) edge node {} (h32);\n\t\\path [-] (h6) edge node {} (h32);\n\t\\path [-] (h1) edge node {} (h33);\n\t\\path [-] (h2) edge node {} (h33);\n\t\\path [-] (h5) edge node {} (h33);\n\t\\path [-] (h6) edge node {} (h33);\n\t\\path [-] (h1) edge node {} (h34);\n\t\\path [-] (h2) edge node {} (h34);\n\t\\path [-] (h5) edge node {} (h34);\n\t\\path [-] (h6) edge node {} (h34);\n\t\\path [-] (h1) edge node {} (h35);\n\t\\path [-] (h2) edge node {} (h35);\n\t\\path [-] (h5) edge node {} (h35);\n\t\\path [-] (h6) edge node {} (h35);\n\t\\path [-] (h1) edge node {} (h36);\n\t\\path [-] (h2) edge node {} (h36);\n\t\\path [-] (h5) edge node {} (h36);\n\t\\path [-] (h6) edge node {} (h36);\n\t\n\t\\path [-] (h21) edge node {} (h31);\n\t\\path [-] (h22) edge node {} (h31);\n\t\\path [-] (h25) edge node {} (h31);\n\t\\path [-] (h26) edge node {} (h31);\n\t\\path [-] (h21) edge node {} (h32);\n\t\\path [-] (h22) edge node {} (h32);\n\t\\path [-] (h25) edge node {} (h32);\n\t\\path [-] (h26) edge node {} (h32);\n\t\\path [-] (h21) edge node {} (h33);\n\t\\path [-] (h22) edge node {} (h33);\n\t\\path [-] (h25) edge node {} (h33);\n\t\\path [-] (h26) edge node {} (h33);\n\t\\path [-] (h21) edge node {} (h34);\n\t\\path [-] (h22) edge node {} (h34);\n\t\\path [-] (h25) edge node {} (h34);\n\t\\path [-] (h26) edge node {} (h34);\n\t\\path [-] (h21) edge node {} (h35);\n\t\\path [-] (h22) edge node {} (h35);\n\t\\path [-] (h25) edge node {} (h35);\n\t\\path [-] (h26) edge node {} (h35);\n\t\\path [-] (h21) edge node {} 
(h36);\n\t\\path [-] (h22) edge node {} (h36);\n\t\\path [-] (h25) edge node {} (h36);\n\t\\path [-] (h26) edge node {} (h36);\n\n\t\\path [-] (h21) edge node {} (y1);\n\t\\path [-] (h22) edge node {} (y1);\n\t\\path [-] (h25) edge node {} (y1);\n\t\\path [-] (h26) edge node {} (y1);\n\t\\path [-] (h21) edge node {} (y2);\n\t\\path [-] (h22) edge node {} (y2);\n\t\\path [-] (h25) edge node {} (y2);\n\t\\path [-] (h26) edge node {} (y2);\n\t\\path [-] (h21) edge node {} (y3);\n\t\\path [-] (h22) edge node {} (y3);\n\t\\path [-] (h25) edge node {} (y3);\n\t\\path [-] (h26) edge node {} (y3);\n\t\\path [-] (h21) edge node {} (y8);\n\t\\path [-] (h22) edge node {} (y8);\n\t\\path [-] (h25) edge node {} (y8);\n\t\\path [-] (h26) edge node {} (y8);\n\t\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, above=1.0cm of h31] (Text1){Hidden units};\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, left =1.8cm of Text1] (Text2) {Input units};\n\t\\node[textnode, font=\\fontsize{15}{6}\\selectfont, right = 1.8cm of Text1] {Output units};\n\t\\end{tikzpicture}\n\t\\caption{Structure of the neural network, where $r_1=18$ and $r_2=2$.}\n\t\\label{fig:DNNstructure2}\n\\end{figure} \nWe use $N_1=5000$ samples for network training in the offline stage and $N_2=200$ samples for testing in the online stage. The sample data pairs for training are $\\{(\\gvec{\\xi}^n(\\omega),\\gvec{\\theta}^n(\\omega)),\\textbf{c}^n(\\omega)\\}_{n=1}^{N_1}$, where \n$\\gvec{\\xi}^n(\\omega)\\in [-\\frac{1}{5},\\frac{1}{5}]^{18}$, $\\gvec{\\theta}^n(\\omega)\\in [\\frac{1}{4},\\frac{3}{4}]\\times[\\frac{1}{16},\\frac{5}{16}]$, and $\\textbf{c}^n(\\omega)\\in R^{K}$. 
We define the loss function for network training as \n\\begin{align}\nloss\\big(\\{\\textbf{c}^n\\},\\{\\textbf{\\^{c}}^n\\}\\big) = \\frac{1}{N_1}\\sum_{n=1}^{N_1}\\frac{1}{K}|\\textbf{c}^{n}-\\textbf{\\^{c}}^{n}|^2, \n\\end{align}\nwhere $\\textbf{c}^{n}$ are the training data and $\\textbf{\\^{c}}^n$ are the outputs of the neural network.\n\n \nFigure \\ref{fig:Example3locall2err-a} shows the value of the loss function during the training procedure. Figure \\ref{fig:Example3locall2err-b} shows the corresponding mean relative error of the testing samples in the $L^2$ norm. \nEventually the relative error of the neural network reaches about $1.5\\times 10^{-2}$. \nFigure \\ref{fig:Example3locall2err-c} shows the corresponding mean relative error of the testing samples in the $H^1$ norm. We remark that many existing methods become extremely expensive or infeasible when the problem is parameterized by high-dimensional random variables like this one. \n \n\\begin{figure}[htbp]\n\t\\centering\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t$K=5$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_5.pdf} \\\\\n\t\t$K=10$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_10.pdf} \\\\\n\t\t$K=20$\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingloss_20.pdf} \n\t\t\\caption{ Loss.}\n\t\t\\label{fig:Example3locall2err-a}\n\t\\end{subfigure}\n\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_5.pdf} \\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_10.pdf} \\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingL2err_20.pdf} \n\t\t\\caption{ Relative $L^2$ error.} \n\t\t\\label{fig:Example3locall2err-b}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[b]{0.32\\textwidth}\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_5.pdf}\\\\ 
\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_10.pdf}\\\\\n\t\t~~\\\\\n\t\t\\includegraphics[width=1.0\\linewidth]{ex3_local_trainingH1err_20.pdf}\n\t\t\\caption{ Relative $H^1$ error.} \n\t\t\\label{fig:Example3locall2err-c}\n\t\\end{subfigure}\n\t\\caption{ First column: the value of the loss function during the training procedure. Second and third columns: the mean relative errors of the testing set during the training procedure in the $L^2$ and $H^1$ norms, respectively.}\n\t\\label{fig:Example3locall2err}\n\\end{figure}\n\n\n\n\\subsection{An example with unknown random coefficient and source function}\\label{sec:Example4}\n\\noindent\nHere we present an example where the models of the random coefficient and source are unknown. Only a set of sample solutions is provided, and a few sensors can be placed at certain locations for solution measurements. This kind of scenario appears often in practice. We use the least square fitting method as described in Section \\ref{sec:LS}. Our numerical experiment is still based on \\eqref{randommultiscaleelliptic}, which is used to generate solution samples (instead of experiments or measurements as in real practice). But once the data are generated, we do not assume any knowledge of the coefficient or the source when computing a new solution. \n\n\nTo be specific, the coefficient takes the form\n\\begin{align}\na(x,y,\\omega) = \\exp\\Big(\\sum_{i=1}^{24} \\sin(2\\pi \\frac{x\\sin(\\frac{i\\pi}{24}) +y\\cos(\\frac{i\\pi}{24}) }{\\epsilon_i} )\\xi_i(\\omega) \\Big)\n\\label{coefficientofexample4}\n\\end{align}\nwhere $\\epsilon_i=\\frac{1+i}{100}$, $i=1,2,\\cdots,24$ and $\\xi_i(\\omega)$, $i=1,...,24$ are i.i.d. uniform random variables in $[-\\frac{1}{6},\\frac{1}{6}]$. The source function is a random function $f(x,y) = \\sin(\\pi(\\theta_1x+2\\theta_2))\\cos(\\pi(\\theta_3y+2\\theta_4))\\cdot I_{D_2}(x,y)$ with i.i.d. 
uniform random variables $\\theta_1,\\theta_2,\\theta_3,\\theta_4$ in $[0,2]$. We first generate $N=2000$ solution samples (using standard FEM) $u(x_j, \\omega_i), i=1, \\ldots, N, j=1, \\ldots, J$, where $x_j$ are the points where solution samples are measured. Then a set of $K$ data-driven basis functions $\\phi_k(x_j), j=1, \\ldots, J, k=1, \\dots, K$ is extracted from the solution samples as before. \n\nNext we determine $M$ good sensing locations from the data-driven basis so that the least square problem \\eqref{eq:LS} is not ill-conditioned. We follow the method proposed in \\cite{Kutz2017Sensor}. Define $\\Phi=[\\boldsymbol{\\phi}_1, \\ldots, \\boldsymbol{\\phi}_K]\\in R^{J\\times K}$, where $\\boldsymbol{\\phi}_k=[\\phi_k(x_1), \\ldots, \\phi_k(x_J)]^T$. If $M=K$, QR factorization with column pivoting is performed on $\\Phi^T$. If $M>K$, QR factorization with pivoting is performed on $\\Phi\\Phi^T$. The first $M$ pivoting indices provide the measurement locations. Once a new solution is measured at these $M$ selected locations, the least square problem \\eqref{eq:LS} is solved to determine the coefficients $c_1, c_2, \\ldots, c_K$, and the new solution is approximated by $u(x_j,\\omega)=\\sum_{k=1}^K c_k\\phi_k(x_j)$.\n\n \nIn Figure \\ref{fig:Example4localerrors} and Figure \\ref{fig:Example4globalerrors}, we show the results of the local problem and the global problem, respectively. In these numerical results, we \ncompare the error between the reconstructed solutions and the reference solution. We find that our proposed method works well for problem \\eqref{randommultiscaleelliptic} with a non-parametric coefficient or source as well. 
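The selection-and-reconstruction step described above can be sketched compactly. This is only an illustration: the grid size $J$, the values of $K$ and $M$, and the synthetic test data below are hypothetical, and SciPy's pivoted QR stands in for whatever factorization routine is actually used.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def sensor_locations(Phi, M):
    """Pick M measurement rows of Phi (J x K) via QR with column pivoting,
    following the sensor-placement strategy cited in the text."""
    J, K = Phi.shape
    if M == K:
        _, _, piv = qr(Phi.T, pivoting=True)       # pivot the columns of Phi^T
    else:                                          # M > K case
        _, _, piv = qr(Phi @ Phi.T, pivoting=True)
    return piv[:M]

def reconstruct(Phi, rows, measurements):
    """Least-squares fit of c in Phi[rows] @ c ~ measurements, then u = Phi @ c."""
    c, *_ = lstsq(Phi[rows], measurements)
    return Phi @ c, c

# Hypothetical data: J = 200 grid points, K = 4 basis vectors, M = 6 sensors.
rng = np.random.default_rng(1)
Phi = np.linalg.qr(rng.normal(size=(200, 4)))[0]   # orthonormal basis columns
rows = sensor_locations(Phi, 6)
u_true = Phi @ np.array([1.0, -2.0, 0.5, 3.0])     # a solution in the basis span
u_rec, c = reconstruct(Phi, rows, u_true[rows])
print(np.allclose(u_rec, u_true))  # True
```

Since the synthetic solution lies exactly in the span of the basis, the six well-chosen measurements recover it exactly; for real data the residual reflects the projection error.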
\n \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_local_L2err.pdf}\n\t\t\\label{fig:Example4errors-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_local_H1err.pdf}\n\t\t\\label{fig:Example4errors-b}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with an increasing number of basis functions in the local problem of Sec.~\\ref{sec:Example4}.}\n\t\\label{fig:Example4localerrors}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_global_L2err.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n\t\t\\includegraphics[width=1.0\\linewidth]{ex4_global_H1err.pdf}\n\t\\end{subfigure}\n\t\\caption{ The relative errors with an increasing number of basis functions in the global problem of Sec.~\\ref{sec:Example4}.}\n\t\\label{fig:Example4globalerrors}\n\\end{figure}\n \n\\section{Conclusion} \\label{sec:Conclusion}\n\\noindent \nIn this paper, we propose a data-driven approach to solve elliptic PDEs with multiscale and random coefficients, which arise in various applications, such as heterogeneous porous media flow problems in water aquifer and oil reservoir simulations. The key idea of our method, which is motivated by the highly separable approximation of the underlying Green's function, is to extract a problem-specific low-dimensional structure in the solution space and construct its basis from the data. Once the data-driven basis is available, depending on different setups, we design several ways to compute a new solution efficiently.\n\nError analysis based on the sampling error of the coefficients and the projection error of the data-driven basis is presented to provide some guidance in the implementation of our method. 
Numerical examples show that the proposed method is very efficient, especially when the problem has relatively high-dimensional random inputs.\n\n \n \n\n\n\n\n\\section*{Acknowledgements}\n\\noindent\nThe research of S. Li is partially supported by the Doris Chen Postgraduate Scholarship. \nThe research of Z. Zhang is supported by the Hong Kong RGC General Research Funds (Projects 27300616, 17300817, and 17300318), National Natural Science Foundation of China (Project 11601457), Seed Funding Programme for Basic Research (HKU), and Basic Research Programme (JCYJ20180307151603959) of The Science, Technology and Innovation Commission of Shenzhen Municipality. The research of H. Zhao is partially supported by NSF grant DMS-1622490 and DMS-1821010. This research is made possible by a donation to the Big Data Project Fund, HKU, from Dr Patrick Poon whose generosity is gratefully acknowledged.\n\n\n\n\n\\bibliographystyle{siam}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe projects of the $e^+e^-$ Linear Collider (LC) -- ILC, CLIC, ... -- contain one essential element that is not present in other colliders.\nHere each $e^-$ (or $e^+$) bunch will be used only\nonce, and after the physics collisions there remain two very dense and\nstrongly collimated beams of high energy electrons with a\nprecisely known time structure. 
We consider,\nfor definiteness, electron beam parameters of the ILC\nproject \\cite{ILC}:\n \\bear{l}\n\\mbox{\\it particle energy}\\;\\; E_e=250\\div 500\\;{\\rm GeV}, \\\\\n\\mbox{\\it number of electrons per second}\\;\\; N_e\\sim 10^{14}\/{\\rm s},\\\\\n\\mbox{\\it transverse size and angular spread are negligible};\\\\\n\\mbox{\\it time structure is complex and precisely known}.\n \\eear{beampar}\n\n\nThe problem of dealing with this powerful beam dump is\nunder extensive discussion, see, e.g.,~\\cite{ILC}.\n\nAbout 10 years ago we suggested utilizing such used beams in the TESLA project to\ninitiate operation of a subcritical fission reactor and to\nconstruct a neutrino factory ($\\nu$F)~\\cite{LCWS05,reactor}.\nWith the new studies of the ILC and CLIC, these proposals should be renewed.\\\\\n\n\n\n\n\n\\bu Neutrino factories promise to solve many problems. The existing\nprojects (see, e.g.,~\\cite{nufact1}-\\cite{nufact3}) are very\nexpensive, and their physical potential is limited by the\nexpected neutrino energy and the productivity of the neutrino\nsource.\n\n\n\n\nThe proposed $\\nu$F based on an LC is much less expensive\nthan those discussed nowadays since there are no additional costs\nfor the construction of a high intensity and high energy particle source. The\ncombination of the high number of particles in the beam and\nthe high particle energy with a precisely known time\nstructure \\eqref{beampar} provides very favorable properties of such a $\\nu$F.\nThe initial beam will be prepared in the LC independently of the $\\nu$F\nconstruction. The construction demands no special\nelectronics except that for detectors. The initial beam\nis very well collimated; therefore, no additional efforts for\nbeam cooling are necessary. 
Using the IceCube detector in\nAntarctica or the Lake Baikal detector, or a specially prepared detector not so far from the LC, as a far distance detector (FDD)\nallows one to study $\\nu_\\mu-\\nu_\\tau$ oscillations in detail and to observe possible oscillations $\\nu_\\mu\\to \\nu_{sterile}$ (in the latter case\nvia a measurement of the deficit of $\\nu_\\mu N\\to \\mu X$ events).\n\nThe neutrino beam will have a very well known discrete time\nstructure that repeats the same structure in the LC. This\nfact allows one to separate cosmic and similar backgrounds\nduring operation with high precision. The very simple structure of\nthe neutrino generator allows one to calculate the energy spectrum\nand the content of the main neutrino beam with high accuracy.\nThese must be verified with high precision in a nearby detector\n(NBD).\n\nIn this project the incident neutrino beam will contain mainly $\\nu_\\mu$ and\n$\\bar{\\nu}_\\mu$ with a small admixture of $\\nu_e$\nand $\\bar{\\nu}_e$ and a tiny admixture of $\\nu_\\tau$ and\n$\\bar{\\nu}_\\tau$ (the latter can be calculated only with low\nprecision). For an electron beam energy of 250~GeV, neutrino\nenergies are spread up to about\n80~GeV with a mean energy of about 30 GeV, providing reliable\nobservation of $\\tau$ produced by $\\nu_\\tau$ from the\n$\\nu_\\mu-\\nu_\\tau$ oscillations. In the physical program of the\ndiscussed $\\nu$F we consider only the problem of\n$\\nu_\\mu-\\nu_\\tau$ and\/or $\\nu_\\mu\n-\\nu_{sterile}$ oscillations. The potential of this $\\nu$F for solving\nother problems of $\\nu$ physics should be studied after\na detailed consideration of the project, see also~\\cite{nufact3}.\\vspace{-3mm}\n\n\n\n\n\\section{Elements of neutrino factory. 
Scheme}\n\n\n\nThe proposed scheme deals with the electron beam used in the LC\nand contains the following parts, Fig.~1:\\\\\n\\bu Beam bending magnet (BM).\\\\\n\\bu Pion producer (PP), \\, \\\\ \\bu Neutrino\ntransformer (NT),\\\\\n \\bu Nearby detector (NBD), \\\\ \\ \\bu Far distance detector (FDD).\n\\\\\n\\begin{figure}[hbt]\n\\centering \\includegraphics[width=0.75\\textwidth, height=2.5cm]{Scem2.eps}\n\\caption{Main parts of the neutrino factory after the BM.}\n \\end{figure}\n\n\n\n\\section{Beam bending magnet}\n\nThe system should start with a bending magnet situated after\nthe detector of the basic collider. It\nturns the used beam to the angle necessary to reach the FDD,\nsacrificing monochromaticity but without essential growth of the angular\nspread. The vertical component of the turning angle $\\alpha_V$\nis determined by the Earth's curvature. Let us denote the\ndistance from the LC to the FDD at the Earth's surface by $L_F$. To reach the FDD, the initial\nbeam (and therefore the NT) should be turned before the PP by the\nangle $\\alpha_V =L_F\/(2R_E)$ below the horizon (here\n$R_E$ is the Earth's radius).\n\n\nThe horizontal component of the turning angle can be minimized\nby a suitable choice of the LC orientation\n(the orientation of the incident beam near the LC collision point).\n\n\n\\section{Pion producer (PP)}\n\nThe PP can be, for example, a 20~cm long water cylinder\n({\\it one radiation length}). Water in the cylinder can be circulated for cooling.\nIn this PP almost every electron will produce a bremsstrahlung\nphoton with energy $E_\\gamma=100-200$~GeV. The angular\nspread of these photons is roughly the same as that\nof the initial beam ($\\sim 0.1$~mrad). Bremsstrahlung\nphotons have an additional angular spread of about\n$1\/\\gamma\\approx 2\\cdot 10^{-6}$. 
These two spreads are\nnegligible for our problem.\n\nThese photons then collide with nuclei and produce pions,\n \\be\n\\gamma N\\to N'+\\pi\\;'s,\\quad \\sigma\\approx 110\\,\\mu b.\n \\ee\nThis process gives about $10^{-3}$ $\\gamma N$ collisions\nper electron, which corresponds to about $\\sim 10^{11}$ $\\gamma N$\ncollisions per second. On average, each of these collisions\nproduces a single pion with high energy $E_\\pi>E_\\gamma\/2$ (for\nestimates, $\\la E_\\pi^h\\ra=70$~GeV) and at least 2-3 pions\nwith lower energy (for estimates, $\\la E_\\pi^\\ell\\ra\\approx\n20$~GeV).\n\nThe mean transverse momentum of these pions is 350-500 MeV. The\nangular spread of high energy pions with the energy $\\la\nE_\\pi^h\\ra$ is within 7 mrad. The increase of the angular\nspread of pions with decreasing energy is compensated by the\ngrowth of the number of produced pions. Therefore, for\nestimates we accept that the pion flux within an angular\ninterval of 7~mrad contains $\\sim 10^{11}$~pions with\n$E_\\pi=\\la E_\\pi^h\\ra$ and the same number of pions with\n$E_\\pi=\\la E_\\pi^\\ell\\ra$ per second. Let us denote the\nenergy distribution of pions flying almost forward by\n$f(E)$.\n\n$\\circ$ Certainly, more refined calculations should also consider\nthe production and decay of $K$ mesons, etc.\n\n\n\\bu The production of $\\nu_\\tau$ in the reaction mentioned in ref.~\\cite{Telnov},\n \\begin{subequations}\\label{nutsorce}\n \\be\n\\gamma N\\to D_s^\\pm X\\to \\nu_\\tau \\bar{\\tau} X\\,,\n \\ee\nplays the most essential role for our estimates. Its cross\nsection increases rapidly with energy and\n \\be\n\\sigma(\\gamma N\\to \\tau+...)\\approx 2\\cdot 10^{-33}\\, {\\rm cm}^2\\;\\;\n{\\rm at} \\;\\;\nE_\\gamma\\approx 50 \\mbox{ GeV}\\,.\n \\ee\n \\end{subequations}\n\n\n\\section{Neutrino transformer (NT). 
Neutrino beams}\n\nFor the neutrino transformer (NT), we suggest a high vacuum\nbeam pipe of length $L_{NT}\\approx 1$~km and radius\n$r_{NT}\\approx 2$~m.\nHere muon neutrinos $\\nu_\\mu$ and\n$\\bar{\\nu}_\\mu$ are created in $\\pi\\to\\mu\\nu$ decays. The\nlength $L_{NT}$ ensures that more than one quarter of the pions with\n$E_\\pi\\le\\la E_\\pi^h\\ra$ decay. The pipe with radius\n$r_{NT}$ gives an angular coverage of 2 mrad, which cuts\nout 1\/12 of the total flux of low and medium energy\nneutrinos. With the growth of the pion energy, two factors act\nin opposite directions. First, the initial angular\nspread of the pions decreases with this growth; therefore, the fraction\nof the flux selected by the pipe increases. Second, the\nnumber of pion decays within a relatively short pipe\ndecreases with this growth. These two tendencies compensate each\nother in the resulting flux.\n\nThe energy distribution of neutrinos obtained from the decay of a pion\nwith energy $E$ is uniform in the interval $(0,\\,aE)$\nwith $a=1-(m_\\mu \/m_\\pi)^2$. Therefore, the\ndistribution $F(\\vep)$ of the neutrino energy $\\vep$ can be obtained from\nthe energy distribution of pions near the forward direction, $f(E)$,\nas\n \\be\nF(\\vep)=\\int\\limits_{\\vep\/a}^{E_e} f(E)dE\/(aE)\\,,\\qquad\na=1-\\fr{m_\\mu^2}{m_\\pi^2}\\approx 0.43\\,.\\label{spectr}\n \\ee\nThe increase of the angular spread in the decay is negligible\nin the rough approximation. 
Finally, at the end of the NT we\nexpect to have a neutrino flux within the angle of 2~mrad of\n \\bear{c}\n2\\cdot 10^{9} \\nu\/{\\rm s} \\;\\; {\\rm with}\\;\\; E_\\nu=\\la\nE_\\nu^h\\ra\\approx 30~{\\rm GeV},\\\\ \\mbox{ and }\\;\\; 2\\cdot\n10^{9} \\nu\/{\\rm s} \\;\\; {\\rm with}\\;\\; E_\\nu=\\la E_\\nu^\\ell\\ra\\approx\n9~{\\rm GeV}.\n \\eear{nucount}\nBelow we denote neutrinos with $\\la E_\\nu\\ra=30$~GeV and\n$9$~GeV as {\\it high energy neutrinos} and {\\it low\nenergy neutrinos}, respectively.\n\n\n$\\circ$ Other sources of $\\nu_{\\mu}$ and $\\nu_e$ change these\nnumbers only slightly.\n\n\\bu {\\bf The background $\\pmb{\\nu_\\tau}$ beam}.\n\nThe $\\tau$ neutrinos are produced in the PP. Two mechanisms\nwere discussed in this respect, the Bethe-Heitler process\n$\\gamma N\\to \\tau\\bar{\\tau}+X$~\\cite{Skrinsky} and the process\n\\eqref{nutsorce}, which is dominant~\\cite{Telnov}. The\ncross section \\eqref{nutsorce} is five orders of magnitude smaller than\n$\\sigma(\\gamma N\\to X)$. The mean transverse momentum $\\la p_t\\ra$ of $\\nu_\\tau$ is set by $m_\\tau$; it is more than three times\nhigher than $\\la p_t\\ra$ for $\\nu_\\mu$. Along with, e.g.,\n$\\bar{\\nu}_\\tau$ produced in this process, in the NT each\n$\\tau$ decays to $\\nu_\\tau$ plus other particles.\nTherefore, each reaction of this type is a source of\na $\\nu_\\tau+\\bar{\\nu}_\\tau$ pair. Finally, for the flux density\nwe have\n \\be\nN_{\\nu_\\tau}\\sim 3\\cdot 10^3\\nu_\\tau\/({\\rm s}\\cdot {\\rm mrad}^2)\\lesssim\n8\\cdot 10^{-6} N_{\\nu_\\mu}\\,.\\label{nutflux}\n \\ee\nThe $\\nu_\\tau$ (or $\\bar{\\nu}_\\tau$) energy is typically\nhigher than that of $\\nu_\\mu$ by a factor of $2\\div 2.5$.\n\nBesides, $\\nu_\\tau$ will be produced by non-decayed pions\nwithin the protecting wall behind the NT in processes like\n$\\pi N\\to D_sX\\to \\tau\\nu_\\tau X$. The cross section of\nthis process increases rapidly with energy and\nequals $0.13\\mu$b at $E_\\pi=200$~GeV \\cite{tel1}. 
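The spectrum formula \eqref{spectr} is straightforward to evaluate numerically. The sketch below shows one way to do it; the flat pion spectrum $f$ is purely illustrative (only the particle masses and the constant $a$ come from the text).

```python
import numpy as np

M_MU, M_PI = 0.1057, 0.1396          # GeV: muon and charged-pion masses
a = 1 - (M_MU / M_PI) ** 2           # ~0.43, as quoted in Eq. (spectr)

def neutrino_spectrum(f, eps, E_max, n=4000):
    """F(eps) = int_{eps/a}^{E_max} f(E) dE / (a E), via the trapezoid rule."""
    lo = eps / a
    if lo >= E_max:                  # no pion is energetic enough for this eps
        return 0.0
    E = np.linspace(lo, E_max, n)
    return float(np.trapz(f(E) / (a * E), E))

# Illustrative flat pion spectrum on (0, E_max], normalized to one pion
E_max = 70.0
f_flat = lambda E: np.ones_like(E) / E_max
print(round(a, 2))                   # 0.43
```

With a realistic $f(E)$ from a pion-production simulation, the same routine gives the neutrino energy distribution entering the flux estimates.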
A rough\nestimate shows that the number of additional $\\nu_\\tau$\npropagating in the same angular interval is close to the\nestimate \\eqref{nutflux}. In the numerical\nestimates below we consider, for definiteness, the first\ncontribution only. A measurement of the $\\nu_\\tau$ flux in\nthe NBD is a necessary component for the study of $\\nu_\\mu-\\nu_\\tau$\noscillations in the FDD.\n\n\n\n\n\n\\section{ Nearby detector (NBD)}\n\n\nThe main goal of the nearby detector (NBD) is to measure\nthe energy and angular distribution of neutrinos within\nthe beam as well as $N_{\\nu_e}\/N_{\\nu_\\mu}$ and\n$N_{\\nu_\\tau}\/N_{\\nu_\\mu}$.\n\nWe propose to place the NBD at a reasonable distance behind the NT\nand a concrete\nwall (to eliminate pions and other particles from the initial\nbeam). For estimates, we consider the body of the NBD in the form of a\nwater cylinder with a radius of about 2-3~m (roughly the same as\nthe NT) and length $\\ell_{NBD}\\approx 100$~m. The detailed construction\nof the detector should be considered separately.\n\nFor $E_\\nu=30$ GeV, the cross section for $\\nu$ absorption\nis\n \\bear{c}\n \\sigma(\\bar{\\nu} N\\to \\mu^+h)=0.1 \\pi\\alpha^2\\fr{m_pE_{\\bar{\\nu}}}\n{M_W^4\\sin^4\\theta_W}\n \\approx\n10^{-37} {\\rm cm}^2,\\\\ \\sigma(\\nu N\\to\n\\mu^-h)=0.22 \\pi\\alpha^2\\fr{m_pE_\\nu}{M_W^4\\sin^4\\theta_W}\\approx 2\\cdot 10^{-37}\n{\\rm cm}^2.\n \\eear{}\nTaking these numbers into account, the free path lengths in water are\n $\\lambda_{\\bar{\\nu}}=10^{13}$~{\\rm cm} and $\\lambda_\\nu= 0.45\\cdot\n10^{13}$~{\\rm cm}. That gives\n \\bear{c}\n(1\\div 2)\\cdot 10^7\\;\\; \\mu\/{ {\\rm year}}\\;\\;\n ({\\rm with}\\;\\;\\la E_\\mu\\ra\\sim 30\\;{\\rm GeV});\\\\\n150\\div 250\\;\\; \\tau\/{ {\\rm year}}\\;\\; ({\\rm with}\\;\\;\\la\nE_\\tau\\ra\\sim 50\\;{\\rm GeV})\\,.\n \\eear{numberNBD}\n(here 1 year $=10^7$~s, that is the LC working time). 
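The order of magnitude in \eqref{numberNBD} can be reproduced with elementary arithmetic. The sketch below uses the round numbers quoted in the text together with the nucleon density of water; it is only an order-of-magnitude check, not a substitute for the full estimate.

```python
# Order-of-magnitude check of the NBD counting rate.
N_A = 6.0e23              # nucleons per cm^3 of water (rho = 1 g/cm^3)
YEAR = 1.0e7              # s, the LC working time used in the text
FLUX = 2.0e9              # nu/s reaching the detector, Eq. (nucount)
L_NBD = 100.0 * 100.0     # detector length in cm (100 m)

def free_path(sigma_cm2):
    """Mean free path lambda = 1 / (n sigma) in water, in cm."""
    return 1.0 / (N_A * sigma_cm2)

def events_per_year(sigma_cm2):
    """Interactions per year for a thin target: flux * (L / lambda) * T."""
    return FLUX * (L_NBD / free_path(sigma_cm2)) * YEAR

print(f"{free_path(2e-37):.1e} cm")     # ~1e13 cm, the order quoted for lambda
print(f"{events_per_year(2e-37):.1e}")  # ~1e7 muon events per year
```

The result agrees with the quoted $\lambda\sim 10^{13}$ cm and the $\sim 10^7$ muons per year of \eqref{numberNBD} at the order-of-magnitude level.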
These numbers look sufficient for\ndetailed measurements of the muon neutrino spectra and for\nverification of the calculated direct $\\nu_\\tau$ background.\n\n\n\\section{Far Distance Detector (FDD)}\n\nHere we consider how the FDD can be used for the solution of a single problem:\n$\\nu_\\mu-\\nu_\\tau$ and (or) $\\nu_\\mu-\\nu_{sterile}$ oscillations.\nOther possible applications should be considered elsewhere.\nWe discuss here two possible positions of the FDD: at a relatively small distance from the LC -- FDD I (with a special detector) -- and very far from the LC -- FDD II (using big detectors constructed for other goals).\n\nFor the\nlength of oscillations we use the estimate \\cite{Vysot}\n \\be\n L_{osc}\\approx\nE_\\nu\/(50~{\\rm GeV})\\cdot 10^5~{\\rm km}\\,.\\label{Losc}\n \\ee\n\n\n\\subsection{FDD I}\n\nWe discuss first the opportunity to construct a special, relatively compact detector with not too expensive excavation work at a distance of a few hundred kilometers from the LC (for definiteness, 200 km). For this distance the NT should be\nturned by a 16~mrad angle below the horizon. This angle can be reduced by\n3~mrad (one half of the angular spread of the initial pion beam).\n\nWe consider the body of this FDD in the form of a water channel of length 1~km with radius $R_F\\approx 40$~m. The transverse size is\nlimited by water transparency.\n\nThe fraction of neutrinos reaching this FDD is given by the\nratio $k=(R_F\/L_F)^2\/[(r_{NT}\/L_{NT})^2]$. In\nour case $k\\approx 0.01$. The main effect of interest here is\n$\\nu_\\mu\\to \\nu_\\tau$ oscillations, which add\n$(L_F\/L_{osc})^2N_{\\nu_\\mu}$ to the initial $N_{\\nu_\\tau}$.\n\nIn an FDD of the chosen size we expect the counting rate to be\njust 10 times lower than that in the NBD \\eqref{numberNBD} for\n$\\nu N\\to \\mu X$ reactions with high energy neutrinos. 
We\nalso expect the rate of $\\nu_\\tau N\\to \\tau X$ events to be\nanother $10^5$ times lower (that is, about 10 times higher\nthan the background given by the initial $\\nu_\\tau$ flux),\n \\be\n\\begin{array}{c}\n N(\\nu_\\mu N\\to \\mu X)\\approx (1\\div 2)\\cdot\n 10^6\/year,\\\\\nN(\\nu_\\tau N\\to \\tau X)\\approx (10\\div\n20)\/year\\end{array}\\;\\; in\\;\\; FDDI.\\label{FDDInumb}\n \\ee\nFor neutrinos of lower energies the effect increases. Indeed,\n$\\sigma(\\nu N\\to\\tau X)\\propto E_\\nu$ while $L_{osc}\\propto\nE_\\nu$. Therefore, the observed number of $\\tau$ from\noscillations increases $\\propto 1\/E_\\nu$ at $E_\\nu\\ge\n10$~GeV. The additional counting rate for the $\\nu_\\tau N\\to\n\\tau X$ reaction with low energy neutrinos (with $\\la\nE_\\nu\\ra=9$~GeV) cannot be estimated so simply, but rough\nestimates give numbers similar to \\eqref{FDDInumb}.\n\nThese numbers look sufficient for the separation of\n$\\nu_\\mu-\\nu_\\tau$ oscillations and a rough measurement of\n$s_{23}$.\n\nNote that for the considered FDD I size the counting rate of the $\\nu_\\tau\nN\\to \\tau X$ reaction is independent of the FDD distance from the\nLC, $L_F$. The growth of $L_F$ improves the signal to\nbackground ratio for oscillations. The signal\nnaturally increases with the growth of the volume of\nFDD I.\n\n\n\n\\subsection{FDD II}\n\nNow we consider the very attractive opportunity of using as the FDD an existing neutrino telescope with a volume of 1 km$^3$ situated at Lake Baikal or in Antarctica (the IceCube detector), with the\ndistance {\\it basic LC --- FDD II} $L_F\\approx 10^4$~km. 
This\nopportunity requires excavation work\nfor the NT and NBD at an angle of about $50^\\circ$ below the horizon.\n\n\nAt this distance, according to \\eqref{Losc}, for $\\nu$ with energies of about 30~GeV we expect\na conversion of $(L_F\/L_{osc})^2\\approx 1\/36$ for\n$\\nu_\\mu\\to \\nu_\\tau$ or $\\nu_\\mu\\to \\nu_{sterile}$.\n\nThe number of expected $\\nu_\\mu N\\to \\mu\nX$ events with high energy neutrinos will be about 0.01 of that in the\nNBD,\n \\be\n \\begin{array}{c}\n N(\\nu_\\mu N\\to \\mu X)\\approx (1\\div 2)\\cdot\n 10^5\/{\\rm year},\\\\\nN(\\nu_\\tau N\\to \\tau X)\\approx 3\\cdot 10^3\/{\\rm year}\\,.\\end{array} \\label{FDDIInumb}\n \\ee\nThe contribution of low energy neutrinos increases both\nof these counting rates.\n\nTherefore, one can hope that a few years of experimentation\nwith a reasonable $\\tau$ detection efficiency will allow one to\nmeasure $s_{23}$ with percent accuracy, and a similar period\nof observation of $\\mu$ production will allow one to observe\nthe loss of $\\nu_\\mu$ due to the transition of this neutrino to\n $\\nu_{sterile}$.\n\n\n\n\n\\section{Discussion }\n\nHere we suggest constructing a Neutrino Factory with great physical potential using the beam of a Linear Collider after its main collision.\n\nAll technical details of the proposed scheme, including the\nsizes of all elements and the construction and materials of the\ndetectors, can be modified in the forthcoming simulations\nand optimization of parameters. The numbers obtained above\nrepresent the first careful estimates. In particular, the neutrino rate can be enhanced by increasing the length of the PP, this rate can appear significantly higher after a more accurate calculation of pion production there, the length of the NT can be reduced for economic reasons, etc.\n\nAfter the first stage of the Linear Collider its energy can be increased. 
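The conversion fractions used for FDD I and FDD II follow directly from \eqref{Losc}. A small numerical check (a sketch using only numbers stated in the text):

```python
def L_osc_km(E_nu_GeV):
    """Oscillation length, Eq. (Losc): L_osc ~ (E_nu / 50 GeV) * 1e5 km."""
    return (E_nu_GeV / 50.0) * 1e5

def conversion(L_F_km, E_nu_GeV):
    """Small-mixing conversion fraction (L_F / L_osc)^2 used in the text."""
    return (L_F_km / L_osc_km(E_nu_GeV)) ** 2

print(conversion(1.0e4, 30.0))   # ~1/36, the FDD II value quoted above
print(conversion(200.0, 30.0))   # ~1e-5, the FDD I suppression factor
```

For 30 GeV neutrinos $L_{osc}\approx 6\cdot 10^4$ km, so $L_F=10^4$ km gives the quoted $1/36$, while $L_F=200$ km gives the $\sim 10^{-5}$ suppression underlying \eqref{FDDInumb}.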
For these stages the proposed scheme can be used without changes (except for the magnetic field in the BM and taking into account the new time structure of the neutrino beam).\n\nWe did not discuss here methods of $\\mu$ and $\\tau$ detection\nand their efficiency. Next, a large fraction of the residual\nelectrons, photons and pions leaving the PP will reach the\nwalls of the NT pipe. The heat sink and radiation\nprotection of this pipe must be taken into account.\n\n\nA more detailed physical program of this $\\nu$F\nwill include many features of those in other projects\n(see, e.g.,~\\cite{nufact1}-\\cite{nufact3}).\n\n\\section{ Other possible applications of some parts of $\\pmb\\nu$F}\n\n\n\\bu {\\bf PP for a fixed\ntarget experiment}. The PP can be treated as an $eN\/\\gamma N$\ncollider with a luminosity of $3\\cdot 10^{39}$~cm$^{-2}$s$^{-1}$ and a\nc.m.s. energy of about 23~GeV. Therefore, if one adds some standard\ndetector equipment behind the PP, it can also be used for a fixed target\n$eN\/\\gamma N$ experiment. Here one can\nstudy rare processes in $\\gamma N$ collisions, $B$\nphysics, etc.\n\n\n\n\\bu {\\bf Additional opportunity for using the NBD.} The high rate\nof $\\nu_\\mu N\\to \\mu X$ processes\nexpected in the NBD allows one to study new problems of high\nenergy physics. 
The simplest example is the opportunity to\nstudy charged and axial current induced structure functions and\ndiffraction ($\\nu N\\to \\mu +hadrons$, $\\nu\nN\\to\\mu N' \\rho^\\pm $, $\\nu N\\to\\mu N' b_1^\\pm$,...)\nwith high precision.\\\\\n\n\n\n\n\nI am thankful to N.~Budnev, S.~Eidelman, D.~Naumov, L.~Okun, V.~Saveliev,\nA.~Sessler, V.~Serbo, A.~Skrinsky, O.~Smirnov, V.~Telnov, M.~Vysotsky, M.~Zolotorev for comments.\nThe paper is supported in part by Russian grants (RFBR, NSh,\nPresidium RAS) and by the\nPolish Ministry of Science and Higher\nEducation Grant N202 230337.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nIn many situations in modeling and analysis, it is helpful to use a coordinate system other than the standard Cartesian system. While a Cartesian system has many desirable properties, it is sometimes more beneficial to have a coordinate system which is better tuned to the character of the problem. If a problem has an identifiable symmetry, such as cylindrical, spherical, or otherwise, then it is often possible to simplify or even eliminate one or more dimensions of the problem. Such reductions in the complexity of problems ease analysis and accelerate the acquisition of numerical solutions. This is the precise motivation for the works in \\cite{ConEuler} and \\cite{ConMHD}, which eliminate the radial dimension of supersonic flow problems governed by the Euler and Ideal Magnetohydrodynamic (MHD) equations respectively to acquire the conical versions of those equations. 
The Conical Euler equations:\n\n\\begin{subequations}\\label{EulerCon}\n\\begin{gather}\n\\left(\\rho V^\\beta\\right)_{|\\beta} = 0 \\label{mass} \\\\\n\\left(\\rho V^i V^\\beta + G^{i\\beta}P\\right)_{|\\beta} = 0 \\label{mom} \\\\\n\\left( \\left[\\rho E+P\\right] V^\\beta \\right)_{|\\beta} =0 \\label{energy}\n\\end{gather}\n\\end{subequations}\n\n\nand the Conical MHD equations:\n\n\n\\begin{subequations}\\label{MHDCon}\n\\begin{gather}\n \\left(\\rho V^\\beta\\right)_{|\\beta} = 0 \\label{mass_mhd} \\\\\n \\left(\\rho V^iV^\\beta - \\frac{1}{\\mu}B^iB^\\beta + G^{i\\beta}\\left(P + \\frac{|\\pmb{B}|^2}{2\\mu}\\right)\\right)_{|\\beta} = -\\frac{1}{\\mu}B^iB^\\beta_{|\\beta} \\label{mom_mhd}\n \\\\\n \\left( \\left(\\rho E+P+\\frac{|\\pmb{B}|^2}{\\mu}\\right)V^\\beta - \\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})B^\\beta \\right)_{|\\beta} = -\\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})B^\\beta_{|\\beta} \\label{energy_mhd}\n \\\\\n (V^\\beta B^i-V^iB^\\beta)_{|\\beta} = -V^iB^\\beta_{|\\beta} \\label{mag_mhd}\n\\end{gather}\n\\end{subequations}\n\n\nare defined on the surface of a sphere, which is two dimensional but curved. The unique character of these systems makes them incompatible with basic numerical methods. Numerical methods have been developed successfully for the conical Euler equations subject to the assumption of irrotational flow in \\cite{Guan} and \\cite{SriFAMethod}. These, however, did not extend easily to the more general conical equations. Furthermore, there are no known efforts to solve the conical MHD equations numerically. Therefore, it was fitting to develop a new method designed to handle the challenge of solving a fluid flow problem on a curved manifold.\n\nThe curvature of the surface was accounted for using tensor calculus, which provides tools that can systematically transform equations between coordinate systems. 
In fact, it allows equations to be put in a general form, not referencing any particular coordinate system, and thus ready to be adapted to any coordinate system. As an example, in numerical simulations of gas flows past bodies, it is convenient for the coordinate system to conform to the contour of the object in the flow. With a coordinate-free formulation of the governing equations, the problem is abstractly the same regardless of the exact shape of the object around which the gas is flowing.\n\nTo use such formulations in practice, though, appropriate numerical methods must be developed which accurately accommodate the non-uniformity of the coordinate lines. A key component of ensuring a numerical method does this is deriving discrete source terms analogous to the Christoffel symbols which show up in expressions involving derivatives with respect to curved coordinate lines. It is important in numerics for source terms not only to be consistent in the limit of zero mesh spacing, but also to have a behavior which is consistent with the continuous case even at finite mesh spacing \\cite{balbas,BalancedNT,jin_2001,kurganov2007}. Work has been done on fluid flow problems on manifolds, such as in \\cite{Man_Lev}, in which the geometric terms were consistent in the limit of zero mesh spacing but did not truly capture the tensorial nature of the problems, and thus did not perfectly capture steady state solutions. Work has also been done to develop appropriate source terms in applications such as shallow water and chemically reacting flows which capture behavior and steady solutions in addition to being consistent in value. However, so far work has not been done which both addresses a fluid flow problem on a general curved manifold \\textit{and} derives the geometric source terms in such a way as to preserve the tensorial nature of the problem and thus accurately capture steady state solutions. 
In this work we demonstrate how to derive such source terms for a large class of discrete differential operators and manifolds. We then develop a numerical method involving these source terms to solve the conical Euler and MHD equations on the surface of a sphere.\n\nIn section \\ref{sec:CD} we introduce the covariant derivative in a curved coordinate system. In the following section we develop a discrete analog of the covariant derivative by deriving source terms which correctly account for the curvature of the coordinate system. An example of how these can be applied to a modern central scheme is presented in section \\ref{sec:CS}. The conical Euler and MHD equations are introduced in section \\ref{sec:CF}, and a numerical method to solve them is developed in sections \\ref{sec:Mesh}, \\ref{sec:BC}, \\ref{sec:Disc}, and \\ref{sec:SP}. Numerical results produced by this method are presented and discussed in section \\ref{sec:RD}.\n\n\n\\section{Covariant Derivative}\\label{sec:CD}\n\nWe restrict ourselves to the case of a Riemannian manifold. This restriction allows us to define a real vector basis on the manifold which refers back to a Cartesian coordinate system. This vector basis is the Jacobian matrix of the coordinate transformation between the Cartesian system and the system in which the problem is formulated. While such a restriction is not universally applicable, it does apply to a wide variety of current research areas; it breaks down mainly in relativistic applications. Furthermore, this treatment of tensor calculus is simpler and highlights the use of tools from calculus and linear algebra.\n\nIn a curved coordinate system, the basis for vectors and tensors is no longer uniform. Thus it is possible for the components of a vector to change while the vector remains the same, and conversely for the vector to change while its components remain the same. 
The covariant derivative (denoted $(\\cdot)_{|i}$ for differentiation in the $i^{th}$ coordinate direction) accounts for this. If a vector does not change in a given direction, then the covariant derivative in that direction will be zero, even if the components are changing.\n\nThe covariant derivative is the foundation of different kinds of derivatives which are seen in practice such as the gradient, divergence, curl, and Laplacian. It is thus the case that if an appropriate discrete form of the covariant derivative can be derived, then expressions for a wide variety of operators will naturally follow. In order to derive a discrete form, the mathematical character of the covariant derivative must be understood.\n\n\nConsider a $d$-dimensional Euclidean space spanned by two coordinate systems, a Cartesian system ($\\tilde{X}$) with coordinates $\\tilde{x}^i$ for $i\\in \\{1,2,3,...,d\\}$, and another curved system ($X$) with coordinates $x^i$ for $i\\in \\{1,2,3,...,d\\}$. The Jacobian matrix is defined at every point, given by $\\JJ{i}{j}$ and provides the basis for vectors and tensors. Thus:\n\n\\begin{equation}\n \\forall u^j \\in X \\text{, we have } \\JJ{i}{j}u^j=\\tilde{u}^i\\in\\tilde{X} \n\\end{equation}\n\nand\n\n\\begin{equation}\n \\forall w^{jk} \\in X \\text{, we have } \\JJ{i}{j}\\JJ{h}{k}w^{jk}=\\tilde{w}^{ih}\\in\\tilde{X} \n\\end{equation}\n\nand so on. \n\nLikewise, there is at every point a dual basis, $\\JD{i}{j}$, which acts as a basis for derivatives. That is:\n\n\\begin{equation}\n \\JD{i}{j}\\frac{\\partial }{\\partial x^i} = \\frac{\\partial }{\\partial \\tilde{x}^j}\n\\end{equation}\n\n\n\nWe would like for derivatives of tensors to transform in the same manner. However in general:\n\n\\begin{equation}\n \\JD{i}{j}\\JJ{k}{l}\\frac{\\partial u^l}{\\partial x^i}\\neq\\frac{\\partial \\tilde{u}^k}{\\partial \\tilde{x}^j}\n\\end{equation}\n\nSo instead the covariant derivative must be used. 
Examples of covariant derivatives of tensors of various orders are given here:\n\n\\begin{equation}\n (f)_{|i} = \\frac{\\partial f}{\\partial x^i}\n\\end{equation}\n\n\\begin{equation}\n (u^j)_{|i} = \\frac{\\partial u^j}{\\partial x^i} + \\Gamma\\indices{_i^j_k}u^k\n\\end{equation}\n\n\\begin{equation}\n (w^{jk})_{|i} = \\frac{\\partial w^{jk}}{\\partial x^i} + \\Gamma\\indices{_i^j_l}w^{lk} + \\Gamma\\indices{_i^k_l}w^{jl}\n\\end{equation}\n\n\nThese satisfy the transformation relationships:\n\n\\begin{equation}\\label{eq:CDtransr1}\n \\JD{i}{l}\\JJ{m}{j}\\left[\\frac{\\partial u^j}{\\partial x^i} + \\Gamma\\indices{_i^j_k}u^k\\right]=\\frac{\\partial \\tilde{u}^m}{\\partial \\tilde{x}^l} + \\tilde{\\Gamma}\\indices{_l^m_k}\\tilde{u}^k\n\\end{equation}\n\nand\n\n\\begin{multline}\\label{eq:CDtransr2}\n \\JD{i}{p}\\JJ{m}{j}\\JJ{n}{k}\\left[\\frac{\\partial w^{jk}}{\\partial x^i} + \\Gamma\\indices{_i^j_l}w^{lk} + \\Gamma\\indices{_i^k_l}w^{jl}\\right] \\\\ =\\frac{\\partial \\tilde{w}^{mn}}{\\partial \\tilde{x}^p} + \\tilde{\\Gamma}\\indices{_p^m_l}\\tilde{w}^{ln} + \\tilde{\\Gamma}\\indices{_p^n_l}\\tilde{w}^{ml}\n\\end{multline}\n\nand so on. 
The Christoffel symbol, $\\Gamma$, is defined by the metric tensor:\n\n\\begin{equation}\\label{ChrisDef}\n \\Gamma\\indices{_k^j_i} = \\Gamma\\indices{_i^j_k} = \\FullChris{i}{j}{k}\n\\end{equation}\n\nThe metric tensor, which defines length and angle in the curved system is given by:\n\n\\begin{equation}\\label{metricDef}\n G_{ij} = \\JJ{h}{i}\\JJ{h}{j}\n\\end{equation}\n\nand its inverse by:\n\n\\begin{equation}\\label{metricInvDef}\n G^{ki} = \\JJ{k}{l}^{-1}\\JJ{i}{l}^{-1}\n\\end{equation}\n\n\n\n\n\nPlugging equations \\ref{metricDef} and \\ref{metricInvDef} into \\ref{ChrisDef} gives:\n\n\\begin{multline}\n \\Gamma\\indices{_k^j_i} = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ \\frac{\\partial }{\\partial x^k}\\left(\\JJ{h}{l}\\JJ{h}{i}\\right) \\\\ + \\frac{\\partial }{\\partial x^i}\\left(\\JJ{h}{l}\\JJ{h}{k}\\right) - \\frac{\\partial }{\\partial x^l}\\left(\\JJ{h}{i}\\JJ{h}{k}\\right) \\bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ \\JJ{h}{i}\\frac{\\partial }{\\partial x^k}\\JJ{h}{l} + \\JJ{h}{l}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i} \\\\ + \\JJ{h}{k}\\frac{\\partial }{\\partial x^i}\\JJ{h}{l} + \\JJ{h}{l}\\frac{\\partial }{\\partial x^i}\\JJ{h}{k} \\\\ - \\JJ{h}{k}\\frac{\\partial }{\\partial x^l}\\JJ{h}{i} - \\JJ{h}{i}\\frac{\\partial }{\\partial x^l}\\JJ{h}{k} \\bigg]\n\\end{multline}\n\nby switching the order of some of the derivatives, then combining and canceling terms, we get:\n\n\\begin{equation}\n = \\frac{1}{2}\\JJ{j}{n}^{-1}\\JJ{l}{n}^{-1}\\bigg[ 2\\JJ{h}{l}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i} \\bigg]\n\\end{equation}\n\n\\begin{equation}\n = \\JJ{j}{n}^{-1}\\delta^h_n\\frac{\\partial }{\\partial x^k}\\JJ{h}{i}\n\\end{equation}\n\n\\begin{equation}\n = \\JJ{j}{h}^{-1}\\frac{\\partial }{\\partial x^k}\\JJ{h}{i}\n\\end{equation}\n\nA discrete formulation of the covariant derivative will have to have source terms which are consistent with this expression in the limit as mesh 
spacing goes to zero, as well as preserve the transformation relationships \\eqref{eq:CDtransr1} and \\eqref{eq:CDtransr2}.\n\n\n\\section{Discrete Formulation}\\label{sec:DiscCD}\n\nThe discrete representation of the $d$-dimensional manifold would be a list of points, $\\{(x^1_{,i},x^2_{,i},x^3_{,i},...,x^d_{,i})\\}^N_{i=1}$, where $N$ is the number of points in the mesh, and the mesh index is a subscript separated by a comma from indices for tensor components and indices referring to coordinate directions. It is also assumed that there are Jacobian matrices at each point in the mesh, $\\left\\{ \\JJ{j}{k}_{,i} \\right\\}_{i=1}^N$. Discrete differential operators acting on a function defined on the mesh are denoted $D_i$ for differentiation in the $i^{th}$ coordinate direction. It is not assumed that there is another, Cartesian mesh; thus it causes no conflicts to define $\\tilde{D}_k\\equiv \\JD{j}{k}D_j$.\n\n\nDeriving a consistent, discrete covariant derivative associated with a given discrete differential operator relies on the following theorem.\n\n\\begin{theorem}\\label{sumOfDiffs}\n Let $\\{u_i\\}_1^N$ be a collection of values. 
If a linear combination of those values, $\\sum_{i=1}^N \\phi_iu_i$, has the property that the coefficients $\\{\\phi_i\\}_1^N$ sum to zero, then the linear combination can be written as a linear combination of differences of pairs of values in $\\{u_i\\}_1^N$.\n\\end{theorem}\n\n\\begin{proof}\nIf we have:\n\n\\begin{equation}\n \\sum_{i=1}^N \\phi_i = 0\n\\end{equation}\n\nthen we have:\n\n\\begin{equation}\n \\phi_1 = - \\sum_{i=2}^N \\phi_i\n\\end{equation}\n\nTherefore:\n\n\\begin{equation}\n \\sum_{i=1}^N \\phi_iu_i = \\phi_1u_1 + \\sum_{i=2}^N \\phi_iu_i\n\\end{equation}\n\n\\begin{equation}\n = -\\left(\\sum_{i=2}^N \\phi_i\\right)u_1 + \\sum_{i=2}^N \\phi_iu_i\n\\end{equation}\n\n\\begin{equation}\n = \\sum_{i=2}^N \\phi_i(u_i-u_1)\n\\end{equation}\n\nwhich is a linear combination of differences of pairs of values in $\\{u_i\\}_1^N$.\n\\end{proof}\n\n\nMany discrete differential operators have the property that the coefficients sum to zero. In fact, it is a requirement for standard finite difference approximations of derivatives. The theorem thus applies to a broad class of differential operators and allows Christoffel-like source terms to be derived for them.\n\n\n\nWe now consider a discrete differential operator which we would like to use to build a discrete covariant derivative. We assume that the discrete operator is consistent with true differentiation in the limit as mesh size goes to zero; that is\n\n\\begin{equation}\n \\lim_{\\Delta x\\to 0}D_if = \\frac{\\partial f}{\\partial x^i}\n\\end{equation}\n\nWe also assume that this operator has coefficients which sum to zero. Using Theorem \\ref{sumOfDiffs}, the operation can be written as a weighted sum of differences. It should be pointed out that the particular set of differences given in the proof of Theorem \\ref{sumOfDiffs} is not necessarily the only way to write the operator as a sum of differences. The proof simply shows that there will always be at least one. 
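The rewriting used in this proof is easy to check numerically. Below is a minimal Python sketch (the three-point stencil and the sample values are arbitrary choices of ours, not from the text):

```python
import numpy as np

# Coefficients of a second-order central difference stencil; they sum to zero.
phi = np.array([-0.5, 0.0, 0.5])
u = np.array([2.0, 5.0, 11.0])

# Direct evaluation of the linear combination.
direct = float(np.dot(phi, u))

# Rewriting from the proof: phi_1 = -sum(phi_2..phi_N), so
# sum_i phi_i u_i = sum_{i>=2} phi_i (u_i - u_1).
as_differences = float(sum(phi[i] * (u[i] - u[0]) for i in range(1, len(u))))

assert np.isclose(direct, as_differences)
```

The same check passes for any stencil whose coefficients sum to zero, regardless of the values $u_i$.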
Generally, for each coefficient, there will be an associated ``+'' index and ``-'' index. The difference associated with that coefficient is given by the ``+'' variable minus the ``-'' variable.\n\nTo come up with a discrete covariant derivative, we need to come up with an operator $CD_i$ such that:\n\n\\begin{equation}\n \\tilde{D}_j\\tilde{u}^k = \\JJ{k}{l}\\JD{i}{j}CD_iu^l\n\\end{equation}\n\nWe already have by definition that:\n\n\\begin{equation}\n \\tilde{D}_j\\tilde{u}^k = \\JD{i}{j}D_i\\tilde{u}^k\n\\end{equation}\n\nthus we only need to acquire:\n\n\\begin{equation}\n D_i\\tilde{u}^k = \\JJ{k}{l}CD_iu^l\n\\end{equation}\n\nWe can write the derivative of the Cartesian components in the ``$s$'' direction at mesh index ``$c$'' as:\n\n\\begin{equation}\n D_s\\tilde{u}^j_{,c} = \\sum_{i=1}^N \\phi_i(\\tilde{u}^j_{,k^+_i} - \\tilde{u}^j_{,k^-_i})\n\\end{equation}\n\n\n\\begin{equation}\n = \\sum_{i=1}^N \\phi_i(\\tilde{u}^j_{,k^+_i} - \\tilde{u}^j_{,c} + \\tilde{u}^j_{,c} - \\tilde{u}^j_{,k^-_i})\n\\end{equation}\n\n\\begin{equation}\n = \\sum_{i=1}^N \\phi_i\\left( \\JJ{j}{l}_{,k^+_i}u^l_{,k^+_i} - \\JJ{j}{l}_{,c}u^l_{,c} + \\JJ{j}{l}_{,c}u^l_{,c} - \\JJ{j}{l}_{,k^-_i}u^l_{,k^-_i} \\right)\n\\end{equation}\n\n\\begin{equation}\n = \\sum_{i=1}^N \\phi_i\\left( \\JJ{j}{l}_{,k^+_i}u^l_{,k^+_i} - \\JJ{j}{l}_{,c}u^l_{,c} \\right) + \\sum_{i=1}^N \\phi_i\\left( \\JJ{j}{l}_{,c}u^l_{,c} - \\JJ{j}{l}_{,k^-_i}u^l_{,k^-_i} \\right)\n\\end{equation}\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\left( \\left(\\JJ{j}{l}_{,k^+_i} + \\JJ{j}{l}_{,c}\\right)u^l_{,k^+_i} - \\JJ{j}{l}_{,c}\\left(u^l_{,k^+_i} - u^l_{,c}\\right) \\right) \\\\\n + \\sum_{i=1}^N \\phi_i\\left( \\JJ{j}{l}_{,c}\\left(u^l_{,c} - u^l_{,k^-_i}\\right) + \\left(\\JJ{j}{l}_{,c} - \\JJ{j}{l}_{,k^-_i}\\right)u^l_{,k^-_i} \\right)\n\\end{multline}\n\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c}\\Bigg[ \\sum_{i=1}^N \\phi_i\\left( \\left(u^l_{,k^+_i} - u^l_{,c}\\right) + 
\\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\right) \\\\\n + \\sum_{i=1}^N \\phi_i\\left( \\left(u^l_{,c} - u^l_{,k^-_i}\\right) + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\right) \\Bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c} \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(u^l_{,k^+_i} - u^l_{,k^-_i}\\right) + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline}\n\nThis suggests that the discrete covariant derivative corresponding to the discrete derivative is:\n\n\\begin{multline}\n CD_su^l = \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(u^l_{,k^+_i} - u^l_{,k^-_i}\\right) + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline}\n\n\nFurthermore it can be shown that with ``nice'' enough solution and manifold this expression is consistent with the true covariant derivative. 
Indeed:\n\n\\begin{multline*}\n \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(u^l_{,k^+_i} - u^l_{,k^-_i}\\right) + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline*}\n\n\n\\begin{multline}\n = \\frac{\\partial u^l}{\\partial x^s} + \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right)u^n_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right)u^n_{,k^-_i} \\Bigg] \n\\end{multline}\n\nAssuming $u$ and the manifold are sufficiently bounded and smooth, then:\n\n\\begin{multline}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\left(\\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,c}\\right) \\\\ + \\left(\\JJ{m}{n}_{,c} - \\JJ{m}{n}_{,k^-_i}\\right) \\Bigg] \n\\end{multline}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\JJ{m}{n}_{,k^+_i} - \\JJ{m}{n}_{,k^-_i} \\Bigg] \n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\JJ{l}{m}^{-1}u^n \\frac{\\partial }{\\partial x^s}\\JJ{m}{n}\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial u^l}{\\partial x^s} + \\Gamma\\indices{_s^l_n}u^n \n\\end{equation}\n\nWhich is the covariant derivative of $u$.\n\nFor a rank 2 tensor, the derivation proceeds similarly.\n\n\n\\begin{equation}\n D_s\\tilde{w}^{jh}_{,c} = \\sum_{i=1}^N \\phi_i(\\tilde{w}^{jh}_{,k^+_i} - \\tilde{w}^{jh}_{,k^-_i})\n\\end{equation}\n\n\n\\begin{equation}\n = \\sum_{i=1}^N \\phi_i(\\tilde{w}^{jh}_{,k^+_i} - \\tilde{w}^{jh}_{,c} + \\tilde{w}^{jh}_{,c} - \\tilde{w}^{jh}_{,k^-_i})\n\\end{equation}\n\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i}w^{lp}_{,k^+_i} - 
\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} \\\\ + \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i}w^{lp}_{,k^+_i} - \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} \\Bigg) \\\\ + \\sum_{i=1}^N \\phi_i\\Bigg(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}w^{lp}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\n\\begin{multline}\n = \\sum_{i=1}^N \\phi_i\\Bigg( \\left(\\JJ{j}{l}_{,k^+_i}\\JJ{h}{p}_{,k^+_i} - \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\right)w^{lp}_{,k^+_i} \\\\ + \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\left(w^{lp}_{,k^+_i} - w^{lp}_{,c}\\right) \\Bigg) \\\\ + \\sum_{i=1}^N \\phi_i\\Bigg(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\left(w^{lp}_{,c} - w^{lp}_{,k^-_i}\\right) \\\\ + \\left(\\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c} - \\JJ{j}{l}_{,k^-_i}\\JJ{h}{p}_{,k^-_i}\\right)w^{lp}_{,k^-_i} \\Bigg)\n\\end{multline}\n\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,c}\\right) + \\left(w^{lp}_{,c} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\JJ{j}{l}_{,c}\\JJ{h}{p}_{,c}\\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThis suggests that the discrete covariant derivative 
corresponding to the discrete derivative is:\n\n\\begin{multline}\n CD_sw^{lp} = \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThis too can be shown to be consistent in the limit of mesh spacing going to zero.\n\n\\begin{multline*}\n \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{lp}_{,k^+_i} - w^{lp}_{,k^-_i}\\right) \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline*}\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right)w^{nr}_{,k^+_i} \\\\ + \\JJ{l}{m}_{,c}^{-1}\\JJ{p}{q}_{,c}^{-1}\\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right)w^{nr}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ \\left(\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c}\\right) \\\\ + \\left(\\JJ{m}{n}_{,c}\\JJ{q}{r}_{,c} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i}\\right) \\Bigg]\n\\end{multline}\n\n\\begin{multline}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\lim_{\\Delta x\\to 0} \\sum_{i=1}^N \\phi_i\\Bigg[ \\\\ 
\\JJ{m}{n}_{,k^+_i}\\JJ{q}{r}_{,k^+_i} - \\JJ{m}{n}_{,k^-_i}\\JJ{q}{r}_{,k^-_i} \\Bigg]\n\\end{multline}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\frac{\\partial}{\\partial x^s}\\Bigg( \\JJ{m}{n}\\JJ{q}{r}\\Bigg)\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\JJ{l}{m}^{-1}\\JJ{p}{q}^{-1}w^{nr}\\Bigg( \\JJ{m}{n}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + \\JJ{q}{r}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\\Bigg)\n\\end{equation}\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + w^{nr}\\Bigg( \\delta^l_n\\JJ{p}{q}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + \\delta^p_r\\JJ{l}{m}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\\Bigg)\n\\end{equation}\n\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + w^{lr} \\JJ{p}{q}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{q}{r} + w^{np}\\JJ{l}{m}^{-1}\\frac{\\partial}{\\partial x^s}\\JJ{m}{n}\n\\end{equation}\n\n\n\\begin{equation}\n = \\frac{\\partial w^{lp}}{\\partial x^s} + \\Gamma\\indices{_s^p_r} w^{lr} + \\Gamma\\indices{_s^l_n} w^{np}\n\\end{equation}\n\nWhich is the covariant derivative of a rank 2 tensor.\n\n\nFollowing the same process we can derive the discrete covariant derivative for a rank $n$ tensor corresponding to a discrete derivative:\n\n\\begin{multline}\n CD_sw^{i_1i_2...i_n} = \\sum_{i=1}^N \\phi_i\\Bigg[\\left(w^{i_1i_2...i_n}_{,k^+_i} - w^{i_1i_2...i_n}_{,k^-_i}\\right) \\\\ + \\prod_{l=1}^n\\JJ{i_l}{j_l}_{,c}^{-1}\\left(\\prod_{l=1}^n\\JJ{j_l}{m_l}_{,k^+_i} - \\prod_{l=1}^n\\JJ{j_l}{m_l}_{,c}\\right)w^{m_1m_2...m_n}_{,k^+_i} \\\\ + \\prod_{l=1}^n\\JJ{i_l}{j_l}_{,c}^{-1}\\left(\\prod_{l=1}^n\\JJ{j_l}{m_l}_{,c} - \\prod_{l=1}^n\\JJ{j_l}{m_l}_{,k^-_i}\\right)w^{m_1m_2...m_n}_{,k^-_i} \\Bigg]\n\\end{multline}\n\nThese expressions can be used to derive the discrete analog of any operator which is based on the covariant derivative. 
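As an illustration of these formulas, here is a minimal numerical sketch in Python (our own illustration, not code from the paper): it applies the rank-1 discrete covariant derivative built from a two-point central difference to a vector field on a uniform $\theta$ mesh in polar coordinates at fixed radius. The field's Cartesian components are the constant $(1,0)$, so its polar components vary with $\theta$, yet the discrete covariant derivative vanishes to rounding error.

```python
import numpy as np

def jac(r, th):
    # Polar-coordinate Jacobian d(x,y)/d(r,theta), the local vector basis.
    return np.array([[np.cos(th), -r * np.sin(th)],
                     [np.sin(th),  r * np.cos(th)]])

# Uniform periodic mesh in theta along the circle r = r0; the discrete
# derivative is a central difference, a single difference with phi = 1/(2*dth).
r0, n = 2.0, 32
dth = 2 * np.pi / n
th = dth * np.arange(n)
J = np.array([jac(r0, t) for t in th])

# Polar components of the field whose Cartesian components are (1, 0).
u = np.array([np.linalg.solve(Jt, np.array([1.0, 0.0])) for Jt in J])

def cd_theta(i):
    # Rank-1 discrete covariant derivative in theta at cell i: difference of
    # components plus the Jacobian-difference source terms at the center cell.
    p, m = (i + 1) % n, (i - 1) % n
    Jc_inv = np.linalg.inv(J[i])
    term = (u[p] - u[m]) \
        + Jc_inv @ (J[p] - J[i]) @ u[p] \
        + Jc_inv @ (J[i] - J[m]) @ u[m]
    return term / (2 * dth)

# The components vary with theta, but the underlying vector is constant,
# so the discrete covariant derivative is zero up to rounding error.
residual = max(np.linalg.norm(cd_theta(i)) for i in range(n))
assert residual < 1e-12
```

A plain central difference of the components alone would instead return values of order one here; the source terms are what cancel the variation of the basis.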
Not only are these expressions consistent in the limit of zero mesh spacing, but they also preserve the tensorial nature of the true covariant derivative. As a consequence of the latter, we have the following additional property:\n\n\\begin{theorem}\\label{PreserveZeroThm}\n Let $D$ be a discrete differential operator with coefficients that sum to zero, and let $CD$ be the associated discrete covariant derivative. Then for any rank $n$ tensor, $w$, we have:\n \\begin{multline}\n D_s\\tilde{w}^{j_1j_2...j_n} = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n \\\\ \\Leftrightarrow CD_sw^{l_1l_2...l_n} = 0\\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n\n \\end{multline}\n where $d$ is the dimensionality of the manifold.\n\\end{theorem}\n\n\\begin{proof}\n If $D_s\\tilde{w}^{j_1j_2...j_n} = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n$, then:\n \n \\begin{multline*}\n CD_sw^{l_1l_2...l_n} = \\left(\\prod_{k=1}^n\\JJ{l_k}{j_k}^{-1}\\right)D_s\\tilde{w}^{j_1j_2...j_n} = \\left(\\prod_{k=1}^n\\JJ{l_k}{j_k}^{-1}\\right)0 \\\\ = 0 \\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n\n \\end{multline*}\n \n Likewise, if $CD_sw^{l_1l_2...l_n} = 0\\quad\\forall l_1l_2...l_n\\in\\{1,2,...,d\\}^n$, then:\n \n \\begin{multline*}\n D_s\\tilde{w}^{j_1j_2...j_n} = \\left(\\prod_{k=1}^n\\JJ{j_k}{l_k}\\right)CD_sw^{l_1l_2...l_n} = \\left(\\prod_{k=1}^n\\JJ{j_k}{l_k}\\right)0 \\\\ = 0 \\quad\\forall j_1j_2...j_n\\in\\{1,2,...,d\\}^n\n \\end{multline*}\n \n\\end{proof}\n\nThis means that a tensor field which is uniform with respect to the Cartesian basis will be treated as exactly uniform with respect to the curved basis. This is an important property in ensuring that certain steady states of fluid flow problems are appropriately captured by a numerical method.\n\nAs a final point, it is worth noting that these expressions are still linear operators which do not depend on the function they are acting on. 
They depend solely on the mesh and stencil chosen, so as long as those things remain the same, the operators do not have to be recomputed. In practice then, applying the covariant derivative operator is only marginally more expensive computationally than applying a standard derivative operator.\n\n\n\\section{Application to central scheme for conservation laws}\\label{sec:CS}\n\nTo illustrate how the source terms we derived can be put into practice, we consider a central scheme developed by Kurganov and Tadmor \\cite{Kurg}. Central Schemes are a type of finite volume numerical method often applied to conservation laws. These have the advantage over other finite volume methods of not relying on solutions to the Riemann problem. The simplicity of such methods makes them easier to implement, and faster to run. \n\nThe method derived has the semi-discrete form for a one dimensional problem with uniform mesh spacing:\n\n\\begin{multline}\\label{CartesianCS}\n \\frac{d}{dt}u_{,i} = -\\frac{f^+_{,i+1\/2}+f^-_{,i+1\/2}}{2\\Delta x} + \\frac{\\lambda_{M,i+1\/2}}{2\\Delta x}[u^+_{,i+1\/2}-u^-_{,i+1\/2}] \\\\\n +\\frac{f^+_{,i-1\/2}+f^-_{,i-1\/2}}{2\\Delta x} - \\frac{\\lambda_{M,i-1\/2}}{2\\Delta x}[u^+_{,i-1\/2}-u^-_{,i-1\/2}]\n\\end{multline}\n\nIn this expression, $\\{u_{,i}\\}_{i=1}^N$ is the discrete representation of the quantity being conserved, $f$ is the flux function for that quantity, and $\\lambda_M$ is the maximum wave speed at the specified cell boundary. The index notation $i\\pm1\/2$ refers to the plus and minus boundaries of the $i$th cell, and a superscript $+$ or $-$ refers to a value defined on the plus or minus side of that cell boundary. 
These are calculated:\n\n\n\\begin{equation}\n u^{\\mp}_{,i\\pm1\/2} = u_{,i} \\pm \\frac{\\Delta x}{2}u_{x,i}\n\\end{equation}\n\n\n\\begin{equation}\n u^{\\pm}_{,i\\pm1\/2} = u_{,i\\pm1} \\mp \\frac{\\Delta x}{2}u_{x,i\\pm1} = u^{\\pm}_{,(i\\pm1)\\mp1\/2}\n\\end{equation}\n\n\\begin{equation}\n f^{\\mp}_{,i\\pm1\/2} = f\\left(u^{\\mp}_{,i\\pm1\/2}\\right)\n\\end{equation}\n\n\\begin{equation}\n f^{\\pm}_{,i\\pm1\/2} = f\\left(u^{\\pm}_{,i\\pm1\/2}\\right) = f\\left(u^{\\pm}_{,(i\\pm1)\\mp1\/2}\\right)\n\\end{equation}\n\n\\begin{equation}\n \\lambda_{M,i\\pm1\/2} = \\text{max}\\left( \\lambda\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\mp}_{,i\\pm1\/2}\\right) \\right), \\lambda\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\pm}_{,i\\pm1\/2}\\right) \\right) \\right)\n\\end{equation}\n\nExpressions for $u_{x,i}$ are derived based on the values of $u$ in neighboring cells. In order to improve stability, numerical methods for conservation laws use TVD slope approximations which prevent spurious oscillations from occurring around shock waves. This is addressed in the next subsection.\n\nThe expression for $\\frac{d}{dt}u_{,i}$ in equation \\eqref{CartesianCS} can be computed according to Algorithm \\ref{CartesianCSUt}. 
This can then be integrated in time using the ODE solver of one's choice.\n\n\\begin{algorithm}\n\\caption{Compute Time Derivative}\\label{CartesianCSUt}\n\\begin{algorithmic}[1]\n\\Procedure{$U_t$}{$u$}\n\\State $\\text{compute }u_{x,i}\\quad\\forall i$ using a TVD scheme\n\\State $u^{\\mp}_{,i\\pm1\/2} \\gets u_{,i} \\pm \\frac{\\Delta x}{2}u_{x,i}$\n\\State $f^{\\mp}_{,i\\pm1\/2} \\gets f\\left(u^{\\mp}_{,i\\pm1\/2}\\right)$\n\\State$\\lambda_{M,i\\pm1\/2} \\gets \\text{max}\\left[ \\rho\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\mp}_{,i\\pm1\/2}\\right) \\right), \\rho\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\pm}_{,(i\\pm1)\\mp1\/2}\\right) \\right) \\right]$\n\\State $u_{t,i} \\gets $\\eqref{CartesianCS}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\nThis method performs well on problems set in a Cartesian coordinate system, but it is not suited, in its current form, to be used on a curved manifold. Both the slope approximations and the time derivative formula \\eqref{CartesianCS} must be modified using the discrete source terms that have been derived.\n\n\n\\subsection{Slope limiting}\n\n\nIt is a known problem that numerical methods for fluid flow problems can cause non-physical oscillations to occur near the steep gradients of shock waves. In some cases, these oscillations can even cause the solution to destabilize and blow up. To prevent this, TVD slope approximations, or ``slope limiters,'' are used to calculate discrete derivatives. 
The most common slope limiter is probably the minmod limiter where minmod is defined by:\n\n\\begin{equation}\n \\text{minmod}(x,y) = \\frac{1}{2}(\\text{sign}(x)+\\text{sign}(y))\\min(|x|,|y|)\n\\end{equation}\n\nand the derivative of the solution in each mesh cell is given by:\n\n\\begin{equation}\\label{ux}\n u_{x,i} = \\text{minmod}\\left( \\frac{u_{,i}-u_{,i-1}}{\\Delta x}, \\frac{u_{,i+1}-u_{,i}}{\\Delta x}\\right)\n\\end{equation}\n\nTo apply this process to tensorial quantities, the covariant derivative must replace all derivatives. That is:\n\n\\begin{equation}\n (u)_{|x,i} = \\text{minmod}\\left( CD^B_xu_{,i}, CD^F_xu_{,i}\\right)\n\\end{equation}\n\nwhere $CD^B_x$ and $CD^F_x$ are respectively the discrete covariant derivative operators derived from the backward and forward derivative operators in equation \\eqref{ux}. By considering the tensor basis to be constant inside a mesh cell, we have the relationship:\n\n\\begin{equation}\n u_{x,i} = (u)_{|x,i}\n\\end{equation}\n\nThus we can compute the values of $u$ throughout the cell as:\n\n\\begin{equation}\n \\tilde{u}_{,i}(x) = u_{,i} + (u)_{|x,i}(x-x_{,i})\n\\end{equation}\n\nwhich provides a way to compute the values at the cell boundaries.\n\n\\subsection{Parallel transport}\n\nBefore addressing the changes to the time derivative formula \\eqref{CartesianCS}, we must first introduce a new concept to overcome an issue with finite volume methods on manifolds. Finite volume methods are based on integration rather than differentiation. The derivation for equation \\eqref{CartesianCS} presented in \\cite{Kurg} is based entirely on integration. This poses a unique challenge on a manifold with a non-uniform tensor basis. In such a setting, integrating the components of a tensor field results in a meaningless quantity. 
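This failure is easy to demonstrate numerically. The following sketch (illustrative, using NumPy) applies a midpoint rule to the polar components of the constant Cartesian field $\hat{x} = \cos\theta\,\hat{r} - \sin\theta\,\hat{\theta}$ over the unit disk; both componentwise integrals come out to zero even though the field is nonzero everywhere:

```python
import numpy as np

# Midpoint-rule integration of the polar components of the constant
# Cartesian field x-hat = cos(theta) r-hat - sin(theta) theta-hat
# over the unit disk (r in (0,1), theta in (0, 2*pi)).
nr, nt = 200, 400
r = (np.arange(nr) + 0.5) / nr
theta = (np.arange(nt) + 0.5) * 2 * np.pi / nt
dr, dtheta = 1.0 / nr, 2 * np.pi / nt
R, TH = np.meshgrid(r, theta)

comp_r = np.sum(np.cos(TH) * R) * dr * dtheta        # integral of r-hat component
comp_theta = np.sum(-np.sin(TH) * R) * dr * dtheta   # integral of theta-hat component
# Both sums vanish: the componentwise integral has lost the field entirely.
```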
As an example, consider the integral:\n\n\\begin{equation}\n \\int_0^{2\\pi}\\int_0^1 \\left(\\cos\\theta\\,\\hat{r} - \\sin\\theta\\,\\hat{\\theta}\\right) r\\,dr\\,d\\theta\n\\end{equation}\n\nwhich is the integral of the Cartesian vector $\\hat{x}$ over the unit disk. A simple calculation will show that the integral comes out to be zero even though the true vector field is nonzero everywhere. This clearly creates a problem for a numerical method which is based on integration. In order to apply finite volume methods, tensors must be shifted to uniform bases before they can be integrated. In order to carry out these shifts without changing the tensors, we use the process of parallel transport. Parallel transport is the process of moving a tensorial quantity from one basis to another without changing its true value \\cite{Man_Lev,Lovelock}. Definition \\ref{PTdef} states this more formally.\n\n\\begin{definition}\\label{PTdef}\n Let $s$ be a curve along a manifold. A tensor $w$ is said to be \\emph{parallel transported} along $s$ if the covariant derivative of $w$ along $s$ is identically zero. That is:\n \n \\begin{equation}\\label{PT}\n (w)_{|i}\\frac{\\partial x^i}{\\partial s}=0\n \\end{equation}\n \n\\end{definition}\n\nIf we have a tensor defined at a point $x^i_1$ on a manifold, and are interested in finding out what that tensor's components would be at another point $x^i_2$, we can solve equation \\eqref{PT} with $s$ being a curve which connects $x^i_1$ and $x^i_2$.\n\nA discrete analog of this process can be developed using the discrete covariant derivative. Consider two neighboring mesh cells with indices $i-1$ and $i$, and centers $x_{,i-1}$ and $x_{,i}$. Say there is a tensor defined at $x_{,i-1}$ which we would like to transport to $x_{,i}$. 
Using the two-point difference operator, the parallel transport condition can be posed as:\n\n\\begin{equation*}\n \\frac{1}{\\Delta x}\\left[u^l_{,i} - u^l_{,i-1} + \\JJ{l}{m}_{,i}^{-1}\\left(\\JJ{m}{n}_{,i} - \\JJ{m}{n}_{,i-1}\\right)u^n_{,i-1}\\right] = 0\n\\end{equation*}\n\n\\begin{equation}\n \\Rightarrow u^l_{,i} = u^l_{,i-1} - \\JJ{l}{m}_{,i}^{-1}\\left(\\JJ{m}{n}_{,i} - \\JJ{m}{n}_{,i-1}\\right)u^n_{,i-1}\n\\end{equation}\n\nfor a rank 1 tensor, and:\n\n\\begin{multline*}\n \\frac{1}{\\Delta x}\\bigg[w^{lp}_{,i} - w^{lp}_{,i-1} \\\\ + \\JJ{l}{m}_{,i}^{-1}\\JJ{p}{q}_{,i}^{-1}\\left(\\JJ{m}{n}_{,i}\\JJ{q}{r}_{,i} - \\JJ{m}{n}_{,i-1}\\JJ{q}{r}_{,i-1}\\right)w^{nr}_{,i-1}\\bigg] = 0\n\\end{multline*}\n\n\n\\begin{multline}\n \\Rightarrow w^{lp}_{,i} = w^{lp}_{,i-1} \\\\ - \\JJ{l}{m}_{,i}^{-1}\\JJ{p}{q}_{,i}^{-1}\\left(\\JJ{m}{n}_{,i}\\JJ{q}{r}_{,i} - \\JJ{m}{n}_{,i-1}\\JJ{q}{r}_{,i-1}\\right)w^{nr}_{,i-1}\n\\end{multline}\n\nfor a rank 2 tensor, and so on. These expressions provide a straightforward way to compute the discretely parallel transported form of tensors in neighboring mesh cells. The notation $PT_{,i}(w^l_{,j})$ will be used to refer to a tensor which has been parallel transported from mesh cell $j$ to mesh cell $i$.\n\nIn \\cite{Man_Lev}, parallel transport was used to adapt a finite volume method to a curved manifold by, in short, transporting neighboring cells to a common basis and then applying the Cartesian form of the finite volume method to the transported components. The same will be done to the present central scheme, but using the discrete parallel transport expression which preserves tensorial transformations.\n\n\n\\subsection{Modified central scheme}\n\n\nA slightly modified process for computing the time derivative which accounts for the non-uniform basis can now be devised. First, the solution is reconstructed by calculating slopes using the minmod limiter on the backward and forward covariant derivatives. 
These are used to compute the values of $u$ and $f$ at the cell boundaries. These values are then parallel transported to the neighboring cells which depend on them. Once all the tensors share a basis, their components can be integrated to acquire meaningful quantities. The derivation of \\eqref{CartesianCS} presented in \\cite{Kurg} then proceeds identically, but applied to the transported quantities. The resulting expression is given here:\n\n\\begin{multline}\\label{ManifoldCS}\n \\frac{d}{dt}u_{,i} = -\\frac{PT_{,i}(f^+_{,i+1\/2})+f^-_{,i+1\/2}}{2\\Delta x} + \\frac{\\lambda_{M,i+1\/2}}{2\\Delta x}[PT_{,i}(u^+_{,i+1\/2})-u^-_{,i+1\/2}] \\\\\n +\\frac{f^+_{,i-1\/2}+PT_{,i}(f^-_{,i-1\/2})}{2\\Delta x} - \\frac{\\lambda_{M,i-1\/2}}{2\\Delta x}[u^+_{,i-1\/2}-PT_{,i}(u^-_{,i-1\/2})]\n\\end{multline}\n\n\n\nThe expression is similar to \\eqref{CartesianCS}, except that the values reconstructed in the neighboring cells, $u^+_{,i+1\/2}$, $f^+_{,i+1\/2}$, $u^-_{,i-1\/2}$, and $f^-_{,i-1\/2}$, are replaced by their parallel transported counterparts $PT_{,i}(\\cdot)$. In addition, the maximum wave speeds, $\\lambda_M$, have to be computed based on parallel transported values. The modified procedure for computing the time derivative is given in algorithm \\ref{ManifoldCSUt}. 
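Two ingredients of this procedure, the minmod limiter and the discrete rank-1 parallel transport, can be sketched in code. This is a minimal illustrative sketch (using NumPy) under the assumption that each cell's basis is encoded by a Jacobian matrix mapping local components to a fixed Cartesian frame; the function names are ours, not part of any library:

```python
import numpy as np

def minmod(x, y):
    # minmod(x, y) = (1/2)(sign x + sign y) min(|x|, |y|): picks the
    # smaller-magnitude slope when the signs agree, zero otherwise.
    return 0.5 * (np.sign(x) + np.sign(y)) * np.minimum(np.abs(x), np.abs(y))

def transport_rank1(u_prev, J_prev, J_here):
    # Discrete parallel transport of a rank-1 tensor from cell i-1 to cell i:
    # u_i = u_{i-1} - J_i^{-1} (J_i - J_{i-1}) u_{i-1} = J_i^{-1} J_{i-1} u_{i-1},
    # so a field that is constant in the common (Cartesian) frame is
    # transported exactly.
    return u_prev - np.linalg.inv(J_here) @ (J_here - J_prev) @ u_prev
```

For example, with the Jacobians of two rotated bases, the local components of a constant Cartesian vector are transported exactly from one cell to the next.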
\n\n\\begin{algorithm}\n\\caption{Compute Time Derivative - Manifold}\\label{ManifoldCSUt}\n\\begin{algorithmic}[1]\n\\Procedure{$U_t$}{$u$}\n\\State $\\text{compute }u_{|x,i} = \\text{minmod}\\left( CD^B_xu_{,i}, CD^F_xu_{,i}\\right)\\quad\\forall i$\n\\State $u^{\\mp}_{,i\\pm1\/2} \\gets u_{,i} \\pm \\frac{\\Delta x}{2}u_{|x,i}$\n\\State $f^{\\mp}_{,i\\pm1\/2} \\gets f\\left(u^{\\mp}_{,i\\pm1\/2}\\right)$\n\\State $\\lambda_{M,i\\pm1\/2} \\gets \\text{max}\\left[ \\rho\\left( \\frac{\\partial f}{\\partial u}\\left(u^{\\mp}_{,i\\pm1\/2}\\right) \\right), \\rho\\left( PT_{,i}\\left(\\frac{\\partial f}{\\partial u}\\left(u^{\\pm}_{,(i\\pm1)\\mp1\/2}\\right)\\right) \\right) \\right]$\n\\State $u_{t,i} \\gets $\\eqref{ManifoldCS}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{remark}\n In some applications there will be the relationships $PT_{,i}(f(U_{,j}))=f(PT_{,i}(U_{,j}))$ and $PT_{,i}(\\frac{\\partial f}{\\partial U}(U_{,j}))=\\frac{\\partial f}{\\partial U}(PT_{,i}(U_{,j}))$. This would allow one to skip the step in which the flux functions and their Jacobians are parallel transported by instead using the parallel transported solution variables to compute the neighboring flux functions and wave speeds in the local basis. These relationships will not hold, however, if $f$ depends on a spatial variable.\n\\end{remark}\n\nFor higher-dimensional domains, \\eqref{ManifoldCS} can be naturally extended. 
For two dimensions the expression is:\n\n\\begin{multline}\\label{2DManifoldCS}\n \\frac{d}{dt}u_{,i,j} = -\\frac{PT_{,i,j}(f^{1+}_{,i+1\/2,j})+f^{1-}_{,i+1\/2,j}}{2\\Delta x^1} + \\frac{\\lambda_{M,i+1\/2,j}}{2\\Delta x^1}[PT_{,i,j}(u^+_{,i+1\/2,j})-u^-_{,i+1\/2,j}] \\\\\n +\\frac{f^{1+}_{,i-1\/2,j}+PT_{,i,j}(f^{1-}_{,i-1\/2,j})}{2\\Delta x^1} - \\frac{\\lambda_{M,i-1\/2,j}}{2\\Delta x^1}[u^+_{,i-1\/2,j}-PT_{,i,j}(u^-_{,i-1\/2,j})] \\\\\n -\\frac{PT_{,i,j}(f^{2+}_{,i,j+1\/2})+f^{2-}_{,i,j+1\/2}}{2\\Delta x^2} + \\frac{\\lambda_{M,i,j+1\/2}}{2\\Delta x^2}[PT_{,i,j}(u^+_{,i,j+1\/2})-u^-_{,i,j+1\/2}] \\\\\n +\\frac{f^{2+}_{,i,j-1\/2}+PT_{,i,j}(f^{2-}_{,i,j-1\/2})}{2\\Delta x^2} - \\frac{\\lambda_{M,i,j-1\/2}}{2\\Delta x^2}[u^+_{,i,j-1\/2}-PT_{,i,j}(u^-_{,i,j-1\/2})]\n\\end{multline}\n\nand so on for arbitrary dimensions. All of these can be integrated in time using whichever ODE solver one prefers. \n\n\n\n\n\\section{Conical Flow}\\label{sec:CF}\n\nThe conical Euler and MHD equations, which govern flow past an infinite cone of arbitrary cross section, are derived and analyzed in \\cite{ConEuler} and \\cite{ConMHD}, respectively. These equations result from posing the corresponding system in a 3D Euclidean space covered by coordinates $(\\xi^1,\\xi^2,r)$, where $\\xi^\\beta$ are defined on the surface of the sphere and $r$ is the radial coordinate, and then setting the covariant derivative in the $r$ direction equal to zero. Doing so completely removes any dependence on $r$, leaving a system defined entirely on the surface of a sphere. The origin of the space (and center of the sphere) is taken to be the tip of the cone whose cross section, by definition, does not depend on $r$ either.\n\n\n\nThe conical equations, \\eqref{EulerCon} and \\eqref{MHDCon}, involve the contracted covariant derivative (denoted $(\\cdot)_{|\\beta}$) where the contraction is only performed over the components corresponding to the surface of the sphere ($\\beta\\in\\{1,2\\}$). 
These forms of the equations are slightly different from the forms presented in \\cite{ConEuler} and \\cite{ConMHD}, but they are still consistent, differing only by a factor of $\\sqrt{g}$. The forms considered for the analysis of these equations rely on the relationship $\\overset{(G)}{\\Gamma}\\indices{_i^j_j} = \\frac{1}{\\sqrt{G}}\\frac{\\partial \\sqrt{G}}{\\partial x^i}$ which will not in general be valid in the discrete case. The forms given here rely only on the tensorial transformation properties of the covariant derivative and thus can be discretized using the source terms already presented.\n\nOne option for solving the conical equations is to convert them to time-dependent problems as described in \\eqref{EulerCon} and \\eqref{MHDCon}, apply the central scheme \\eqref{2DManifoldCS}, and then march in time until a steady state is achieved. Initial tests of this procedure were conducted, and it was determined that it was too time-consuming and did not achieve good long-term results due to waves reflecting off the surface of the cone. Instead, a finite difference\/area method was used to discretize the conical equations, and an iterative scheme based on Newton's method was used to solve the system of nonlinear equations. Solutions were achieved much faster and were free of residual transient modes. This is the method which is described throughout the following sections.\n\n\n\n\n\n\\section{Mesh}\\label{sec:Mesh}\n\nA computational domain must be created in order to solve the equations numerically. For the problem of conical flow, the domain is on the surface of a unit sphere. Because most meshing utilities assume Cartesian coordinate systems are being used, it is simplest to create a 2D mesh which represents the spherical slice of the 3D domain having been projected onto the XY plane, and then compute what the curvature would be in the solver. 
Spherical curvature is simple enough to compute since nearly all relations between Cartesian and spherical coordinate systems have analytical expressions.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.55]{NumberedMesh.png}\n \\caption{Example mesh for flow past an elliptic cone. The mesh is abstractly rectangular, with width 20 and height 5 and cell indices going from left to right, bottom to top. This mesh was created using GMSH.}\n \\label{numMesh}\n\\end{figure}\n\nFollowing this procedure, a mesh will be generated such as the one given in Figure \\ref{numMesh}. The center of the mesh is the origin $(0,0)$ and $x^2 + y^2 \\leq 1$ for all points in the mesh. Though this mesh is curved, it is abstractly rectangular, with a height and width and predictable ordering. The mesh cells can be identified by a single index, say $i$, and except at the far left and far right boundaries of the mesh will have left and right neighbors $i-1$ and $i+1$ respectively, and top and bottom neighbors $i+W$ and $i-W$ respectively, where $W$ is the width of the mesh. At the left boundary, the left neighbor has index $i+W-1$, and at the right boundary, the right neighbor has index $i-W+1$.\n\n\nEach computational cell has four vertices which have Cartesian coordinates $\\{(x_i,y_i)\\}^4_{i=1}$ reported by the mesh-generating software. 
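The neighbor bookkeeping just described can be sketched as a small helper (illustrative; 0-based indices here, unlike the 1-based description above, so the same wrap formulas apply):

```python
def neighbors(i, W, N):
    # Neighbor indices in the abstractly rectangular mesh of width W and
    # N cells total: numbered left to right, bottom to top; the left/right
    # direction wraps periodically around the cone.
    col = i % W
    left = i + W - 1 if col == 0 else i - 1
    right = i - W + 1 if col == W - 1 else i + 1
    down = i - W if i >= W else None      # bottom row: no lower neighbor
    up = i + W if i + W < N else None     # top row: no upper neighbor
    return left, right, down, up
```

For the width-20, height-5 example mesh (N = 100), cell 0 wraps to cell 19 on its left, and the top row has no upper neighbors.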
The spherical coordinates $(\\theta,\\phi)$ on the sphere ($\\theta$ is the azimuthal angle and $\\phi$ is the zenith angle) associated with each $(x,y)$ can then be computed as follows:\n\n\\begin{equation}\n\\theta=\\begin{cases}\n\t\\arctan(y\/x) & x\\geq0 \\\\ \n \\arctan(y\/x) - \\pi & x<0 \\\\ \n\\end{cases}\n\\end{equation}\n\n\\begin{equation}\n \\phi = \\arcsin(\\sqrt{x^2 + y^2})\n\\end{equation}\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.55]{MeshCoordsNumbered.png}\n \\caption{Coordinates defined by the mesh lines}\n \\label{MeshCoords}\n\\end{figure}\n\nThis gives, for each cell, four new sets of coordinates $\\{(\\theta_i,\\phi_i)\\}^4_{i=1}$. These coordinates will not in general be aligned with the mesh. In order to simplify the formulation of the discrete problem and some of the calculations involved, it is convenient to do one more coordinate transformation to a coordinate system whose coordinate lines are defined to be the mesh lines. These coordinates are $(\\xi^1,\\xi^2)$, with $\\xi^1$ going left to right, and $\\xi^2$ going bottom to top as shown in Figure \\ref{MeshCoords}. The exact value of each of these coordinates in each cell is not important, so it can be freely assumed that $\\xi^1,\\xi^2\\in[0,1]$ in each cell. The relationship between the spherical coordinates and the mesh coordinates can be computed as described in \\cite{SriFAMethod}. Basis functions are defined inside the cell which are given by:\n\n\\begin{subequations}\\label{CellBasis}\n\\begin{gather}\n b^1 = \\xi^1(1-\\xi^2) \\\\\n b^2 = \\xi^1\\xi^2 \\\\\n b^3 = (1-\\xi^1)\\xi^2 \\\\\n b^4 = (1-\\xi^1)(1-\\xi^2)\n\\end{gather}\n\\end{subequations}\n\n\nThere is one basis function corresponding to each corner of the cell. That function has unit value at that corner and zero at every other corner. Figure \\ref{b2plot} gives a plot of such a function. 
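These transformations are easy to sketch in code. The following (illustrative, using NumPy; the function names are ours) maps a projected point back to $(\theta,\phi)$ with the branch $\theta\in(-3\pi/2,\pi/2]$ used on this mesh, and evaluates the bilinear basis \eqref{CellBasis}:

```python
import numpy as np

def sphere_coords(x, y):
    # theta = arctan(y/x) folded onto (-3*pi/2, pi/2];
    # phi = arcsin(sqrt(x^2 + y^2)) on the unit sphere.
    theta = np.arctan2(y, x)          # standard branch (-pi, pi]
    if theta > np.pi / 2:             # fold (pi/2, pi] down to (-3*pi/2, -pi]
        theta -= 2 * np.pi
    phi = np.arcsin(np.sqrt(x * x + y * y))
    return theta, phi

def cell_basis(xi1, xi2):
    # b^1..b^4 of (CellBasis); b^k is 1 at corner k and 0 at the others.
    return np.array([xi1 * (1 - xi2), xi1 * xi2,
                     (1 - xi1) * xi2, (1 - xi1) * (1 - xi2)])
```

A round trip through $x = \cos\theta\sin\phi$, $y = \sin\theta\sin\phi$ recovers the projected point, and the four basis functions always sum to one.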
Using these functions, $\\theta$ and $\\phi$ can be given as functions of $\\xi^1$ and $\\xi^2$ inside each cell:\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.5]{BasisFunction.png}\n \\caption{Plot of $b^2$ basis function}\n \\label{b2plot}\n\\end{figure}\n\n\n\\begin{equation}\n \\theta(\\xi^1,\\xi^2) = \\sum_{i=1}^4\\theta_ib^i(\\xi^1,\\xi^2)\n\\end{equation}\n\n\\begin{equation}\n \\phi(\\xi^1,\\xi^2) = \\sum_{i=1}^4\\phi_ib^i(\\xi^1,\\xi^2)\n\\end{equation}\n\nwith derivatives:\n\n\\begin{equation}\n \\theta_{\\xi^1}(\\xi^1,\\xi^2) = \\sum_{i=1}^4\\theta_ib_{\\xi^1}^i(\\xi^1,\\xi^2)\n\\end{equation}\n\n\\begin{equation}\n \\theta_{\\xi^2}(\\xi^1,\\xi^2) = \\sum_{i=1}^4\\theta_ib_{\\xi^2}^i(\\xi^1,\\xi^2)\n\\end{equation}\n\nand similarly for $\\phi_{\\xi^1}$ and $\\phi_{\\xi^2}$.\n\nThe Jacobian matrix of the transformation from spherical coordinates to mesh coordinates is given by:\n\n\\begin{equation}\n J_{\\theta\\rightarrow\\xi} = \\left[ \\begin{smallmatrix} \\theta_{\\xi^1} & \\theta_{\\xi^2} \\\\ \\phi_{\\xi^1} & \\phi_{\\xi^2} \\end{smallmatrix} \\right]\n\\end{equation}\n\nfor just the surface coordinates or by:\n\n\\begin{equation}\n J_{\\theta\\rightarrow\\xi} = \\left[ \\begin{smallmatrix} \\theta_{\\xi^1} & \\theta_{\\xi^2} & 0 \\\\ \\phi_{\\xi^1} & \\phi_{\\xi^2} & 0 \\\\ 0 & 0 & 1 \\end{smallmatrix} \\right]\n\\end{equation}\n\nif the radial coordinate is included. \n\n\\begin{remark}\n It is important to keep in mind that $\\theta=\\frac{\\pi}{2}$ is the same as $\\theta=-\\frac{3\\pi}{2}$ in the cells at the far right side of the mesh. 
Otherwise the Jacobian matrices could have some erroneous entries.\n\\end{remark}\n\nThe Jacobian matrix of the transformation from Cartesian coordinates to spherical coordinates (on a unit sphere) is given by:\n\n\\begin{equation}\n J_{x\\rightarrow\\theta} = \\left[ \\begin{smallmatrix} x_\\theta & x_\\phi & x_r \\\\ \ty_\\theta & y_\\phi & y_r\\\\ z_\\theta & z_\\phi & z_r \\end{smallmatrix} \\right] = \\left[ \\begin{smallmatrix} -\\sin\\theta \\sin\\phi & \\cos\\theta \\cos\\phi & \\cos\\theta \\sin\\phi\\\\ \t\\cos\\theta \\sin\\phi & \\sin\\theta \\cos\\phi & \\sin\\theta \\sin\\phi\\\\ 0 & -\\sin\\phi & \\cos\\phi \\end{smallmatrix} \\right]\n\\end{equation}\n\nThe total Jacobian matrix from Cartesian coordinates to mesh coordinates is computed easily by the matrix product:\n\n\\begin{equation}\n J_{x\\rightarrow\\xi} = J_{x\\rightarrow\\theta}J_{\\theta\\rightarrow\\xi}\n\\end{equation}\n\nA Jacobian matrix can now be computed at the center of each computational cell $(\\xi^1,\\xi^2)=(0.5,0.5)$ which serves as the basis for tensors in that cell. Having achieved this, discrete Christoffel symbols can be derived according to the process outlined above.\n\n\n\\begin{remark}\n It is not entirely necessary that $\\xi^1,\\xi^2\\in[0,1]$. These variables can be treated as having different ranges within the cells in order to have better conditioned Jacobian matrices or less change in tensor components from one mesh cell to the next. If the mesh cells become very oblong, or their sizes change dramatically over a small region of the mesh, these issues could affect the stability of the numerical method. 
It is important, however, that all the $\\xi^1$ ranges in a mesh column, or $\\xi^2$ ranges in a mesh row, are the same.\n\\end{remark}\n\n\n\n\\section{Boundary conditions}\\label{sec:BC}\n\nFree stream conditions for all solution variables are established in the outermost mesh cells, that is the row of cells along the top of the abstractly rectangular mesh, as Dirichlet boundary conditions. Higher-order stencils used for discrete derivatives had to be backward-biased in cells near the outer boundary since they would otherwise require values from cells which do not exist.\n\nAt the body of the cone, which is the row of cells along the bottom of the abstractly rectangular mesh, forward-biased differencing must be used to avoid relying on nonexistent cells. The velocity component in the $\\xi^2$ direction is set to zero as a Dirichlet boundary condition here. This is the no-penetration condition which requires that the velocity at a wall must be parallel to the wall. \n\nFor the MHD case, the cone is assumed to be a perfect conductor. A conductor which is in steady state will have a charge arrangement such that the electric field is perpendicular to the surface. In the perfectly conducting fluid, there is the relationship from Ohm's law \\cite{MHDFlowPastBodies}:\n\n\\begin{equation}\n -\\pmb{E} = \\pmb{V}\\times\\pmb{B}\n\\end{equation}\n\nIf we let $\\pmb{n}$ be the normal at the surface of the cone then we have:\n\n\\begin{equation}\n -\\pmb{n}\\times\\pmb{E} = \\pmb{n}\\times\\left(\\pmb{V}\\times\\pmb{B}\\right)\n\\end{equation}\n\nSince the electric field at the surface of a steady-state conductor is normal to the surface, the left hand side vanishes, and expanding the vector triple product gives:\n\n\\begin{equation}\n 0 = \\pmb{V}\\left(\\pmb{n}\\cdot\\pmb{B}\\right) - \\pmb{B}\\left(\\pmb{n}\\cdot\\pmb{V}\\right)\n\\end{equation}\n\nand because of the no-penetration condition, we have:\n\n\\begin{equation}\n 0 = \\pmb{V}\\left(\\pmb{n}\\cdot\\pmb{B}\\right) \\Rightarrow \\pmb{n}\\cdot\\pmb{B} = 0\n\\end{equation}\n\n\nwhich says that the magnetic field must also be parallel to the wall. 
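The chain of vector identities above is easy to sanity-check numerically (an illustrative sketch; any tangential $\pmb{V}$ works):

```python
import numpy as np

# Check: with n.V = 0 (no penetration), the BAC-CAB expansion gives
# n x (V x B) = V (n.B) - B (n.V) = V (n.B).
n = np.array([0.0, 0.0, 1.0])     # wall normal
V = np.array([1.3, -0.7, 0.0])    # tangential velocity, so n.V = 0
B = np.array([0.4, 0.2, -1.1])    # arbitrary magnetic field

lhs = np.cross(n, np.cross(V, B))
rhs = V * np.dot(n, B) - B * np.dot(n, V)
```

Because `lhs` equals `V * (n.B)` for every tangential velocity, forcing the left hand side to zero for nonzero $\pmb{V}$ forces $\pmb{n}\cdot\pmb{B} = 0$.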
All the other variables at the wall are free. \n\nThe last thing that must be accounted for is that the mesh is periodic in the $\\xi^1$ direction. The far right column of cells is also to the left of the far left column of cells.\n\n\n\n\\section{Discretization}\\label{sec:Disc}\n\nFor each mesh cell, there is one variable corresponding to each unknown in the conical equations. For the conical Euler equations, that is:\n\n\\begin{equation*}\n \\left\\{ \\left[ \\begin{smallmatrix} \\rho_{,i} \\\\ \\pmb{V}_{,i} \\\\ e_{,i} \\end{smallmatrix} \\right]\\right\\}_{i=1}^N = \\left\\{ \\left[ \\begin{smallmatrix} \\rho_{,i} \\\\ v^1_{,i} \\\\ v^2_{,i} \\\\ V^3_{,i} \\\\ e_{,i} \\end{smallmatrix} \\right]\\right\\}_{i=1}^N \n\\end{equation*}\n\nand for the conical MHD equations:\n\n\\begin{equation*}\n \\left\\{ \\left[ \\begin{smallmatrix} \\rho_{,i} \\\\ \\pmb{V}_{,i} \\\\ e_{,i} \\\\ \\pmb{B}_{,i} \\end{smallmatrix} \\right]\\right\\}_{i=1}^N = \\left\\{ \\left[ \\begin{smallmatrix} \\rho_{,i} \\\\ v^1_{,i} \\\\ v^2_{,i} \\\\ V^3_{,i} \\\\ e_{,i} \\\\ b^1_{,i} \\\\ b^2_{,i} \\\\ B^3_{,i} \\end{smallmatrix} \\right]\\right\\}_{i=1}^N \n\\end{equation*}\n\n\nFor each variable in each cell there is an associated flux function. These are the functions on the LHS of equations \\eqref{EulerCon} and \\eqref{MHDCon} of which the contracted covariant derivative is being taken. For $\\rho$ and $e$, the flux is a rank 1 tensor, while for $\\pmb{V}$ and $\\pmb{B}$ the flux is a rank 2 tensor. 
These flux functions are computed in each cell using the variables and inverse metric tensor in that cell, giving: \n\n\n\\begin{equation*}\n \\left\\{ \\left[ \\begin{matrix} f^j_{\\rho,i} \\\\ f^{kj}_{\\pmb{V},i} \\\\ f^j_{e,i} \\end{matrix} \\right]\\right\\}_{i=1}^N \\quad\\text{or}\\quad \\left\\{ \\left[ \\begin{matrix} f^j_{\\rho,i} \\\\ f^{kj}_{\\pmb{V},i} \\\\ f^j_{e,i} \\\\ f^{kj}_{\\pmb{B},i} \\end{matrix} \\right]\\right\\}_{i=1}^N \n\\end{equation*}\n\n\nWith these defined in every cell in the mesh, equations \\eqref{EulerCon} and \\eqref{MHDCon} can be discretized with the covariant derivatives derived earlier using any stencil that one desires. \n\nTo improve stability, a viscosity-like dissipation term is often added to discretized fluid dynamics equations even when it is not present in the original equations. Since such terms closely resemble second derivatives, discrete source terms can be added so that they transform appropriately between coordinate systems. 
When everything is put together, the result is a system of $5N$ or $8N$ nonlinear equations:\n\n\n\\begin{equation}\\label{EulerDiscrete}\n \\left\\{ \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\end{matrix} \\right]_{,i}\\right)= \\pmb{0}\\right\\}_{i=1}^N \n\\end{equation}\n\nor\n\n\\begin{equation}\\label{MHDDiscrete}\n \\left\\{ \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{B}} \\end{matrix} \\right]_{,i} + \\left[ \\begin{matrix} 0 \\\\ \\frac{1}{\\mu}B^k(CD_\\beta B^\\beta) \\\\ \\frac{1}{\\mu}(\\pmb{V}\\cdot\\pmb{B})(CD_\\beta B^\\beta) \\\\ V^k(CD_\\beta B^\\beta) \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\\\ B^k \\end{matrix} \\right]_{,i}\\right) = \\pmb{0} \\right\\}_{i=1}^N \n\\end{equation}\n\n\n\nIn this project a five-point central stencil was used to derive the discrete differential operator. This stencil is high-order and symmetric, and avoids the problem of odd-even decoupling. The coefficients for this operator are the standard finite difference coefficients given here:\n\n\\begin{equation}\n D_1f_{,i} = \\frac{1}{12}f_{,i-2} - \\frac{2}{3}f_{,i-1} + \\frac{2}{3}f_{,i+1} - \\frac{1}{12}f_{,i+2}\n\\end{equation}\n\nand\n\n\\begin{equation}\n D_2f_{,i} = \\frac{1}{12}f_{,i-2W} - \\frac{2}{3}f_{,i-W} + \\frac{2}{3}f_{,i+W} - \\frac{1}{12}f_{,i+2W}\n\\end{equation}\n\n\nWe have suppressed the $\\frac{1}{\\Delta\\xi^i}$ for clarity, and because we will generally be assuming that $\\Delta\\xi^i=1$.\n\nTo come up with a viscous operator, we consider the viscous part of equation \\eqref{ManifoldCS}. 
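The stencil coefficients above can be checked directly on polynomial data; on a unit-spaced grid the operator is exact through degree four (an illustrative sketch):

```python
import numpy as np

# Five-point central first-derivative stencil, offsets -2..+2, unit spacing.
coeffs = np.array([1 / 12, -2 / 3, 0.0, 2 / 3, -1 / 12])

def D1(f, i):
    # Apply the stencil to samples f at node i (needs two neighbors each side).
    return coeffs @ f[i - 2:i + 3]

x = np.arange(-4.0, 5.0)   # integer nodes: x[4] = 0, x[5] = 1
```

Applying `D1` to samples of $x^2$, $x^3$, or $x^4$ reproduces the analytic derivative exactly at interior nodes.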
To simplify this expression, a zero-order slope approximation is used, and the maximum wave speeds are replaced with a constant, tunable viscous parameter $C_{\\text{visc}}$. The resulting operators before accounting for curvature are: \n\n\\begin{equation}\n Visc_{1}(u_{,i}) = C_{\\text{visc}}\\left[ -u_{,i-1} + 2u_{,i} -u_{,i+1} \\right]\n\\end{equation}\n\nand\n\n\\begin{equation}\n Visc_{2}(u_{,i}) = C_{\\text{visc}}\\left[ -u_{,i-W} + 2u_{,i} -u_{,i+W} \\right]\n\\end{equation}\n\nThe total viscous term would then be the sum of the operators in each direction:\n\n\\begin{equation}\n Visc(u_{,i}) = Visc_{1}(u_{,i}) + Visc_{2}(u_{,i})\n\\end{equation}\n\n\nA covariant version of this operator can be derived in the same way as for a derivative operator. We point out that this viscous term, like the viscous term in equation \\eqref{ManifoldCS}, will not be exactly like a Laplacian operator. It is more accurately an averaging operator. Furthermore, since the coefficients which define the operator sum to zero, theorem \\ref{PreserveZeroThm} applies, meaning that for any tensor whose components are uniform in a Cartesian coordinate system, the covariant operator will evaluate to zero on the components in the curved system. \n\nPutting these stencils together admits the conservation form:\n\n\\begin{multline}\\label{FullConservationForm}\n (\\Delta_1 F)_{,i} = \\left(-\\frac{1}{12}f_{,i-1} + \\frac{7}{12}f_{,i} + \\frac{7}{12}f_{,i+1} - \\frac{1}{12}f_{,i+2}\\right) + C_{\\text{visc}}(u_{,i}-u_{,i+1})\\\\ - \\left[\\left(-\\frac{1}{12}f_{,i-2} + \\frac{7}{12}f_{,i-1} + \\frac{7}{12}f_{,i} - \\frac{1}{12}f_{,i+1}\\right) + C_{\\text{visc}}(u_{,i-1}-u_{,i})\\right] \\\\ = F^+-F^-\n\\end{multline}\n\nand likewise for $(\\Delta_2F)_{,i}$. 
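That the central stencil plus the viscous averaging operator really telescopes into the flux difference $F^+ - F^-$ of \eqref{FullConservationForm} can be checked on arbitrary data (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(7)   # flux samples; index i = 3 is interior
u = rng.standard_normal(7)   # solution samples
C = 0.7                      # C_visc
i = 3

# Central derivative stencil plus viscous averaging operator at cell i.
lhs = (f[i - 2] / 12 - 2 * f[i - 1] / 3 + 2 * f[i + 1] / 3 - f[i + 2] / 12) \
    + C * (-u[i - 1] + 2 * u[i] - u[i + 1])

# The two interface fluxes of (FullConservationForm).
Fp = (-f[i - 1] / 12 + 7 * f[i] / 12 + 7 * f[i + 1] / 12 - f[i + 2] / 12) \
    + C * (u[i] - u[i + 1])
Fm = (-f[i - 2] / 12 + 7 * f[i - 1] / 12 + 7 * f[i] / 12 - f[i + 1] / 12) \
    + C * (u[i - 1] - u[i])
```

The identity `lhs == Fp - Fm` holds for any data, which is what makes the interior scheme conservative.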
After inserting the source terms to account for curvature, the method will no longer be conservative in the strictest sense, but it will capture the appropriate behavior of the equations.\n\nThe left and right boundaries of the mesh are periodic, and thus there will always be enough neighboring cells to complete the stencil. At the top and bottom boundaries, however, some cells will be missing. In the second-from-top and second-from-bottom rows of the mesh, the $+2W$ and $-2W$ cells, respectively, are missing, and in the bottom row of the mesh the $-W$ and $-2W$ cells are missing. The top row of the mesh has fixed values and therefore does not depend on neighboring cells. In the bottom row of the mesh a three-point forward-difference stencil and a two-point average are used. In the second-from-top and second-from-bottom rows, four-point difference stencils with a backward and a forward bias, respectively, are used, and the same viscous averaging operator can be used since all the necessary cells exist. These operators are given here:\n\nSecond from the top:\n\n\\begin{equation}\n D_2f_{,i} = \\frac{1}{6}f_{,i-2W} - f_{,i-W} + \\frac{1}{2}f_{,i} + \\frac{1}{3}f_{,i+W}\n\\end{equation}\n\nSecond from the bottom boundary:\n\n\\begin{equation}\n D_2f_{,i} = -\\frac{1}{3}f_{,i-W} - \\frac{1}{2}f_{,i} + f_{,i+W} - \\frac{1}{6}f_{,i+2W}\n\\end{equation}\n\nAt the bottom boundary:\n\n\\begin{equation}\n D_2f_{,i} = -\\frac{3}{2}f_{,i} + 2f_{,i+W} - \\frac{1}{2}f_{,i+2W}\n\\end{equation}\n\nand\n\n\\begin{equation}\n Visc_{2}(u_{,i}) = C_{\\text{visc}}\\left[ u_{,i} -u_{,i+W} \\right]\n\\end{equation}\n\n\nIn some situations it will be possible to pick values in ghost cells outside the boundaries of the mesh such that the expressions resulting from these operators can be considered to be in the same form as \\eqref{FullConservationForm}. It is, however, difficult to guarantee that this will always be possible. 
Fortunately, the regions in which these operators are used are well clear of the main bow shock wave, and though body shocks occur, they will mostly follow the $\\xi^2$ coordinate lines and so will not affect differencing in the $\\xi^2$ direction \\cite{Guan,sriThesis,ShockFreeCrossFlow}. It is therefore acceptable for these operators to be non-conservative.\n\n\n\n\\subsection{Preservation of steady state}\n\nThe goal of these conical flow problems is to solve for a steady flow that satisfies the equations. The goal of the discrete equations is to capture the steady state numerically. Though we cannot describe all steady-state solutions, we do know a subset of them and can verify that the discretization is capable of accurately capturing them. In the case where there are no walls or boundaries, it is known that uniform values of all variables satisfy the equations. By theorem \\ref{PreserveZeroThm}, it is easily shown that this discretization of the conical equations exactly captures these solutions: that is, any state of uniform density, uniform energy, uniform velocity, and (in the case of MHD) uniform magnetic field satisfies the discrete equations to machine precision.\n\n\n\n\n\\section{Solution procedure}\\label{sec:SP}\n\nAn algorithm based on Newton's method was developed to solve the system of nonlinear equations. This method iteratively solves a linearized form of the nonlinear system of equations set equal to zero. After each iteration, the equations are closer to being solved, provided certain conditions on smoothness and closeness to the solution are met.\n\nLet $F$ be a vector of nonlinear functions such as the residual of the discretization of our differential equations, and let $U$ be a vector of all the variables on which $F$ depends. By a Taylor expansion, we get\n\n\\begin{equation}\\label{expansion}\n F(U+\\Delta U)\\approx F(U) + \\frac{\\partial F}{\\partial U}\\Delta U\n\\end{equation}\n\nwhen $\\Delta U$ is small. 
Since we are interested in $F=0$, we have:\n\n\\begin{equation}\\label{NewtonExpression}\n \\frac{\\partial F}{\\partial U}\\Delta U = -F(U)\n\\end{equation}\n\nwhich can be solved for $\\Delta U$. If $F(U)$ is close enough to $F=0$, then $F(U+\\Delta U)$ should be even closer. This process can be repeated until a value of $U$ is achieved such that $F$ is satisfactorily close to zero.\n\nBy applying this process to the residual of the discretized system of equations, we can iterate to a solution, starting from an initial guess. Since the discretization given is known to be satisfied by a uniform solution, it is convenient to take such a solution as the initial guess. In particular, the whole domain is set to the free stream values. The residual is thus zero everywhere, but the boundary conditions at the wall are not satisfied. To keep the residual close to zero, the algorithm slowly increments the boundary variables towards their specified values. After each incrementation, Newton's method is employed to relax the residual back down to within a desired tolerance of zero. Thus the residual can always be kept close to zero, and the solution will be achieved once the boundary conditions are satisfied and a last round of Newton's method has relaxed the residual back to zero. \n\nPseudocode applying this algorithm to the conical Euler equations is given in algorithm \\ref{EulerAlgo}. For the Euler equations, the boundary conditions at the wall are that the $v^2$ component of the velocity is zero. Algorithm \\ref{EulerAlgo} uses a linear incrementation of $v^2$, but other non-uniform increments could also be used. 
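The continuation strategy can be illustrated on a toy monotone system (purely illustrative; the scalar $b$ plays the role of the incremented wall value, and the system is our own invention, not the conical equations):

```python
import numpy as np

def F(U, b):
    # Toy nonlinear residual; U = (0, 0) solves F = 0 when b = 0,
    # playing the role of the free-stream state.
    return np.array([U[0]**3 + U[0] + U[1] - b,
                     U[1]**3 + U[1] - U[0]])

def dF(U):
    # Jacobian of F; uniformly invertible, so Newton behaves well here.
    return np.array([[3 * U[0]**2 + 1, 1.0],
                     [-1.0, 3 * U[1]**2 + 1]])

U = np.zeros(2)                      # exact solution at b = 0
for inc in range(1, 11):
    b = 2.0 * inc / 10               # slowly increment the "boundary" value
    for _ in range(50):              # Newton iterations relax the residual
        res = F(U, b)
        if np.linalg.norm(res) < 1e-12:
            break
        U += np.linalg.solve(dF(U), -res)
```

Because each increment perturbs the residual only slightly, a handful of Newton steps suffices after every increment, mirroring the behavior intended for the full discretization.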
\n\n\n\\begin{algorithm}\n\\caption{Solve Conical Euler Equations}\\label{EulerAlgo}\n\\begin{algorithmic}[1]\n\\State $U \\gets U_\\infty$\n\\For{$\\textit{inc}=1 \\text{ to } \\textit{numIncrements}$}\n \\State $v^2_{,1:W} = \\frac{\\textit{numIncrements} - \\textit{inc}}{\\textit{numIncrements}}v^2_\\infty$\n \\For{$\\textit{it}=1 \\text{ to }\\textit{maxIt}$}\n \\State $\\text{compute }\\textit{Res}$\n \\If{$|\\textit{Res}|<\\textit{tol}$}\n \\State break\n \\Else\n \\State $\\text{compute }\\frac{\\partial \\textit{Res}}{\\partial U}$\n \\State $\\text{solve }\\frac{\\partial \\textit{Res}}{\\partial U}\\Delta U = -\\textit{Res}$\n \\State $U \\gets U + \\Delta U$\n \\EndIf\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nThe residual for the conical Euler equations follows from equation \\eqref{EulerDiscrete}:\n\n\n\\begin{equation}\n Res_{,i} = \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\end{matrix} \\right]_{,i}\\right) \n\\end{equation}\n\nSince the difference and averaging operators are all linear, the Jacobian of the residual can be computed as:\n\n\\begin{equation}\n \\frac{\\partial \\textit{Res}}{\\partial U}_{,i} = \\left[ \\begin{matrix} CD_\\beta \\frac{\\partial f^\\beta_{\\rho}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^{k\\beta}_{\\pmb{V}}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^\\beta_{e}}{\\partial U} \\end{matrix} \\right]_{,i} + Visc\\left(I\\right)\n\\end{equation}\n\nwhere $I$ is the identity matrix.\n\nSolving the MHD equations can be done in a similar way, but with a few modifications. The first is simple: the $\\xi^2$ component of the magnetic field is incremented to zero at the wall in the same way as that component of the velocity. 
The second modification is more significant.\n\nAccompanying equation \\eqref{MHDCon} is the requirement that the divergence of the magnetic field is identically zero. This constraint is not, however, explicitly enforced, allowing for the possibility that there exists a solution involving a magnetic field which is not divergence free. Numerical tests have demonstrated that for time dependent Ideal MHD with Powell source terms, the numerical divergence is kept small if the initial data is divergence free. Since Newton's method does not march in the time-like direction, these observations unfortunately do not apply. No prior work exists on the conical Ideal MHD equations, and so a strategy to impose this constraint had to be developed from scratch. The most straightforward approach to ensure that the magnetic field remained divergence-less throughout the iteration process was to convert the linear solve portion of Newton's method into a constrained minimization problem.\n\nTo do this, first expression \\eqref{NewtonExpression} is changed to:\n\n\\begin{equation}\\label{NewtonAltExpression}\n \\frac{\\partial F}{\\partial U}\\Delta U = -F \\Leftrightarrow \\frac{\\partial F}{\\partial U} U_{\\text{next}} = \\frac{\\partial F}{\\partial U} U - F\n\\end{equation}\n\nwhich is an equivalent expression, but can be solved directly for $U_{\\text{next}}$. Instead of solving this expression exactly, we minimize:\n\n\\begin{equation}\n \\frac{1}{2}\\Bigg\\lVert\\frac{\\partial F}{\\partial U} U_{\\text{next}} - \\left(\\frac{\\partial F}{\\partial U} U - F\\right)\\Bigg\\rVert_2^2\n\\end{equation}\n\nsubject to the constraint that the magnetic field be divergence-less. This constraint can be stated mathematically as a linear equation:\n\n\\begin{equation}\\label{ConstraintHomo}\n (\\text{div}B) U = \\pmb{0}\n\\end{equation}\n\nwhere $(\\text{div}B)$ is a matrix which applies the contracted covariant derivative operator to the magnetic field variables. 
With this, the ``linear solve'' step in the algorithm is replaced with the linearly constrained least squares problem:\n\n\\begin{equation}\n \\min_{U_{\\text{next}}}\\frac{1}{2}\\Bigg\\lVert\\frac{\\partial F}{\\partial U} U_{\\text{next}} - \\left(\\frac{\\partial F}{\\partial U} U - F\\right)\\Bigg\\rVert_2^2 \\text{ s.t. } (\\text{div}B) U_{\\text{next}} = \\pmb{0}\n\\end{equation}\n\n\nFortunately, this problem is straightforward to solve. Using the method of Lagrange multipliers, it is easily shown that the solution is achieved by solving the system:\n\n\\begin{equation}\n \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\frac{\\partial F}{\\partial U} & (\\text{div}B)^T \\\\ (\\text{div}B) & 0 \\end{matrix} \\right]\\left[ \\begin{matrix} U_{\\text{next}} \\\\ \\lambda \\end{matrix} \\right] = \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\left(\\frac{\\partial F}{\\partial U} U - F\\right) \\\\ \\pmb{0} \\end{matrix} \\right]\n\\end{equation}\n\nwhere $\\lambda$ is a vector of Lagrange multipliers, the value of which is irrelevant. There are other ways to solve the constrained minimization problem based on the QR or SVD factorizations, but this one was satisfactory in practice. 
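A minimal numpy sketch of this KKT solve follows, with illustrative names: $J$ stands in for $\partial F/\partial U$, $C$ for the $(\text{div}B)$ constraint matrix, and rhs for $\frac{\partial F}{\partial U}U - F$. It assembles and solves exactly the block system above.

```python
import numpy as np

def constrained_newton_step(J, rhs, C):
    """Sketch of the constrained linear-solve step: minimize
    (1/2)||J u - rhs||^2 subject to C u = 0, via the
    Lagrange-multiplier (KKT) block system."""
    n = J.shape[1]
    m = C.shape[0]
    # Assemble the KKT matrix [[2 J^T J, C^T], [C, 0]] and right-hand side.
    K = np.block([[2.0 * J.T @ J, C.T],
                  [C, np.zeros((m, m))]])
    b = np.concatenate([2.0 * J.T @ rhs, np.zeros(m)])
    sol = np.linalg.solve(K, b)
    # The trailing m entries are the multipliers lambda, which are discarded.
    return sol[:n]
```

At the solution, the gradient $2J^T(Ju - \text{rhs})$ lies in the row space of $C$, so no feasible (constraint-preserving) perturbation can further reduce the objective.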
Algorithm \\ref{MHDAlgo} provides pseudocode for the modified Newton's method for the MHD equations.\n\n\\begin{algorithm}\n\\caption{Solve Conical MHD Equations}\\label{MHDAlgo}\n\\begin{algorithmic}[1]\n\\State $U \\gets U_\\infty$\n\\For{$\\textit{inc}=1 \\text{ to } \\textit{numIncrements}$}\n \\State $v^2_{,1:W} = \\frac{\\textit{numIncrements} - \\textit{inc}}{\\textit{numIncrements}}v^2_\\infty$\n \\State $b^2_{,1:W} = \\frac{\\textit{numIncrements} - \\textit{inc}}{\\textit{numIncrements}}b^2_\\infty$\n \\For{$\\textit{it}=1 \\text{ to }\\textit{maxIt}$}\n \\State $\\text{compute }\\textit{Res}$\n \\If{$|\\textit{Res}|<\\textit{tol}$}\n \\State break\n \\Else\n \\State $\\text{compute }\\frac{\\partial \\textit{Res}}{\\partial U}$\n \\State $\\text{solve }\\min\\frac{1}{2}||\\frac{\\partial \\textit{Res}}{\\partial U} U_{\\text{next}} - \\left(\\frac{\\partial \\textit{Res}}{\\partial U} U -\\textit{Res}\\right)||_2^2\\text{ s.t. }(\\text{div}B) U_{\\text{next}}=\\pmb{0}$\n \\State $U \\gets U_{\\text{next}}$\n \\EndIf\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nSince the divergence-less constraint on the magnetic field is always satisfied, the residual for the conical MHD equations from equation \\eqref{MHDDiscrete} is given by:\n\n\\begin{equation}\n Res_{,i} = \\left[ \\begin{matrix} CD_\\beta f^\\beta_{\\rho} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{V}} \\\\ CD_\\beta f^\\beta_{e} \\\\ CD_\\beta f^{k\\beta}_{\\pmb{B}} \\end{matrix} \\right]_{,i} + Visc\\left(\\left[ \\begin{matrix} \\rho \\\\ V^k \\\\ e \\\\ B^k \\end{matrix} \\right]_{,i}\\right)\n\\end{equation}\n\nand the Jacobian is:\n\n\\begin{equation}\n \\frac{\\partial \\textit{Res}}{\\partial U}_{,i} = \\left[ \\begin{matrix} CD_\\beta \\frac{\\partial f^\\beta_{\\rho}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^{k\\beta}_{\\pmb{V}}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^\\beta_{e}}{\\partial U} \\\\ CD_\\beta \\frac{\\partial f^{k\\beta}_{\\pmb{B}}}{\\partial U} 
\\end{matrix} \\right]_{,i} + Visc\\left(I\\right)\n\\end{equation}\n\n\\begin{remark}\n When the conical MHD equations are solved with the magnetic field set identically to zero, algorithm \\ref{MHDAlgo} reduces to algorithm \\ref{EulerAlgo}.\n\\end{remark}\n\n\\begin{remark}\n Depending on how one chooses to enforce boundary conditions, it may be necessary to add them as constraints in the constrained minimization problem. For example, if the boundary variables are stored in $U$ along with all the other variables, and their values are forced by inserting the equations $IU_{,i}=U_{\\text{boundary},i}$ (where $I$ is the identity matrix and cell $i$ is a boundary cell) into the linear solves, then it is possible that the solution to the minimization problem will have values other than those desired at the boundary. Augmenting the constraint equation \\eqref{ConstraintHomo} to $(\\text{div}B) U_{\\text{next}}=Z$, so that it includes the rows $IU_{,i}=U_{\\text{boundary},i}$, guarantees that the boundary values will be what they are meant to be. To solve this modified formulation, one solves the system:\n \\begin{equation}\n \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\frac{\\partial F}{\\partial U} & (\\text{div}B)^T \\\\ (\\text{div}B) & 0 \\end{matrix} \\right]\\left[ \\begin{matrix} U_{\\text{next}} \\\\ \\lambda \\end{matrix} \\right] = \\left[ \\begin{matrix} 2\\frac{\\partial F}{\\partial U}^T\\left(\\frac{\\partial F}{\\partial U} U - F\\right) \\\\ Z \\end{matrix} \\right]\n \\end{equation}\n\\end{remark}\n\n\n\\begin{remark}\n A trade-off had to be made in enforcing the discrete divergence-free constraint. The flux-divergence form, or ``conservation form'' of the MHD equations is only valid if certain terms proportional to or involving the divergence of the magnetic field are equal to zero. These terms result from using vector calculus identities to manipulate Maxwell's equations. 
Discrete forms of these terms that are consistent with the MHD equations, Maxwell's equations, and the vector calculus identities would not be linear expressions. To force these to be zero would require nonlinear constraint equations which are more difficult to satisfy. Instead of doing that, it was decided to use the simpler linear constraints \\eqref{ConstraintHomo}, which are consistent in the limit of zero mesh spacing, but result in some numerical inconsistency.\n\\end{remark}\n\n\n\n\n\\section{Results and discussion}\\label{sec:RD}\n\nThe numerical method so far described, involving the discrete Christoffel symbols and Newton's method, was coded in Octave and run on a variety of test cases. We present here some examples of solutions which it produced. Those included are designed to highlight the capabilities of the method more than to apply to any particular aerospace application.\n\n\\subsection{Gas properties}\n\nSo far, no assumptions have been made about the thermodynamic properties of the fluid being governed by the equations. We are therefore free to apply any valid pressure and temperature models without creating conflicts in the governing equations or numerical method. We chose, however, to assume a perfect gas in the following examples in order to avoid unnecessary complexity that might obfuscate characteristics of the method. The pressure of the gas is thus computed by the ideal gas law:\n \n \\begin{equation}\n P=(\\gamma-1)\\rho e\n \\end{equation}\n \n where $\\gamma$ is the ratio of specific heats, $c_p$ and $c_v$, at constant pressure and volume respectively (the value of $\\gamma$ is 1.4 for air). The specific heats are assumed constant, which results in the relationship:\n \n \\begin{equation}\n e = c_vT\n \\end{equation}\n\nwhere $T$ is the temperature of the gas. 
Furthermore, the gas constant $R=c_p-c_v$ can be defined.\n\n\\subsection{Non-dimensionalization}\n\nIt is generally preferable in fluid dynamics to solve non-dimensionalized versions of the governing equations. To this end, we introduce the non-dimensional variables:\n\n\\begin{equation}\n \\rho_* = \\rho\/\\rho_\\infty\n\\end{equation}\n\\begin{equation}\n V^i_* = V^i\/|\\pmb{V}_\\infty|\n\\end{equation}\n\\begin{equation}\n e_* = e\/|\\pmb{V}_\\infty|^2\n\\end{equation}\n\\begin{equation}\n B^i_* = B^i\/\\left(\\sqrt{\\rho_\\infty\\mu}|\\pmb{V}_\\infty|\\right)\n\\end{equation}\n\n\\begin{equation}\n P_* = P\/\\left(\\rho_\\infty|\\pmb{V}_\\infty|^2\\right)\n\\end{equation}\n\nwhere the subscript $\\infty$ refers to the free stream value. Additionally we have:\n\n\\begin{equation}\n E_* = E\/|\\pmb{V}_\\infty|^2 = e_* + \\frac{1}{2}|\\pmb{V}_*|^2\n\\end{equation}\n\nand for an ideal gas, there is the relationship:\n\n\\begin{equation}\n P_* = P(\\rho_*,e_*)\n\\end{equation}\n\nThe nondimensionalized version of equation \\eqref{EulerCon} is:\n\n\\begin{subequations}\\label{EulerConNonDim}\n\\begin{gather}\n\\left(\\rho_* V_*^\\beta\\right)_{|\\beta} = 0 \\\\\n\\left(\\rho_* V_*^i V_*^\\beta + G^{i\\beta}P_*\\right)_{|\\beta} = 0\\\\\n\\left( \\left[\\rho_* E_*+P_*\\right] V_*^\\beta \\right)_{|\\beta} =0 \n\\end{gather}\n\\end{subequations}\n\n\nThe nondimensionalized version of equation \\eqref{MHDCon} is:\n\n\\begin{subequations}\\label{MHDConNonDim}\n\\begin{gather}\n \\left(\\rho_* V_*^\\beta\\right)_{|\\beta} = 0 \\\\\n \\left(\\rho_* V_*^iV_*^\\beta - B_*^iB_*^\\beta + G^{i\\beta}\\left(P_* + \\frac{|\\pmb{B}_*|^2}{2}\\right)\\right)_{|\\beta} = -B_*^iB^\\beta_{*|\\beta} \n \\\\\n \\left( \\left(\\rho_* E_*+P_*+|\\pmb{B}_*|^2\\right)V_*^\\beta - (\\pmb{V}_*\\cdot\\pmb{B}_*)B_*^\\beta \\right)_{|\\beta} = -(\\pmb{V}_*\\cdot\\pmb{B}_*)B^\\beta_{*|\\beta} \n \\\\\n (V_*^\\beta B_*^i-V_*^iB_*^\\beta)_{|\\beta} = 
-V_*^iB^\\beta_{*|\\beta}\n\\end{gather}\n\\end{subequations}\n\n\n\\subsection{Free stream conditions}\n\nThe outermost row of mesh cells is used to impose the free stream conditions on the solution. These are imposed via Dirichlet conditions on the non-dimensional variables. For this project, free stream conditions were assumed to be uniform and constant. The values of the variables were set according to the desired angle of attack, angle of roll, and Mach number of the cone, and the relationship between the air stream and the magnetic field.\n\nThe definition of $\\rho_*$ requires that it always has a value of one in the free stream. Likewise, the magnitude of the vector $\\pmb{V}_*$ is always equal to one in the free stream. The direction of $\\pmb{V}_*$ is determined by the angle of attack and roll of the cone. Since the cone is assumed to be aligned with the $z$ axis, the Cartesian representation of the dimensionless free stream velocity is given by:\n\n\\begin{equation}\n \\pmb{\\tilde{V}}_{*\\infty} = \\left[ \\begin{smallmatrix} \\cos Roll & -\\sin Roll & 0 \\\\ \\sin Roll & \\cos Roll & 0 \\\\ 0 & 0 & 1 \\end{smallmatrix} \\right]\\left[ \\begin{smallmatrix} 1 & 0 & 0 \\\\ 0 & \\cos AoA & \\sin AoA \\\\ 0 & -\\sin AoA & \\cos AoA \\end{smallmatrix} \\right]\\left[ \\begin{smallmatrix} 0 \\\\ 0 \\\\ 1 \\end{smallmatrix} \\right] = \\left[ \\begin{smallmatrix} -\\sin Roll \\sin AoA \\\\ \\cos Roll \\sin AoA \\\\ \\cos AoA \\end{smallmatrix} \\right]\n\\end{equation}\n\nThis vector is then transformed onto the local basis of the mesh. 
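As a check on the rotation construction above, the sketch below (function name illustrative) composes the roll and angle-of-attack rotations and reproduces the closed-form Cartesian components of the dimensionless free stream velocity.

```python
import numpy as np

def freestream_velocity(roll, aoa):
    """Rotate the unit z-axis vector by angle of attack (about x),
    then by roll (about z), giving the Cartesian components of the
    dimensionless free stream velocity.  Angles in radians."""
    Rz = np.array([[np.cos(roll), -np.sin(roll), 0.0],
                   [np.sin(roll),  np.cos(roll), 0.0],
                   [0.0,           0.0,          1.0]])
    Rx = np.array([[1.0, 0.0,           0.0],
                   [0.0, np.cos(aoa),   np.sin(aoa)],
                   [0.0, -np.sin(aoa),  np.cos(aoa)]])
    # Cone axis is z, so the incoming stream starts as the unit z vector.
    return Rz @ Rx @ np.array([0.0, 0.0, 1.0])
```

Since both factors are rotations, the result is automatically a unit vector, matching $[-\sin Roll\sin AoA,\ \cos Roll\sin AoA,\ \cos AoA]^T$.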
For a perfect gas, the value of $e_*$ is determined by the Mach number and the ratio of specific heats through the expression:\n\n\\begin{equation}\n e_{*\\infty} = \\frac{1}{\\gamma(\\gamma-1)M_\\infty^2}\n\\end{equation}\n\nwhich comes from:\n\n\\begin{equation}\n e_{*\\infty} = \\frac{c_vT_\\infty}{|\\pmb{V}_\\infty|^2} = \\frac{c_vT_\\infty}{c_\\infty^2M_\\infty^2} = \\frac{c_vT_\\infty}{\\gamma RT_\\infty M_\\infty^2} = \\frac{c_v}{\\gamma (c_p-c_v)M_\\infty^2} = \\frac{1}{\\gamma(\\gamma-1)M_\\infty^2}\n\\end{equation}\n\n\nThe direction of $\\pmb{B}_{*\\infty}$ can be set somewhat arbitrarily. The magnitude, however, should be small enough that the magnitude of the free stream velocity remains greater than the fastest magnetoacoustic speed. Otherwise, information would be able to easily propagate upstream, which would invalidate the conical assumption. The fast magnetoacoustic speed is given in non-dimensional form by:\n\n\\begin{equation}\n c_{f*}^2 = \\frac{1}{2}\\left[ \\left(\\frac{1}{M^2} + |\\pmb{B}_{*}|^2\\right) + \\sqrt{\\left(\\frac{1}{M^2} + |\\pmb{B}_{*}|^2\\right)^2 - 4\\frac{1}{M^2}\\left(\\pmb{B}_{*}\\cdot\\pmb{w}\\right)^2} \\right]\n\\end{equation}\n\n\nwhere $\\pmb{w}$ is a unit vector which specifies the direction of the propagating wave. This speed should be less than the magnitude of the non-dimensional free stream velocity, which is equal to one. A sufficient condition to ensure this is:\n\n\\begin{equation}\n |\\pmb{B}_{*\\infty}|^2 < 1-\\frac{1}{M_{\\infty}^2}\n\\end{equation}\n\nConsequently, the additional constraints are imposed that the magnitude of the non-dimensional magnetic field must be less than that of the non-dimensional velocity, and that the free stream Mach number must be greater than one.\n\n\n\n\\subsection{Right circular cone validation}\n\nTo demonstrate the reliability of the numerical method, a series of solutions were computed for circular cones at zero angle of attack. 
This scenario has been thoroughly studied and properties of the solutions can be checked against tables provided by NASA \\cite{NASA_Tables}. \n\nCones with half angles 5, 10, and 15 degrees were modeled at speeds of Mach 1.5, 2, 3, 4, and 5. The 10 degree mesh had 80 elements in the $\\xi^1$ direction whereas the 5 and 15 degree meshes only had 60. The setting of the problem is uniform in the $\\xi^1$ direction so resolution in this direction was not too important. All the meshes had 100 elements in the $\\xi^2$ direction. This meant that up to 40,000 variables were solved for in these experiments. Good convergence was achieved, with the $L_2$ norm of the residual being less than $10^{-9}$.\n\nThe shock wave angle, the surface to free stream density and pressure ratios and the surface Mach number were all computed based on the solutions and compared to NASA values. The results of this comparison are presented in tables \\ref{ShockAngle}, \\ref{DensityRatio}, \\ref{PressureRatio}, and \\ref{SurfaceMach}, and an example of a full solution is shown in Figure \\ref{fig:ValidationExample}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.2]{10deg_AoA0_M2_ShockAngle.png}\n \\caption{Example solution from validation testing. This is a 10 degree half angle cone at zero angle of attack and Mach 2. Pressure field is shown along with the distance in the XY plane to the shock wave. The angle of the shock wave is $\\theta_s=\\arcsin .52 = .547$. 
This image was rendered in ParaView.}\n \\label{fig:ValidationExample}\n\\end{figure}\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 0.734 & 0.744 & 0.789 \\\\ \n\\hline\n2 & 0.524 & 0.547 & 0.600 \\\\ \n\\hline\n3 & 0.347 & 0.379 & 0.444 \\\\ \n\\hline\n4 & 0.268 & 0.309 & 0.384 \\\\ \n\\hline\n5 & n\/a & 0.273 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 0.731 & 0.745 & 0.786 \\\\ \n\\hline\n2 & 0.525 & 0.545 & 0.592 \\\\ \n\\hline\n3 & 0.344 & 0.379 & 0.441 \\\\ \n\\hline\n4 & 0.261 & 0.309 & 0.380 \\\\ \n\\hline\n5 & & 0.272 & \\\\ \n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.469 & 0.132 & 0.455 \\\\ \n\\hline\n2 & 0.314 & 0.432 & 1.438 \\\\ \n\\hline\n3 & 0.819 & 0.002 & 0.826 \\\\ \n\\hline\n4 & 2.744 & 0.059 & 1.072 \\\\ \n\\hline\n5 & & 0.358 & \\\\ \n\\hline\n\\end{tabular}\n\\caption{Shock wave angle prediction. 
Angles are presented in radians}\\label{ShockAngle}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.047 & 1.137 & 1.261 \\\\ \n\\hline\n2 & 1.071 & 1.203 & 1.382 \\\\ \n\\hline\n3 & 1.132 & 1.370 & 1.687 \\\\ \n\\hline\n4 & 1.207 & 1.576 & 2.054 \\\\ \n\\hline\n5 & n\/a & 1.805 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.044 & 1.136 & 1.257 \\\\ \n\\hline\n2 & 1.067 & 1.201 & 1.377 \\\\ \n\\hline\n3 & 1.124 & 1.368 & 1.685 \\\\ \n\\hline\n4 & 1.193 & 1.571 & 2.047 \\\\ \n\\hline\n5 & & 1.802 & \\\\\n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.265 & 0.116 & 0.294 \\\\ \n\\hline\n2 & 0.374 & 0.158 & 0.351 \\\\ \n\\hline\n3 & 0.705 & 0.167 & 0.122 \\\\ \n\\hline\n4 & 1.133 & 0.307 & 0.355 \\\\ \n\\hline\n5 & & 0.156 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Ratio of surface density to free stream density}\\label{DensityRatio}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.067 & 1.197 & 1.386 \\\\ \n\\hline\n2 & 1.102 & 1.296 & 1.587 \\\\ \n\\hline\n3 & 1.190 & 1.558 & 2.111 \\\\ \n\\hline\n4 & 1.299 & 1.916 & 2.847 \\\\ \n\\hline\n5 & n\/a & 2.387 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.062 & 1.195 & 1.378 \\\\ \n\\hline\n2 & 1.095 & 1.292 & 1.566 \\\\ \n\\hline\n3 & 1.178 & 1.551 & 2.091 \\\\ \n\\hline\n4 & 1.281 & 1.889 & 2.801 \\\\ \n\\hline\n5 & & 2.309 & \\\\ \n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 0.435 & 0.209 & 0.542 \\\\ \n\\hline\n2 & 0.626 & 0.244 & 1.348 \\\\ \n\\hline\n3 & 1.047 & 0.448 & 0.961 \\\\ \n\\hline\n4 & 1.404 & 
1.419 & 1.661 \\\\ \n\\hline\n5 & & 3.348 & \\\\\n\\hline\n\\end{tabular}\n\\caption{Ratio of surface pressure to free stream pressure}\\label{PressureRatio}\n\\end{table}\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c||c|c|c|}\n\\hline\n Solver & Half Angle = 5 & 10 & 15\\\\\n \\hline\\hline\n $M_\\infty$ = 1.5 & 1.486 & 1.462 & 1.431 \\\\ \n\\hline\n2 & 1.972 & 1.927 & 1.872 \\\\ \n\\hline\n3 & 2.925 & 2.813 & 2.683 \\\\ \n\\hline\n4 & 3.856 & 3.642 & 3.410 \\\\ \n\\hline\n5 & n\/a & 4.406 & n\/a \\\\\n \\hline\n \\hline\\hline\n NASA & Half Angle = 5 & 10 & 15 \\\\\n \\hline\\hline\n$M_\\infty$ = 1.5 & 1.458 & 1.375 & 1.271 \\\\ \n\\hline\n2 & 1.942 & 1.834 & 1.707 \\\\ \n\\hline\n3 & 2.891 & 2.710 & 2.507 \\\\ \n\\hline\n4 & 3.816 & 3.531 & 3.217 \\\\ \n\\hline\n5 & & 4.292 & \\\\\n \\hline\n \\hline\\hline\nAbsolute \\% error & Half Angle = 5 & 10 & 15 \\\\\n\\hline\\hline\n$M_\\infty$ = 1.5 & 1.925 & 6.338 & 12.612 \\\\ \n\\hline\n2 & 1.570 & 5.068 & 9.674 \\\\ \n\\hline\n3 & 1.163 & 3.795 & 7.031 \\\\ \n\\hline\n4 & 1.036 & 3.156 & 6.009 \\\\ \n\\hline\n5 & & 2.652 & \\\\ \n\\hline\n\\end{tabular}\n\\caption{Surface Mach number}\\label{SurfaceMach}\n\\end{table}\n\nResults at Mach 5 could not be obtained for the 5 and 15 degree cones. As Newton's method was iterating to a solution, spurious oscillations began to arise which eventually destabilized the solution to the point that it blew up. Attempts were made to suppress these oscillations by increasing $C_{\\text{visc}}$; however, when enough viscosity was added to achieve stability, the solutions were overly damped and non-physical. The stable capture of shock waves without sacrificing resolution is a difficult problem in fluid dynamics for which many sophisticated numerical methods have been devised. Evaluating and implementing these was, however, beyond the scope of this project. \n\nThe stability of the solution was also observed to depend to some degree on the quality of the mesh. 
It is thus possible that if a more sophisticated mesh were designed, either up front or via an adaptive mesh method, the steeper gradients could be better handled. \n\nWhen solutions were achieved, they provided results which matched well with the values from the NASA tables. The surface Mach number consistently had the highest error, with a maximum of about 12\\%. Such consistency demonstrates the validity of the derivation of the source terms which model the curvature of the discrete manifold.\n\n\\subsection{Additional Validation}\n\nOther cases of conical Euler flow were solved to compare to previous work on the subject not limited to right circular cones. Sritharan \\cite{sriThesis} provided plots of the pressure coefficient from a 10 degree half angle cone at 10 degrees angle of attack and Mach 2. The same case was run in the solver developed here and comparison plots were made. The mesh used was the same as in the previous section.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.45]{CircleSurfaceComparison.png}\n \\caption{10 degree half angle cone at 10 degrees angle of attack and Mach 2. Pressure coefficient plotted around the surface of the cone.}\n \\label{fig:CircleSurfaceComparison}\n\\end{figure}\n\n\nFigure \\ref{fig:CircleSurfaceComparison} shows the pressure coefficient around the surface of the cone, and Figure \\ref{fig:CirclePhiComparison} shows the pressure coefficient plotted along 3 different curves in the $\\phi$ direction outward from the surface of the cone. The graphs show very good agreement in value. The shock waves are not as sharply resolved by this method, particularly when the shock is weaker. The position of the shock waves is consistent though, and so is the jump across them.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.45]{CirclePhiComparison.png}\n \\caption{10 degree half angle cone at 10 degrees angle of attack and Mach 2. 
Pressure coefficient plotted outward from the surface of the cone.}\n \\label{fig:CirclePhiComparison}\n\\end{figure}\n\n\nIn \\cite{Siclari}, Siclari provides a plot of the surface pressure coefficient for an elliptic cone with a sweep angle of 71.61 degrees and 6 to 1 aspect ratio. This cone was set at an angle of attack of 10 degrees and Mach 1.97. A comparison plot of the present method is given in Figure \\ref{fig:6to1Comparison}. Results were acquired using a mesh with 320 cells in the $\\xi^1$ direction and 50 in the $\\xi^2$ direction.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.55]{6to1EllipseComparison.png}\n \\caption{6:1 elliptic cone at 10 degrees angle of attack and Mach 2. Pressure coefficient plotted along the surface of the cone. The $x$-axis is scaled by the wingspan.}\n \\label{fig:6to1Comparison}\n\\end{figure}\n\nThe technique Siclari used to compute the solution was a shock fitting method and so had a very sharply resolved body shock. The shock is still captured well by the present method, and the plots show good agreement everywhere else as well.\n\n\n\\subsection{Other Euler results}\n\nWe now present some more examples of solutions produced by the described method. Meshes used in this section had between 30,000 and 40,000 total variables to be solved for, and in every case good convergence was still achieved with the $L_2$ norm of the residual being less than $10^{-9}$.\n\nFirst we consider the velocity field around a circular cone at an angle of attack. Figure \\ref{fig:M1p5_AoA5} shows the flow around a 10 degree cone at 5 degrees angle of attack and Mach 1.5. The mesh used was the same 80 by 100 mesh used for the 10 degree cone validation tests. There is clearly higher pressure on the windward surface of the cone than on the leeward side. 
In addition, the crossflow stream lines wrap around the body and converge on the top surface of the cone as predicted by \\cite{sriThesis,Siclari,NASA_con}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.3]{10deg_AoA5_M1p5_PressureCrossFlow.png}\n \\caption{10 degree half angle cone at 5 degrees angle of attack and Mach 1.5. Pressure field is shown along with the crossflow velocity.}\n \\label{fig:M1p5_AoA5}\n\\end{figure}\n\nIn Figure \\ref{fig:M2_AoA20}, the angle of attack has been increased to 20 degrees and the free stream Mach number has been increased to 2. In this case, the increase in pressure on the windward side is even greater compared to the free stream pressure, and the convergence point of the crossflow stream lines has been lifted off the surface of the cone. Furthermore, we see in Figure \\ref{fig:M2_AoA20_Mc} that supersonic crossflow bubbles have formed on either side of the surface of the cone. These are related to the change of type of the governing PDE model, and are consistent with the theory of this problem.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{10deg_AoA20_M2_PressureCrossFlow.png}\n \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. Pressure field is shown along with the crossflow velocity.}\n \\label{fig:M2_AoA20}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.3]{10deg_AoA20_M2_Mc.png}\n \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. Crossflow Mach number is displayed.}\n \\label{fig:M2_AoA20_Mc}\n\\end{figure}\n\n\nA natural extension is to consider an elliptic cone in place of a circular one. These results are shown in Figures \\ref{fig:Ellipse} and \\ref{fig:Ellipse_Mc}. The behavior is qualitatively similar to the case of the circular cone, but with more accentuated features. 
\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{EllipsePressureCrossFlow.png}\n \\caption{Elliptic cone at 20 degrees angle of attack and Mach 2. Pressure field is shown along with the crossflow velocity. Mesh is 160 elements in the $\\xi^1$ direction and 50 elements in the $\\xi^2$ direction.}\n \\label{fig:Ellipse}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.275]{EllipseMc.png}\n \\caption{Elliptic cone at 20 degrees angle of attack and Mach 2. Crossflow Mach number is displayed.}\n \\label{fig:Ellipse_Mc}\n\\end{figure}\n\nSo far, all these results are consistent with the expected behavior of this flow problem based on previous work of Ferri, Sritharan, and others. There is natural motivation to consider more irregular shapes. To this end, we consider the case of Figure \\ref{fig:Fighter}, which shows the flow field around a rough outline of the cross section of a fighter jet. This demonstrates the method's ability to handle more complex geometries and flow solutions.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{FighterRoll20_PressureCrossFlow.png}\n \\caption{Rough outline of aircraft at 20 degrees of roll, 10 degrees angle of attack, and Mach 1.5. Pressure field is shown along with the crossflow velocity. Mesh is 120 elements in the $\\xi^1$ direction and 50 elements in the $\\xi^2$ direction.}\n \\label{fig:Fighter}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.225]{FighterRoll20_PressureXYFlow.png}\n \\caption{Rough outline of aircraft at 20 degrees of roll, 10 degrees angle of attack, and Mach 1.5. Pressure field is shown along with the velocity projected onto the XY plane instead of onto the surface of the sphere.}\n \\label{fig:FighterXY}\n\\end{figure}\n\n\n\nAny function of the solution variables can be computed and displayed. This includes different views of the velocity field. 
It is most natural to view the velocity field projected onto the surface of the sphere, but it may be insightful to view the components from a different perspective. In Figure \\ref{fig:FighterXY}, the velocity field has been projected onto the XY plane, highlighting behaviors of the solution which may not have been apparent in Figure \\ref{fig:Fighter}.\n\n\n\\subsection{MHD results}\n\n\nWe now consider the case of a free stream containing a magnetic field. Errors for the examples in this section were higher than for the Euler case, with the $L_2$ norm of the residual being of order $1$ and the $L_\\infty$ norm of the residual being of order $10^{-1}$. These errors are probably too high to give good quantitative results, and a few erroneous artifacts can be seen in the following examples. The increase in error is likely related to the divergenceless constraints applied to the solution, which are known to only be truly consistent in the limit of zero mesh spacing. The solutions were however qualitatively consistent with theory and demonstrate true behaviors of the system. Further investigation is required to develop a discrete expression which can be better satisfied.\n\n\n\nWe expect to see some identifiable, qualitative differences in the MHD solutions compared to their non-conducting counterparts. The Lorentz force naturally opposes the motion of a conductor across magnetic field lines, and this force is proportional to the velocity of the conductor. As a result, MHD flows tend to have flattened velocity gradients compared to equivalent non-conducting flows. This behavior results in greater shock wave angles and redistribution of the pressure and temperature fields \\cite{NumSimMHD,ExpResults}. The effects are also directional since the Lorentz force acts perpendicular to the magnetic field. 
In the case of ideal magnetohydrodynamic flows, there is the ``frozen-in'' property which states that the fluid cannot cross magnetic field lines, but is free to move along them \\cite{MHDFlowPastBodies}. All of these behaviors can be observed in the following figures.\n\n\n\n\nFigures \\ref{fig:MHD_Ref}, \\ref{fig:MHD_UP}, and \\ref{fig:MHD_SA} demonstrate an increase in shock wave angle with the addition of a magnetic field. Figure \\ref{fig:MHD_Ref} shows the same 10 degree half angle cone at 20 degrees angle of attack and Mach 2 presented above with no magnetic field present to serve as a reference. The mesh used had dimensions 40 cells in the $\\xi^1$ direction and 80 in the $\\xi^2$ direction for a total of 3200 elements and thus 25,600 variables. Two different orientations of the magnetic field were imposed, both with magnitude $0.4$. In Figure \\ref{fig:MHD_UP} the magnetic field is imposed in the ``upward perpendicular'' direction, which means that the magnetic field was perpendicular to the incoming flow stream in the upward direction. The Cartesian components of the magnetic field are given by:\n\n\\begin{equation}\n    \\pmb{B}_{*\\infty} = \\left[ \\begin{smallmatrix} 0 \\\\ \\cos AoA \\\\ -\\sin AoA \\end{smallmatrix} \\right]\n\\end{equation}\n\nwhich mostly points in the $\\hat{y}$ direction but is kept perpendicular to the free stream as the angle of attack is increased. In Figure \\ref{fig:MHD_SA}, the magnetic field was stream-aligned, which means that it was imposed in the same direction as the free stream velocity.\n\n\nIn both cases involving electromagnetic interaction, the shock wave angle can be seen to increase all around the circumference of the cone. It is clear in Figure \\ref{fig:MHD_UP} that the angle of the shock wave increases more around the top of the cone than around the bottom. 
This is likely due to the velocity having greater magnitude around the top and sides than near the crossflow stagnation region, and so the effect of the Lorentz force is greater.\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.275]{MHD_Ref_10deg_AoA20_M2_Pressure.png}\n    \\caption{10 degree half angle cone at 20 degrees angle of attack and Mach 2. No magnetic field is present to provide a reference of the shock wave angle and strength. Results were achieved using the MHD solver with the free stream magnetic field set to zero. The final $L_2$ norm of the residual was less than $10^{-9}$.}\n    \\label{fig:MHD_Ref}\n\\end{figure}\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.275]{MHD_UpPerp_10deg_AoA20_M2_Pressure.png}\n    \\caption{Magnetic field was imposed upward perpendicular to the incoming flow stream with a magnitude of 0.4. Unevenness of the pressure field outside the shock wave is likely due to numerical error.}\n    \\label{fig:MHD_UP}\n\\end{figure}\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.275]{MHD_StrmAlign_10deg_AoA20_M2_Pressure.png}\n    \\caption{Magnetic field was aligned with the incoming flow stream and given a magnitude of 0.4.}\n    \\label{fig:MHD_SA}\n\\end{figure}\n\n\n\nTo illustrate the ``frozen-in'' property of ideal MHD flows, we also considered the case of an asymmetric magnetic field. The following examples involve the same 10 degree cone at 20 degrees angle of attack and Mach 2. The magnetic field was imposed at 30 degrees counter-clockwise from the $y$ axis at varying angles from the cone ($z$) axis with a magnitude of $0.1$ as depicted in Figure \\ref{fig:BinfDiagram}. A mesh with 64 elements in each coordinate direction was used. 
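For concreteness, the free stream field orientations used in these examples can be sketched as follows. The conventions below (the free stream pitched in the $y$-$z$ plane, and the asymmetric field obtained by tilting the cone ($z$) axis toward the direction 30 degrees counter-clockwise from the $y$ axis) are assumptions made for illustration and may differ from the exact convention of Figure \ref{fig:BinfDiagram}.

```python
import numpy as np

def upward_perp_B(aoa_deg):
    """Unit 'upward perpendicular' field, matching the components of
    B_{*infinity} given earlier (scaled by 0.4 in the MHD_UP example)."""
    a = np.radians(aoa_deg)
    return np.array([0.0, np.cos(a), -np.sin(a)])

def free_stream_dir(aoa_deg):
    """Assumed free stream direction: z along the cone axis, pitched in
    the y-z plane by the angle of attack."""
    a = np.radians(aoa_deg)
    return np.array([0.0, np.sin(a), np.cos(a)])

def tilted_B(theta_deg, roll_deg=30.0, magnitude=0.1):
    """Hypothetical parametrization of the asymmetric field: tilt the
    cone (z) axis by theta toward the direction roll_deg counter-clockwise
    from the y axis in the xy-plane."""
    th, rl = np.radians(theta_deg), np.radians(roll_deg)
    tilt_dir = np.array([-np.sin(rl), np.cos(rl), 0.0])
    return magnitude * (np.cos(th) * np.array([0.0, 0.0, 1.0]) + np.sin(th) * tilt_dir)

aoa = 20.0
# B_{*infinity} is a unit vector orthogonal to the stream at any angle of attack:
assert abs(np.linalg.norm(upward_perp_B(aoa)) - 1.0) < 1e-12
assert abs(np.dot(upward_perp_B(aoa), free_stream_dir(aoa))) < 1e-12
# The asymmetric field keeps magnitude 0.1 at every angle off the cone axis,
# and reduces to the cone-aligned case at 0 degrees:
for theta in (0.0, 30.0, 60.0, 90.0):
    assert abs(np.linalg.norm(tilted_B(theta)) - 0.1) < 1e-12
assert np.allclose(tilted_B(0.0), np.array([0.0, 0.0, 0.1]))
```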
As the angle off the cone axis is increased, it can be seen in Figure \\ref{fig:MHD64SurfPress} that the maximum pressure region on the surface of the cone is rotated counter-clockwise.\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.35]{BinfDiagram.png}\n    \\caption{Orientation of free stream magnetic field.}\n    \\label{fig:BinfDiagram}\n\\end{figure}\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.35]{MHD64SurfPress30deg.png}\n    \\caption{Cone surface pressure coefficient for the 10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis at the angle specified from the cone's axis.}\n    \\label{fig:MHD64SurfPress}\n\\end{figure}\n\n\nThe pressure and velocity fields for the 90 degree case are shown in Figure \\ref{fig:90degVcP}. The maximum pressure region has clearly been rotated, as has the convergence point of the crossflow stream lines. This is consistent with the idea that the fluid is allowed to flow along the magnetic field lines, but is resisted in flowing across them. Likewise, we expect to see the magnetic field distorted by the flow of the fluid, which is shown in the next two figures.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=.3]{64_roll30_pitch90_P_vc.png}\n    \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis and 90 degrees from the cone's axis. Cross flow velocity and pressure are shown.}\n    \\label{fig:90degVcP}\n\\end{figure}\n\n\nFigure \\ref{fig:90degBcP} shows the magnetic field projected onto the surface of the sphere along with the pressure field for the case of the magnetic field being 30 degrees off the $y$ axis and 90 degrees off the cone's axis, and Figure \\ref{fig:ConeAlignBcP} shows the same for the case of the magnetic field being aligned with the cone's axis (0 degrees off of its axis). 
It is particularly visible in Figure \\ref{fig:ConeAlignBcP}, the case of cone-aligned magnetic field, that the magnetic field is constricted when the gas is compressed. The cross flow magnetic field near the cone's surface points from low density regions to higher density regions.\n\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=.3]{64_roll30_pitch90_P_bc.png}\n    \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed 30 degrees off of the $y$ axis and 90 degrees from the cone's axis. Cross flow magnetic field and pressure are shown.}\n    \\label{fig:90degBcP}\n\\end{figure}\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=.3]{64_ConAlign_P_bc.png}\n    \\caption{10 degree cone, 20 degrees angle of attack and Mach 2. Magnetic field is imposed along the cone's axis. Cross flow magnetic field and pressure are shown.}\n    \\label{fig:ConeAlignBcP}\n\\end{figure}\n\n\n\nThough the behaviors demonstrated so far are consistent with past work on the subject, there are some artifacts which likely do not belong. Clearly visible in Figure \\ref{fig:MHD_UP} is some unevenness of the pressure field outside the bow shock. This is visible in Figures \\ref{fig:90degVcP} and \\ref{fig:90degBcP} as well, though not as pronounced. This is believed to be an artifact of the inconsistency of the divergenceless constraint preventing desirable convergence from being achieved. This unevenness tended to occur the more perpendicular the magnetic field was to the free stream velocity, which is when the Lorentz force effects would be stronger. Despite this, the solutions produced did still exhibit behaviors consistent with MHD theory, which demonstrates the validity of the overall method, that is, the discrete covariant derivatives and the solution algorithm.\n\n \n\\section{Conclusion}\n\nA numerical scheme has been developed which solves the conical Euler and MHD equations. 
This method relies heavily on the discrete Christoffel symbols which account for the curvature of the manifold in differential expressions. The discretization of the conical equations transforms in the appropriate tensorial manner and is exactly satisfied by a broad class of known solutions. A standard Newton's method was used to solve the system of nonlinear equations and produced results that were consistent with theory and prior numerical and experimental work. Better convergence is desired for the MHD case, but will likely require a more involved discretization in order to be achieved.\n\n\n\n\n\n\\section*{Acknowledgement}\n\nThis research was supported in part by an appointment to the Student Research Participation Program at the U.S. Air Force Institute of Technology administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and USAFIT.\n\n\n\n\\section*{References}\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction } \n\nIn their seminal paper \\cite{HL} Harvey and Lawson defined the notion of \ncalibrations. Let $M$ be a Riemannian manifold and $\\varphi$ be a closed\nk-form. We say that $\\varphi$ is a calibration if for any k-dimensional\nplane $\\kappa$ in the tangent bundle of $M$, we have \n\\( \\varphi|_{\\kappa} \\leq {vol({\\kappa})} \\).\nWe call a k-dimensional submanifold $L$ a calibrated submanifold if \n\\( \\varphi|_{L} = vol(L) \\). \nIt is easy to see that calibrated submanifolds minimize volume in their\nhomology class and thus provide examples of minimal varieties. \nFor a thorough discussion and numerous examples we refer the reader to\n\\cite{Hv},\\cite{HL},\\cite{Mc}.\n\nIn this paper we study the geometry of Calibrated submanifolds and investigate\nrelations between their moduli-space and geometry of the ambient manifold. 
The\npaper is organized as follows:\n\nIn section 2 we prove several comparison theorems for the volume of small balls in a calibrated submanifold of a Riemannian manifold $M$, whose\nsectional curvature is bounded from above by some $K$. Let $L$ be a minimal submanifold in $M$, $p \\in L$ a point and $B(p,r)$ a ball of radius $r$ around $p$\nin $M$. Then there are a number of results on comparison between the volume of\n$ L \\bigcap B(p,r)$ and the volume of a ball of radius $r$ in a space form of \nconstant curvature $K$ (see \\cite{And}, \\cite{Kim}, \\cite{MY}).\nOur main result in section 2 is Theorem\n2.0.2, which states that if $L$ is calibrated then the volume of a ball of \nradius $r$ in the induced metric on $L$ (which is smaller than $L \\bigcap \nB(p,r)$)\nis greater than the volume of a\nball of the same radius in a space form of constant curvature $K$ for $r \\leq r_0$ with $r_0$ depending only on the ambient manifold $M$.\nAs a corollary we deduce that there is an upper bound on the \ndiameter of a calibrated submanifold in a given homology class.\n\nIn section 3 we investigate Special Lagrangian Geometry on a Calabi-Yau\nmanifold. In section 3.1 we define Special Lagrangian submanifolds for any\nchoice of Kahler metric on a Calabi-Yau manifold\nand give basic facts pertaining to Special Lagrangian (SLag) Geometry. We will also prove a result about finite group actions on Calabi-Yau manifolds\nand construct several new examples of SLag submanifolds.\n\nIn section 3.2 we study connections between the moduli-space of Special Lagrangian submanifolds and the global geometry of the ambient Calabi-Yau manifold. \nWe will be \ninterested in submanifolds which satisfy condition $\\star$ on their cohomology ring (defined in section 3.2). In particular, tori satisfy this condition.\nWe state two\nconditions on the ambient manifold, for each of which the moduli-space is not\ncompact. 
\nThese conditions hold in many examples, and so we obtain a non-compactness theorem for the moduli-space.\n\nIn section 3.3 we use the results of the two previous sections to investigate\na Borcea-Voisin threefold in detail. We find a \nKahler metric on it for which we can completely characterize singular \nSpecial Lagrangian submanifolds \n(they will be a product of a circle with a cusp curve).\nMoreover, SLag submanifolds don't intersect and the compactified moduli-space \nfills the whole Calabi-Yau manifold, i.e. the Borcea-Voisin 3-fold fibers with \ngeneric fiber being a Special Lagrangian torus. We also construct a mirror\nto this fibration. \nThus the SYZ conjecture (see \\cite{SYZ}) holds in this example. \n\nIn section 3.4 we will examine holomorphic functions on a Calabi-Yau\nmanifold in a neighbourhood of a Special Lagrangian submanifold. An \nimmediate consequence of the fact that SLag submanifolds \nare 'Special'\nis Theorem 3.4.1, which states that the integral of a\nholomorphic function over SLag submanifolds is a constant\nfunction on their moduli-space. This will give a restriction on how a\nfamily of SLag submanifolds might approach a singularity (Corollary\n3.4.1) and also will give a restriction on SLag submanifolds asymptotic\nto a cone in $\\mathbb{C}^n$ (Theorem 3.4.2).\n\nIn section 4 we study coassociative submanifolds on $G_2$ manifolds. We extend\nthe coassociative condition to any choice of a closed (but not necessarily\nco-closed) $G_2$ form. Deformations of coassociative submanifolds will still be\nunobstructed and the moduli-space is smooth of dimension $b_2^+(L)$, where $L$\nis a coassociative submanifold. We will show that one example of a $G_2$\nmanifold constructed by Joyce in \\cite{Joy} is a fibration with generic fiber\nbeing a coassociative 4-torus. We also construct a mirror to this fibration.\n\nThere are a number of natural questions that arise from this paper. 
One is to\ngive a systematic way to construct fibrations on resolutions of torus quotients\n(both for SLag and coassociative geometry). Another point is that we produced\nthose fibrations for certain special choices of structures on the ambient manifold (a certain choice\nof Kahler metric or a certain closed $G_2$ form). We would like to get\nfibrations for any other isotopic structure. If we have a 1-parameter family of\nstructures then we obtain a 1-parameter family of moduli-spaces $\\Phi_t$. \nSuppose that $\\Phi_0$ compactifies to a fibration of the ambient manifold. We\nconjecture that so does each $\\Phi_t$ (both for SLag and coassociative\ngeometries). This in particular would imply the existence of a SLag fibration for\nthe Calabi-Yau metric on a Borcea-Voisin threefold and of a coassociative fibration for a parallel $G_2$ structure.\nWe hope to address those issues in a future paper.\n\n{\\bf Acknowledgments} : This paper is written towards the author's Ph.D. at\nthe\nMassachusetts Institute of Technology. The author wants to thank his\nadvisor, Tom Mrowka, for initiating him into the subject and for\nconstant encouragement. He is also grateful to Gang Tian for a\nnumber of useful conversations. Special thanks go to Grisha Mikhalkin for\nexplaining the Viro construction in Real Algebraic Geometry, which was\nused to construct examples of real quintics.\n\n\\section {Volume Comparison for Calibrated Submanifolds } \n If a Riemannian manifold $M$ has an upper bound $K$ on its sectional\ncurvature then the volume of a sufficiently small ball in $M$ is greater\nthan the volume of a ball of the same radius in a space form of curvature\n$K$. It turns out that this holds more generally for calibrated\nsubmanifolds of $M$. Namely we have the following theorem:\n\\begin{thm}\n: Let $\\varphi$ be a calibrating k-form on an ambient manifold $M$ and $L$ be\na \ncalibrated submanifold. Let the sectional curvature of $M$ be bounded\nfrom above by $K$. 
Let \n\\(r \\leq { min(injrad(M),\\frac{\\pi}{\\sqrt{K}}) } \\). \nLet $p \\in L$ and $B(p,r)$ be a ball of radius $r$ around $p$ in $M$ and \\(B^K(r)\\) be a\nball of radius $r$ in a k-dimensional space of constant sectional \ncurvature $K$. Then \n\\[ vol(L\\bigcap{B(p,r)}) \\geq vol(B^K(r)) \\]\n\\end{thm}\nRemark : If $\\varphi$ is a volume form on $M$, then this is Gunther's\nvolume comparison theorem.\\\\\n{\\bf Proof of Theorem 2.0.1}: The proof is based on the\nfollowing Lemma, which is\na counterpart to the Rauch comparison theorem:\n\\begin{lem}\n: Let $M$ be a (complete) Riemannian manifold whose sectional\ncurvature is bounded from above by $K$ and $\\gamma$ : \n\\( [0,t] \\mapsto M \\) be a unit speed geodesic. Let $Y$ be a Jacobi\nfield along \\( \\gamma \\) which vanishes at 0, orthogonal to \n\\( \\gamma ' \\) and \\( t \\leq \\frac {\\pi}{\\sqrt{K}} \\). \nThen its length \\( |Y(\\theta)| \\) satisfies the following differential \ninequality\n\\( {|Y|''} + K|Y| \\geq 0\\). \\\\\nMoreover, if a function \\( \\Psi \\) is a solution to \n\\( \\Psi '' + K \\cdot \\Psi = 0 \\) , \\( \\Psi(0) = 0 \\)\nand \\( \\Psi(t) = |Y(t)| \\) then \\[ \\Psi (\\theta) \\geq |Y(\\theta )| \\]\nfor \\( 0 \\leq \\theta \\leq t \\) \n\\end{lem}\n{\\bf Proof }:\\\\\nFirst, the condition on $t$ means that $Y$ doesn't vanish on \\( (0,t] \\) by the Rauch\ncomparison theorem.\nWe have \n\\(|Y| = \\sqrt{\\langle {Y,Y} \\rangle } \\), \\(|Y|' = \\frac{\\langle\n\\nabla_{t} Y,Y \\rangle }{|Y|} \\), \n\\[|Y|''= \\frac{| \\nabla_{t} Y|^{2} - \\langle Y,R(\\gamma ' ,Y) \\gamma '\n\\rangle } {|Y|} - \\frac{\\langle \\nabla_{t} Y, Y \\rangle ^{2} }{|Y|^3} \n\\geq \\frac{ | \\nabla_{t} Y|^{2} |Y|^{2} - \\langle \\nabla_{t} Y ,Y\n\\rangle ^{2} }{|Y|^{3} } - K|Y| \n\\geq - K|Y|\\]\nby the Cauchy-Schwarz inequality. Here $R$ is a curvature operator, \n\\( \\gamma ' \\) is a (unit length) tangent field to geodesic \\( \\gamma \\). 
\nSince $Y$ is orthogonal to \\( \\gamma ' \\), \n\\( \\frac {\\langle R( \\gamma ' , Y) \\gamma ', Y \\rangle} {|Y|^{2} } \\) is \nthe sectional curvature of a plane through $Y$ and\n\\( \\gamma ' \\), which is at most $K$.\\\\\nFor the second claim consider \\( F = \\frac{|Y|}{\\Psi} \\). \n$\\Psi$ is\npositive on the interval \\( (0,t] \\) and hence $F$ is well defined on that\ninterval.\nAlso \\( F ' = \\frac{|Y| ' \\Psi - \\Psi ' |Y| }{\\Psi ^{2} } \\).\\\\\nConsider \\( G= |Y| ' \\Psi - \\Psi ' |Y| \\). \\( G(0) = 0 \\) , \n\\( G ' = |Y| '' \\Psi - \\Psi '' |Y| \\geq 0 \\).\\\\ \nSo \\( G \\geq 0 \\) , i.e. \\( F ' \\geq 0 \\) . Now \\( F( t) = 1 \\) , so \n\\( F \\leq 1\\) i.e. \\( |Y| \\leq \\Psi \\) Q.E.D.\\\\\n\n\\noindent\nNow we can prove {\\bf Theorem 2.0.1}:\\\\\nLet $d_p$ be the distance function to $p$ on $M$. Then for an open dense\nset of full measure of values $t$, $t$ is a regular value of $d_p$\nrestricted to $L$. Let now\n\\[ f(t)= vol(L \\bigcap B(p,t) ) \\hbox{ and } g(t) = \\int_{L \\bigcap B(p,t)}\n|\\nabla_{L} d_{p}| \\] \nWe also consider an analogous situation on \\( \\overline{L} \\) - a space\nform of constant curvature $K$. Then \\( \\overline{f} = \\overline{g} \\)\nbecause \\( |\\nabla \\overline{d_p} | = 1 \\) on \\( \\overline{L} \\).\\\\\nFor $t$ a regular value as above we have by the co-area formula :\n\\begin{equation} \n f ' (t) \\geq g ' (t) ~,~\n g ' (t) = vol(S_{t})\n\\end{equation}\nand \n\\begin{equation} \n\\overline{f} ' = vol( \\overline{S_t}),\n\\end{equation}\nhere \\( S_{t} = d_{p}^{-1}(t) \\bigcap L \\).\\\\\nConsider now a map $\\xi$ : \\( S_{t} \\times [0,t] \\mapsto M \\),\n \\( \\xi ( a, \\theta ) = exp( \\frac{\\theta}{t} exp^{-1}(a) ) \\), here\n\\( a \\in S_{t} \\) , \\(\\theta \\in [0,t] \\).\nThen \\( vol(\\xi(S_{t} \\times [0,t] )) \\geq f(t) \\).\\\\ \nIndeed let $\\rho$ be a $k-1$ form on \\( B(p,r) \\) s.t. 
\\( d \\rho = \\varphi\n\\) (such $\\rho$ exists by the Poincare Lemma).\nThen by the calibrating condition \\[ vol(\\xi(S_{t} \\times [0,t] )) \\geq \\int_{S_{t} \\times [0,t]} \n\\xi^{\\ast} \\varphi = \\int_{S_{t}} \\rho = \\int_{B(p,t) \\bigcap L} \\varphi = \nf(t) \\]\nAlso on \\( \\overline{L} \\) we have \\( \\overline{f(t)} =\nvol(\\xi(\\overline{S_{t}} \\times [0,t] )) \\).\\\\\nWe need to estimate \\(h(t)= vol(\\xi(S_{t} \\times [0,t] )) \\). Let $g '$ be\nthe product metric on \\( S_{t} \\times [0,t] \\). Then \\( h(t) = \\int_{S_{t} \n\\times [0,t]} Jac(d \\xi) dg ' \\).\\\\\nTo estimate \\( Jac(d \\xi) \\) at point \\( (a , \\theta ) \\) we take an o.n.\nbasis \\( v_{1} \\ldots v_{k-1} \\) to $S_t$ at $a$. Then \\( d \\xi (v_{i}) \\)\nis a value of a Jacobi field along a unit speed geodesic \\( (exp(s \\cdot\n\\frac{exp^{-1}(a)}{t}) | s \\in [0,\\infty)) \\) at \\( s= \\theta \\) which is\northogonal to this geodesic, vanishes at $0$ and whose length is $1$ at\n\\( s=t \\).\\\\\nLet $F_{t}(\\theta)$ solve \\( F_{t} '' + K \\cdot F_{t} = 0 \\), \\(\nF_{t}(0)=0 \\), \\(F_{t}(t)=1\\).\\\\\nBy Lemma 2.0.1 we have \\(|d \\xi(v_{i})| \\leq F_{t}(\\theta) \\), so \n\\begin{equation}\nJac(d \\xi) \\leq (F_{t}(\\theta))^{k-1} \n\\end{equation}\nWe can consider an analogous situation on \\( \\overline{L} \\) and in that\ncase we have an equality \\( Jac(d \\overline{\\xi}) = (F_{t}(\\theta))^{k-1}\\).\nSo \n\\[\\overline{f(t)} = \\int_{\\overline{S_t} \\times [0,t]}\n(F_{t}(\\theta))^{k-1} = vol(\\overline{S_t}) \\cdot \\int_{[0,t]}\n(F_{t}(\\theta))^{k-1} d \\theta = \n(by (2)) = \\overline{f} '\n(t) \\cdot \\alpha (t) \\]\n(here \\( \\alpha (t)= \\int_{[0,t]} (F_{t}(\\theta))^{k-1} d \\theta \\)).\\\\\nReturning now to our calibrated submanifold we deduce from (3)\nand (1) that\n\\(f(t) \\leq f ' (t) \\cdot \\alpha (t) \\).\\\\\nSo \\( \\frac{f ' (t)}{f(t)} \\geq \\frac{\\overline{f} ' (t)}{\\overline{f} \n(t)} \\) , i.e. 
\\( ln(f) ' \\geq (ln( \\overline{f}) - \\epsilon) ' \\) for\nany \n\\( \\epsilon > 0 \\). Having $\\epsilon$ fixed we can choose $t_0$ small\nenough s.t. \\( lnf(t_{0}) \\geq ln \\overline{f} (t_{0}) - \\epsilon \\).\\\\\nNow \\(lnf(\\theta ) \\) is defined for a.e. $\\theta$ and is an increasing \nfunction of $\\theta$, so\n\\[lnf(t) \\geq lnf(t_{0}) + \\int_{[t_{0},t]} lnf ' \\geq ln \\overline{f}\n(t_{0})- \\epsilon + \\int_{[t_{0},t]} (ln \\overline{f}) ' = ln\n\\overline{f} (t) - \\epsilon \\]\nNow $\\epsilon$ was arbitrary, hence \\( lnf(t) \\geq ln \\overline{f} (t) \\)\ni.e. \\( f(t) \\geq \\overline{f} (t) \\) Q.E.D.\\\\\n\n\\noindent \nWe wish to discuss the compactification of some moduli-space of\ncalibrated submanifolds in a given homology class.\nIf we have a moduli-space $\\Phi$ we can view it as a subspace of\nthe space of rectifiable currents. k-dimensional currents have a mass\nnorm ${\\bf M}$ and a flat norm ${\\cal F}$ (see \\cite{Mor}, p.42) \n\\[{\\bf M}(L)= sup(\\int_{L}\\eta | \\eta \\hbox{ a k-form}, \n\\forall p \\in M :|\\eta(p)| \\leq 1 ) \\]\n\\[{\\cal F}(L) = inf({\\bf M}(A) + {\\bf M}(B) | L=A+ \\partial B) \\] \nSince all the\nsubmanifolds in $\\Phi$ are closed and have the same volume, then by the\nFundamental compactness theorem (theorem 5.5 in \\cite{Mor}) we have that the\nclosure \\( \\overline {\\Phi} \\) of $\\Phi$ in the flat topology is compact.\\\\\nAlso for compact subsets of $M$ there is a Gromov-Hausdorff distance\nfunction\n\\(d^{GH}\\), where \\[ d^{GH}(K,N)= sup_{p \\in K } inf_{q \\in N } d(p,q)\n\\]\nUsing Theorem 2.0.1 we get \n\\begin{cor}\n: There is a constant \\(C=C(M,{\\varphi}) \\) s.t. for \n\\( K,N \\in {\\Phi} \\) \nwe have \\(d^{GH}(K,N) \\leq { C \\cdot ({\\cal F}(K-N))^{\\frac{1}{k+1} } } \\).\n\\end{cor}\n{\\bf Proof } : Suppose \\( d^{GH}(K,N) = r \\). Then we have a \npoint \\(p \\in K \\) s.t. 
\\(d(p,N) =r \\).\\\\\nIt is easy to construct a nonnegative function $f$ supported in a ball \n$B(p,r)$ which is equal to $1$ on a ball $B(p,r\/2)$ and s.t. \\( |\\nabla (f)| \n\\leq \\frac{const}{r} \\).\\\\\nSuppose \\( K-N = A+ \\partial B \\). Obviously \\(K-N(f \\varphi) \\geq \nvol(B(p,r\/2) \\bigcap K) \\)\\\\\n\\(\\geq const \\cdot r^{k} \\) by theorem 2.0.1\\\\\nAlso \\(K-N(f \\varphi) = A(f \\varphi) + B(df \\wedge \\varphi) \\leq {\\bf M}(A)+\n\\frac{const \\cdot {\\bf M}(B)}{r} \\leq \\frac{const \\cdot ({\\bf M}(A)+{\\bf M}\n(B))}{r}\\).\\\\\nSo taking the infimum we get \\[ const \\cdot r^{k} \\leq \\frac{{\\cal F}(K-N)}\n{r} \\]\nwhich is the statement of the Corollary. Q.E.D. \n\\\\\n\\noindent\nFrom that we get an immediate corollary \n\\begin{cor}\n: If a sequence of submanifolds $L_i$ in $\\Phi$ converges to\na current $L$, then it converges to the support of $L$ in the Gromov-Hausdorff\ntopology.\n\\end{cor}\nWe now come to the main result of this section.\nWe wish to strengthen Theorem 2.0.1 by an analogous result for the volume of\nballs of radius $r$ in the induced metric on calibrated submanifolds \n(which are smaller than the balls we considered before). We have the\nfollowing\n\\begin{thm}\n: Let $M$,$\\varphi$,$p$,$L$,$K$ and \\(B^{K}(r)\\) be as\nin Theorem 2.0.1\nand let \\(d_L\\) be \nthe distance function to $p$ on $L$ in the induced metric on $L$.\nThen for \\( r \\leq min(injrad(M),R(K)) \\) we have :\n\\[vol ( x \\in{L}|d_{L}(x) \\leq{r}) \\geq vol(B_{K}(r)) \\] and \\(R(K)=\\pi\/ \n\\sqrt{K} \\) for $K$ positive.\n\\end{thm} \n\\begin{cor}:\nLet $M, \\varphi$ be as before. Then there is an a priori bound on the diameter of calibrated submanifolds in a given homology class $\\eta$.\n\\end{cor}\n{\\bf Proof of the Corollary}: Choose some $r$ satisfying the conditions of Theorem\n2.0.2. Let $L$ be some\ncalibrated submanifold\nin a homology class $\\eta$. Let $\\Gamma$ be a maximal covering of $L$ by\ndisjoint balls of radius $r$. 
Since by Theorem 2.0.2 each such ball has\na volume at least $\\epsilon$ and the volume of $L$ is \\( v= [ \\varphi ] \n(\\eta) \\), then such a covering exists and the number of elements in\n$\\Gamma$ is at most \\( N = \\frac{v}{\\epsilon} \\). Now every point in $L$\nis contained in one of the balls of radius $2r$ with the same centers as \nballs in $\\Gamma$.\\\\\nSo it is easy to deduce that the diameter of $L$ is at most \\( 4rN \\).\n Q.E.D. \\\\\n\n\\noindent\n{\\bf Proof of Theorem 2.0.2}: We wish to use the same argument as in \nthe proof of\nTheorem 2.0.1 for the distance function $d_L$. The problem is that $d_L$ is\nnot a smooth function in the $r$-neighbourhood of $p$. But we can\nstill smooth it using the following technical Lemma:\n\\begin{lem}:\nLet $L$ be a submanifold, \\( p \\in L \\) and $d_L$\nas before. \nWe can pick \\( \\rho > 0 \\) and a \\( (C^{\\infty}) \\) function\n\\( 0 \\leq \\nu \\leq 1 \\) on \\( [0, \\infty ) \\) which is $0$ on \\( (0, \\rho]\n\\), $1$ on \\( [2 \\rho, \\infty ) \\) and nondecreasing s.t.\nfor any positive $\\epsilon$ there is a function \\( \\lambda _{\\epsilon}\n\\) on $L$ which satisfies :\n\n1) \\( \\lambda_{\\epsilon} \\) is \\( C^{\\infty} \\) outside of $p$ \n\n2) \\( d_{L} \\leq \\lambda_{\\epsilon} \\leq d_{L}(1+ \\epsilon ) \\)\n\n3) \\( | \\nabla \\lambda_{\\epsilon}| \\leq 1+ \\nu(d_{L}) \\epsilon \\) \n\\end{lem}\n{ \\bf Proof}: Pick a positive \\( \\rho << injrad(L) \\). Choose a function\n$\\kappa$ on $M$ s.t. \\( \\kappa=1\\) on \\( B(p, 2 \\rho ) \\) and \\( \\kappa =\n0 \\) outside of \\(B(p,3 \\rho) \\).\nChoose a nonnegative radially symmetric function $\\sigma$ on $\\mathbb{R}^k$ \nwith\nsupport in the unit ball which integrates to 1 and let \\( \\sigma_{n}(x)=\nn^{k} \\cdot \\sigma(nx) \\). 
Then \\( \\sigma_{n} \\) also integrates to 1.\\\\\nChoose a nonnegative function \\( \\eta \\leq 1 \\) , \\( \\eta = 0 \\) on \\(B(p,\n\\frac{5 \\rho}{4} ) \\) and \\( \\eta = 1 \\) outside of \\( B(p, \\frac{3 \\rho}\n{2} ) \\).\n\nDefine now \\( \\mu^{n} : L \\mapsto \\mathbb{R} \\) , \\[ \\mu^{n}(q)= \\int_{T_{q}L}\nd_{L}(exp(\\theta))\\sigma_{n}(\\theta) d \\theta \\]\nHere \\(T_{q}L \\) is the tangent space to $L$ at $q$. Since \\( \\sigma_n\n\\) is a radially symmetric function and \\( T_{q}L\\) has a metric, the\nexpression \\( \\sigma_{n}(\\theta) \\) is well defined and also integration\ntakes place only over a ball of radius \\( \\frac{1}{n} \\) in \\( T_{q}L \\).\\\\ \nAlso it is clear that \\[\\mu^{n}= d_{L} + o(\\frac{1}{n}) \\] \nThe point is that for large $n$, \\(\\mu^n\\) is a smooth function on $L$.\nIndeed let us denote by \\( J(a,b) \\) the Jacobian of the exponential map from\n$a$ that hits $b$ for $a,b$ points in $L$ that are close enough. Then\n\\(J(a,b) \\) is a smooth function of \\( (a,b) \\) and we can rewrite\n\\[ \\mu^{n}(q) = \\int_{L} J(q,b)^{-1} \\cdot d_{L}(b) \\cdot \\sigma_{n}\n(exp_{b}^{-1}(q)) db \\] \nand it is clear from this definition that\n\\( \\mu^n \\) is a smooth function of $q$ for $n$ large enough.\\\\\nAlso one can easily prove that \\( |\\mu^{n}(q_{1}) - \\mu^{n}(q_{2}) | \\leq \nd(q_{1},q_{2}) \\cdot (1 + o(\\frac{1}{n})) \\), hence \\[|\\nabla \\mu^{n} |\n\\leq 1 + o(\\frac{1}{n}) \\]\nNow pick \\( \\epsilon > 0 \\). Define \n\\( \\lambda_{\\epsilon}^{n} = (1 + \\eta \\epsilon)(\\kappa \\cdot d_{L} + (1-\n\\kappa) \\cdot \\mu^{n}) \\).\\\\\nThen \\( \\lambda_{\\epsilon}^{n} = d_L\\) on \\(B(p,\\frac{3 \\rho}{2}) \\) and\nit is smooth outside of $p$. \\\\\nOne can also directly verify that we can choose a constant $C$ s.t. for\nsufficiently large $n$, the function \\( \\lambda_{\\epsilon} =\n\\lambda_{\\frac{\\epsilon}{C}}^{n} \\)\nsatisfies properties 2) and 3) as desired. 
Q.E.D.\\\\\n\n\\noindent \nNow we can prove {\\bf Theorem 2.0.2}: We will use the fact that the function \\(\\alpha(t) \\), defined in the proof of Theorem 2.0.1, is an increasing\nfunction of $t$ for \\( 0 \\leq t \\leq \\frac{\\pi}{\\sqrt{K}} \\) for $K$ positive and\nfor \\( 0 \\leq t \\leq R(K) \\) for $K$ negative.\n\nPick $\\rho$ as in Lemma 2.0.2. \nLet \\( \\epsilon >0 \\). We will follow the lines of the proof of Theorem 2.0.1\nfor the function \\( \\lambda_{\\epsilon} \\) instead of the distance\nfunction.\nWe denote by \\[ f(t)= vol(\\lambda_{\\epsilon}^{-1}([0,t]))~,~\n S_{t} = \\lambda_{\\epsilon}^{-1}(t) \\] \nThen conditions on \\( \\lambda_{\\epsilon} \\) and the co-area formula imply\nthat for a regular value $t$ we have \\( f'(t) \\geq \\frac{vol(S_{t})}\n{1+ {\\epsilon} {\\nu(t)}} \\).\\\\\nAlso we can consider \\(A_{t} = ((a, \\theta)| a \\in S_{t} , 0 \\leq \\theta\n\\leq d_{p}(a) ) \\) (here $d_p$ is the distance to $p$ in the ambient\nmanifold).\nWe have \\( \\xi : A_{t} \\mapsto M \\), \\(\\xi(a, \\theta) =\nexp_{M}(\\frac{\\theta \\cdot exp^{-1}(a) }{d_{p}(a)}) \\).\\\\\nAs before we will have \\( f(t) \\leq vol(\\xi(A_{t})) \\) and \n\\( Jac(d \\xi) \\leq (F_{d_{p}(a)}(\\theta))^{k-1} \\) (see (3), we have the same\nnotations as in Theorem 2.0.1). The estimate for the Jacobian is true for the following reason: Let $v_1, \\ldots, v_{k-1}$ be an o.n. basis to $S_t$ at $a$. Then only the normal component of $d\\xi(v_i)$ to the geodesic contributes to \n$Jac(d\\xi)$. 
The normal component can be estimated by Lemma 2.0.1.\n \nSo we will have \\( vol(\\xi(A_{t})) \\leq \\int_{S_t} \\alpha(d_{p}(a)) da\n\\leq vol(S_{t}) \\cdot \\alpha(t) \\) (here we used\nthe fact that $\\alpha$ is an increasing function and \\( d_{p}(a) \\leq\nd_{L}(a) \\leq \\lambda_{\\epsilon}(a) = t \\)).\\\\ \nCombining all this we get \\[(lnf)'(t) \\geq \\frac{(ln \\overline{f})'(t)}{1+\n\\epsilon \\nu(t) } = [(\\frac{ln \\overline{f}}{1+ \\epsilon \\nu })' +\n\\epsilon \\nu '\/(1+\\epsilon \\nu)^{2} \\cdot ln(\\overline{f})](t)\\]\nNow \\( \\nu(t) = 0 \\) for \\(t \\leq \\rho \\) and \\( \\nu'(t) = 0 \\) for \\( t\n\\geq 2 \\rho \\) and \\( ln(\\overline{f}) \\geq -C \\) for \\( 2\\rho \\geq t \\geq \\rho\n \\).\nSo \\[ (lnf)' \\geq (\\frac{ln \\overline{f}}{1+ \\epsilon \\nu})' - \\epsilon C' \\]\ni.e. \\( (lnf+ \\epsilon C' t)' \\geq (\\frac{ln \\overline{f}}{1+ \\epsilon\n\\nu})' \\) \\\\\nand for $\\theta$ small we have \n \\( ln f(\\theta) + \\epsilon C' \\theta \\geq ln(\\overline{f}(\\theta)) =\n\\frac{ln \\overline{f}(\\theta)}{1+ \\epsilon \\nu(\\theta)} \\).\\\\\nSo \\(ln f + \\epsilon C't \\geq ln \\overline{f}\/(1+ \\epsilon \\nu) \\).\\\\\nHere $\\epsilon$ was arbitrary and we are done. Q.E.D. \n\n\\section{Special Lagrangian geometry on a Calabi-Yau manifold}\n\\subsection{ Basic properties and examples}\nLet $M^{2n}$ be a Calabi-Yau manifold, $\\varphi$ a holomorphic volume\nform and $\\omega$ a Kahler 2-form. \nIf $\\omega$ is a Calabi-Yau form then \\(Re({\\varphi}) \\) \nis a calibration (see \\cite {Mc})\nand a calibrated submanifold $L$ can be characterized by\nthe alternative conditions : \\( \\omega |_{L} = 0 \\) and \\( Im({\\varphi})|_{L}\n= 0 \\).\nFor an arbitrary Kahler form $\\omega$ we can define special Lagrangian (SLag) submanifolds by\nthese two conditions. 
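For orientation, both defining conditions can be checked by hand in the flat model case; the following is a standard illustration (not one of the examples considered in this paper):

```latex
% Model case: $\mathbb{C}^n$ with coordinates $z_j = x_j + i y_j$,
%   \omega  = \sum_j dx_j \wedge dy_j , \qquad
%   \varphi = dz_1 \wedge \dots \wedge dz_n .
% For the real locus $L = \mathbb{R}^n = \{ y_1 = \dots = y_n = 0 \}$:
\begin{align*}
  \omega|_L  &= \sum_j dx_j \wedge dy_j \,\Big|_L = 0 , \\
  \varphi|_L &= (dx_1 + i\,dy_1) \wedge \dots \wedge (dx_n + i\,dy_n)\,\Big|_L
              = dx_1 \wedge \dots \wedge dx_n ,
\end{align*}
% so $Im(\varphi)|_L = 0$ and $Re(\varphi)|_L$ is the volume form of $L$:
% the real locus is special Lagrangian.
```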
The form $\\varphi$ has length $f$ with respect to the metric $\\omega$ (here $f$ is a positive function on $M$).\nWe can conformally change the metric so that the form\n\\( \\varphi \\) will have length $\\sqrt{2}^n$ with respect to the new metric\n$g'$. \nThen SLag submanifolds will be calibrated by \\( Re \\varphi \\) with respect to \n$g'$. \n\\begin{lem}\nLet $L^n$ be a compact connected $n$-dimensional manifold. Then the moduli-space of SLag embeddings of $L$ into $M$ is a smooth manifold of dimension $b_1(L)$. \n\\end{lem}\n{\\bf Proof:} The proof is a slight modification of McLean's proof for a Calabi-Yau metric (see \\cite{Mc}).\n\nLet $i:L \\mapsto M$ be a (smooth) SLag embedding of $L$ into $M$. Locally the moduli-space $\\Gamma$ of $C^{2,\\alpha}$-embeddings of $L$ into $M$ (modulo the diffeomorphisms of $L$) can be identified with the $C^{2,\\alpha}$ sections of the normal bundle of $i(L)$ to $M$ via the exponential map. Also the normal bundle is naturally isomorphic to the cotangent bundle of $L$ via the map $v \\mapsto i_v \\omega$. Hence the tangent bundle to $\\Gamma$ can be identified with $C^{2,\\alpha}$ 1-forms on $L$. Let $V_k$ be the vector space of exact $C^{1,\\alpha}$ $k$-forms on $L$ and let $V=V_2 \\oplus V_n$. There is locally a map $\\sigma: \\Gamma \\mapsto V$, given at an embedding $j(L) \\in \\Gamma$ by $(j^{\\ast}(\\omega),j^{\\ast}(Im\\varphi))$. The moduli-space $\\Phi$ of SLag embeddings is just the zero set of $\\sigma$. Now the differential of $\\sigma$ at $i(L)$ in the direction of $\\alpha$ (where $\\alpha$ is a $C^{2,\\alpha}$ 1-form on $L$ as above) is\n\\[ (d\\alpha,d (f \\ast \\alpha)) \\]\nwhere $f$ is the length of $\\varphi$ in the metric defined by $\\omega$. For\n$\\omega$ a Calabi-Yau metric $f$ is constant. We claim that the differential is surjective and the tangent space to $\\Phi$ is naturally isomorphic to the\nfirst cohomology $H^1(L,\\mathbb{R})$. 
To prove this consider first an operator $P$ from the space of $C^{3,\alpha}$ functions on $L$ to $C^{1,\alpha}$ $n$-forms on $L$, $P(h)=d (f\ast dh)$. We claim that $P$ is surjective onto the space of exact $n$-forms and the kernel of $P$ is the space of constant functions on $L$. Since $f$ is non-vanishing, $P$ is elliptic. So to prove the surjectivity it is enough to show that the co-kernel of $P$ consists of constant multiples of the volume form on $L$. Let $\mu$ be in the co-kernel of $P$. Let $h=\ast \mu$. One easily computes that \[\int_{L}Ph \cdot \mu= \pm \int_{L} f|d^{\ast}(\mu)|^2 \]
So $d^{\ast}(\mu)=0$, hence $\mu$ is a constant multiple of the volume form on $L$. Let now $h$ be in the kernel of $P$. Then arguing as before we get that $\mu=\ast h$ is a constant multiple of the volume form on $L$, i.e. $h$ is a constant.

Now we can prove the lemma. First we prove that $d\sigma$ is surjective. Let $\alpha$ be an exact 2-form on $L$, and $\beta$ be an exact $n$-form on $L$. We need to find a 1-form $\gamma$ on $L$ s.t. \[d\gamma=\alpha ~ , ~ d(f \ast \gamma)=\beta \]
Since $\alpha$ is exact there is a 1-form $\gamma'$ s.t. $d\gamma'=\alpha$. We are looking for $\gamma$ of the form $\gamma=\gamma'+dh$ for a function $h$. Since the operator $P$ is surjective onto the space of exact $n$-forms on $L$, we get that such an $h$ exists, so $d\sigma$ is surjective, hence $\Phi$ is smooth. Next we prove that $dim(\Phi)=b_1(L)$. Let $W=ker(d\sigma)$. $W$ is the tangent space to $\Phi$ at $i(L)$. Since $W$ is represented by closed 1-forms on $L$, there is a natural map $\xi:W \mapsto H^1(L,\mathbb{R})$. We claim that this map is an isomorphism. Indeed let $a \in H^1(L,\mathbb{R})$ and let $\gamma'$ be a closed 1-form on $L$ representing the class $a$. From the properties of the operator $P$ it is clear that there is a unique exact 1-form $\gamma''=dh$ s.t. $\gamma=\gamma'+\gamma''$ is in the kernel of $d\sigma$.
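The displayed identity for \( \int_L Ph \cdot \mu \) comes from integration by parts; a sketch of our reconstruction (the signs depend on orientation conventions):

```latex
% For a function h_0 and the (n-1)-form f * dh, Stokes' theorem on the
% closed manifold L gives
\[
\int_L h_0 \, d(f \ast dh)
 = \int_L d\big(h_0 \, f \ast dh\big) - \int_L dh_0 \wedge f \ast dh
 = - \int_L f \, dh_0 \wedge \ast dh .
\]
% Taking h_0 = h = *mu and using d^* mu = \pm * d * mu = \pm * dh,
% so that |d^* mu|^2 = |dh|^2 and dh ^ *dh = |dh|^2 vol_L, we recover
\[
\int_L Ph \cdot \mu = \pm \int_L f \, |d^{\ast}\mu|^2 .
\]
```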
Hence $\xi$ is an isomorphism. Q.E.D.

Remark: A more general setup of deformations of SLag submanifolds in a symplectic manifold was considered by S. Salur in \cite{Sem}.

In all subsequent discussions the moduli-space will be connected (i.e. we take a connected component of the moduli-space of SLag embeddings of a given manifold $L$).

We can also define $\Phi'$ - the moduli-space of special Lagrangian embeddings of a given manifold $L$ into $M$ over Diff' (diffeomorphisms of $L$ which induce the identity map on the homology of $L$). Then \( \Phi' \) is a covering space of $\Phi$. Now any element $\alpha$ in the first homology of $L$ induces a 1-form \( h^{\alpha} \) on \( \Phi' \) in the following way: Let \( \xi \in \Phi' \) and let \( L_{\xi} \) be its support in $M$. If $v$ is a tangent vector to \( \Phi' \) at \( \xi \) then we can view $v$ as a closed 1-form on \( L_{\xi} \). From the definition of \( \Phi' \) it is clear that the element \( \alpha \) induces a well defined element in \( H_{1}(L_{\xi}) \), which we will also call \( \alpha \). So we define \( h^{\alpha}(v) = [v](\alpha) \). Hitchin effectively proved in \cite{Hit} that \( h^{\alpha} \) is a closed form on \( \Phi' \) (his notations are somewhat different from ours). Thus if we pick \( \alpha_{1}, \ldots, \alpha_{k} \), a basis for the first homology of $L$, then we have a frame of closed forms \( h^{1}, \ldots, h^{k} \) and correspondingly a dual frame of commuting vector fields \( h_{1}, \ldots, h_{k} \) on \( \Phi' \). Hence any compact connected component $\Gamma$ of $\Phi'$ must be a torus. Indeed the flow by the commuting vector fields $h_i$ induces a transitive $\mathbb{R}^k$ action on $\Gamma$ with stabilizer being a discrete subgroup $A$, hence $\Gamma$ is diffeomorphic to $\mathbb{R}^k/A$ - a $k$-torus.

Next we investigate finite group actions on Calabi-Yau manifolds.
Suppose that a group $G$ acts by structure preserving diffeomorphisms on $M$. We have the following
\begin{lem}
Suppose a SLag submanifold $L$ is invariant under the $G$ action and $G$ acts trivially on the first cohomology of $L$. Then $G$ leaves invariant every element in the moduli-space $\Phi$ through $L$.\\
Moreover, suppose that $g \in G$ and $x \in M-L$ is an isolated fixed point of $g$. Then $x$ cannot be contained in any element of $\Phi$.
\end{lem}
{\bf Proof:} Since $G$ is structure preserving, it sends SLag submanifolds to SLag submanifolds. Since it leaves $L$ invariant, it preserves $\Phi$ (which is the connected component of the moduli-space of SLag submanifolds containing $L$). From the identification of the tangent space of $\Phi$ at $L$ with $H^1(L,\mathbb{R})$ and the fact that $G$ acts trivially on $H^1(L,\mathbb{R})$ we deduce that $G$ acts trivially on the tangent space to $\Phi$ at $L$. Hence $G$ acts trivially on $\Phi$, i.e. it leaves each element of $\Phi$ invariant.

To prove the second statement, consider the set $S$ of those elements in $\Phi$ which contain $x$. Obviously $S$ is closed and doesn't contain $L$. We will prove that $S$ is also open; since $\Phi$ is connected and $L \notin S$, it will follow that $S$ is empty.

Let $L' \in S$. Any element $L''$ close to $L'$ can be viewed uniquely as an image $exp(v)$, where $v$ is a normal vector field to $L'$. Suppose $v(x) \neq 0$. Since $L''$ is $g$-invariant, $exp(g_{\ast}v(x))$ is also in $L''$, where $g_{\ast}$ is the differential of $g$ at $x$. Since $L'$ is $g$-invariant, $g_{\ast}$ preserves the tangent space to $L'$ at $x$, hence it preserves the normal space. Also, since $x$ is an isolated fixed point, $g_{\ast}$ has no nonzero invariant vectors. Hence $v(x) \neq g_{\ast}v(x)$ in the normal bundle.

Since the exponential map is a diffeomorphism from a small neighbourhood of the zero section of the normal bundle of $L'$ to $M$, we see that $exp(g_{\ast}v(x))$ is not in $L''$ - a contradiction. So $v(x)=0$, i.e.
$L'' \in S$. Q.E.D.

As for examples of special Lagrangian submanifolds, many come from the following setup: Let $M$ be a Calabi-Yau manifold and $\sigma$ an antiholomorphic involution. Suppose $\sigma$ reverses $\omega$. Then the fixed-point set of $\sigma$ is a special Lagrangian submanifold. For a Calabi-Yau metric $\omega$ the condition that $\sigma$ reverses $\omega$ is equivalent to $\sigma$ reversing the cohomology class $[\omega]$, which often can be easily verified. Indeed, suppose $\sigma$ reverses $[\omega]$. Then \( -\sigma^{\ast}(\omega)\) is easily seen to define a Kahler form, which lies in the same cohomology class as $\omega$, and the metric it induces is obviously equal to \(\sigma^{\ast}(g) \), i.e. it is Ricci-flat. Hence by Yau's fundamental result (see \cite{Yau}) we have \(-\sigma^{\ast}(\omega)= \omega \).

We wish to discuss two collections of such examples. In both cases $M$ is a projective manifold defined as the zero set of a collection of real polynomials. Then the conjugation of the projective space induces an anti-holomorphic involution which reverses the Fubini-Study Kahler form, hence it also reverses the Calabi-Yau form in the same cohomology class. The fixed point set is a submanifold of the real projective space.\\

\noindent
Our first example will be a complete intersection of hypersurfaces of degree 4 and 2 in \( \mathbb{C}P^5 \).\\
First we note that a 2-torus can be represented as a surface of degree 4 in $\mathbb{R}^3$. Indeed a torus can be viewed as a circle bundle over the circle \( ((x,y,0)|x^{2} + y^{2} = 1 ) \) in $\mathbb{R}^3$, where the fiber over a point \( a=(x,y,0) \) is the circle of radius \( \frac{1}{2} \) centered at $a$ and lying in the plane through \( a \), \( (0,0,1) \) and the origin.
If \( (x,y,z) \) is a point on our torus then its distance to the point \( (\frac{x}{\sqrt{x^{2} + y^{2}}}, \frac{y}{\sqrt{x^{2}+y^{2}}},0) \) is \( \frac{1}{2} \).\\
If we compute we get \( 1+x^{2} + y^{2}+z^{2} -2 \sqrt{x^{2}+y^{2}} = \frac{1}{4} \), i.e. (moving the square root to one side and squaring)
\( p(x,y,z) = ( \frac{3}{4} + x^{2} + y^{2}+z^{2})^{2} - 4(x^{2}+y^{2}) = 0 \).

So in inhomogeneous coordinates \( x_{1}, \ldots, x_{5} \) on \(\mathbb{R}P^5\) the zero locus of the two polynomials \(p(x_{1},x_{2},x_{3}) \) and \(q(x_{4},x_{5}) \) (where \( q(x,y)=x^{2}+y^{2}-1 \)) is a 3-torus.\\
If we consider the corresponding homogeneous polynomials on \(\mathbb{R}P^5\), then it is easy to see that there is no solution with \(x_{6}=0 \). So the zero locus of those polynomials in \(\mathbb{R}P^5\) is a 3-torus. If we perturb them slightly so that the corresponding complex 3-fold in \(\mathbb{C}P^5 \) is smooth, we obtain the desired example.\\

\noindent
Other examples are quintics with real coefficients in \(\mathbb{C}P^4\). In that case real quintics would be special Lagrangian submanifolds. R. Bryant constructed in \cite{Br} a real quintic which is a 3-torus.\\
We will construct, using Viro's technique in real algebraic geometry (see \cite{Vir}), real quintics $L_k$ which are diffeomorphic to the projective space \(\mathbb{R}P^3\) with $k$ 1-handles attached, for \( k = 0, 1, 2, 3\). If $k=3$ then \(b_{1}(L_{3})=3 \) and the cup product in the first cohomology of $L_3$ is $0$.

The construction goes as follows: First in inhomogeneous coordinates \( x_{1}, \ldots, x_{4} \) on \(\mathbb{R}P^4\) we consider the polynomial \(p \cdot q \), where \( p(x_{1}, \ldots, x_{4}) = x_{1}^{2} + \ldots + x_{4}^{2} - 1 \) and \( q = x_{4} \).
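Returning for a moment to the degree-4 torus equation derived above: it can be checked symbolically. A quick sketch (the parametrization below is our own, with center-circle radius 1 and tube radius 1/2; it is not part of the construction):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Torus of revolution: center circle of radius 1 in the (x,y)-plane,
# tube radius 1/2, as in the text.
x = (1 + sp.cos(phi) / 2) * sp.cos(theta)
y = (1 + sp.cos(phi) / 2) * sp.sin(theta)
z = sp.sin(phi) / 2

# The degree-4 polynomial from the text; it vanishes identically on the
# torus (symbolically p simplifies to 0).
p = (sp.Rational(3, 4) + x**2 + y**2 + z**2)**2 - 4 * (x**2 + y**2)

# Verify at a few sample parameter values (up to floating-point noise).
for t0, p0 in [(0.3, 1.1), (1.7, 2.9), (4.2, 0.5)]:
    assert abs(float(p.subs({theta: t0, phi: p0}))) < 1e-9
```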
In \(\mathbb{R}P^4\) the zero locus of the polynomial will be \( \mathbb{R}P^3 \bigcup S^3 \), where $\mathbb{R}P^3$ is the zero locus of $q$, $S^3$ is the zero locus of $p$, and $\mathbb{R}P^3$ intersects $S^3$ along a 2-sphere \( S^{2} \subset \mathbb{R}^3 = (x_4=0)\).

Now we consider \( f = pq -\epsilon h \), where $h$ is some polynomial of degree up to 5 and $\epsilon>0$ is small enough. The Hessian of $pq$ on $S^2$ is nondegenerate along the normal bundle to $S^2$ and vanishes along the axes of the normal bundle (the axes are the normal directions of $S^2$ in $S^3$ and in $\mathbb{R}P^3$). If we view those axes locally as coordinate axes then $pq= xy$ in those coordinates.

Suppose first $h$ is non-zero along $S^2$. We can assume that $h>0$ on $S^2$. The zero locus of $f=pq - \epsilon h$ will live in the two quadrants in which $xy>0$. Thus in the zero locus of $f$, a part $A_1$ of $\mathbb{R}P^3$ outside $S^2$ will "connect" with one hemisphere $S'$ of $S^3$, the part $A_2$ inside $S^2$ will connect with the other hemisphere $S''$, and the zero locus of $f$ is a disjoint union of $\mathbb{R}P^3$ and $S^3$.\\
Suppose the zero set of $h$ intersects our $S^2$ transversally along $k$ circles such that no circle lies in the interior of another. We can assume w.l.o.g. that on the exterior $V$ of those circles $h$ is positive. Then along $V$, $A_1$ connects to $S'$ and $A_2$ connects to $S''$ as before. Along the interior of every circle, $A_1$ connects to $S''$ and $A_2$ to $S'$. So near the interiors of these circles we get 1-handles connecting $\mathbb{R}P^3$ with $S^3$. So the zero locus of $f$ will be $\mathbb{R}P^3$ and $S^3$ connected by $k$ 1-handles, i.e. it will be an $\mathbb{R}P^3$ with $k-1$ 1-handles attached.\\
It is not hard to find examples of such $h$ for small values of $k$. For instance for $k=4$ (i.e.
for $L_3$) we can take \[h= ((x_1-1/3)^2 + (x_2 -1/3)^2 - 1/16)((x_1+1/3)^2 + (x_2 +1/3)^2 - 1/16) \] and the zero locus of $h$ intersects $S^2 \subset \mathbb{R}^3$ in 4 circles.

\subsection{Non-compactness of the moduli-space}
In this section we will consider connections between the moduli-space of SLag submanifolds and the global geometry of the ambient Calabi-Yau manifold $M$.\\
Let $\Phi$ be the moduli-space of SLag submanifolds. We have a fiber bundle $F$ over $\Phi$, \( F \subset M \times \Phi \), \( F =((a, L)|a \in M , L \in \Phi \) s.t. $a \in L$).\\
We have a natural projection map \( pr : F \mapsto \Phi \), whose fiber is the support of the element in $\Phi$, and the evaluation map \( ev : F \mapsto M \), \( ev(a, L)=a \).\\
Also the tangent space to a point \( (a , L) \in F \) naturally splits as \( T_{a}L \oplus T' \), where \(T_{a}L\) is the tangent space to $L$ at $a$ (the tangent space to the fiber) and \(T'= ((v(a),v)|v \) is a variational vector field to the moduli-space and \( v(a) \) is the value of $v$ at $a$).

Let $L$ be a compact $k$-dimensional oriented manifold with \(b_{1}(L)= k\). We say that $L$ satisfies condition $\star$ if for a basis $\alpha_1, \ldots, \alpha_k$ of \( H^{1}(L) \) we have \( \alpha_{1} \cup \ldots \cup \alpha_k \neq 0 \). This holds e.g. if $L$ is a torus. On the other hand the real quintic with \( b_{1} =3 \) that we constructed in section 3.1 doesn't satisfy condition $\star$.
\begin{thm}
Let $L$ be a special Lagrangian submanifold with \(b_{1}(L) = k\).\\
Suppose $L$ satisfies $\star$. Suppose some connected component of \( \Phi ' \) is compact. Then the Betti numbers of $M$ satisfy \(b_{i}(M) \leq b_{i}(L \times T^{k}) \) (here $T^k$ is a $k$-torus).\\
Suppose we have $G,g,x$ satisfying the conditions of Lemma 3.1.1.
Then $\Phi$ itself is not compact.
\end{thm}
{\bf Proof of Theorem 3.2.1}: Suppose $L$ satisfies $\star$. We first prove that $\Phi$ is orientable (in fact it has a natural volume element $\sigma$). Let \( L' \in \Phi \) and \( v_{1}, \ldots, v_{k} \) be elements of the tangent space to $\Phi$ at $L'$. So \( v_{1}, \ldots, v_{k} \) are closed 1-forms on $L'$ and we define:
\[\sigma( v_{1}, \ldots, v_{k} ) = [v_{1}] \cup \ldots \cup [v_{k}] ([L'])\]
Suppose that $\Phi$ is compact, or a connected component $\Gamma$ of $\Phi'$ is compact. In each case we have an evaluation map as before. We will prove that in both cases it has a positive degree. We will give a proof for $\Phi$; the proof for $\Gamma$ is analogous.

First $\Phi$ has the natural volume element $\sigma$ described above. So the $2k$-form \( \alpha = pr^{\ast}(\sigma) \wedge ev^{\ast}(Re \varphi) \) is a volume form on $F$.\\
Let \( L_{\phi} \in \Phi \) and \( \alpha_1, \ldots, \alpha_k \) be a basis for \( H^{1}(L_{\phi}) \) s.t. \( \alpha_1 \cup \ldots \cup \alpha_k [L_{\phi}] = 1 \). Then we can consider corresponding vector fields \(v_{1}, \ldots, v_{k} \) along $L_{\phi}$, which form a frame for the bundle $T'$ (described in the beginning of this section) restricted to \( L_{\phi} \). So $[i_{v_j}\omega]= \alpha_j$ and \(pr^{\ast}(\sigma)(v_{1}, \ldots , v_{k} ) =1 \).

Let now $\eta$ be a Riemannian volume form on $M$. Then we have \[deg(ev) = \int_{F}ev^{\ast}(\eta) / vol(M) \] Since $F$ is a fiber bundle we can use the integration-over-the-fiber formula to compute:
\[ \int_{F}ev^{\ast}(\eta) = \int_{\Phi}(\int_{L_{\phi}} i_{v_1} \ldots i_{v_k} ev^{\ast}(\eta) )d \phi \] (of course we choose \( \alpha_1, \ldots, \alpha_k \) for each \( L_{\phi} \)).
Also \( i_{v_1} \ldots i_{v_k} ev^{\ast}(\eta) \) is easily seen to be equal to \(i_{v_1} \omega \wedge \ldots \wedge i_{v_k} \omega \) (all restricted to the fiber \(L_{\phi} \)).\\
So \( \int_{L_{\phi}} i_{v_1} \ldots i_{v_k} ev^{\ast}(\eta) = \alpha_1 \cup \ldots \cup \alpha_k (L_{\phi}) = 1 \).\\
So \( deg(ev) = \int_{\Phi} 1 /vol(M) = vol(\Phi)/vol(M) > 0 \).

Now let $\Gamma$ be compact. Let $F'$ be the corresponding fiber bundle over $\Gamma$. First we claim that \(b_{i}(F') \geq b_{i}(M) \). Suppose this is not true for some $0 0$ then $\int_{\alpha}Re \eta \geq 1/2$.

Let now $\Sigma$ be as before. Then $\Sigma$ represents a homology class $h$ and $\int_{h}Re(\eta)=1$. Also the integral of $Re(\eta)$ on every component of $\Sigma$ is at least $1/2$, so $\Sigma$ has at most 2 components. Let $\Sigma'$ be another cusp curve on the boundary of the moduli-space. Suppose $\Sigma'$ intersects $\Sigma$. Since $h \cdot h = 0$ we see that $\Sigma$ and $\Sigma'$ must have a common component. Suppose $\Sigma$ has a component $P$ which is not in $\Sigma'$. Then $0= [P] \cdot h = [P] \cdot [\Sigma'] > 0 $ - a contradiction. So $\Sigma$ and $\Sigma'$ have the same components, and since their total number (counted with multiplicity) is at most 2, $\Sigma = \Sigma'$.

Finally we prove that the number of exceptional spheres is finite. As we have seen, there are 2 types of exceptional curves:

1) A curve with 2 components $A_i$ and $B_i$. Then $0= [A_i] \cdot h= [A_i] \cdot ([A_i] + [B_i])$. Now $[A_i] \cdot [B_i] > 0$, so $[A_i] \cdot [A_i] < 0 $.

If $A_j,B_j$ is another curve like that, then we have seen that $A_i$ doesn't intersect it, so in particular $[A_i] \cdot [A_j] = 0$. So one easily sees that the number of such curves is at most $5= b_2(\overline{X})$.

2) A curve with 1 component (possibly with multiplicities).
Let this curve be $k \cdot P_i$, where $P_i$ is a primitive rational curve and $k \cdot [P_i]= h $. To study those $P_i$ we make the following observations: There is a $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ action on $T^4$ with generators

$\gamma_1: (z_1,z_2) \mapsto (z_1 + i/2, z_2)$

$\gamma_2: (z_1,z_2) \mapsto (z_1, z_2 + i/2)$

This action commutes with the $\alpha'$ action, and hence induces an action on $K^3$. It also preserves $\overline{X}$.

Next we find the elements of $K^3$ which do not have a full orbit under the action. A point $(z_1,z_2)$ doesn't have a full orbit if it is preserved under one of $\gamma_1, \gamma_2, \gamma_1 \circ \gamma_2$. Now the fixed points are:

$Fix(\gamma_1)= ((z_1,z_2): (z_1 +i/2, z_2)= (-z_1 +1/2, -z_2+1/2))$. These are 2 points, disjoint from the exceptional spheres. A similar analysis for $\gamma_2$ and $\gamma_1 \circ \gamma_2$ produces 2 points for each.

Now the actions of $\gamma_i$ are structure preserving on $\overline{X}$, so they send SLag tori to SLag tori. Moreover they preserve an open set of tori $Re(z_i)=const$ in our moduli-space. So by Lemma 3.1.1 they leave the elements of the moduli-space invariant. Hence they preserve the limiting curve $P_i$ (because the convergence is in particular a Gromov-Hausdorff convergence).

For a limiting curve $P_i$, consider $\chi(P_i) = [P_i] \cdot [P_i] - c_1(K^3)([P_i]) + 2 = 2$. Then by Theorem 7.3 of \cite{MW} we can count $\chi(P_i)$ by adding contributions of singular points (which are double points or branch points), and each singular point gives a positive contribution. So $P_i$ has singular points, and there are at most 2 of those.

Let $x$ be a singular point. Then its orbit under the $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ action consists of singular points. So it cannot have length 4, so $x$ is one of the 6 points $D$ with orbit of length 2.
So $P_i$ contains at least 2 points of the set $D$.

If $P_j$ is another curve of type 2, then $[P_i] \cdot [P_j] = 0$, so they don't intersect. Also $P_j$ contains at least 2 points from the set $D$. So it is clear that the number of $P_i$ is at most 3.

From all that we deduce that our moduli-space can be compactified to a pseudo-cycle. Also any point $x$ outside of the bad neighbourhoods has a unique preimage in the smooth part of the moduli-space, so we deduce that the degree of the evaluation map is 1, so the compactified moduli-space fills the whole manifold $M$. Also elements of the compactified moduli-space don't intersect, so $M$ fibers with generic fiber being a SLag torus. Also the fibration is smooth over the smooth part of the moduli-space. To prove that we need to prove that the differential of the evaluation map is an isomorphism. This is clearly true outside our `bad' neighbourhoods. Inside a bad neighbourhood, it is enough to prove that the variational vector fields to our pseudoholomorphic tori do not vanish. But this follows from a standard argument that each zero of such a vector field gives a positive contribution to the first Chern class of the normal bundle, which is trivial.

We want to point out that this example, where we can use the HyperKahler trick and the local product structure to study limiting SLag submanifolds, is quite ad hoc, and some new ideas are needed to study singular SLag submanifolds in general.

Next we wish to construct a mirror, i.e. to compactify the dual fibration. Let $ \stackrel{M}{\stackrel{\downarrow }{\overline{\Phi}}}$ be a fibration over the compactified moduli-space and let $ \stackrel{M_0}{\stackrel{\downarrow }{\Phi}}$ be the restriction of this fibration over the (smooth) moduli-space, where $M_0 \subset M$ is an open subset.
Let $a \in \Phi$ and $L_a$ be a fiber. We have a vector space $V_a=H_1(L_a,\mathbb{R})$ and a lattice $\Lambda_a= H_1(L_a, \mathbb{Z})$ in it, and so we get a torus bundle $V_a/\Lambda_a$ over $\Phi$. By dualizing each $V_a$ we get a dual bundle $V_a^{\ast}/ \Lambda_a^{\ast}$. We will adopt the following definition of a topological mirror from M. Gross's paper (see \cite{MG1})

\begin{dfn}
Let $\stackrel{M'}{\stackrel{\downarrow }{\overline{\Phi}}}$ be another fibration with $M'$ smooth such that the corresponding fibration $\stackrel{M_0'}{\stackrel{\downarrow }{\Phi}}$ is a smooth torus fibration. Let $V_a'/\Lambda_a'$ be as before. We say that $M'$ is a topological mirror to $M$ if there is a fiberwise linear isomorphism $\rho : V_a^{\ast}/ \Lambda_a^{\ast} \mapsto V_a'/ \Lambda_a'$ over $\Phi$.
\end{dfn}
Suppose now $M$ is a symplectic manifold and $\stackrel{M_0}{\stackrel{\downarrow}{\Phi}}$ is a Lagrangian fibration. Then Duistermaat's theory of action-angle coordinates (see \cite{MG2}) implies that there is an action of the cotangent bundle $T^{\ast}\Phi$ on the fibers with a stabilizer lattice $\Lambda_b$. This of course induces a natural isomorphism $\xi: V_a/\Lambda_a \mapsto T^{\ast}\Phi / \Lambda_b$. There is also a dual isomorphism $\xi^{\ast}: T \Phi \mapsto V_a^{\ast}$ given explicitly by $\xi^{\ast}(v) = [i_v \omega]$, where $\omega$ is the symplectic structure and $v \in T\Phi$ is viewed as a normal vector field to an element of $\Phi$.
Also the natural symplectic structure on $T^{\ast}\Phi$ projects to a symplectic structure on $T^{\ast}\Phi / \Lambda_b$ and hence on $V_a/\Lambda_a$.

If our fibration is a Special Lagrangian fibration then one can get a symplectic structure on the dual bundle $V_a^{\ast}/\Lambda_a^{\ast}$ as follows (this construction was done by Hitchin in \cite{Hit} and in a coordinate-free way by Gross in \cite{MG2}):

We have a map $\alpha: V_a^{\ast} \mapsto T^{\ast}\Phi$ defined by periods of the closed form $Im \varphi$. Explicitly, let $u \in V_a^{\ast}$. We can view $u \in H^1(L,\mathbb{R})$. For $v \in T\Phi$ we define
\begin{equation}\label{PD}
\alpha(u)(v) = [i_v Im\varphi] \cup u ([L])= [i_v Im\varphi](PD(u))
\end{equation}
Here $v$ is viewed as a normal vector field to $L$ and $PD(u)$ is the Poincare dual of $u$. One shows that for $u$ a section of $\Lambda_a^{\ast}$ (the integral cohomology lattice), $\alpha(u)$ is a closed 1-form on $\Phi$ and thus $\alpha$ induces a symplectic structure on $V_a^{\ast}/\Lambda_a^{\ast}$. This motivates the following definition
\begin{dfn}
Let $\stackrel{M}{\stackrel{\downarrow }{\overline{\Phi}}}$ and $\stackrel{M'}{\stackrel{\downarrow }{\overline{\Phi}}}$ be 2 Special Lagrangian fibrations. Then $M'$ is a symplectic mirror to $M$ if the corresponding isomorphism $\rho: V_a^{\ast}/\Lambda_a^{\ast} \mapsto V_a'/\Lambda_a'$ over $\Phi$ is a symplectomorphism.
\end{dfn}

To construct a topological mirror we make the following observations: Let $F= \stackrel{W_a/ \Lambda_a}{\stackrel{\downarrow}{U}}$ be some torus fibration. Let $a \in U$. Then we have a monodromy representation $\nu: \pi_1(U,a) \mapsto SL(W_a,\Lambda_a)$ (see \cite{MG1}).
Moreover if $F' = \stackrel{W_a' / \Lambda_a '}{\stackrel{\downarrow}{U}}$ is another fibration and $K: W_a \mapsto W_a' $ is an intertwining isomorphism for the monodromy representations, then $K$ induces a natural fiberwise isomorphism between $F$ and $F'$.

So we can try to compactify the dual fibration by trying to find local isomorphisms between it and the original fibration. Let $U$ be a neighbourhood in $\Phi$ and $a \in U$. Let $e_1, \ldots, e_n$ be a basis for the lattice $\Lambda_a$ and $e^1, \ldots, e^n$ be the dual basis for the lattice $\Lambda_a^{\ast}$. Let $K: W_a/ \Lambda_a \mapsto W_a^{\ast}/ \Lambda_a^{\ast}$ be some linear map, which is given in terms of our bases by a matrix $K$. Let $\alpha \in \pi_1(U,a)$ and $\nu(\alpha)$ be the monodromy map, which is given in terms of the basis $(e_i)$ by a matrix $A$. Then the dual representation $\nu^{\ast}(\alpha)$ on $W_a^{\ast}$ is given by the matrix $(A^T)^{-1}$ in the basis $(e^i)$ (see \cite{MG1}).

So we need $K \cdot A = (A^T)^{-1} \cdot K$, i.e. $K= A^T \cdot K \cdot A$.

Now if $n=2$ there is a solution $K = \left( \begin{array}{clcr}
0 & 1 \\
-1 & 0 \\
\end{array} \right) $.

We return now to $M$. On the 6-torus $T^6$ we have a natural isomorphism between the integral homology and cohomology of the SLag tori $T_{a,b,c}$. This isomorphism is invariant under the $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ action, hence it induces an isomorphism $\rho$ between $\Lambda_a$ and $\Lambda_a^{\ast}$ outside of the bad neighbourhoods. Take a point $a$ on the boundary of a bad neighbourhood $Y$ in $\Phi$.
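For the $n=2$ solution above, the condition $K = A^{T} K A$ in fact holds for every $A \in SL(2,\mathbb{Z})$, since for $2\times 2$ matrices $A^{T} K A = \det(A)\, K$. A quick symbolic sketch of this check (using sympy; not part of the original argument):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])          # a general 2x2 matrix
K = sp.Matrix([[0, 1], [-1, 0]])         # the intertwining matrix from the text

# For 2x2 matrices, A^T K A = det(A) * K identically, so any A with
# det(A) = 1 (in particular A in SL(2,Z)) satisfies K = A^T K A.
assert sp.simplify(A.T * K * A - A.det() * K) == sp.zeros(2, 2)
```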
Then because of the product structure of our fibration over $Y$ we see that the monodromy matrices of $V/ \Lambda$ in $Y$ look like $ \left( \begin{array}{clcr}
\ast & \ast & 0 \\
\ast & \ast & 0 \\
0 & 0 & 1
\end{array} \right) $

It is clear from the above that the monodromy representation in $Y$ is isomorphic to the dual representation.

So we can construct a topological mirror $M'$ as follows: Let $\overline{W}$ be some 'bad' neighbourhood in $M$ as before. Let $z_j=x_j+i \cdot y_j$ be coordinates on the 6-torus. Then the map $\mu:T^6 \mapsto T^6$, $\mu(x_1,y_1, x_2, y_2 , x_3,y_3)= (x_1,-y_2,x_2,y_1,x_3,y_3)$, commutes with the $\mathbb{Z}_2^2$ action. Also $\mu$ maps the boundary $\partial \overline{W}$ to itself. We take $\overline{W}$ and glue it to $M - \overline{W}$ by $\mu$, and doing so for each 'bad' neighbourhood we obtain $M'$.

Now $\mu$ preserves the SLag tori on $\partial \overline{W}$, and thus $M'$ naturally acquires the structure of a fiber bundle over $\overline{\Phi}$ with generic fiber being a torus. We claim that $M'$ is a topological mirror of $M$.

Indeed we noted that outside of the 'bad' neighbourhoods there is a natural isomorphism $\rho$ between the bundles $V_a$ and $V_a^{\ast}$ as before, and of course the bundle $V_a'$ of $M'$ is isomorphic to the bundle $V_a$ outside of the 'bad' neighbourhoods, so $\rho$ can be viewed as an isomorphism between $\Lambda_a^{\ast}$ and $\Lambda_a'$. We want to extend $\rho$ inside of $\overline{W}$. First we need to check which isomorphism $\rho$ induces on $\partial \overline{W}$ via the gluing map $\mu$.

Let $L= T_{a,b,c}$ be a SLag torus contained in $\partial \overline{W}$. Let $z_j=x_j + i \cdot y_j$ be coordinates on $T^6$. Then $dy_1, \ldots, dy_3$ is a basis for $H^1(L,\mathbb{Z})=\Lambda_a^{\ast}$ and $\partial_{y_1},\ldots, \partial_{y_3}$ is the dual basis for $H_1(L,\mathbb{Z})=\Lambda_a'$.
Then
\[\rho: dy_1 \mapsto -\partial_{y_2} ~ , ~ dy_2 \mapsto \partial_{y_1} ~ , ~ dy_3 \mapsto \partial_{y_3} \]
As we saw, $\rho$ is an intertwining operator between the monodromy representations on $V_a^{\ast}$ and $V_a'$. Hence $\rho$ extends to an isomorphism inside $\overline{W}$, and hence $M'$ is a topological mirror of $M$.

So far we viewed $M'$ just as a differential manifold (and one can easily show that $M'$ is diffeomorphic to $M$). We will see that it has additional interesting structures.

Let $\omega'$, $\omega''$ and $\eta = Re(\eta)+ i \cdot Im(\eta)$ be as before (see p.14). Then we easily see that near $\partial \overline{W}$ the gluing map $\mu$ is an isometry. Also $\mu^{\ast}(\omega')= Im \eta$, $\mu^{\ast}(Im \eta)= -\omega'$, $\mu^{\ast}(Re \eta) = Re \eta $. Now $Im(\eta) + (dx_3 \wedge dy_3)$ is a symplectic form on $\overline{W}$. So we see that we can glue it to our symplectic form $ \omega' + (dx_3 \wedge dy_3)$ outside of $\overline{W}$ to get a symplectic form $\omega^{\ast}$ on $M'$. Moreover near $\partial \overline{W}$, $\mu$ intertwines between $I$ and $K$ - the almost complex structures defined by $\omega'$ and $Im \eta$. Thus we can glue $K$ inside $\overline{W}$ to $I$ outside of $\overline{W}$ to get an almost complex structure $I'$ on $M'$ compatible with $\omega^{\ast}$. We can glue the form $(Re \eta + i Im \eta) \wedge idz_3 $ outside of $\overline{W}$ to the form $(Re \eta -i \omega'') \wedge idz_3$ inside $\overline{W}$ to get a trivialization $\varphi'$ of the canonical bundle of $I'$.

A submanifold $L \in \Phi$, when viewed as a submanifold of $M'$, is calibrated by $Re \varphi '$, which can be described by the alternative conditions $\omega^{\ast} |_L=0$, $Im \varphi '|_L= 0$. So we can view our moduli-space on $M'$ as Special Lagrangian submanifolds, except for the fact that $I'$ is not an integrable a.c.
structure and so $M'$ is only symplectic. If we were able to establish the fibration structure on $M$ by SLag submanifolds of the Calabi-Yau metric instead of $\omega'$, we would have obtained a Calabi-Yau structure on the mirror.

Finally we prove that $ \rho: V_a^{\ast}/\Lambda_a^{\ast} \mapsto V_a'/\Lambda_a'$ is a symplectomorphism and thus $M'$ is a symplectic mirror to $M$ according to Definition 3.3.2. The symplectic structure on $V_a^{\ast}/\Lambda_a^{\ast}$ was obtained from the map $\alpha: V_a^{\ast} \mapsto T^{\ast}\Phi$. Also the symplectic structure on $V_a'/ \Lambda_a' $ was obtained from a map $\xi': V_a' \mapsto T^{\ast}\Phi$. We will prove that \[\alpha = \xi' \circ \rho \] and then we are done. This is obviously true outside of the bad neighbourhoods. Let now $L$ be in one of the bad neighbourhoods, so $L$ has the form $T \times S^1$. Let $\beta^1,\beta^2$ be an (oriented) basis for $H^1(T,\mathbb{Z})$. Then $\beta^1,\beta^2, [dy_3]$ is a basis for $H^1(L,\mathbb{Z})$, which we can view as a basis for $\Lambda_a^{\ast}$. Let $\beta_1, \beta_2, [S^1]$ be the corresponding dual basis for $H_1(L,\mathbb{Z})$, which we can also view as a basis for $\Lambda_a$ and $\Lambda_a'$. Then $\rho(\beta^1) = -\beta_2$, $\rho(\beta^2)= \beta_1$ and $\rho([dy_3])= [S^1]$.

Let $v^i= \xi' (\beta_i)$ and $\gamma= \xi'([S^1])$ in $T^{\ast}\Phi$ (because of the product structure $\gamma$ can be viewed as the 1-form $dx_3$ on the moduli-space inside our bad neighbourhood). Let $v_1,v_2, \partial_{x_3}$ be the dual basis of $T\Phi$. So if we view $v_i$ as normal vector fields to the fiber then $i_{v_i}\omega^{\ast}$ represent the cohomology classes $\beta^i$. But in a bad neighbourhood $\omega^{\ast} = Im\eta + dx_3 \wedge dy_3$, so $[i_{v_i}Im\eta]= \beta^i$.
Now by equation \ref{PD} we have \[ \alpha(\beta^i) (v_j)= [i_{v_j}Im\varphi] \cup \beta^i([L])= [i_{v_j}Im\eta \wedge -dy_3] \cup \beta^i ([L])=\beta^j \cup -[dy_3]\cup \beta^i([L])= -\beta^i\cup \beta^j([T]) \]
Also one easily shows that $\alpha(\beta^i)(\partial_{x_3})=0 $. So one deduces that $\alpha(\beta^1)=-v^2 ~ ,~ \alpha(\beta^2)= v^1$ and also $\alpha([dy_3])= \gamma$. So $\alpha= \xi' \circ \rho $ and we are done.

Remark: It is clear that applying these ideas we can get analogous results for a Calabi-Yau 4-fold $N$ obtained from a resolution of a quotient of an 8-torus by $\mathbb{Z}_2^3$, where the generators of the $\mathbb{Z}_2$ actions are

$\alpha: z_1, \ldots , z_4 \mapsto -z_1, -z_2, z_3, z_4 $

$\beta : z_1, \ldots , z_4 \mapsto z_1, z_2, -z_3, -z_4$

and $\gamma: z_1, \ldots , z_4 \mapsto z_1, -z_2 + 1/2, -z_3 + 1/2 , z_4$

Indeed the resolution of the quotient by $\alpha$ and $\beta$ is a product of 2 $K^3$ surfaces with a product structure, where each $K^3$ has a metric which is Euclidean outside of bad neighbourhoods as before.

For the fixed point set of $\gamma$ we introduce a bad neighbourhood $X$ in the $z_2,z_3$ coordinates and consider a neighbourhood $Z = T^2 \times X \times T^2$ in $T^8$. Then $\alpha$ and $\beta$ act freely on that neighbourhood. Inside $Z$ we introduce structures as before, and this way we get a Kahler metric and a SLag torus fibration on $N$.

\subsection{Holomorphic functions near SLag submanifolds}
In this section we examine holomorphic functions in a neighbourhood of a special Lagrangian submanifold. Such examples can be obtained for instance from a Calabi-Yau manifold $X$ in \(\mathbb{C}P^{n}\) defined as the zero locus of real polynomials. In that case \( L = X \bigcap \mathbb{R}P^{n} \) is a special Lagrangian submanifold. Let $P$ be some real polynomial of degree $k$ without real roots.
Then for any polynomial $Q$ of degree $k$ the\nfunction \( \frac {Q}{P} \) is a holomorphic function on $X$ in a\nneighbourhood of $L$. More generally let $L$ be a fixed point set of an \nantiholomorphic involution $\sigma$ and $h$ a meromorphic function on $M$.\nThen obviously \( \overline{h \circ \sigma} \) is also a meromorphic function\non $M$ and so is \(g = h \cdot (\overline{ h \circ \sigma})+ 1 \). Also on $L$\n$g$ is real valued and $\geq 1$. So $ f= 1\/g$ is a holomorphic function in a \nneighbourhood of $L$. \n\nAn immediate consequence of the fact that SLag submanifolds are `Special',\ni.e. $ Im \varphi |_{L}= 0$, is the following \n\begin{thm}\n: Let $L_0$ be a SLag submanifold and $f$ be a holomorphic\nfunction\nin a neighbourhood of $L_0$. Let $\xi$ be a function on our moduli-space,\n\( \xi(L) = \int_{L} f \). Then $\xi$ is a constant function.\n\end{thm}\n{\bf Proof}: Consider the following $(n,0)$ form \( \mu = f \varphi\). Then\n$\mu$ is holomorphic, hence closed and obviously $\xi(L)= \int_{L}\mu$. \nQ.E.D.\\\n\n\noindent \nThis yields the following corollary :\\\nFor \( 0 < \theta < \pi \) we denote by \( A_{\theta} \) the open\ncone in the complex plane given by \( ( z=re^{i \rho } | r>0 , 0< \rho <\n\theta ) \).\n\begin{cor}\n: Let $M$ be a Calabi-Yau $n$-fold and $f$ a holomorphic\nfunction on some domain $U$ in $M$. Let $L(t)$ be a flow of SLag\nsubmanifolds contained in $U$ and \( p \in U \) a point s.t. \( f(p)=0\).\nSuppose that the distance \(d(p, L(t)) \rightarrow 0 \) as \(t\n\rightarrow \infty \). Then $L(t)$\ncannot be contained in the domain \( f^{-1} (A_{\theta}) \) for \n\( \theta < \frac{\pi}{2n} \). \n\end{cor}\nRemark: This Corollary gives a restriction on how singular SLag currents\nmight look.\\\n{\bf Proof of Corollary 3.4.1}: Suppose $L(t)$ are contained in \(W= \nf^{-1}(A_{\theta})\) as above. We can find an \( \epsilon >0 \) s.t. 
\n\\(g=f^{n+ \\epsilon }\\) is well defined an holomorphic on $W$ and \n\\( g(W) \\subset\nA_{\\frac{\\pi}{2}} \\). Then \\( h= \\frac{\\pi}{2g} \\) is holomorphic on $W$, \n\\( h(W) \\subset A_{\\frac{\\pi}{2}} \\) and for $z$ close to $p$ we have \\(|h(z)| \\geq const \\cdot \nd(z,p)^{-n- \\epsilon} \\).\n\nSince \\( \\int_{L(t)} h \\) is constant and \\( Re(h),Im(h) >0\\) on $L(t)$\nthen \\( \\int_{L(t)} |h| \\) is bounded by a constant. \\\\\nTake now any \\( \\delta >0 \\) and pick $t$ and \\( p_{t} \\in L(t) \\) s.t.\n\\(d(p,p_{t}) \\leq \\delta \\). Consider \\(B=B(p_{t}, \\delta ) \\bigcap L(t)\n\\). By Theorem 2.0.1, \\( vol(B) \\geq const \\cdot \\delta^{n} \\) and on \n$B$ we have \\( |h| \\geq \\frac{const}{\\delta^{n+ \\epsilon}} \\). So \\( \\int_\n{L(t)} |h| \\geq \\int_{B} |h| \\geq const \\cdot \\delta^{- \\epsilon}\\).\\\\\nNow $\\delta$ was arbitrary - a contradiction. Q.E.D.\\\\\n\n\\noindent\nApplying these ideas we can also get restriction on SLag submanifolds in\n$\\mathbb{C}^n$ which are asymptotic to a cone. We have the following theorem:\n\\begin{thm}\n: Let \\( L \\subset \\mathbb{C}^{n} \\) be a special\nLagrangian submanifold\nasymptotic to a cone $\\Lambda$ and let \\( z_{1} \\ldots z_{n} \\) be\ncoordinates on\n$\\mathbb{C}^n$. Then $L$ cannot be contained in the cone \n\\[ B_{\\theta}^{\\delta}= ( (z_1, \\ldots, z_n)| z_1 \\in A_\\theta ~,~ |z_1|>\n\\delta \\cdot |z_i| ) \\]\nfor $\\delta > 0 ~,~ \\theta < \\pi\/2n $.\n\\end{thm}\nRemark : The order, to which $L$ is required to be asymptotic to a cone\nwill become clear from the proof.\\\\\n\n\\noindent \n{\\bf Proof of the theorem}: Consider a flow $L(t)$ of SLag submanifolds in the unit ball\nin\n$\\mathbb{C}^n$ with boundary in a unit sphere, \\( L(t) = (z|t \\cdot z \\in L , |z| \\leq 1 ) \\). We wish to prove that \\( \\int_{L(t)} |z_{1}|^{-n-\n\\epsilon } \\) is uniformly bounded in $t$ for some $\\epsilon > 0$ as in the proof of Corollary 3.4.1. 
This will lead us to a\ncontradiction as before because there are points in $L(t)$ which converge\nto the origin in $\mathbb{C}^n$ for $t \rightarrow \infty$.\n\nLet $d$ be the distance function to the origin on $L$. Let \(v =\n\nabla d, w=\frac{v}{||v||^{2} } \). Then since $L$ is asymptotic to a\ncone, $w$ will be a well-defined vector field outside some ball $B$ in $L$\nand its length will converge to 1 at $\infty$. We will also assume that the\nvector $w(x)$ is close to the line through $x$ and the origin, i.e. that \nthere is a\nfunction \( g : R_{+} \mapsto R_{+} \) s.t. the length of the orthogonal\ncomponent of $w(x)$ to this line is $\leq$ \( g(||x||) \) and \( \int_{[1,\n\infty ) } \frac{g(t)}{t} dt < \infty \).\n\nWe extend $w$ inside\nof $B$ to be a \( C^{\infty} \) vector field on $L$. Let \( \eta_{t} \)\nbe the flow of $w$ in time $t$; then the derivative of $d$ along \( \eta_{t}\n\) is 1 outside $B$.\\\nWe can consider the corresponding flow \( \sigma_t \) on $L_s$,\n\[ \sigma_{t}(x) = \frac{\eta_{t}(sx)}{s+t} ~,~ \sigma_{t} : L_{s}\n\mapsto L_{s+t} \]\nLet $v_s$ be the vector field on $L_s$ inducing the flow. One can\neasily show that on the boundary of $L_s$ we have \( ||v_{s}|| \leq const\n\cdot \frac{g(s)}{s} \). \n\nPick $\epsilon$ small enough so that for $f(z) = z_{1}^{-n- \epsilon}$ we have that $Ref,Imf$ are positive on $L_t$.\nLet $h(t)= \int_{L_t}f \varphi$. We need to prove that $h(t)$ \nis a bounded function of $t$ and then we are done. \\\nLet $Q_t$ be the boundary of $L_t$. Then \[h'(t)=\int_{L_t}{\cal\nL}_{v_t}f \varphi=\int_{L_t}d(i_{v_t}f \varphi)=\int_{Q_t}f \cdot i_{v_t}\n\varphi \]\nConditions on $B_{\theta}^{\delta}$ imply that $|f|$ is uniformly \nbounded on $Q_t$. Also we know that \(|v_{t}| \leq \frac{g(t)}{t}\).\\\nSo \( |h'(t)| \leq const \cdot vol(Q_{t}) \frac{g(t)}{t} \). 
Now $Q_t$\nconverges to the base of the cone, so its volume is bounded, so \(\n|h'(t)| \) is an integrable function of $t$ by our assumptions, so \n$h(t)$ is uniformly bounded in $t$. Q.E.D. \\\n\n\noindent \nUsing those ideas for the 2-torus we get the following fact for analytic\nfunctions of one complex variable :\n\begin{lem}\n: There is no holomorphic function $f$ from the open half disk \\\n\(D = (re^{i \theta } | 0 0 \) on \( L_{\frac{1}{4}} \). We look at the flow \n\( L_{t} : t\rightarrow 0 \). We have 2 cases :\\\n1) We have a (first) value $t_0$ s.t. \( ReP' (x, t_{0}) = 0 \) or\n\(ImP'(x,t_{0}) = 0 \). W.l.o.g. we assume that the second case holds. Let\n\(ReP'(x,t_{0}) = a \). Consider \( h = \frac{\pi}{2f(\sqrt{P'-\na})} \).\\\nThen as we approach $t_0$, $h$ remains a holomorphic function with \( Reh ,\nImh > 0 \) and \( |h(z)| > const \cdot |z-(x+it_{0})|^{-2} \). So as in \nCorollary 3.4.1 we get a contradiction.\\\n2) If \(ReP' , Im P' \) remain positive as \( t \rightarrow 0 \) then we\nget a contradiction by looking at \( \int_{L_{t}}|P'| \). Q.E.D. \n\n\section{Coassociative Geometry on a $G_2$ manifold}\nFor an oriented 7-manifold $M$, let $\bigwedge ^3 T^{\ast}M$ be the bundle of\n3-forms on it. This bundle has an open sub-bundle $\bigwedge_+ ^3 M$ s.t.\n$\varphi \in \bigwedge^3 T^{\ast}_p M$ is in $\bigwedge_+ ^3 M$ if there is a\nlinear isomorphism $\sigma: T_p M \mapsto \mathbb{R}^7$ s.t. $\sigma^{\ast}\n\varphi _0 = \varphi$, where $\varphi_0$ is the standard $G_2$ 3-form on\n$\mathbb{R} ^7$ (see \cite{Joy}, p. 294).\n\nA global section $\varphi$ of $\bigwedge_+ ^3 M$ defines a topological $G_2$\nstructure on $M$. In particular this defines a Riemannian metric on $M$.\nWe will be interested in a closed $\varphi$. If $\varphi$ is also co-closed then\nit is parallel and defines a metric with holonomy contained in $G_2$ (see\n\cite{Joy}). 
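For reference, in the coordinates $x_1, \ldots, x_7$ on $\mathbb{R}^7$ and with the shorthand $dx_{ijk}=dx_i \wedge dx_j \wedge dx_k$, one common convention (the one used in \cite{Joy}) for the standard $G_2$ 3-form $\varphi_0$ referred to above is\n\[ \varphi_0 = dx_{123}+dx_{145}+dx_{167}+dx_{246}-dx_{257}-dx_{347}-dx_{356}\;. \]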
In this case the form $\\ast \\varphi$ is a calibration and a\ncalibrated 4-submanifold $L$ is called a coassociative submanifold. This can\nalso be given by an alternative condition $\\varphi|_L = 0$. For a closed\n$\\varphi$ we can define coassociative submanifolds by this condition. They no\nlonger will be Calibrated (because $\\ast \\varphi$ is not closed).\nBut nevertheless they are quite interesting because they admit an unobstructed\ndeformation theory. In fact we can copy a proof of theorem 4.5 in \\cite{Mc} to show that their moduli-space $\\Phi$ is smooth of dimension $b_2 ^+ (L)$\n(proof of\nthat theorem in fact never used the fact that $\\ast \\varphi$ is closed).\nIf $L$ is coassociative as before and $p \\in L$ we can identify the normal\nbundle to $L$ at $p$ with self-dual 2-forms on $L$ by a map \\[v \\mapsto\ni_v \\varphi \\] for $v$ a normal vector to $L$. Thus a tangent space to $\\Phi$\nat $L$ can be identified with closed self-dual 2-forms on $L$.\n\nIn a similar way to SLag Geometry we have a following lemma for finite group\nactions on $M$ :\n\\begin{lem}\nSuppose a finite group $G$ acts on $M$ preserving $\\varphi$. Suppose $L$ is\na coassociative submanifold, $G$ leaves $L$ invariant and acts trivially on the second\ncohomology of $L$. Then $G$ leaves every element of the moduli-space $\\Phi$\nthrough $L$ invariant.\\\nMoreover, suppose $g \\in G$ and $x \\in M - L$ is an isolated fixed point of\n$g$. Then $x$ is not contained in any element in $\\Phi$.\n\\end{lem}\nThe proof of this lemma is completely analogous to proof of Lemma 3.1.1.\n\nNext we want to point out an example in which a $G_2$ manifold is a fibration\nwith generic fiber being a coassociative 4-torus. Our manifold $M$ is obtained from\nresolution of a quotient of a 7-torus by a finite group. 
We hope to give a\nsystematic way of producing such examples in a future paper.\n\nLet the group be $\mathbb{Z}_2^3$ with generators \[ \alpha: (x_1, \ldots, x_7)\n\mapsto (-x_1,-x_2,-x_3,-x_4,x_5,x_6,x_7) \]\n\[\beta : (x_1,\ldots,x_7) \mapsto (-x_1+1\/2,1\/2-x_2,x_3,x_4,-x_5,-x_6,x_7) \]\n\[\gamma : (x_1,\ldots,x_7) \mapsto (-x_1, x_2, 1\/2-x_3, x_4, -x_5,x_6,-x_7)\]\n\n(compare with \cite{Joy}, p.302). We will follow Joyce's exposition of that\nexample.\n\nThe fixed point locus of each generator is a disjoint union of 3-tori. Their\nfixed loci don't intersect and their compositions have no fixed points. Around\neach fixed point the quotient looks like $V=T^3 \times B\/\pm1$, where $B$ is a\nball in $\mathbb{R}^4$. We will show how to get a $G_2$ structure on the resolution of singularities $\overline{V}$. We will treat a fixed locus of, say, $\alpha$. \n\nLet $x_1, \ldots, x_4$ be coordinates on $\mathbb{R}^4$. Let\n$\omega_1,\omega_2,\omega_3$ be a standard HyperKahler package on\n$\mathbb{R}^4$. \nFor coordinates $x_5,x_6,x_7$, let $\delta_i$ be the 1-form dual to $x_{8-i}$. \nThen the $G_2$ 3-form on $\mathbb{R}^7$ is \[ \varphi= \omega_1 \wedge \delta_1 + \omega_2 \wedge \delta_2 + \omega_3 \wedge \delta_3 + \delta_1 \wedge \delta_2 \wedge \delta_3 \]\nWe can use any one of the 3 complex structures on $\mathbb{R}^4$ to identify it with $\mathbb{C}^2$. That way for each singularity of the form $\mathbb{R}^4\/\pm1$ we get a resolution of the singularity $\overline{U}$ and a (non-integrable) HyperKahler package on $\overline{U}$ in a similar way to section 3.3.\nSuppose, for example, that we used the complex structure $I$ on $\mathbb{R}^4$. Then $\omega_2$ and $\omega_3$ lift to $\overline{U}$. 
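For concreteness, one standard choice of the HyperKahler package (an assumption here; sign conventions vary) is, with $I$ given by $z_1=x_1+ix_2$, $z_2=x_3+ix_4$,\n\[ \omega_1 = dx_1 \wedge dx_2 + dx_3 \wedge dx_4 ~,~ \omega_2 = dx_1 \wedge dx_3 + dx_4 \wedge dx_2 ~,~ \omega_3 = dx_1 \wedge dx_4 + dx_2 \wedge dx_3 \;, \]\nso that $\omega_1$ is the Kahler form of $I$ and $\omega_2 + i\omega_3$ is the holomorphic $(2,0)$-form. With this choice one can check mechanically that $\varphi$ vanishes on the 4-planes spanned by $\partial_{x_2},\partial_{x_4},\partial_{x_5},\partial_{x_7}$, which are the tangent spaces of the coassociative tori used below; a small sketch of the bookkeeping:

```python
from itertools import combinations

def wedge(a, b):
    """Wedge product of constant forms stored as {index tuple (1-based): coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):      # repeated dx_i gives zero
                continue
            perm = sorted(range(len(idx)), key=lambda k: idx[k])
            sign, p = 1, list(perm)
            for i in range(len(p)):           # sign of the sorting permutation
                while p[i] != i:
                    j = p[i]
                    p[i], p[j] = p[j], p[i]
                    sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def add(*forms):
    out = {}
    for f in forms:
        for k, v in f.items():
            out[k] = out.get(k, 0) + v
    return out

# HyperKahler triple on R^4 in the convention above, and delta_i = dx_{8-i}
w1 = {(1, 2): 1, (3, 4): 1}
w2 = {(1, 3): 1, (4, 2): 1}
w3 = {(1, 4): 1, (2, 3): 1}
d1, d2, d3 = {(7,): 1}, {(6,): 1}, {(5,): 1}

phi = add(wedge(w1, d1), wedge(w2, d2), wedge(w3, d3), wedge(wedge(d1, d2), d3))

# tangent directions of the tori {x_1, x_3, x_6 fixed} are x_2, x_4, x_5, x_7;
# every component of phi with all three indices in that set vanishes
assert all(phi.get(tr, 0) == 0 for tr in combinations((2, 4, 5, 7), 3))
```

The same bookkeeping, after permuting coordinates, applies to the fixed loci of the other generators.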
Also $\\omega_1$ is replaced by a\nKahler form $\\omega_1'$ (as in Section 3.3).\n On $\\overline{V}= \\overline{U} \\times T^3$\nwe can consider \\[ \\varphi'= \\omega_1' \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 +\\omega_3 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3 \\]\nand $\\varphi'$ will be a closed $G_2$ form. Doing so for each fixed locus we get a $G_2$ structure on $M$. D. Joyce proved that this 3-form can be deformed to a parallel $G_2$ form. We will use the form $\\varphi'$ because we can construct a coassociative 4-torus fibration on $M$ for $\\varphi'$.\n\nOn $T^7$ we have a following coassociative 4-torus fibration \\[T_{a,b,c}=\n((x_1,\\ldots,x_7)| x_1=a, x_3=b, x_6=c) \\]\nNote that the 4 coordinates on each $T_{a,b,c}$ are chosen so that each\ngenerator of $\\mathbb{Z}_2^3$ acts non-trivially on exactly 2 of those coordinates. Those $T_{a,b,c}$ become coassociative tori on $M$ and fill a big \nopen neighbourhood of $M$. We would like to see what happens then\nthese tori enter a 'bad' neighbourhood $V$ above. For that we have to consider a bigger tubular neighbourhood $W=X \\times T^3$, there\n\\[ X= (v \\in T^4| v = w + (0,a,0,b), ~there~ w \\in U ~and~ a,b \\in \\mathbb{R}\/ \\mathbb{Z} ) \\]\nSo $W$ are exactly those points in $T^7$ which are contained on\n$T_{a,b,c}$ for a torus $T_{a,b,c}$, which intersects $V$. We have a resolution of singularities $\\overline{W}$, which can be viewed as a neighbourhood in $M$.\n\nWe would like to investigate coassociative tori in $\\overline{W}$. As we mentioned, we have 3 different ways to resolve a singularity using either one of the\nstructures $I,J,K$. We have 2 different cases :\n\n1) The structure we are using is either $I$ or $K$. We can assume that it is $I$. 
The $G_2$ form looks like \[\omega_1' \wedge \delta_1 + \omega_2 \wedge \delta_2 + \omega_3 \wedge \delta_3 + \delta_1 \wedge \delta_2 \wedge \delta_3 \]\nOur tori will look like \n\[T_c \times L \]\nwhere $T_c$ is a torus in the $x_5,x_6,x_7$ coordinates defined by the condition $x_6=c$ and $L$ is in $\overline{U}$ defined by the conditions $\omega_1'|_L=0, \omega_3|_L=0 $, i.e. it is a Special Lagrangian torus as in section 3.3. The results of\nsection 3.3 precisely apply to show that the compactified moduli-space fibers\nthe neighbourhood $\overline{W}$. \n\n2) The structure we are using is $J$. Then we have a package $\omega_1, \omega_2',\omega_3$ and the $G_2$ form looks correspondingly. Then our tori again look like $T_c \times L$, where $L$ satisfies $\omega_1|_L=0, \omega_3|_L = 0$, i.e.\nthey are holomorphic tori with respect to the structure $J$. So what we get is\na neighbourhood in $K^3$ with a standard holomorphic fibration over a neighbourhood in $S^2$. So in any case our manifold $M$ fibers with generic fiber being a coassociative 4-torus.\n\nFinally we want to construct a topological mirror for this torus fibration. We will use definitions from section 3.3. It is clear that because of the local product structure the local monodromy representation is isomorphic to the dual representation.\nHence we can construct a dual fibration by performing a surgery for each 'bad'\nneighbourhood $\overline{W}$ in a similar way to section 3.3. 
So for instance if $\\overline{W}$ is a neighbourhood of $\\alpha$ then we glue $\\overline{W}$ to $M - \\overline{W}$ along a boundary by a map \\[ \\eta: (x_1,\\ldots,x_7) \\mapsto (x_1,-x_4,x_3,x_2,x_5,x_6,x_7) \\]\nAlso near the boundary of $\\overline{W}$ we have $\\eta^{\\ast}(\\omega_1)= \\omega_3 ~,~ \\eta^{\\ast}(\\omega_3) = -\\omega_1 ~,~ \\eta^{\\ast}(\\omega_2)= \\omega_2 $.\nSo we can glue a closed $G_2$ form $\\varphi^{\\ast}= \\omega_3 \\wedge \\delta_1 + \\omega_2 \\wedge \\delta_2 - \\omega_1 \\wedge \\delta_3 + \\delta_1 \\wedge \\delta_2 \\wedge \\delta_3$ inside of $\\overline{W}$ to $\\varphi'$ outside of\n$\\overline{W}$ to get a closed $G_2$ form $\\varphi''$ on the mirror. Also the mirror fibration is a fibration by coassociative 4-tori with respect to $\\varphi''$. \n\nRemark: The original example in Joyce's paper \\cite{Joy} was a quotient by a slightly different $\\mathbb{Z}_2^3$ action with generators \\[ \\alpha: x_1, \\cdots , x_7 \\mapsto -x_1,-x_2,-x_3,-x_4,x_5,x_6,x_7 \\] \\[ \\beta : x_1 , \\cdots , x_7 \\mapsto \n-x_1,1\/2-x_2,x_3,x_4,-x_5,-x_6,x_7 \\] \\[ \\gamma: x_1, \\cdots , x_7 \\mapsto 1\/2 - x_1,x_2,1\/2-x_3,x_4,-x_5,x_6,-x_7 \\]\nWe get a closed $G_2$ form $\\varphi$ on the resolution of singularities similarly to previous example. Our manifold $M$ again will be fibered by coassociative 4-tori if we start from a family $T_{a,b,c} = ((x_1, \\ldots, x_7)|x_1=a,x_2=b,x_7=c)$. Indeed we consider a neighbourhood $U_i$ of a fixed component of one of\nthe generators and a bigger neighbourhood $X_i=(v \\in T^7|v= u + (0,0,a_1,a_2,a_3,a_4,0) ~ s.t.~ u \\in U ~ and ~ a_i \\in \\mathbb{R}\/\\mathbb{Z}) $. 
Then $X_i$ are disjoint, so we can use product structure on $X_i$ to get a coassociative torus fibration on $M$ as in the previous example.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the standard approach to quantum mechanics the wave function provides a complete description of any system and is used to calculate probability distributions for observables associated with the system. In the pilot-wave theory pioneered by de Broglie \\cite{de0,de1} in the 1920's, particles have definite positions and velocities and are ``guided\" by the wave function, which satisfies Schrodinger's equation. A similar approach was developed by Bohm \\cite{Bohm1,Bohm1b} in the early 1950's (see \\cite{Bohm5} and \\cite{Hol2} for extensive reviews).\n\nFor a single particle in the non-relativistic theory the velocity of the particle is given by\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\vec{J}(\\vec{X},t)\\;,\n\\end{equation}\nwhere $\\vec{X}$ is the particle's position and\n\\begin{equation}\n\\vec{J}=\\frac{\\hbar}{2im|\\psi|^2}\\left[\\psi^*\\vec{\\nabla}\\psi-\\psi\\vec{\\nabla}\\psi^*\\right]\\;.\n\\end{equation}\nParticle trajectories are, therefore, integral curves to the vector field $\\vec{J}$.\n\nIn this paper I first consider the pilot-wave theory for a non-relativistic particle in a potential. I construct a Hamiltonian that gives Schrodinger's equation and the guidance equation for the particle. This involves imposing a constraint and the use of Dirac's method of dealing with constrained dynamical systems \\cite {Dir1, Dir2}. I then find the Hamiltonian for a relativistic particle in Dirac's theory and for a quantum scalar field.\n\nA Hamiltonian formulation of the pilot-wave theory has also been developed by Holland \\cite{Holland}. 
His approach is significantly different from the approach taken in this paper and will be discussed at the end of section 2.\n\\section{Non-Relativistic Pilot-Wave Theory}\nIn this section I construct a Hamiltonian that gives Schrodinger's equation and the guidance equation for a single particle and for a collection of particles.\n\nThe Lagrangian\n\\begin{equation}\nL=\\frac{1}{2}i\\hbar\\left[\\psi^*\\frac{\\partial\\psi}{\\partial t}-\\psi\\frac{\\partial\\psi^*}{\\partial t}\\right]-\\frac{\\hbar^2}{2m}\n\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*-V\\psi^*\\psi\n\\end{equation}\ngives Schrodinger's equation\n\\begin{equation}\ni\\hbar\\frac{\\partial \\psi}{\\partial t}=-\\frac{\\hbar^2}{2m}\\nabla^2\\psi+V\\psi\n\\end{equation}\nunder a variation with respect to $\\psi^*$ and its complex conjugate under a variation with respect to $\\psi$.\n\nThe canonical momenta $\\Pi_{\\psi}=\\partial L\/\\partial \\dot{\\psi}$ and $\\Pi_{\\psi^*}=\\partial L\/\\partial \\dot{\\psi}^*$ are given by\n\\begin{equation}\n\\Pi_{\\psi}=\\frac{1}{2}i\\hbar\\psi^*\\hskip 0.4in and \\hskip 0.4in \\Pi_{\\psi^*}=-\\frac{1}{2}i\\hbar\\psi\\;.\n\\end{equation}\nWe therefore have the primary constraints\n\\begin{equation}\n\\phi_1=\\Pi_{\\psi}-\\frac{1}{2}i\\hbar\\psi^*\\approx 0\\hskip 0.4in and \\hskip 0.4in \\phi_2=\\Pi_{\\psi^*}+\\frac{1}{2}i\\hbar\\psi\\approx 0\\;,\n\\label{Pi}\n\\end{equation}\nwhere $\\approx$ denotes a weak equality, which can only be imposed after the Poisson brackets have been evaluated. 
These constraints satisfy\n\\begin{equation}\n\\{\\phi_1(\\vec{x}),\\phi_1(\\vec{y})\\}=\\{\\phi_2(\\vec{x}),\\phi_2(\\vec{y})\\}=0\n\\end{equation}\nand\n\\begin{equation}\n\\{\\phi_1(\\vec{x}),\\phi_2(\\vec{y})\\}=-i\\hbar\\delta^{3}(\\vec{x}-\\vec{y})\\;.\n\\end{equation}\nThese constraints are, therefore, second class constraints.\n\nThe canonical Hamiltonian density $h_C=\\Pi_{\\psi}\\dot{\\psi}+\\Pi_{\\psi^*}\\dot{\\psi}^*-L$ is given by\n\\begin{equation}\nh_C=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*+V\\psi^*\\psi+\\phi_1\\dot{\\psi}+\\phi_2\\dot{\\psi}^*\n\\end{equation}\nand the total Hamiltonian density is given by\n\\begin{equation}\nh_T=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi\\cdot\\vec{\\nabla}\\psi^*+V\\psi^*\\psi+u_1\\phi_1+u_2\\phi_2\\;,\n\\label{HT}\n\\end{equation}\nwhere $u_1$ and $u_2$ are undetermined parameters.\n\nFor consistency we require that\n\\begin{equation}\n\\dot{\\phi_1}=\\{\\phi_1,H_T\\}\\approx 0\\hskip 0.4in and \\hskip 0.4in \\dot{\\phi}_2=\\{\\phi_2,H_T\\}\\approx 0\\;,\n\\end{equation}\nwhere $H_T=\\int h_Td^3x$ is the total Hamiltonian. These two equations give\n\\begin{equation}\nu_1=\\frac{i}{\\hbar}\\left[\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right]\n\\end{equation}\nand\n\\begin{equation}\nu_2=-\\frac{i}{\\hbar}\\left[\\frac{\\hbar^2}{2m}\\nabla^2\\psi^*-V\\psi^*\\right]\\;.\n\\end{equation}\nSubstituting these expressions for $u_1$ and $u_2$ into $H_T$ and integrating by parts gives\n\\begin{equation}\nH_T=\\frac{i}{\\hbar}\\int\\left[\\Pi_{\\psi}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right)-\n\\Pi_{\\psi^*}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi^*-V\\psi^*\\right)\\right]d^3x\\;.\n\\end{equation}\nThe equation of motion for $\\psi$ is\n\\begin{equation}\n\\dot{\\psi}=\\{\\psi,H_T\\}=\\frac{i}{\\hbar}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\psi-V\\psi\\right)\\;,\n\\end{equation}\nwhich is Schrodinger's equation. 
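The second-class structure just established can be made concrete in a single-mode toy model (a sketch, not the field-theoretic computation: the $\delta$-function is set to 1, $\hbar=1$, and the linear phase-space functions are represented by their constant gradient vectors):

```python
# Single-mode toy model of the second-class constraints.
# Phase-space order: (psi, psi*, Pi_psi, Pi_psi*); every function below is
# linear, so it is fully described by its constant gradient vector.
HBAR = 1.0

def pb(f, g):
    """Canonical Poisson bracket of two linear functions (two (q, p) pairs)."""
    return (f[0] * g[2] - f[2] * g[0]) + (f[1] * g[3] - f[3] * g[1])

psi  = (1, 0, 0, 0)
psis = (0, 1, 0, 0)
phi1 = (0, -0.5j * HBAR, 1, 0)   # Pi_psi  - (i hbar / 2) psi*
phi2 = (0.5j * HBAR, 0, 0, 1)    # Pi_psi* + (i hbar / 2) psi

phis = (phi1, phi2)
C = [[pb(a, b) for b in phis] for a in phis]   # [[0, -i hbar], [i hbar, 0]]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]    # nonzero -> second class
Cinv = [[C[1][1] / det, -C[0][1] / det],
        [-C[1][0] / det, C[0][0] / det]]

def dirac(f, g):
    """Dirac bracket {f, g}_D = {f, g} - {f, phi_i} (C^-1)_ij {phi_j, g}."""
    return pb(f, g) - sum(pb(f, phis[i]) * Cinv[i][j] * pb(phis[j], g)
                          for i in range(2) for j in range(2))

assert abs(dirac(psi, psis) - (-1j / HBAR)) < 1e-12   # {psi, psi*}_D = -i/hbar
```

The resulting $\{\psi,\psi^*\}_D=-i/\hbar$ is consistent with the usual quantization rule $[\cdot,\cdot]=i\hbar\{\cdot,\cdot\}_D$, which here gives a commutator equal to 1.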
The equation of motion for $\\Pi_{\\psi}$ is\n\\begin{equation}\n\\dot{\\Pi}_{\\psi}=\\{\\Pi_{\\psi},H_T\\}=\\frac{i}{\\hbar}\\left(\\frac{\\hbar^2}{2m}\\nabla^2\\Pi_{\\psi}-V\\Pi_{\\psi}\\right)\\;.\n\\end{equation}\nFrom (\\ref{Pi}) we see that this is Schrodinger's equation for $\\psi^*$. Similar results hold for $\\dot{\\psi}^*$ and for\n$\\dot{\\Pi}_{\\psi^*}$. The constraints $\\phi_1$ and $\\phi_2$ can be promoted to strong equations if the Poisson bracket is replaced by the Dirac bracket.\n\nA similar Hamiltonian density $h=-\\frac{i\\hbar}{2m}\\vec{\\nabla}\\Pi\\cdot\\vec{\\nabla}\\psi -\\frac{i}{\\hbar}V\\Pi\\psi$ appears in Schiff\n\\cite{Sch1}, but he does not follow Dirac's procedure to obtain $h$. Gergely \\cite{Ger1} follows Dirac's approach and finds the Hamiltonian density $h=\\frac{\\hbar^2}{2m}\\vec{\\nabla}\\psi^*\\cdot\\vec{\\nabla}\\psi +V\\psi^*\\psi+\\dot{\\psi}\\phi_1+\\dot{\\psi}^*\\phi_2$. Schrodinger's equation and its complex conjugate then follow from the conditions $\\dot{\\phi}_1\\simeq 0$ and $\\dot{\\phi}_2\\simeq 0$\n(see \\cite{Str1} for a discussion on the Hamiltonian formulation of the DKP equation, which is also a first order equation, using Dirac's procedure).\n\nIn the pilot-wave theory the wave function is written as\n\\begin{equation}\n\\psi=Re^{iS\/\\hbar}\\;,\n\\end{equation}\nwhere $R$ and $S$ are real functions. The velocity of the particle is taken to be \\cite{de1,Bohm1}\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\frac{1}{m}\\vec{\\nabla}S(\\vec{X},t)\\;,\n\\label{eom}\n\\end{equation}\nwhere $\\vec{X}(t)$ is the particle's position. 
The velocity can also be written as\n\\begin{equation}\n\\frac{d\\vec{X}}{dt}=\\frac{\\vec{j}}{\\psi^*\\psi}=\\frac{\\hbar}{2im|\\psi|^2}\\left[\\psi^*\\vec{\\nabla}\\psi-\\psi\\vec{\\nabla}\\psi^*\\right]\\;,\n\\label{J}\n\\end{equation}\nwhere $\\vec{j}$ is the probability current density.\n\nThis equation of motion follows from the Hamiltonian\n\\begin{equation}\nH_p=\\frac{\\vec{p}}{m}\\cdot\\vec{\\nabla}S(\\vec{X},t)\\;.\n\\end{equation}\nTo see this consider the equations of motion\n\\begin{equation}\n\\dot{X}^k=\\{X^k,H_p\\}=\\frac{1}{m}\\left[\\frac{\\partial S}{\\partial x^k}\\right]_{\\vec{x}=\\vec{X}}\n\\label{x}\n\\end{equation}\nand\n\\begin{equation}\n\\dot{p}_k=\\{p_k,H_p\\}=-\\frac{p_l}{m}\\left[\\frac{\\partial^2 S}{\\partial x^k\\partial x^l}\\right]_{\\vec{x}=\\vec{X}}\\;.\n\\label{p}\n\\end{equation}\nEquation (\\ref{x}) is the correct equation of motion for the particle. Equation (\\ref{p}) gives the equation of motion for $p_k$, but $p_k$ is not related to the particle velocity in this theory. We can therefore solve for the particle's trajectory using (\\ref{x}) alone and ignore (\\ref{p}).\n\nNow consider the Hamiltonian for the field and particle\n\\begin{equation}\nH=H_T+H_P\\;.\n\\label{Ham1}\n\\end{equation}\nThe equations of motion $\\dot{\\psi}=\\{\\psi,H\\}$, $\\dot{X}^k=\\{X^k,H\\}$ and $\\dot{p}_k=\\{p_k,H\\}$ are the same as above. 
However the equation of motion for $\Pi_{\psi}$ is altered by the addition of $H_p$ to $H_T$:\n\begin{equation}\n\dot{\Pi}_{\psi}=\{\Pi_{\psi},H\}=\{\Pi_{\psi},H_T\}+\frac{p_k}{m}\{\Pi_{\psi},\partial_k S\}\;.\n\label{Poisson}\n\end{equation}\nTo evaluate $\{\Pi_{\psi},\partial_kS\}$ it is convenient to write $\partial_kS$ as\n\begin{equation}\n\partial_kS(\vec{X},t)=\frac{\hbar}{2i}\int\left[\frac{\partial_k\psi(\vec{x},t)}{\psi(\vec{x},t)}-\n\frac{\partial_k\psi^*(\vec{x},t)}{\psi^*(\vec{x},t)}\right]\delta^3(\vec{x}-\vec{X})d^3x\;.\n\end{equation}\nThe last term in (\ref{Poisson}) is then given by\n\begin{equation}\n\frac{p_k}{m}\{\Pi_{\psi}(\vec{x}),\partial_kS(\vec{X},t)\}=\frac{p_k}{m\psi(\vec{x},t)}\frac{\partial}{\partial x^k}\delta^3(\vec{x}-\vec{X}).\n\end{equation}\nThis term can be eliminated by imposing the constraint $p_k\approx 0$.\nFor consistency we require that\n$\dot{p}_k=\{p_k,H\}\approx 0$. From equation (\ref{p}) we see that this is satisfied, so that no new constraints arise.\nThe Hamiltonian $H$ plus the constraint $p_k\approx 0$ therefore generates Schrodinger's equation and the equation of motion for the particle.\n\nThe generalization of (\ref{Ham1}) to a system of $N$ particles is straightforward. The Hamiltonian is given by\n\begin{equation}\nH=\frac{i}{\hbar}\int\left[\Pi_{\psi}\left(\sum_{k=1}^N\frac{\hbar^2}{2m_k}\nabla^2_k\psi-V\psi\right)-\n\Pi_{\psi^*}\left(\sum_{k=1}^N\frac{\hbar^2}{2m_k}\nabla^2_k\psi^*-V\psi^*\right)\right]d^3x_1...d^3x_N\n+\sum_{k=1}^N\frac{\vec{p}_k}{m_k}\cdot\vec{\nabla}_kS\;,\n\end{equation}\nwhere $\vec{\nabla}_kS$ can also be written as\n\begin{equation}\n\vec{\nabla}_kS=\frac{\hbar}{2i|\psi|^2}\left[\psi^*\vec{\nabla}_k\psi-\psi\vec{\nabla}_k\psi^*\right]\;,\n\end{equation}\n$\psi=\psi(\vec{x}_1,...,\vec{x}_N)$ and $V=V(\vec{x}_1,...,\vec{x}_N,t)$. 
This Hamiltonian plus the constraints $\\vec{p}_k\\approx 0$ gives both the field and particle equations of motion.\n\nA Hamiltonian formulation of the pilot-wave theory has also been developed by Holland \\cite{Holland}. In his approach the wave function is written as $\\psi=\\sqrt{\\rho}e^{iS\/\\hbar}$ and the Hamiltonian is given by\n\\begin{equation}\nH_{tot}=H+\\int\\left\\{-\\pi_{\\rho}\\left[\\frac{1}{m}\\frac{\\partial}{\\partial q_i'}\\left(\\rho\\frac{\\partial S}{\\partial q_i'}\\right)\\right]-\\pi_{S}\\left[\\frac{1}{2m}\\left(\\frac{\\partial S}{\\partial q_i'}\\right)\\left(\\frac{\\partial S}{\\partial q_i'}\\right)+Q+V\\right]\\right\\}d^3q',\n\\end{equation}\nwhere\n\\begin{equation}\nH=\\sum_i\\frac{p_i^2}{2m}+V+Q\\;,\n\\end{equation}\n$p_i$ is the momentum of the $i^{th}$ particle, V is the classical potential and $Q$ is the quantum potential which is given by\n\\begin{equation}\nQ=-\\frac{\\hbar^2}{2m\\sqrt{\\rho}}\\frac{\\partial^2\\sqrt{\\rho}}{\\partial q_i^2}\\;.\n\\end{equation}\nHamilton's equations for $\\dot{S}$ and $\\dot{\\rho}$ give Schrodinger's equation and Hamilton's equations for the particle degrees of freedom are given by\n\\begin{equation}\n\\dot{q}_i=p_i\/m \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\dot{p}_i=-\\frac{\\partial}{\\partial q_i}[V+Q]\\;.\n\\end{equation}\nCombining these two equations gives\n\\begin{equation}\nm\\ddot{q}_i=-\\frac{\\partial}{\\partial q_i}[V+Q]\\;,\n\\end{equation}\nwhich is the second order form of the theory. This equation can be derived from the first order guidance equation, but it is not equivalent to the guidance equation. Holland then considers Liouville's equation and shows, for pure states, that the momentum is constrained to satisfy the guidance equation. 
This approach differs significantly from the approach taken in this paper.\n\\section{Relativistic Pilot-Wave Theory}\nIn this section I construct a Hamiltonian that gives the Dirac equation plus the guidance equation.\n\nThe Lagrangian for the Dirac field coupled to the electromagnetic field is given by\n\\begin{equation}\nL=\\frac{i}{2}\\left[\\bar{\\psi}\\gamma^{\\mu}D_{\\mu}\\psi-(D^*_{\\mu}\\bar{\\psi)}\\gamma^{\\mu}\\psi\\right]-m\\bar{\\psi}\\psi\\;,\n\\end{equation}\nwhere $D_{\\mu}=\\partial_{\\mu}+ieA_{\\mu}$ and I have taken $\\hbar=c=1$. There are two primary constraints\n\\begin{equation}\n\\phi_1=\\Pi_{\\psi}-\\frac{1}{2}i\\bar{\\psi}\\gamma^0\\approx 0\\hskip 0.4in and \\hskip 0.4in \\phi_2=\\Pi_{\\bar{\\psi}}+\\frac{1}{2}i\\gamma^0\\psi\\approx 0\n\\label{const}\n\\end{equation}\nthat satisfy\n\\begin{equation}\n\\{\\phi_{1_k}(\\vec{x}),\\phi_{1_{\\ell}}(\\vec{y})\\}=\\{\\phi_{2_k}(\\vec{x}),\\phi_{2_{\\ell}}(\\vec{y})\\}=0\n\\end{equation}\nand\n\\begin{equation}\n\\{\\phi_{1_k}(\\vec{x}),\\phi_{2_{\\ell}}(\\vec{y})\\}=-i(\\gamma^{0})_{\\ell k}\\delta^{3}(\\vec{x}-\\vec{y})\\;.\n\\end{equation}\nThe constraints are, therefore, second class. 
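The normalization $J^\mu J_\mu = 1$ of the guidance current used below rests on the algebraic identity $(\bar{\psi}\gamma^{\mu}\psi)(\bar{\psi}\gamma_{\mu}\psi) = a^2 + b^2$. A quick numerical sanity check in the Dirac representation (numpy assumed; the random spinor is just an example):

```python
import numpy as np

# Dirac-representation gamma matrices; metric signature (+, -, -, -).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)     # a generic Dirac spinor
psibar = psi.conj() @ g0

a = (psibar @ psi).real                  # scalar bilinear   psibar psi
b = (1j * (psibar @ g5 @ psi)).real      # pseudoscalar      i psibar gamma^5 psi
j = np.array([psibar @ g @ psi for g in gammas]).real

# Fierz-type identity behind J^mu J_mu = 1:  j . j = a^2 + b^2
assert abs(j @ eta @ j - (a**2 + b**2)) < 1e-9
J = j / np.sqrt(a**2 + b**2)
assert abs(J @ eta @ J - 1.0) < 1e-9
```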
Following the same procedure that was used for Schrodinger's equation gives\n\\begin{equation}\nH_T=i\\int\\left\\{\\Pi_{\\psi}\\gamma^0\\left[i\\gamma^k\\partial_k\\psi-eA_{\\mu}\\gamma^{\\mu}\\psi-m\\psi\\right]+\n\\left[i(\\partial_k\\bar{\\psi})\\gamma^k+e\\bar{\\psi}A_{\\mu}\\gamma^{\\mu}+m\\bar{\\psi}\\right]\\gamma^0\\Pi_{\\bar{\\psi}}\\right\\}d^3x.\n\\end{equation}\nIn the pilot-wave theory the velocity of the particle is given by \\cite{Bohm2,Hol1}\n\\begin{equation}\n\\frac{dX^{\\mu}}{d\\tau}=J^{\\mu}\n\\end{equation}\nwhere\n\\begin{equation}\nJ^{\\mu}=\\frac{\\bar{\\psi}\\gamma^{\\mu}\\psi}{\\sqrt{a^2+b^2}}\\;,\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; J^{\\mu}J_{\\mu}=1\\;,\n\\end{equation}\n\\begin{equation}\na=\\bar{\\psi}\\psi\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\; b=i\\bar{\\psi}\\gamma^5\\psi\\;.\n\\end{equation}\nThe particle Hamiltonian is\n\\begin{equation}\nH_P=p_{\\mu}J^{\\mu}\n\\end{equation}\nand the Hamiltonian for the field and particle is given by\n\\begin{equation}\nH=H_T+H_p\\;.\n\\end{equation}\nAs in the non-relativistic approach we must impose $p_{\\mu}\\approx 0$ so that addition of $H_p$ to $H_T$ does not alter the field equations for $\\Pi_{\\psi}$ and $\\Pi_{\\bar{\\psi}}$.\n\\section{Pilot-Wave Approach in Scalar Quantum Field Theory}\nIn this section I consider a massive scalar field and find a Hamiltonian that gives Schrodinger's equation plus the guidance equation for the field (see \\cite{Bohm4,Hol3} for discussions on the pilot-wave theory for scalar fields).\n\nConsider a real scalar field $\\phi(x^{\\mu})$ with the Lagrangian\n\\begin{equation}\nL=\\frac{1}{2}\\partial_{\\mu}\\phi\\partial^{\\mu}\\phi-\\frac{1}{2}m^2\\phi^2\\;.\n\\end{equation}\nThe canonical momentum is given by\n\\begin{equation}\n\\Pi=\\frac{\\partial\\phi}{\\partial t}\n\\end{equation}\nand the Hamiltonian is given by\n\\begin{equation}\nH=\\frac{1}{2}\\int\\left[\\Pi^2+(\\nabla\\phi)^2+m^2\\phi^2\\right]d^3x\\;.\n\\end{equation}\nSchrodinger's 
equation for this theory is given by\n\\begin{equation}\ni\\frac{\\partial\\Psi}{\\partial t}=\\frac{1}{2}\\int\\left[-\\frac{\\delta^2}{\\delta\\phi^2}+(\\nabla\\phi)^2+m^2\\phi^2\\right]d^3x\\;\\Psi\\;.\n\\end{equation}\nThe wave function can be written as\n\\begin{equation}\n\\Psi=Re^{iS}\n\\end{equation}\nand the equation of motion is taken to be\n\\begin{equation}\n\\Pi=\\frac{\\partial\\phi}{\\partial t}=\\frac{\\delta S}{\\delta\\phi}\\;.\n\\end{equation}\nConsider confining the field to a box of length $L$ with periodic boundary conditions. The field can be written as\n\\begin{equation}\n\\phi(\\vec{x},t)=\\sqrt{\\frac{1}{V}}\\sum_{\\bf{k}}q_{\\bf{k}}(t)e^{i\\bf{k}\\cdot\\bf{x}}\\;,\n\\end{equation}\nwhere\n\\begin{equation}\n{\\bf{k}}=\\frac{2\\pi}{L}(n_x,n_y,n_z)\\;,\n\\end{equation}\n$n_x, n_y$ and $n_z$ integers, $q_{\\bf{k}}^*=q_{-\\bf{k}}$ and $p_{\\bf{k}}^*=p_{-\\bf{k}}$.\nSchrodinger's equation is given by\n\\begin{equation}\ni\\frac{\\partial\\Psi}{\\partial t}=\\sum_{{\\bf{k}}\/2}\\left[-\\frac{\\partial^2}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}\n+(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\right]\\Psi\\;,\n\\label{SE}\n\\end{equation}\nwhere $\\Psi=\\Psi(q_{\\bf{k}},q^*_{\\bf{k}},t)$ and ${\\bf{k}}\/2$ indicates that the sum is carried out over half of the values of ${\\bf{k}}$.\n\nA Lagrangian that produces (\\ref{SE}) is\n\\begin{equation}\nL=\\frac{1}{2}i\\left[\\Psi^*\\frac{\\partial\\Psi}{\\partial t}-\\Psi\\frac{\\partial\\Psi^*}{\\partial t}\\right]-\n\\sum_{{\\bf{k}}\/2}\\left[\\frac{\\partial{\\Psi}}{{\\partial q_{\\bf{k}}}}\\frac{\\partial{\\Psi^*}}{{\\partial q^*_{\\bf{k}}}}+\n(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\Psi^*\\right]\\;.\n\\end{equation}\nFollowing the same procedure that was discussed in section 2 leads to the\nconstraints\n\\begin{equation}\n\\Pi_{\\Psi}\\approx\\frac{1}{2}i\\hbar\\Psi^*\\hskip 0.4in \\hskip 0.4in \\Pi_{\\Psi^*}\\approx-\\frac{1}{2}i\\hbar\\Psi\n\\end{equation}\nand to the total 
Hamiltonian\n\\begin{equation}\nH_T=i\\sum_{{\\bf{k}}\/2}\\int\\left\\{\\Pi_{\\Psi}\\left[\\frac{\\partial^2\\Psi}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\right]-\n\\Pi_{\\Psi^*}\\left[\\frac{\\partial^2\\Psi^*}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi^*\\right]\\right\\}dqdq^*\\;,\n\\end{equation}\nwhere $dq=\\Pi_{{\\bf{k\/2}}}dq_{{\\bf{k}}}$ and $dq^*=\\Pi_{{\\bf{k\/2}}}dq^*_{{\\bf{k}}}$. This Hamiltonian gives the correct equations of motion assuming Gauss' theorem (and the vanishing of surface terms) in infinite-dimensional spaces.\nThe guidance equations are given by\n\\begin{equation}\n\\dot{q}_{\\bf{k}}=\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\n\\dot{q}^*_{\\bf{k}}=\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\;.\n\\end{equation}\nThese equations of motion follow from the Hamiltonian\n\\begin{equation}\nH_p=\\sum_{{\\bf{k}}\/2}\\left[p_{\\bf{k}}\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\n+p^*_{\\bf{k}}\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\right]\\;,\n\\end{equation}\nwhere\n\\begin{equation}\n\\{q_{\\bf{k}},p_{\\bf{l}}\\}=\\{q^*_{\\bf{k}},p^*_{\\bf{l}}\\}=\\delta_{\\bf{kl}}\n\\end{equation}\nand all other Poisson brackets vanish.\nThe equations of motion of the particle and field follow from the Hamiltonian\n\\begin{equation}\nH=H_T+H_p\n\\end{equation}\nplus the constraints $p_{\\bf{k}}\\approx 0$ and $p^*_{\\bf{k}}\\approx 0$.\n\\section{Conclusion}\nIn this paper I showed that the Hamiltonian\n\\begin{equation}\nH=\\frac{i}{\\hbar}\\int\\left[\\Pi_{\\psi}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi-V\\psi\\right)-\n\\Pi_{\\psi^*}\\left(\\sum_{k=1}^N\\frac{\\hbar^2}{2m_k}\\nabla^2_k\\psi^*-V\\psi^*\\right)\\right]d^3x_1...d^3x_N\n+\\sum_{k=1}^N\\vec{p}_k\\cdot\\vec{\\nabla}_kS\n\\end{equation}\nplus the constraints $\\vec{p}_k\\approx 0$ gives Schrodinger's equation for a 
collection of non-relativistic particles in a potential and the guidance equation\n\\begin{equation}\n\\frac{d\\vec{X}_k}{dt}=\\vec{\\nabla}_kS(\\vec{X},t)\\;,\n\\end{equation}\nwhere $\\vec{X}_k$ is the position of the $k^{th}$ particle.\nIt is important to note that the canonical momenta $\\vec{p}_k$ are not related to $\\vec{v}_k$ and it is consistent to impose the constraints\n$\\vec{p}_k\\approx 0$.\n\nI then showed that\n\\begin{equation}\nH=i\\int\\left\\{\\Pi_{\\psi}\\gamma^0\\left[i\\gamma^k\\partial_k\\psi-eA_{\\mu}\\gamma^{\\mu}\\psi-m\\psi\\right]+\n\\left[i(\\partial_k\\bar{\\psi})\\gamma^k+e\\bar{\\psi}A_{\\mu}\\gamma^{\\mu}+m\\bar{\\psi}\\right]\\gamma^0\\Pi_{\\bar{\\psi}}\\right\\}d^3x+p_{\\mu}J^{\\mu},\n\\end{equation}\nwhere\n\\begin{equation}\nJ^{\\mu}=\\frac{\\bar{\\psi}\\gamma^{\\mu}\\psi}{\\sqrt{a^2+b^2}}\\;,\n\\end{equation}\n\\begin{equation}\na=\\bar{\\psi}\\psi\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\; b=i\\bar{\\psi}\\gamma^5\\psi\n\\end{equation}\nplus the constraint $p_{\\mu}\\approx 0$ gives the Dirac equation and the guidance equation\n\\begin{equation}\n\\frac{dX^{\\mu}}{d\\tau}=J^{\\mu}\\;.\n\\end{equation}\n\nLastly I considered the massive Klein-Gordon equation as a quantum field (not as a single-particle wave equation). 
After decomposing the field into normal modes\n\\begin{equation}\n\\phi(\\vec{x},t)=\\sqrt{\\frac{1}{V}}\\sum_{\\bf{k}}q_{\\bf{k}}(t)e^{i\\bf{k}\\cdot\\bf{x}}\n\\end{equation}\nI showed that the Hamiltonian\n\\begin{equation}\n\\begin{array}{cc}\n H=i\\sum_{{\\bf{k}}\/2}\\int\\left\\{\\Pi_{\\Psi}\\left[\\frac{\\partial^2\\Psi}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi\\right]-\n\\Pi_{\\Psi^*}\\left[\\frac{\\partial^2\\Psi^*}{\\partial q_{\\bf{k}}\\partial q^*_{\\bf{k}}}-(k^2+m^2)q_{\\bf{k}}q^*_{\\bf{k}}\\Psi^*\\right]\\right\\}dqdq^*& \\\\\n & \\\\\n +\\sum_{{\\bf{k}}\/2}\\left[p_{\\bf{k}}\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\n +p^*_{\\bf{k}}\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\right]\\;,\n \\end{array}\\hspace {-0.12in}\n\\end{equation}\nwhere $dq=\\Pi_{{\\bf{k\/2}}}dq_{{\\bf{k}}}$ and $dq^*=\\Pi_{{\\bf{k\/2}}}dq^*_{{\\bf{k}}}$,\nplus the constraints $p_{\\bf{k}}\\approx 0$ and $p^*_{\\bf{k}}\\approx 0$ gives Schrodinger's equation and the guidance equations\n\\begin{equation}\n\\dot{q}_{\\bf{k}}=\\frac{\\partial S}{\\partial q^*_{\\bf{k}}}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; and \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\n\\dot{q}^*_{\\bf{k}}=\\frac{\\partial S}{\\partial q_{\\bf{k}}}\\;.\n\\end{equation}\n\\section*{Acknowledgements}\nThis research was supported by the Natural Sciences and Engineering Research\nCouncil of Canada.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeyvl b/data_all_eng_slimpj/shuffled/split2/finalzzeyvl new file mode 100644 index 0000000000000000000000000000000000000000..6d2b164cc96a5e8ab5c16292ca8f2d2d0c6b5139 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeyvl @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction} \\setcounter{equation}{0}\n\nTensors are multilinear objects ubiquitous in the applications in different flavours, see e.g. 
\\cite{allman2008phylogenetic, bernardi2012algebraic, bernardi2021high, comon2014tensors}. They can be seen as elements of $V_1 \\otimes \\dots \\otimes V_d$, where any $V_i$ is a vector space of finite dimension over some field $\\mathbb{K}$. Being provided with an additive structure, one particular interest is {\\it tensor decomposition}, i.e. additive decompositions of tensors into sums of elementary objects, often referred to as tensors of {\\it rank 1}. Throughout all this paper we assume that $\\mathbb{K}$ is algebraically closed and of characteristic $0$.\n\n\\noindent {\\it Structured} tensors are tensors with prescribed symmetries between the factors of the tensor product. Even in this case we can talk about structured tensors of {\\it structured rank} $1$ which are in some sense the most elementary tensors with that structure. \n\n\\noindent Some examples are {\\it symmetric} tensors $\\Sym^d V$, i.e. tensors invariant under permutations of the factors, and {\\it skew-symmetric} tensors $\\superwedge^d V$, i.e. tensors invariant under permutations of the factors up to the sign of the permutation. In these instances the tensors of {\\it structured rank} $1$ are determined by the underlying geometry. Indeed, since both $\\Sym^d V$ and $\\superwedge^d V$ are irreducible representations of the group $SL(V)$, one can consider the highest weight vector of both of them and its orbit in the projectivization under the group action. These orbits turn out to be projective varieties which are Veronese and Grassmann varieties respectively. The (skew-)symmetric tensors of (skew-)symmetric rank $1$ are the points of the Veronese (Grassmann) variety. In the symmetric case, due to the canonical identification of $\\Sym^d V$ with the vector space of homogeneous polynomials of degree $d$ in $\\dim V$ variables, the symmetric rank $1$ tensors are $d$ powers of linear forms $l^d$, with $l \\in V$. 
In the skew-symmetric case, the tensors of skew-symmetric rank $1$ look like $v_1 \\wedge \\dots \\wedge v_d$ for some $v_1,\\dots,v_d \\in V$. \\smallskip\n\n\\noindent The irreducible representations of $SL(V)$ are known and usually described as {\\it Schur modules}, as defined in \\cite{fulton2013representation}. The respective minimal orbit inside their projectivization is in general a {\\it Flag variety}, and tensors of structured rank $1$ in the general context will represent flags, possibly partial, of the vector space $V$. \\smallskip\n\nComing back to (skew-)symmetric tensors, clearly one would like to compute the (skew-)symmetric rank of any (skew-)symmetric tensor. In both cases {\\it apolarity theories} have been developed to compute the ranks of such tensors, chronologically first in the symmetric case, see \\cite{iarrobino1999power}, and later in the skew-symmetric one, see \\cite{arrondo2021skew}. Even though they may seem different, these two theories share a common underlying idea, namely {\\it evaluation}. Starting from the usual contraction $V^* \\times V \\longrightarrow \\mathbb{K}$, one gets respectively the maps\n\n$$ \\Sym^d V \\otimes \\Sym^e V^* \\longrightarrow \\Sym^{d-e} V\\quad \\text{and} \\quad \\superwedge^d V \\otimes \\superwedge^e V^* \\longrightarrow \\superwedge^{d-e} V$$\ncalled {\\it apolarity actions}. Consequently, given any (skew-)symmetric tensor, one can compute its annihilator, i.e. the set of elements in the dual world, symmetric or skew-symmetric respectively, that kill the given tensor in any degree via the suitable apolarity action. This annihilator turns out to be an ideal in the symmetric or exterior algebra of $V^*$ respectively. In both cases, to a tensor of (skew-)symmetric rank $1$ there is attached an {\\it ideal} in the respective symmetric or exterior algebra. The ideal of a set of several tensors of rank $1$ is just the intersection of the respective ideals in both cases. 
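To fix ideas, the first of the two apolarity actions can be made concrete in small cases. The following sketch is an illustration of ours (the dictionary encoding of polynomials by exponent tuples is an assumption, not a construction from the literature): a dual polynomial in two variables acts by differentiation, as in the classical symmetric apolarity.

```python
def apolar(dual, poly):
    """Symmetric apolarity action in two variables: the monomial
    x^a y^b of `dual` acts on `poly` as the differential operator
    (d/dx)^a (d/dy)^b.  Polynomials are encoded as dictionaries
    mapping exponent pairs (a, b) to coefficients."""
    out = {}
    for (a, b), c in dual.items():
        for (p, q), d in poly.items():
            if p < a or q < b:
                continue  # the derivative kills this monomial
            coeff = c * d
            for k in range(a):  # falling factorial p(p-1)...(p-a+1)
                coeff *= p - k
            for k in range(b):
                coeff *= q - k
            key = (p - a, q - b)
            out[key] = out.get(key, 0) + coeff
    return {k: v for k, v in out.items() if v}

# (x - y) annihilates (x + y)^3, so x - y lies in the annihilator
# of the rank-1 symmetric tensor (x + y)^3:
f = {(3, 0): 1, (2, 1): 3, (1, 2): 3, (0, 3): 1}   # (x + y)^3
print(apolar({(1, 0): 1, (0, 1): -1}, f))          # {} (zero polynomial)
```

In this encoding the annihilator of $l^d$ is exactly the set of dual polynomials vanishing at the point dual to $l$, which is the content of the classical apolarity lemma recalled below.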
The key result in both theories is the {\\it apolarity lemma}, which has the following simple statement: finding a decomposition of a (skew-)symmetric tensor into a sum of rank $1$ (skew-)symmetric tensors is equivalent to finding the inclusion of the ideal of the rank $1$ (skew-)symmetric tensors involved in the decomposition inside the annihilator of the (skew-)symmetric tensor we would like to decompose. Remark that if a (skew-)symmetric tensor admits more than one decomposition, the apolarity lemma implies that the ideals associated to all such decompositions are contained in the annihilator. This naturally leads to the following question.\n\n\\begin{question}\nIs it possible to define an apolarity theory for any other irreducible representation of $SL(V)$?\n\\end{question}\n\nThe main motivation of this document is to answer this question. We will present a suitable apolarity action which will be called {\\it Schur apolarity action}. If we denote with $\\mathbb{S}_{\\lambda} V$ the Schur module determined by the partition $\\lambda$, it is the map\n\n$$ \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V $$\nwhere $\\mathbb{S}_{\\lambda \/ \\mu} V$ is called {\\it skew Schur module}, cf. Definition \\ref{skewSchurModule} of this article. Several examples as well as an analogous {\\it Schur apolarity lemma} are provided. \\smallskip\n\nThe content of this article is original but it is worth noting the strong connection of the Schur apolarity with the {\\it non-abelian apolarity}. Introduced for the first time in \\cite{landsberg2013equations} to seek new sets of equations of secant varieties of Veronese varieties, it is an apolarity which implements vector bundle techniques for any projective variety $X$. More specifically, the codomain of the non-abelian apolarity action is always an irreducible representation. On the contrary, in the Schur apolarity case the skew Schur module is in general reducible. 
Hence we may think of the Schur apolarity as a slight generalization of the non-abelian one. Another apolarity theory which is worth noting is described in \\cite{galkazka2016multigraded} for toric varieties, that features also the use of Cox rings. Formerly the special case in which the toric variety is a Segre-Veronese variety has been introduced in \\cite{catalisano2002ranks} as {\\it multigraded apolarity}.\n\n\\smallskip\n\nThe article is organized as follows. In Section \\ref{primasez} we recall basic definitions needed to develop the theory. In Section \\ref{secondasez} we give a description of the maps $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V \\longrightarrow \\mathbb{S}_{\\nu}V$ which turn out to be useful in the Schur apolarity setting. In Section \\ref{terzasez} we introduce the Schur apolarity action. In Section \\ref{quartasez} we state and prove the Schur apolarity lemma. In Section \\ref{quintasez} we give a description of the ranks of certain linear maps induced by the apolarity action when a structured tensor has structured rank $1$. In Section \\ref{sestasez} we investigate the secant variety of the Flag variety of $\\mathbb{C}^1 \\subset \\mathbb{C}^k \\subset \\mathbb{C}^n$ together with an algorithm which distinguishes tensors of the secant variety with different ranks.\n\\bigskip\n\n\n\n\n\\section{Notation and basic definitions} \\label{primasez} \\setcounter{equation}{0} \\medskip\n\nLet $V$ be a vector space over a field $\\mathbb{K}$ of dimension $\\operatorname{dim}(V) = n < \\infty$. In the following $\\mathbb{K}$ is always intended algebraically closed and of characteristic $0$. We adopt the following notation for the algebras:\n\\begin{enumerate}\n\\item[$\\bullet$] $\\Sym^{\\bullet}{V} = \\bigoplus_{k\\geq 0} \\Sym^k V$ is the {\\it symmetric} algebra, i.e. 
the algebra of polynomials in $n$ variables;\n\\item[$\\bullet$] $\\superwedge^{\\bullet} V = \\bigoplus_{k \\geq 0} \\superwedge^k V$ is the {\\it exterior} algebra.\n\\end{enumerate}\n\n\\noindent We will indicate with $V^*$ the dual vector space of $V$, i.e. the vector space of linear functionals defined on $V$. Remark that $V^*$ gives rise to a natural pairing $\\operatorname{tr} \\colon V \\times V^* \\to \\mathbb{K}$ called {\\it trace}. \n\n\\noindent The group $SL(V)$ is the group of automorphisms of $V$ inducing the identity on the space $\\superwedge^n V$. Such a group defines an action on $V$\n\\begin{align*} SL(V) &\\times V\\ \\to\\ V \\\\ (&g , v) \\longmapsto g(v). \\end{align*}\nThis action can be extended to $V^{\\otimes d}$ just by setting \n\n$$g \\cdot (v_1 \\otimes \\dots \\otimes v_d) = g(v_1) \\otimes \\dots \\otimes g(v_d).$$\n\n\\noindent The symmetric group $\\mathfrak{S}_d$ acts on $V^{\\otimes d}$ by permuting the factors of the tensor product. It can be easily seen that the actions of $SL(V)$ and $\\mathfrak{S}_d$ commute. These standard facts and their consequences in terms of representations are classical (cf. \\cite{fulton2013representation}).\n\\medskip\n\nIn order to develop an apolarity theory for any irreducible representation of the special linear group we need to set some notation and basic definitions. The facts we are going to recall are borrowed from \\cite{fulton2013representation}, \\cite{fulton1997young}, \\cite{sturmfels2008algorithms} and \\cite{procesi2007lie}. \\smallskip\n\nIn the following we regard the irreducible representations of $SL(n)$ as Schur modules. They are denoted with $\\mathbb{S}_{\\lambda}V$ where $\\lambda = (\\lambda_1,\\dots,\\lambda_k)$, with $k < n$, is a partition. The number $k$ is the {\\it length} of the partition. We denote with $|\\lambda| := \\lambda_1 + \\dots + \\lambda_k$ the number partitioned by $\\lambda$. 
Sometimes we may also write $\\lambda \\vdash |\\lambda|$ to underline that $\\lambda$ is a partition of $|\\lambda|$. We follow \\cite{fulton2013representation} for a construction of these representations. \n\n\\noindent Fix a partition $\\lambda$ of the integer $d$ whose length is less than $\\dim(V)$. We can draw its {\\it Young diagram} placing $\\lambda_1$ boxes in a row, $\\lambda_2$ boxes below it and so on, where all the rows are left justified. Filling the boxes with positive integers turns the diagram into a {\\it tableau of shape} $\\lambda$. A tableau of shape $\\lambda$ is said to be {\\it semistandard} if, reading from left to right, the row sequences are weakly increasing, while, reading from top to bottom, the column sequences are strictly increasing. We will abbreviate it with just {\\it sstd} tableau. A sstd tableau of shape $\\lambda$ is called {\\it standard} if the row sequences are also strictly increasing and there are no repetitions. In this case we will simply write {\\it std} tableau. Let $T$ be a std tableau of shape $\\lambda$ with entries in $\\{1,\\dots,d\\}$. The {\\it Young symmetrizer associated to $\\lambda$ and $T$} is the endomorphism $c_{\\lambda}^T$ of $V^{\\otimes d}$ which sends the tensor $v_1 \\otimes \\dots \\otimes v_d$ to\n\n\\begin{equation} \\label{YoungSymm} \\sum_{\\tau \\in C_{\\lambda}} \\sum_{\\sigma \\in R_{\\lambda}} \\operatorname{sign}(\\tau) v_{\\sigma(\\tau(1))} \\otimes \\dots \\otimes v_{\\sigma(\\tau(d))} \\end{equation}\n\n\\noindent where $C_{\\lambda}$ (respectively $R_{\\lambda}$) is the subgroup of $\\mathfrak{S}_d$ of permutations which fix the content of every column (respectively of every row) of $T$. The image of the Young symmetrizer \n\n$$\\mathbb{S}_{\\lambda}^T V := c_{\\lambda}^T (V^{\\otimes d}) $$\n\n\\noindent is called {\\it Schur module} associated to $\\lambda$. 
It is easy to see that any two Schur modules that are images of $c_{\\lambda}^T$ and $c_{\\lambda}^{T'}$, with $T$ and $T'$ two different std tableaux of shape $\\lambda$, are isomorphic. Hence we drop the $T$ in the notation and we write only $\\mathbb{S}_{\\lambda} V$. It can be proved that Schur modules are irreducible representations of $SL(n)$ via the induced action of the group, and they are completely determined by the partition $\\lambda$, cf. \\cite[p. 77]{fulton2013representation}. From the construction of such modules we have the inclusion\n\n$$ \\mathbb{S}_{\\lambda} V \\subset \\superwedge^{\\lambda_1'} V \\otimes \\dots \\otimes \\superwedge^{\\lambda_h'} V =: \\superwedge_{\\lambda'} V. $$\n\n\\noindent where $\\lambda'$ is the {\\it conjugate} partition to $\\lambda$, i.e. obtained transposing the diagram of $\\lambda$ as if it were a matrix. \\smallskip\n\nWe would like to give a little more explicit insight of the elements of such modules since we will need it in the following. We recall briefly the construction given in \\cite{sturmfels2008algorithms} of a basis of these spaces in terms of sstd tableaux. The Schur-Weyl duality, see \\cite[Theorem 6.4.5.2, p. 148]{Landsberg2011TensorsGA}, gives the isomorphism\n\n\\begin{equation} \\label{SWduality} V^{\\otimes d} \\simeq \\bigoplus_{\\lambda\\ \\vdash\\ d} \\mathbb{S}_{\\lambda} V^{\\oplus m_{\\lambda}} \\end{equation}\n\n\\noindent where $m_{\\lambda}$ is the number of std tableaux of shape $\\lambda$ with entries in $\\{1,\\dots,d\\}$. For example $m_{(2,1)} = 2$ since we have the two std tableaux\n\n\\begin{equation} \\label{EsT1} T_1 = \\begin{ytableau} 1 & 2 \\\\ 3 \\end{ytableau}\\ , \\quad T_2 = \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}\\ . \\end{equation}\n\n\\noindent Now fix a module $\\mathbb{S}_{\\lambda} V$ together with a std tableaux $T$ of shape $\\lambda$, where $\\lambda \\vdash d$. 
Fix also a basis $v_1,\\dots,v_n$ of $V$ and consider a sstd tableaux $S$ of shape $\\lambda$. The pair $(T,S)$, regarded in \\cite{sturmfels2008algorithms} as {\\it bitableau}, describes an element of $\\mathbb{S}_{\\lambda}V$ in the following way. At first build the element \n$$v_{(T,S)} := v_{i_1} \\otimes \\dots \\otimes v_{i_d},$$\nwhere $v_{i_j} = v_k$ if there is a $k$ in the box of $S$ corresponding to the box of $T$ in which a $j$ appears. We drop the $T$ in $v_{(T,S)}$ if the choice of $T$ is clear. After that, one applies the Young symmetrizer $c_{\\lambda}^T$ to the element $v_{(T,S)}$. For example, if $\\lambda =(2,1)$, consider the std tableau $T_1$ in \\eqref{EsT1} and the sstd tableau\n\n\\begin{center} S = \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} . \\end{center}\n\n\\noindent Then $c_{\\lambda}(v_S) = c_{\\lambda}(v_1 \\otimes v_1 \\otimes v_2) = 2 \\cdot v_1 \\wedge v_2 \\otimes v_1$, which we may represent pictorially as\n\n$$ c_{\\lambda}(v_S) = 2 \\cdot \\left ( \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} - \\begin{ytableau} 2 & 1 \\\\ 1 \\end{ytableau} \\right )$$\n\n\\noindent where in this particular instance the tableaux represent tensor products of vectors with the order prescribed by $T_1$. We may use sometimes this notation.\n\n\\noindent As a consequence of \\cite[Theorem 4.1.12, p. 142]{sturmfels2008algorithms}, one has the following result.\n\n\\begin{prop} \\label{BaseSchur}\nThe set \n$$\\{ c_{\\lambda}^T (v_{(T,S)})\\ :\\ S\\ \\text{is a sstd tableau of shape}\\ \\lambda \\}$$\n is a basis of the module $\\mathbb{S}_{\\lambda} V$. \n\\end{prop} \n\n\\noindent As already told, choosing different std tableaux of the same shape give rise to isomorphic Schur modules. The only difference between them is the way they embed in $V^{\\otimes |\\lambda|}$. 
Let us see an example.\n\n\\begin{example}\nLet $\\lambda = (2,1)$ and consider the std tableau\n\n$$T_1 = \\begin{ytableau} 1 & 2 \\\\ 3 \\end{ytableau}\\ ,\\ T_2 = \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}$$\n\n\\noindent and the sstd tableau\n\n$$S = \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau}\\ .$$\n\n\\noindent We get that \n\n$$ v_{(T_1,S)} = v_1 \\otimes v_1 \\otimes v_2,\\ v_{(T_2,S)} = v_1 \\otimes v_2 \\otimes v_1. $$\n\n\\noindent Applying the respective Young symmetrizers we get that \n\n$$ c_{\\lambda}^{T_1} (v_{(T_1,S)}) = 2 \\cdot (v_1 \\otimes v_1 \\otimes v_2 - v_2 \\otimes v_1 \\otimes v_1) = 2 \\cdot (v_1 \\wedge v_2) \\otimes v_1, $$\n$$ c_{\\lambda}^{T_2} (v_{(T_2,S)}) = 2 \\cdot (v_1 \\otimes v_2 \\otimes v_1 - v_2 \\otimes v_1 \\otimes v_1) = 2 \\cdot (v_1 \\wedge v_2) \\otimes v_1. $$\n\\end{example}\n\n\\noindent It is clear that we get the same element of $\\superwedge^2 V \\otimes V$. However it is clear that it embeds in a different way in $V^{\\otimes 3}$ under the specific choice of the std tableau of shape $\\lambda$. Since we are interested only in the elements of $\\superwedge^2 V \\otimes V$, we will ignore the way it embeds in $V^{\\otimes d}$. For this reason we reduce to work in the vector space\n\n$$ \\mathbb{S}^{\\bullet} V := \\bigoplus_{\\lambda} \\mathbb{S}_{\\lambda} V$$\nwhere roughly every Schur module appears exactly once for each partition $\\lambda$. 
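The two computations in the example above can be checked mechanically. The following sketch is our own illustration (the encoding of a tensor as a dictionary of index tuples is an assumption), applying a Young symmetrizer with the composition convention that reproduces the example:

```python
from itertools import permutations

def perm_sign(img):
    """Sign of a permutation given as a tuple rearranging sorted(img)."""
    inv = sum(img[i] > img[j]
              for i in range(len(img)) for j in range(i + 1, len(img)))
    return -1 if inv % 2 else 1

def block_perms(blocks, d, signed):
    """All (sign, mapping) pairs for permutations of {1,...,d} that fix
    each block setwise; every sign is +1 unless signed=True."""
    result = [(1, {i: i for i in range(1, d + 1)})]
    for block in blocks:
        result = [(s * (perm_sign(img) if signed else 1),
                   {**m, **dict(zip(sorted(block), img))})
                  for s, m in result
                  for img in permutations(sorted(block))]
    return result

def young_symmetrizer(rows, tensor):
    """Apply c_T for the std tableau with the given rows to a tensor
    {index tuple: coefficient}: sum over sigma in the row group R and
    tau in the column group C of sign(tau) times the rearranged term."""
    d = sum(len(r) for r in rows)
    cols = [[row[j] for row in rows if j < len(row)]
            for j in range(len(rows[0]))]
    R = block_perms(rows, d, signed=False)
    C = block_perms(cols, d, signed=True)
    out = {}
    for idx, c in tensor.items():
        for s, tau in C:
            for _, sigma in R:
                key = tuple(idx[sigma[tau[i]] - 1] for i in range(1, d + 1))
                out[key] = out.get(key, 0) + s * c
    return {k: v for k, v in out.items() if v}

# v_(T1,S) = v1 (x) v1 (x) v2 for T1 with rows (1,2),(3):
print(young_symmetrizer([[1, 2], [3]], {(1, 1, 2): 1}))
# -> {(1, 1, 2): 2, (2, 1, 1): -2}, i.e. 2 (v1 ^ v2) (x) v1
# v_(T2,S) = v1 (x) v2 (x) v1 for T2 with rows (1,3),(2):
print(young_symmetrizer([[1, 3], [2]], {(1, 2, 1): 1}))
# -> {(1, 2, 1): 2, (2, 1, 1): -2}
```

Both outputs represent $2\,(v_1 \wedge v_2) \otimes v_1$, embedded into $V^{\otimes 3}$ in the two different ways discussed above.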
One can see that such a space can also be obtained as the quotient\n\n$$ \\mathbb{S}^{\\bullet} V = \\Sym^{\\bullet} \\left (\\superwedge^{n-1} V \\oplus \\superwedge^{n-1} V \\oplus \\dots \\oplus \\superwedge^1 V \\right)\/I^{\\bullet}$$\n\n\\noindent where $I^{\\bullet}$ is the two-sided ideal generated by the elements\n\n\\begin{align} \\begin{split} \\label{PluckRel}\n(v_1 \\wedge & \\dots \\wedge v_p) \\cdot (w_1 \\wedge \\dots \\wedge w_q) \\\\\n& - \\sum_{i=1}^p (v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge w_1 \\wedge v_{i+1} \\wedge \\dots \\wedge v_p) \\cdot (v_i \\wedge w_2 \\wedge \\dots \\wedge w_q)\n\\end{split}\n\\end{align}\n\n\\noindent called {\\it Pl\\\"ucker relations}, for all $p \\geq q \\geq 0$. Remark that the elements \\eqref{PluckRel} are the equations which define Flag varieties in general as incidence varieties inside products of Grassmann varieties. See \\cite{fulton2013representation, towber1977two, towber1979young} for more details. Let us highlight that we are not going to use the natural symmetric product that $\\mathbb{S}^{\\bullet} V$ has as a quotient of a commutative symmetric algebra.\n\n\\noindent Eventually, since we are dealing with only one copy of $\\mathbb{S}_{\\lambda}V$ for any partition $\\lambda$, to ease the construction of the theory we will imagine to build every Schur module using a fixed std tableau. If $\\lambda \\vdash d$, this one will be given by filling the diagram of $\\lambda$ from top to bottom, starting from the first column, with the integers $1,\\dots,d$. For instance, if $\\lambda = (3,2,1)$, then the fixed tableau will be\n\n\\begin{center} \\begin{ytableau} 1 & 4 & 6 \\\\ 2 & 5 \\\\ 3 \\end{ytableau} . \\end{center}\n\n\\smallskip\n\n\n\\section{Toward Schur apolarity} \\label{secondasez} \\setcounter{equation}{0} \\medskip\n\nIn the following we will introduce the apolarity theory. For this purpose we construct an action called {\\it Schur apolarity action}. 
\\smallskip\n\nIn order to develop our theory we need to use multiplication maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$. Since $SL(n)$ is reductive, every representation is completely reducible and in this particular instance the behaviour of the tensor product of any two irreducible representations is well-known and computable via the Littlewood-Richardson rule, which we now recall. Given two Schur modules we have\n\n\\begin{equation} \\label{LR} \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\simeq \\bigoplus_{\\nu} N^{\\lambda,\\mu}_{\\nu} \\mathbb{S}_{\\nu} V^* \\end{equation}\n\n\\noindent where the coefficients $N^{\\lambda,\\mu}_{\\nu} = N^{\\mu,\\lambda}_{\\nu} \\geq 0$ are called {\\it Littlewood-Richardson coefficients}. Sufficient conditions to get $N^{\\lambda,\\mu}_{\\nu} = 0$ are either $|\\nu| \\neq |\\lambda| + |\\mu|$ or that the diagram of either $\\lambda$ or $\\mu$ does not fit in the diagram of $\\nu$. Therefore as soon as $N^{\\lambda,\\mu}_{\\nu} \\neq 0$, the map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$ is non-trivial and it will be a projection onto one of the addends $\\mathbb{S}_{\\nu} V^*$ appearing in the decomposition of $\\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^*$ into a sum of irreducible representations as described by \\eqref{LR}. Remark also that via \\eqref{LR} it follows that the vector space of such equivariant morphisms has dimension $N^{\\lambda,\\mu}_{\\nu}$. \\smallskip\n\nBefore proceeding we recall some basic combinatorial facts.\n\n\\begin{definition} \\label{skewtab}\n\\noindent Two partitions $\\nu \\vdash d$ and $\\lambda \\vdash e$, with $0 \\leq e \\leq d$, are such that $\\lambda \\subset \\nu$ if $\\lambda_i \\leq \\nu_i$ for all $i$, possibly setting some $\\lambda_i$ equal to $0$. 
In this case $\\lambda$ is a {\\it subdiagram} of $\\nu$. The {\\it skew Young diagram} $\\nu \/ \\lambda =(\\nu_1 - \\lambda_1,\\dots,\\nu_k - \\lambda_k)$ is the diagram of $\\nu$ with the diagram of $\\lambda$ removed in the top left corner. A {\\it skew Young tableau} $T$ of shape $\\nu \/ \\lambda$ is the diagram of $\\nu \/ \\lambda$ with a filling. The definitions of sstd and std tableau apply also in this context.\n\\end{definition}\n\n\\noindent For example if $\\nu = (3,2,1)$ and $\\lambda = (2)$, the skew diagram $\\nu \/ \\lambda$ together with a sstd skew tableau of shape $\\nu \/ \\lambda$ is\n\n\\begin{center} \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 2 \\\\ 3 \\end{ytableau} . \\end{center}\n\n\\noindent Skew-tableaux can be seen as a generalization of the regular tableaux by setting $\\lambda = (0)$ in Definition \\ref{skewtab}. \n\n\\begin{definition} \\label{skewSchurModule} Fix a std skew tableau of shape $\\nu \/ \\lambda$. One can define analogously to the formula \\eqref{YoungSymm} the Young symmetrizer $c_{\\nu \/ \\lambda} : V^{\\otimes |\\nu|-|\\lambda|} \\longrightarrow V^{\\otimes |\\nu|-|\\lambda|}$. The {\\it skew Schur module} $\\mathbb{S}_{\\nu \/ \\lambda} V$ is the image of $c_{\\nu \/ \\lambda}$. \n\\end{definition}\n\nClearly also in this case two skew Schur modules determined by two different skew std tableaux of the same shape are isomorphic. Moreover they are still representations of $SL(n)$ but in general they may be reducible. Indeed it holds\n\n\\begin{equation} \\label{SKEW} \\mathbb{S}_{\\nu \/ \\lambda} V \\cong \\bigoplus_{\\mu} N^{\\lambda, \\mu}_{\\nu} \\mathbb{S}_{\\mu} V \\end{equation}\n\n\\noindent where the coefficients $N^{\\mu, \\nu}_{\\lambda}$ are the same appearing in \\eqref{LR}. For more details see \\cite[p. 83]{fulton2013representation}. 
Also in this case we assume that such modules are built using a std skew tableau of shape $\\nu \/ \\lambda$ filled with the integers from $1$ to $|\\nu| - |\\lambda|$ from top to bottom, starting from the first column.\n\\smallskip\n\n\\begin{definition} \nLet $\\nu \/ \\lambda$ be any skew partition and consider a sstd skew tableau $T$ of shape $\\nu \/ \\lambda$. The {\\it word associated to} $T$ is the string of integers obtained from $T$ reading its entries from left to right, starting from the bottom row. The obtained word $w_1 \\dots w_k$ is called either {\\it Yamanouchi word} or {\\it reverse lattice word} if for any $s$ from $0$ to $k-1$, the sequence $w_k w_{k-1} \\dots w_{k-s}$ contains the integer $i+1$ at most as many times as it contains the integer $i$, for every $i$. For short, these words will be called $Y${\\it -words}. The {\\it content} of $T$ is the sequence of integers $\\mu = (\\mu_1,\\dots,\\mu_n)$ where $\\mu_i$ is the number of $i$'s in $T$. Note that this need not be a partition.\n\\end{definition}\n\n\\noindent For example, given\n\n\\begin{center} $T_1$ = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 2 \\\\ 3 \\end{ytableau} , \\quad $T_2$ = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 3 \\\\ 2 \\end{ytableau} \\end{center}\n\n\\noindent their associated words are $w_{T_1} = 3121$ and $w_{T_2} = 2131$ respectively. Remark that only the first one is a Y-word: reading $w_{T_2}$ from the right, the subsequence $13$ contains the integer $3$ more times than the integer $2$. In both cases the content is $(2,1,1)$.\n\n\\begin{definition}\nLet $\\lambda$ and $\\nu$ be two partitions such that $\\lambda \\subset \\nu$ and consider a skew sstd tableau $T$ of shape $\\nu \/ \\lambda$. The tableau $T$ is a {\\it Littlewood-Richardson skew tableau} if its associated word is a Y-word.\n\\end{definition}\n\n\\begin{prop}[Prop. 3, p. 
64, \\cite{fulton1997young}]\nLet $\\mu$, $\\lambda$ and $\\nu$ be such that $\\mu, \\lambda \\subset \\nu$ and $|\\mu| + |\\lambda| = |\\nu|$, and consider the skew diagram $\\nu \/ \\lambda$. The number of Littlewood-Richardson skew tableaux of shape $\\nu \/ \\lambda$ and content $\\mu$ is exactly $N^{\\lambda,\\mu}_{\\nu}$.\n\\end{prop}\n\n\\begin{remark}\nThe set of std tableaux of shape $\\lambda$ with entries in $\\{1,\\dots,|\\lambda|\\}$ is in $1$ to $1$ correspondence with the set of Y-words of length $|\\lambda|$ and {\\it content} $\\lambda$, i.e. containing the integer $i$ exactly $\\lambda_i$ times. \n\\end{remark}\n\nIndeed we can define two functions $\\alpha$ and $\\beta$\n\n$$ \\left \\{ \\text{std tableaux of shape}\\ \\lambda\\ \\right \\} \\xrightarrow{\\quad\\alpha\\quad} \\left \\{ \\text{Y-words of length}\\ |\\lambda|\\ \\text{and content}\\ \\lambda \\right \\} $$\nand \n\n$$ \\left \\{ \\text{std tableaux of shape}\\ \\lambda\\ \\right \\} \\xleftarrow{\\quad\\beta\\quad} \\left \\{ \\text{Y-words of length}\\ |\\lambda|\\ \\text{and content}\\ \\lambda \\right \\} $$\nthat are inverse to each other. \\smallskip\n\n\\noindent Let $T$ be a std tableau of shape $\\lambda$ with entries in $\\{1,\\dots,|\\lambda|\\}$. Remark that each entry $l \\in \\{1,\\dots,|\\lambda|\\}$ appears in a certain position $(i,j)$, i.e. in the box in the $i$-th row and $j$-th column. Then consider the sequence $(a_1,\\dots,a_{|\\lambda|})$, where we set $a_l = i$ if $l$ appears in the position $(i,j)$. Roughly, starting with the smallest entry in $\\{1,\\dots,|\\lambda|\\}$, we record for each entry the row in which it appears in $T$. We define $\\alpha(T)$ to be the reversed sequence \n\n$$\\alpha(T):= \\operatorname{rev}(a_1,\\dots,a_{|\\lambda|}) := (a_{|\\lambda|},\\dots,a_1),$$\nwhere we consider $\\operatorname{rev}$ as the involution that acts on the set of words reversing them. It is easy to see that $\\alpha(T)$ is a Y-word since the sequences of integers are determined by the entries of $T$. 
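The Y-word condition and the maps $\alpha$ and $\beta$ admit a direct implementation. The following sketch is our own illustration, with tableaux encoded as lists of rows and words as lists of integers:

```python
def is_y_word(word):
    """Reverse-lattice test: reading the word from right to left, every
    suffix must contain the integer i at least as often as i + 1."""
    counts = {}
    for w in reversed(word):
        counts[w] = counts.get(w, 0) + 1
        if w > 1 and counts[w] > counts.get(w - 1, 0):
            return False
    return True

def alpha(tableau):
    """Std tableau (list of rows) -> Y-word: record for each entry
    1,...,d the row containing it, then reverse the sequence."""
    d = sum(len(row) for row in tableau)
    rows = [0] * d
    for i, row in enumerate(tableau, start=1):
        for entry in row:
            rows[entry - 1] = i
    return rows[::-1]

def beta(word):
    """Inverse map: from rev(word) = (a_1, ..., a_d), place the entry k
    at the end of row a_k of the tableau being built."""
    rows = {}
    for k, r in enumerate(word[::-1], start=1):
        rows.setdefault(r, []).append(k)
    return [rows[i] for i in sorted(rows)]

# The two words 3121 and 2131 computed earlier:
print(is_y_word([3, 1, 2, 1]), is_y_word([2, 1, 3, 1]))   # True False
# The std tableau with rows (1,2,4),(3) corresponds to (1,2,1,1):
print(alpha([[1, 2, 4], [3]]))   # [1, 2, 1, 1]
print(beta([1, 2, 1, 1]))        # [[1, 2, 4], [3]]
```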
\\smallskip\n\n\\noindent Let $\\underline{a}$ be a Y-word of content $\\lambda$, so that $\\lambda_i$ is the number of times in which an integer $i$ appears in $\\underline{a}$. Consider its reversed sequence $\\operatorname{rev}(\\underline{a}) = (a_1,\\dots,a_{|\\lambda|})$. We define the std tableau $\\beta(\\underline{a})$ of shape $\\lambda$ in the following way. For any integer $i \\in \\{1,\\dots,k\\}$, where $k= l(\\lambda)$, consider the subsequence \n\n$$\\operatorname{rev}(\\underline{a})_i := (a_{k_1},\\dots,a_{k_{\\lambda_i}})$$\ngiven by all $i$'s with respect to the order with which they appear in $\\operatorname{rev}(\\underline{a})$. Then we set the $(i,j)$-th entry of $\\beta(\\underline{a})$ to be equal to $k_j$ appearing in the subsequence $\\operatorname{inv}(\\underline{a})_i$. Even in this case it is clear that the image will be a std tableau of shape $\\lambda$. \n\n\\noindent For more details we refer to \\cite[Definition IV.1.3, p. 252]{akin1982schur}. \n\n\\begin{example}\nConsider $\\lambda = (3,1)$ and let $T$ be the tableau\n\n$$ T = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau} .$$\nWe apply $\\alpha$ to $T$. We get at first $(a_1,\\dots,a_4) = (1,1,2,1)$, so that $\\alpha(T) = (1,2,1,1)$. Now we apply $\\beta$ to see that actually we come back to $T$. Consider the reversed sequence $\\operatorname{rev}(\\alpha(T)) = (1,1,2,1)$, which has content $\\lambda$. Here we have only two subsequences\n\n$$\\operatorname{rev}(\\alpha(T))_1 = (a_1,a_2,a_4) = (1,1,1)\\ \\text{and}\\ \\operatorname{rev}(\\alpha(T))_2 = (a_3) = (2).$$\nHence it follows that\n\n$$ \\beta((\\alpha(T))) = T = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau} .$$\n\\end{example}\n\n\\begin{definition}[Definition IV.2.2, p. 257 in \\cite{akin1982schur}]\nLet $T$ be a Littlewood-Richardson skew tableau of shape $\\nu \/ \\lambda$ and content $\\mu$. We can define a new tableau $T'$ of shape $\\nu \/ \\lambda$ and content $\\mu'$ not necessarily sstd as following. 
Let $\\underline{a}$ be the word associated to $T$ and consider $\\underline{a}' := \\alpha(\\beta(\\underline{a})')$, where $\\beta(\\underline{a})'$ denotes the conjugate tableau of $\\beta(\\underline{a})$, i.e. the one obtained by transposing it as if it were a matrix. Then define $T'$ as the skew tableau of shape $\\nu \/ \\lambda$ whose associated word is $\\underline{a}'$. \n\\end{definition}\n\n\\begin{example}\nLet $\\nu = (3,2)$, $\\lambda = (1)$ and consider\n\n$$ T = \\begin{ytableau} *(gray) & 1 & 1 \\\\ 1 & 2 \\end{ytableau},$$\nwhere in this case the content is $\\mu = (3,1)$. The associated word is $\\underline{a} = (1,2,1,1)$, which is a Y-word. Then \n\n$$ \\beta(\\underline{a}) = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau}\\quad \\text{and}\\quad \\beta(\\underline{a})' = \\begin{ytableau} 1 & 3 \\\\ 2 \\\\ 4 \\end{ytableau} $$\nso that $\\alpha(\\beta(\\underline{a})') = (3,1,2,1)$. Therefore we get\n\n$$ T' = \\begin{ytableau} *(gray) & 2 & 1 \\\\ 3 & 1 \\end{ytableau} .$$\n\\end{example}\n\n\\begin{definition}[Definition IV.2.4, p. 258 in \\cite{akin1982schur}]\nConsider three partitions $\\lambda$, $\\mu$ and $\\nu$ such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ and consider the diagrams of $\\nu \/ \\lambda$ and $\\mu$. Let $T$ be a Littlewood-Richardson skew tableau of shape $\\nu \/ \\lambda$ and content $\\mu$. When considering a Young diagram, we denote by $(i,j)$ the entry of the diagram positioned in the $i$-th row and $j$-th column. We can define a map $\\sigma_T : \\nu \/ \\lambda \\longrightarrow \\mu$ from the entries of the diagram of $\\nu \/ \\lambda$ to the entries of the diagram of $\\mu$ such that\n\n$$\\sigma_T (i,j) := ( T(i,j), T'(i,j)), $$\nfor every entry $(i,j)$ of $\\nu \/ \\lambda$. In the following we adopt the notation $V^{\\otimes \\lambda}$ to denote $V^{\\otimes |\\lambda|}$ in which every factor is identified with some entry $(i,j)$ of the Young diagram of $\\lambda$.
\n\\end{definition}\n\n\\begin{remark}\nRecall that we can write the Young symmetrizer $c_{\\lambda} : V^{\\otimes d} \\longrightarrow V^{\\otimes d}$ as the composition $b_{\\lambda} \\cdot a_{\\lambda}$ of two endomorphisms $a_{\\lambda},\\ b_{\\lambda} : V^{\\otimes d} \\longrightarrow V^{\\otimes d}$ such that on decomposable elements they act as\n\n$$ a_{\\lambda} (v_1 \\otimes \\dots \\otimes v_d) := \\sum_{\\sigma \\in R_{\\lambda}} v_{\\sigma(1)} \\otimes \\dots \\otimes v_{\\sigma(d)},$$\n\n$$ b_{\\lambda} (v_1 \\otimes \\dots \\otimes v_d) := \\sum_{\\tau \\in C_{\\lambda}} \\operatorname{sgn}(\\tau) v_{\\tau(1)} \\otimes \\dots \\otimes v_{\\tau(d)}.$$\nIn the following we are going to use these two maps. In particular, remark that\n\n$$a_{\\lambda} (V^{\\otimes d}) = \\Sym_{\\lambda} V := \\Sym^{\\lambda_1} V \\otimes \\dots \\otimes \\Sym^{\\lambda_k} V,$$\nwhile for $b_{\\lambda}$ we have\n$$b_{\\lambda} (V^{\\otimes d}) = \\superwedge_{\\lambda'} V$$\nso that we get the inclusion $\\mathbb{S}_{\\lambda} V \\subset \\superwedge_{\\lambda'} V$ we have already seen. \n\\end{remark}\n\n\\noindent We are now ready to define the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$. \n\n\\begin{definition} \\label{defmapmult}\nLet $\\lambda$, $\\mu$ and $\\nu$ be three partitions such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$, and fix a Littlewood-Richardson skew tableau $T$ of shape $\\nu \/ \\lambda$ and content $\\mu$.
We define the map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu,T} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$ as the following composition of maps\n\n\\begin{align*} \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\rightarrow &\\Sym_{\\lambda} V^* \\otimes \\Sym_{\\mu} V^* \\rightarrow \\\\\n&\\rightarrow \\Sym_{\\lambda} V^* \\otimes \\Sym_{\\nu \/ \\lambda} V^* \\rightarrow \\Sym_{\\nu} V^* \\rightarrow \\mathbb{S}_{\\nu} V^*, \\end{align*}\nwhere the first map is the transpose of the map\n\n$$ a_{\\lambda} \\otimes a_{\\mu} : \\Sym_{\\lambda} V^* \\otimes \\Sym_{\\mu} V^* \\rightarrow \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^*.$$\nThe second map is the tensor product of the identity on $\\Sym_{\\lambda}V^*$ with the map\n\n$$\\Sym_{\\mu} V^* \\longrightarrow (V^*)^{\\otimes \\mu} \\longrightarrow (V^*)^{\\otimes \\nu \/ \\lambda} \\longrightarrow \\Sym_{\\nu \/ \\lambda} V^*$$\nthat first embeds $\\Sym_{\\mu} V^*$ in $(V^*)^{\\otimes |\\mu|}$, labelling the factors of $(V^*)^{\\otimes |\\mu|}$ with the entries of the diagram of $\\mu$. Then it rearranges such factors according to the map $\\sigma_T$, and finally it applies the map $a_{\\nu \/ \\lambda}$ to get $\\Sym_{\\nu \/ \\lambda} V^*$. \n\\noindent The third map is the tensor product of the maps\n\n$$\\Sym^{\\lambda_i} V^* \\otimes \\Sym^{\\nu_i - \\lambda_i} V^* \\longrightarrow \\Sym^{\\nu_i} V^*.$$\nFinally, the last map is just the skew-symmetrizing part of the Young symmetrizer $c_{\\nu}$, i.e.\n\n$$b_{\\nu} : \\Sym_{\\nu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*.$$\n\\end{definition}\n\n\\begin{example}\nConsider the partitions $\\lambda = (2,1)$, $\\mu = (1,1)$ and $\\nu = (3,2)$, so that $N^{(2,1),(1,1)}_{(3,2)} = 1$.
In particular we have only one Littlewood-Richardson skew tableau of shape $(3,2) \/ (2,1)$ and content $(1,1)$, which is\n\n$$ T = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ *(gray) & 2 \\end{ytableau} . $$\nWe describe the image of the element \n\n$$ t = (v_1 \\wedge v_2 \\otimes v_3 - v_2 \\wedge v_3 \\otimes v_1) \\otimes (\\alpha \\wedge \\beta) \\in \\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V$$\nvia the map\n\n$$\\mathcal{M}^{(2,1),(1,1)}_{(3,2)} : \\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V \\longrightarrow \\mathbb{S}_{(3,2)} V\\ .$$\nTo this end we describe all the maps involved in the composition defining $\\mathcal{M}^{(2,1),(1,1)}_{(3,2)}$ as described in Definition \\ref{defmapmult}. Remark that the element belonging to $\\mathbb{S}_{(2,1)} V$ is the one determined by the sstd Young tableau\n\n$$ \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}\\ .$$\nFirst we have the map\n\n$$\\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V \\longrightarrow \\Sym_{(2,1)} V \\otimes \\Sym_{(1,1)} V$$\nthat sends\n\n$$ t\\ \\mapsto\\ (v_1v_3 \\otimes v_2 - v_2v_3 \\otimes v_1 ) \\otimes (\\alpha \\otimes \\beta - \\beta \\otimes \\alpha). $$\nThen we have to apply the map $\\Sym_{(2,1)} V \\otimes \\Sym_{(1,1)} V \\longrightarrow \\Sym_{(2,1)} V \\otimes \\Sym_{(3,2)\/(2,1)} V$ to this last element. Remark that in this case the map $\\sigma_T$ acts as\n\n$$\\sigma_T :\\ \\begin{ytableau} *(gray) & *(gray) & b \\\\ *(gray) & a \\end{ytableau} \\longrightarrow \\begin{ytableau} b \\\\ a \\end{ytableau} $$\nand moreover $\\Sym_{(1,1)} V = \\Sym_{(3,2)\/(2,1)} V = V \\otimes V$. Now we apply the map $\\Sym_{(2,1)} V \\otimes \\Sym_{(3,2)\/(2,1)} V \\longrightarrow \\Sym_{(3,2)} V$, which is given by the tensor product of the maps\n\n$$\\Sym^2 V \\otimes \\Sym^1 V \\longrightarrow \\Sym^3 V,\\quad \\Sym^1 V \\otimes \\Sym^1 V \\longrightarrow \\Sym^2 V.
$$\nThe image of our element is \n\n$$ v_1v_3\\alpha \\otimes v_2 \\beta - v_1v_3\\beta \\otimes v_2 \\alpha - v_2v_3\\alpha \\otimes v_1 \\beta + v_2v_3\\beta \\otimes v_1 \\alpha.$$\nFinally we apply the map $\\Sym_{(3,2)} V \\longrightarrow \\mathbb{S}_{(3,2)} V$ which is the skew-symmetrizing part of the Young symmetrizer. We can represent the image of such a map using Young tableaux of shape $(3,2)$\n\n$$\\begin{ytableau} 1 & 3 & \\alpha \\\\ 2 & \\beta \\end{ytableau} - \\begin{ytableau} 1 & 3 & \\beta \\\\ 2 & \\alpha \\end{ytableau} - \\begin{ytableau} 2 & 3 & \\alpha \\\\ 1 & \\beta \\end{ytableau} + \\begin{ytableau} 2 & 3 & \\beta \\\\ 1 & \\alpha \\end{ytableau}$$\nwhere we are considering such tableaux as elements of $V^{\\otimes 5}$ as described in Section \\ref{primasez}. \n\\end{example}\n\n\n\n\n\\begin{remark}The map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu,T} : \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V \\longrightarrow \\mathbb{S}_{\\nu} V$ is clearly equivariant and by construction its image is contained in $\\mathbb{S}_{\\nu}V$. Moreover, it is easy to see that, since these maps are determined by the choice of a Littlewood-Richardson skew tableau, they all act in different ways and they are linearly independent.\n\\end{remark}\n\n\\medskip\n\n\\begin{remark} \\label{trim} The last thing we would like to underline is that some of these multiplication maps are involved in the construction of the elements of $\\mathbb{S}_{\\lambda}V$ via Young symmetrizers. Indeed, for example consider $\\lambda=(3,2,1)$. We know that $\\mathbb{S}_{(3,2,1)} V$ is the image of the respective Young symmetrizer applied to $V^{\\otimes 6}$ using symmetrization along rows and then skew-symmetrizations along columns. Via this interpretation we can see $\\mathbb{S}_{\\lambda}V$ as obtained by adding row by row, i.e.
via the following composition\n\n$$\\mathbb{S}_{(3)}V \\otimes \\mathbb{S}_{(2)} V \\otimes \\mathbb{S}_{(1)} V \\longrightarrow \\mathbb{S}_{(3,2)}V \\otimes \\mathbb{S}_{(1)} V \\longrightarrow \\mathbb{S}_{(3,2,1)} V $$\n\n\\noindent where the first map is $\\mathcal{M}^{(3),(2)}_{(3,2)} \\otimes \\text{id}_{\\mathbb{S}_{(1)}V}$ and the second one is $\\mathcal{M}^{(3,2),(1)}_{(3,2,1)}$. Note that all these maps that add only a row are unique up to scalar multiplication since the respective Littlewood-Richardson coefficient is $1$. Hence we can see every module as obtained by adding row by row in the order specified by the partition. We sometimes refer to the inverse passage, i.e. deleting row by row, as {\\it trimming}.\n\\end{remark}\n\n\\begin{remark} \\label{gradedring}\nWe end this section with a final remark. The space $\\mathbb{S}^{\\bullet} V$ together with the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ is a graded ring. Precisely, the graded pieces are given by\n\n$$\\left ( \\mathbb{S}^{\\bullet} V \\right )_a := \\bigoplus_{\\lambda\\ :\\ |\\lambda| = a} \\mathbb{S}_{\\lambda} V. $$\nMoreover, given two elements $g \\in \\mathbb{S}_{\\lambda} V$ and $h \\in \\mathbb{S}_{\\mu} V$ where $|\\lambda| = a$ and $|\\mu| = b$, then if $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ we get that the element $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h)$ belongs to $\\mathbb{S}_{\\nu} V$, where $|\\nu| = |\\lambda| + |\\mu| = a+b$. Therefore the condition\n\n$$\\left ( \\mathbb{S}^{\\bullet} V \\right )_a \\cdot \\left ( \\mathbb{S}^{\\bullet} V \\right )_b \\subset \\left ( \\mathbb{S}^{\\bullet} V \\right )_{a+b}$$\nis satisfied.\n\\end{remark}\n\n\n\\section{Schur apolarity action} \\label{terzasez} \\setcounter{equation}{0} \\medskip\n\n\\noindent In this section we define the {\\it Schur apolarity action} which will be the foundation of our results.
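Before doing so, we note that the combinatorics of the previous section can be checked mechanically. The following Python sketch is purely illustrative (the encoding of tableaux as lists of rows and the function names are our own); the two functions implement the maps $\alpha$ and $\beta$ described above, verified on the running example of shape $(3,1)$.

```python
def alpha(T):
    # word of a standard tableau T (list of rows): for k = n, n-1, ..., 1
    # record the (1-based) index of the row of T containing k
    n = sum(len(row) for row in T)
    row_of = {entry: i + 1 for i, row in enumerate(T) for entry in row}
    return tuple(row_of[k] for k in range(n, 0, -1))

def beta(word):
    # inverse map: scan rev(word) left to right; the position of the
    # j-th occurrence of the value i becomes the (i, j) entry of the tableau
    rev = word[::-1]
    rows = {}
    for pos, value in enumerate(rev, start=1):
        rows.setdefault(value, []).append(pos)
    # assumes every value 1, ..., l(lambda) occurs (word of partition content)
    return [rows[i] for i in sorted(rows)]

# running example: T of shape (3,1) with alpha(T) = (1, 2, 1, 1)
T = [[1, 2, 4], [3]]
assert alpha(T) == (1, 2, 1, 1)
assert beta(alpha(T)) == T
```

The two assertions reproduce the computation of the example above, where $\beta(\alpha(T)) = T$.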
\\smallskip\n\nWe would like to define an apolarity action whose domain is $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$, with $\\lambda$ and $\\mu$ suitably chosen, which extends the symmetric and the skew-symmetric action. Naively it seems natural to require that $\\mu \\subset \\lambda$ and that this map is \n\n$$ \\varphi : \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V $$\n\n\\noindent and to set $\\varphi$ equal to the zero map if $\\mu \\not\\subset \\lambda$. \n\nBefore proceeding with the description, we have to recall the definition of the {\\it skew-symmetric apolarity action} given in \\cite{arrondo2021skew}.\n\n\\begin{definition} Given two integers $1 \\leq h \\leq k \\leq \\dim(V)$, the skew-symmetric apolarity action is the map\n\n$$ \\superwedge^{k} V \\otimes \\superwedge^{h} V^* \\longrightarrow \\superwedge^{k-h} V$$ \n$$ (v_1 \\wedge \\dots \\wedge v_k) \\otimes (\\alpha_1 \\wedge \\dots \\wedge \\alpha_h)\\ \\longmapsto \\sum_{R \\subset \\{1,\\dots,k\\}} \\operatorname{sign}(R) \\cdot \\det\\left((\\alpha_i(v_j))_{j \\in R}\\right) \\cdot v_{\\overline{R}} $$\n\n\\noindent where the sum runs over all the possible ordered subsets $R$ of $\\{1,\\dots,k\\}$ of cardinality $h$ and the set $\\overline{R}$ is the complementary set to $R$ in $\\{1,\\dots,k\\}$. The element $v_{\\overline{R}}$ is the wedge product of the vectors whose index is in $\\overline{R}$, and $\\operatorname{sign}(R)$ is the sign of the permutation of $\\{1,\\dots,k\\}$ that places the elements of $R$ first, in increasing order, keeping the relative order of the remaining elements.
\n\\end{definition} \n\\begin{remark}\nThe skew-symmetric apolarity action can be regarded as the composition\n\n$$ \\superwedge^h V^* \\otimes \\superwedge^k V \\longrightarrow \\superwedge^h V^* \\otimes \\superwedge^{h} V \\otimes \\superwedge^{k-h} V \\longrightarrow \\superwedge^{k-h}V$$\n\n\\noindent where the first map is the tensor product of the identity on the first factor and the comultiplication of the exterior algebra regarded as a bialgebra on the second one. The second map acts with the identity on the last factor and acts with the determinantal pairing $\\superwedge^h V^* \\otimes \\superwedge^{h} V \\longrightarrow \\mathbb{C}$ on the first two. \n\\end{remark}\n\n\\begin{definition} \\label{Schurapodef} Let $\\mathbb{S}_{\\lambda} V \\subset \\superwedge_{\\lambda'} V$ and $\\mathbb{S}_{\\mu} V^*\\subset \\superwedge_{\\mu'} V^*$ be two Schur modules. Then the Schur apolarity action is defined as the map\n\n $$\\varphi: \\mathbb{S}^{\\bullet} V \\otimes \\mathbb{S}^{\\bullet} V^* \\longrightarrow \\mathbb{S}^{\\bullet} V$$\n\n\\noindent such that when restricted to a product $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$, it is the trivial map if $\\mu \\not\\subset \\lambda$. Otherwise it is given by the restriction of the map\n\n$$\\tilde{\\varphi} : \\superwedge_{\\lambda'} V \\otimes \\superwedge_{\\mu'} V^* \\longrightarrow\n \\superwedge_{\\lambda' \/ \\mu'} V $$\n\n\\noindent that acts as the tensor product of the skew-symmetric apolarity actions $\\superwedge^{\\lambda_i'} V \\otimes \\superwedge^{\\mu_i'} V^* \\longrightarrow \\superwedge^{\\lambda_i'-\\mu_i'} V$.\n\\end{definition}\n\n\\begin{example} \\label{esschurapoflag} Consider the partitions $\\lambda = (3,2,1)$ and $\\mu = (2)$. We have that $\\mu \\subset \\lambda$, so that the Schur apolarity action $\\varphi$ restricted to $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$ is not trivial.
Consider the element\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 \\in \\mathbb{S}_{(3,2,1)} V$$\nand let $\\alpha\\beta \\in \\Sym^2 V^*$ be any element. Then the Schur apolarity action acts as\n\n\\begin{align*}\n&\\mathcal{C}^{(3,2,1),(2)}_{t}(\\alpha \\beta) = \\\\\n&= \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_1) + \\alpha(v_1)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_2 \\otimes v_1 + \\\\\n&- \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_2) + \\alpha(v_2)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_1 \\otimes v_1.\n\\end{align*} \n\\end{example}\n\n\\begin{example}\nConsider $\\lambda = (2,2)$ and $\\mu = (1,1)$. Let\n$$ t = v_1 \\wedge v_2 \\otimes v_1 \\wedge v_3 + v_1 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\in \\mathbb{S}_{(2,2)}\\mathbb{C}^4$$\n\n\\noindent and let $s = x_1 \\wedge x_2 \\in \\mathbb{S}_{(1,1)}(\\mathbb{C}^4)^*$. Then\n\n\\begin{align*} \\varphi(t \\otimes s)& = \\\\ & = \\det \\left (\\begin{matrix} x_1(v_1) & x_1(v_2) \\\\ x_2(v_1) & x_2(v_2) \\end{matrix}\\right ) v_1 \\wedge v_3 + \\det \\left (\\begin{matrix} x_1(v_1) & x_1(v_3) \\\\ x_2(v_1) & x_2(v_3) \\end{matrix}\\right ) v_1 \\wedge v_2 \\\\ & = v_1 \\wedge v_3. \\end{align*}\n\\end{example}\n\\medskip\n\n\\noindent A priori we are not able to say that the image is contained in the skew Schur module $\\mathbb{S}_{\\lambda \/ \\mu} V$. The following proposition settles this point.\n\n\\begin{prop} \\label{imgSchurAction}\nLet $ \\varphi: \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\to \\superwedge_{\\lambda'\/\\mu'} V$ be the Schur apolarity action restricted to $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$. Then the image of $\\varphi$ is contained in $\\mathbb{S}_{\\lambda \/ \\mu} V$.
\n\\end{prop}\n\\begin{proof}\n\nWe prove it for two elements of the bases of $\\mathbb{S}_{\\lambda}V$ and $\\mathbb{S}_{\\mu} V^*$ given by the projections $c_{\\lambda}$ and $c_{\\mu}$ respectively. In the following the letters $a,b,\\dots$ denote the elements of the filling of tableaux of shape $\\lambda$, while Greek letters $\\alpha,\\beta,\\dots$ denote the elements of the filling of tableaux of shape $\\mu$. We use the pictorial notation we have introduced when generating a basis of all these spaces in Section \\ref{primasez}. Performing Schur apolarity means erasing the diagram of $\\mu$ in the top left corner of the diagram of $\\lambda$, contracting the respective letters coming from the elements of $V$ with those of $V^*$. Every such skew tableau comes with a coefficient given by the contraction of the $\\alpha, \\beta, \\dots$ with the $a, b, \\dots$ in a specific order according to our notation. The image of Schur apolarity between two elements of the basis is then given by a sum of skew tableaux of shape $\\lambda \/ \\mu$ with proper fillings and coefficients. Remark that skew tableaux may contain {\\it disjoint} subdiagrams, i.e. diagrams which share neither a row nor a column. For example \n\n \\begin{center}\n\\ytableausetup{nosmalltableaux}\n\\begin{ytableau}\n*(gray) & *(gray) & c \\\\\nd & e \\\\\nf \n\\end{ytableau} \\end{center}\n\n\\noindent contains two disjoint subdiagrams, $\\lambda_1 = (1)$ and $\\lambda_2 = (2,1)$. Let us group the summands in the image in such a way that they all have the same coefficients and the respective disjoint subdiagrams share the same fillings. For example, consider the tableaux\n\n$$ U = \\begin{ytableau} a & b & c \\\\ d & e \\\\ f \\end{ytableau} \\quad \\text{and} \\quad S = \\begin{ytableau} \\alpha & \\beta \\end{ytableau} $$\n\n\\noindent and perform all the permutations according to the maps $c_{\\lambda}$ and $c_{\\mu}$.
Then we have to contract every single summand of the first tableau with every summand of the second, as we have said above. In all these contractions we may find a summand like\n\n\\begin{equation} \\label{SchurApoImg} \\alpha(a) \\beta(b) \\cdot \\left ( \\begin{ytableau} *(gray) & *(gray) & c \\\\ d & e \\\\ f \\end{ytableau} - \\begin{ytableau} *(gray) & *(gray) & c \\\\ f & e \\\\ d \\end{ytableau} + \\begin{ytableau} *(gray) & *(gray) & c \\\\ e & d \\\\ f \\end{ytableau} - \\begin{ytableau} *(gray) & *(gray) & c \\\\ f & d \\\\ e \\end{ytableau} \\right ). \\end{equation}\n\n\\noindent It is possible to find such elements for two reasons. First, if we consider permutations of the bigger diagram which send the erased elements to the right positions, we can find disjoint subdiagrams which share the same fillings. Moreover, if the fixed elements are contracted by a single summand of the element in $\\mathbb{S}_{\\mu} V^*$, the coefficients are always the same. We have collected every single group of skew tableaux such that the fillings of every subdiagram are permuted according to the symmetrization rules of a std skew tableau of shape $\\lambda \/ \\mu$. Hence, every such group is the image via $c_{\\lambda \/ \\mu}$ of some element of $V^{\\otimes |\\lambda| - |\\mu|}$. The element in \\eqref{SchurApoImg} is the image via $c_{\\lambda \/ \\mu}$ of the tableau\n\n$$ \\alpha(a) \\beta(b) \\cdot \\begin{ytableau} *(gray) & *(gray) & c \\\\ d & e \\\\ f \\end{ytableau} . $$ \n\n\\noindent In particular, remark that the signs due to permutations come out correctly, since permutations along rows of the skew tableau come from permutations along rows of the bigger diagram. The same happens for exchanges along columns, with the proper sign of the permutations. This proves that the image is contained in $\\mathbb{S}_{\\lambda \/ \\mu} V$.
\n\\end{proof} \n\n\\begin{remark} Proposition \\ref{imgSchurAction} tells us that this action restricts to the known apolarity maps. In the case $\\lambda = (d)$ and $\\mu = (e)$ such that $e \\leq d$ we have that $\\mathbb{S}_{\\lambda}V = \\Sym^d V$ and $\\mathbb{S}_{\\mu} V^* = \\Sym^e V^*$. It follows that $\\mathbb{S}_{\\lambda \/ \\mu} V = \\Sym^{d-e}V$ and by Proposition \\ref{imgSchurAction} the Schur apolarity action coincides with the classic apolarity action. Transposing all the diagrams, if $\\lambda = (1^d)$ and $\\mu = (1^e)$, then the Schur apolarity action coincides with the skew-symmetric apolarity action.\n\\end{remark}\n\n\\noindent We conclude with some basic definitions.\n\n\\begin{definition} \\label{schurcatdef}\nFix $f \\in \\mathbb{S}_{\\lambda}V$ and let $\\mu \\subset \\lambda$, where $\\mu$ and $\\lambda$ both have length strictly less than $\\dim(V)$. The map induced by the Schur apolarity action $\\varphi$, introduced in Definition \\ref{Schurapodef}, is\n\n$$ \\mathcal{C}^{\\lambda,\\mu}_f : \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V$$\ndefined as $\\mathcal{C}^{\\lambda,\\mu}_f(g):=\\varphi(f \\otimes g)$ for any $g \\in \\mathbb{S}_{\\mu}V^*$, and is called the {\\it catalecticant map of $\\lambda$ and $\\mu$}, or simply the {\\it catalecticant map} if no specification is needed.\n\\end{definition}\n\n\\begin{definition}\nThe {\\it apolar set to} $f \\in \\mathbb{S}_{\\lambda}V$ is the vector subspace $f^{\\perp}\\subset\\mathbb{S}^{\\bullet}V^*$ defined as\n\n$$ f^{\\perp} := \\bigoplus_{\\mu} \\ker \\mathcal{C}^{\\lambda,\\mu}_f.
$$ \n\\end{definition}\nThe apolar set of a point $f \\in \\mathbb{S}_{\\lambda} V$ will turn out to be useful in computing its structured rank.\n\\bigskip\n\n\\section{Additive rank and algebraic varieties} \\label{quartasez} \\setcounter{equation}{0}\\bigskip\n\nThis section is devoted to the description of rational homogeneous varieties and the use of the Schur apolarity action. We first recall basic facts of the theory and we conclude with a result linking additive rank decompositions and the Schur apolarity.\n\n\\begin{definition} \\label{Xrango}\nLet $X \\subset \\mathbb{P}^N$ be a non-degenerate algebraic variety and let $p \\in \\mathbb{P}^N$. The $X$-{\\it rank of $p$} is\n$$r_X(p) := \\min \\{ r\\ :\\ p \\in \\langle p_1,\\dots,p_r \\rangle,\\ \\text{with}\\ p_1,\\dots,p_r \\in X \\}. $$\n\\end{definition} \\smallskip\n\n\\noindent Let us see in detail the rational homogeneous varieties we are looking for. Given the group $G=SL(V)$, a {\\it representation of} $G$ is a vector space $W$ together with a morphism $\\rho : G \\longrightarrow GL(W)$. We use the notation $g \\cdot w$ instead of $\\rho(g) w$, with $g \\in G$ and $w \\in W$. If we choose a basis for $V$, we may identify $G$ with $SL(n)$. Consider the subgroup $H$ of diagonal matrices $x = \\operatorname{diag}(x_1,\\dots,x_n)$. An element $w\\in W$ is called a {\\it weight vector} with {\\it weight} $\\alpha = (\\alpha_1,\\dots,\\alpha_n)$, with $\\alpha_i$ integers, if\n\n$$ x \\cdot w = x_1^{\\alpha_1} \\dots x_n^{\\alpha_n} w,\\ \\text{for all}\\ x \\in H. $$ \n\n\\noindent Since $H$ is an abelian subgroup, the space $W$ can be decomposed as\n\n$$ W = \\bigoplus_{\\alpha} W_{\\alpha}, $$\n\n\\noindent where $W_{\\alpha}$ is given by all weight vectors of weight $\\alpha$ and is called a {\\it weight space}. If $W = \\mathbb{S}_{\\lambda} V$, every element of the basis $c_{\\lambda}(w_S)$, with $S$ an sstd tableau of shape $\\lambda$, is a weight vector.
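As a concrete illustration of the displayed relation, consider $W = \mathbb{S}_{(2,1)}V$ and the element $v_1 \wedge v_2 \otimes v_1 = v_1 \otimes v_2 \otimes v_1 - v_2 \otimes v_1 \otimes v_1$. The following Python sketch (an encoding of our own, only for illustration) checks that a diagonal matrix rescales it by $x_1^2 x_2$, i.e. that it is a weight vector of weight $(2,1)$, the content of the corresponding tableau; the check does not use the condition $\det(x) = 1$.

```python
# v_1 wedge v_2 tensor v_1, expanded as a linear combination of basis
# tensors: {index tuple: coefficient}, with 0-based indices
w = {(0, 1, 0): 1, (1, 0, 0): -1}

def diag_action(x, tensor):
    # diag(x_1, ..., x_n) acts factorwise, so the basis tensor
    # e_{i_1} (x) ... (x) e_{i_d} is rescaled by x_{i_1} * ... * x_{i_d}
    out = {}
    for idx, coeff in tensor.items():
        scale = 1
        for i in idx:
            scale *= x[i]
        out[idx] = coeff * scale
    return out

x = (2, 3)  # a sample diagonal matrix diag(2, 3)
acted = diag_action(x, w)
# every coefficient is rescaled by x_1^2 * x_2 = 2^2 * 3 = 12,
# so the weight of the element is (2, 1)
assert all(acted[k] == 12 * w[k] for k in w)
```

The same factorwise action recovers the weight of any basis element $c_{\lambda}(w_S)$ as the content of the tableau $S$.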
Let $B \\subset G$ be the subgroup of upper triangular matrices. A weight vector $w \\in W$ is called a {\\it highest weight vector} if $B \\cdot w = \\mathbb{C}^* \\cdot w$. It is well known that $W$ is an irreducible representation if and only if there is a unique highest weight vector up to scalar multiplication, see \\cite{fulton2013representation}. In the case of Schur modules we have\n\n\\begin{prop} \\label{hwvSchur}\nIf $W = \\mathbb{S}_{\\lambda}V$, then the only highest weight vector in $W$ is $c_{\\lambda}(v_U)$ up to scalar multiplication, where $U$ is the tableau of shape $\\lambda$ whose $i$-th row contains only the number $i$, and $v_U \\in V^{\\otimes |\\lambda|}$ is defined in Section \\ref{primasez}.\n\\end{prop} \n\n\\noindent For a proof see \\cite[Lemma 4, p. 113]{fulton1997young}. The theory of highest weights and geometry are closely related. Given a highest weight vector $v$, there is a subgroup of $G$\n\n$$ P = \\{g \\in G\\ :\\ g \\cdot [v] = [v] \\in \\mathbb{P}(W) \\} $$\n\n\\noindent called a {\\it parabolic subgroup}. The subgroup $P$ may not be normal and hence the quotient $G \/ P$ is just a space of cosets. Moreover, by the definition of $P$, the space $G\/P$ can be identified with the orbit $G \\cdot [v] \\subset \\mathbb{P}(W)$. It is a general fact that $G\/P$ is compact and hence a closed subvariety of $\\mathbb{P}(W)$ called a {\\it rational homogeneous variety}. \n\\smallskip\n\n\\noindent Coming back to the symmetric and skew-symmetric cases, the highest weight vectors of the modules $\\Sym^d V = \\mathbb{S}_{(d)} V$ and $\\superwedge^k V = \\mathbb{S}_{(1,\\dots,1)} V$, $k < \\dim(V)$, are the elements $v_1^d$ and $v_1 \\wedge \\dots \\wedge v_k$ respectively. The action of $G$ on these two elements generates the Veronese and the Grassmann varieties respectively. \n\n\\begin{example} Consider $\\lambda = (2,1)$.
Then the respective highest weight vector is determined by the tableau\n\n$$ U =\\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} $$\n\n\\noindent and $c_{\\lambda}(v_U) = v_1 \\wedge v_2 \\otimes v_1$. The action of $G$ on such an element generates a closed orbit which may be identified with the variety\n\n\\begin{align*} \\mathbb{F}(1,2;V) = \\{ (V_1,V_2)\\ :\\ V_1 \\subset V_2 \\subset V,\\ \\dim(V_1)=1,\\ &\\dim(V_2)=2 \\} \\\\\n&\\subset \\mathbb{G}(1,V) \\times \\mathbb{G}(2,V) \\end{align*}\n\n\\noindent called the {\\it flag variety} of lines in planes in $V$. \n\\end{example}\nIn general, the minimal orbit inside $ \\mathbb{P}(\\mathbb{S}_{\\lambda}V)$ is the following flag variety\n\n$$ \\mathbb{F}(k_1,\\dots,k_s; V) := \\{ (V_1,\\dots,V_s)\\ :\\ V_1 \\subset \\dots \\subset V_s \\subset V,\\ \\dim V_i = k_i \\} $$\n\n\\noindent embedded with $\\mathcal{O}(d_1,\\dots,d_s)$, where the $k_i$ and $d_i$ are integers determined by $\\lambda$ which we are going to describe in a moment. The Veronese and the Grassmann varieties appear as particular cases. Given a rational homogeneous variety $X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda}V)$, we will refer to the $X$-rank as the $\\lambda${\\it -rank} in order to underline the connection with the respective representation in which $X$ is embedded. The points of $\\lambda$-rank $1$ are of the form \n\n$$ (v_1 \\wedge \\dots \\wedge v_{k_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1} $$\n\n\\noindent and they represent the flag\n\n$$ \\langle v_1,\\dots,v_{k_1} \\rangle \\subset \\dots \\subset \\langle v_1,\\dots,v_{k_s} \\rangle.$$\n\n\\noindent Remark that the notation of tensors and flags may seem inverted, but it is coherent with the action of the group.
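\\noindent For instance, in the example above with $\\lambda = (2,1)$ one has $s = 2$, $k_1 = 1$, $k_2 = 2$ and $d_1 = d_2 = 1$, so that the points of $(2,1)$-rank $1$ are of the form\n\n$$ (v_1 \\wedge v_2) \\otimes v_1 $$\n\n\\noindent and they represent the flags $\\langle v_1 \\rangle \\subset \\langle v_1, v_2 \\rangle$; in particular the highest weight vector $c_{\\lambda}(v_U) = v_1 \\wedge v_2 \\otimes v_1$ is itself a point of $(2,1)$-rank $1$.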
\\smallskip\n \n\\begin{remark} In the classic and skew-symmetric apolarity theories, to a point of symmetric or skew-symmetric rank $1$ there is attached an ideal generated in the symmetric or exterior algebra respectively.\nIn analogy to the known apolarity theories, we give the following definition.\n\\end{remark}\n\n\n\\begin{definition} \\label{subpoint}\nLet $\\lambda = (\\lambda_1^{a_1},\\dots,\\lambda_k^{a_k})$ be a partition, where $i^j$ means that $i$ is repeated $j$ times, such that $a_1 + \\dots + a_k < n$. The variety $X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda} V)$ is the flag variety $\\mathbb{F}(h_1,\\dots,h_k; V)$ embedded with $\\mathcal{O}(d_1,\\dots,d_k)$ where\n\n$$ h_i = \\sum_{j=1}^i a_j,\\ \\text{and}\\ d_i = \\lambda_i - \\lambda_{i+1},\\ \\text{setting}\\ \\lambda_{k+1}=0. $$\n\n\\noindent A point $p \\in X$ is of the form\n\n$$ p = (v_1 \\wedge \\dots \\wedge v_{h_k})^{\\otimes d_k} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{h_1})^{\\otimes d_1}$$\n\n\\noindent and it represents the flag \n\n$$W_1 = \\langle v_1, \\dots, v_{h_1} \\rangle \\subset \\dots \\subset W_k = \\langle v_1,\\dots,v_{h_k} \\rangle. $$\n\n\\noindent We may assume that the annihilators of the $W_i$ are generated by\n\n$$W_1^{\\perp} = \\langle x_{h_1 + 1},\\dots, x_n \\rangle \\supset \\dots \\supset W_k^{\\perp} = \\langle x_{h_k+1},\\dots,x_n \\rangle. $$\nConsider the spaces\n\n$$\\Sym^1 W_k^{\\perp},\\ \\Sym^{d_k+1} W_{k-1}^{\\perp},\\dots,\\ \\Sym^{d_k+\\dots+d_2+1} W_1^{\\perp}$$\nwhich we will refer to as ``generators''. We define the {\\it ideal $I(p)$ associated to the point} $p$ as the ideal generated by the generators inside the graded ring $\\left(\\mathbb{S}^{\\bullet} V^*,\\ \\mathcal{M}^{\\lambda,\\mu}_{\\nu} \\right )$ as described in Remark \\ref{gradedring}, where $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ are the multiplication maps introduced in Definition \\ref{defmapmult}.
\n\\end{definition}\n\n\\begin{prop} \\label{PropIdealKill} Let $p \\in \\mathbb{S}_{\\eta} V$ be a point of $\\eta$-rank $1$. Then the associated ideal $I(p)$ is such that all its elements kill $p$ via the Schur apolarity action. \n\\end{prop}\n\n\\begin{proof}\nFirst remark that the generators kill $p$ via the Schur apolarity action. Then it suffices to prove that, given $g$ such that $\\varphi(g \\otimes p)=0$, the product of $g$ and $h$ is still apolar to $p$ for any $h \\in \\mathbb{S}^{\\bullet} V^*$. Without loss of generality we can assume that $g \\in \\mathbb{S}_{\\lambda}V^*$ and $h \\in \\mathbb{S}_{\\mu} V^*$. Consider a partition $\\nu$ such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$. Let us denote the element $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h)$ simply by $g \\cdot h$. Clearly if $\\nu \\not \\subset \\eta$, then $g \\cdot h$ kills $p$ by Definition \\ref{Schurapodef}. Otherwise, to prove that $g \\cdot h$ kills $p$ it is enough to recall the description of the multiplication maps given in Definition \\ref{defmapmult}, and to consider the diagram \\medskip\n\n\\begin{tikzcd}\n & \\mathbb{S}_{\\lambda}V^* \\otimes \\mathbb{S}_{\\mu}V^* \\otimes \\mathbb{S}_{\\eta} V \\arrow[ld] \\arrow[rd] & \\\\\n\\mathbb{S}_{\\nu} V^* \\otimes \\mathbb{S}_{\\eta} V \\arrow[rdd] & & \\mathbb{S}_{\\mu}V^* \\otimes \\mathbb{S}_{\\eta \/ \\lambda} V \\arrow[d] \\\\\n & & \\mathbb{S}_{\\nu \/ \\lambda} V^* \\otimes \\mathbb{S}_{\\eta \/ \\lambda} V \\arrow[ld] \\\\\n & \\mathbb{S}_{\\eta \/ \\nu}V & \n\\end{tikzcd}\n\n\\noindent from which it follows that $\\varphi(g\\cdot h \\otimes p)=0$, since $\\varphi( h \\otimes \\varphi(g \\otimes p)) = \\varphi( h \\otimes 0) =0$ for any $h \\in \\mathbb{S}_{\\mu}V^*$ by the hypothesis.\n\\end{proof}\n\n\\begin{remark}\nGiven any element $f \\in \\mathbb{S}_{\\eta} V$, from the proof of Proposition \\ref{PropIdealKill} it follows that $f^{\\perp} \\subset \\mathbb{S}^{\\bullet} V^*$ is an ideal.
Indeed, one needs only to prove the following fact. Given any element $g \\in f^{\\perp}$, which without loss of generality we can assume belongs to $f^{\\perp} \\cap \\mathbb{S}_{\\lambda} V^*$ for some partition $\\lambda$, and given any element $h \\in \\mathbb{S}_{\\mu}V^*$ for a partition $\\mu$, the product\n\n$$ g \\cdot h := \\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h) \\in \\mathbb{S}_{\\nu} V^*,$$\nwhere we assume that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ and $\\nu \\subset \\eta$, is still an element of $f^{\\perp}$, i.e. it holds\n\n$$\\varphi(g \\cdot h \\otimes f)=0. $$\nTo prove this it is enough to apply verbatim the proof of Proposition \\ref{PropIdealKill}.\n\\end{remark}\n\n\\begin{remark}\nThe choice of the integers appearing on the symmetric powers of the generators depends on the embedding of the variety. For instance consider the flag variety $\\mathbb{F}(1,2,3;V)$ embedded with $\\mathcal{O}(1,1,1) $ in $\\mathbb{S}_{(3,2,1)} V$. Let $\\{v_1,\\dots,v_n\\}$ and $\\{x_1,\\dots,x_n\\}$ be bases of $V$ and $V^*$ respectively, dual to each other. Consider the $(3,2,1)$-rank $1$ tensor \n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1. $$\nAccording to the notation of Definition \\ref{subpoint} we have that\n\n$$W_1 = \\langle v_1 \\rangle \\subset W_2 = \\langle v_1, v_2 \\rangle \\subset W_3 = \\langle v_1,v_2,v_3 \\rangle$$\nand\n$$W_1^{\\perp} =\\langle x_2,\\dots,x_n \\rangle \\supset W_2^{\\perp} = \\langle x_3,\\dots,x_n \\rangle \\supset W_3^{\\perp} = \\langle x_4,\\dots,x_n \\rangle.$$\nFrom Definition \\ref{subpoint} the generators are\n\n$$\\Sym^1 W_3^{\\perp},\\ \\Sym^2 W_2^{\\perp}\\ \\text{and}\\ \\Sym^3 W_1^{\\perp}.$$\nConsider the element $x_2^2x_3 \\in \\Sym^3 W_1^{\\perp}$. It is easy to see that $\\varphi(t \\otimes x_2^2x_3) = 0$ since either $x_2$ or $x_3$ is always evaluated on the part of $t$ representing the subspace $W_1$ of the flag.
\n\n\\noindent On the other hand, consider the same variety embedded with $\\mathcal{O}(1,2,1)$ in $\\mathbb{S}_{(4,3,1)} V$. The element analogous to $t$ is\n\n$$s = v_1 \\wedge v_2 \\wedge v_3 \\otimes (v_1 \\wedge v_2)^{\\otimes 2} \\otimes v_1. $$\nRemark that the linear spaces $W_i$ and $W_i^{\\perp}$ are the same as before, for $i = 1,2,3$. However, it is easy to see that the element $x_2^2x_3$ is no longer apolar to $s$, indeed\n\n$$\\varphi(s \\otimes x_2^2x_3) = v_1 \\wedge v_2 \\otimes (v_1)^{\\otimes 3}.$$\nTherefore, we need to change the powers appearing on the generators as described in Definition \\ref{subpoint} to get as generators the spaces\n\n$$\\Sym^1 W_3^{\\perp},\\ \\Sym^2 W_2^{\\perp}\\ \\text{and}\\ \\Sym^4 W_1^{\\perp}.$$\nThis will allow us to evaluate elements of $W_1^{\\perp}$ on the part of the tensor $s$ representing $W_1$.\n\\end{remark}\n\n\\begin{remark}[Restriction to the known apolarities]\nLet $p = l^d \\in \\nu_d(\\mathbb{P}^{n-1})$ be a point of a Veronese variety. The point $p$ represents a line contained in $V$ and hence the respective annihilator is generated by $n-1$ linear forms. Applying Definition \\ref{subpoint} we get the ideal $I(p) \\subset \\mathbb{S}^{\\bullet}V^*$. In particular, remark that the multiplication maps $\\Sym^d V^* \\otimes \\Sym^e V^* \\longrightarrow \\Sym^{d+e}V^*$ are involved in the definition. Hence it is not hard to check that the intersection $I(p) \\cap \\Sym^{\\bullet} V^*$ is the usual ideal of the point $p$ contained in $\\Sym^{\\bullet} V^*$ used in the classic apolarity theory. See \\cite{iarrobino1999power} for more details.\n\n\\noindent The same happens when we consider $p = v_1 \\wedge \\dots \\wedge v_k \\in \\superwedge^k V$ a point of a Grassmannian $\\mathbb{G}(k,V)$. Recall that such a point represents a $k$-dimensional subspace $W$ of $V$. The annihilator $W^{\\perp}$ of $W$ is $(\\dim(V)-k)$-dimensional. 
Applying Definition \\ref{subpoint} we get the ideal $I(p)$ generated inside $\\mathbb{S}^{\\bullet}V^*$ using the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ introduced in Definition \\ref{defmapmult}. In particular, remark that the multiplication maps $\\superwedge^d V^* \\otimes \\superwedge^e V^* \\longrightarrow \\superwedge^{d+e} V^*$ are involved in the definition. Hence the intersection $I(p) \\cap \\superwedge^{\\bullet} V^*$ is the usual ideal of the point $p$ contained in $\\superwedge^{\\bullet} V^*$ used in the skew-symmetric apolarity theory. See \\cite{arrondo2021skew} for more details.\n\\end{remark}\n\n\\begin{example}\nLet $ V = \\mathbb{C}^4$ and $\\lambda = (2,2)$. The minimal orbit inside $\\mathbb{P}(\\mathbb{S}_{(2,2)} \\mathbb{C}^4)$ is the Grassmann variety $X = (\\mathbb{G}(2,\\mathbb{C}^4),\\mathcal{O}(2))$. Let $\\{v_1,\\dots,v_4 \\}$ be a basis of $\\mathbb{C}^4$ and $\\{x_1,\\dots,x_4\\}$ be the respective dual basis of $(\\mathbb{C}^4)^*$. Let $p = (v_1 \\wedge v_2)^{\\otimes 2} \\in \\mathbb{S}_{(2,2)} \\mathbb{C}^4$ be a point of $(2,2)$-rank $1$. It is the element associated to the sstd tableau\n\\begin{center} \\begin{ytableau} 1 & 1 \\\\ 2 & 2 \\end{ytableau} \\end{center}\nand it represents the subspace $W_1$ spanned by $v_1$ and $v_2$. Hence the annihilator is spanned by $x_3$ and $x_4$. One can readily check that \n\n$$ I(p) \\cap \\mathbb{S}_{(1)} (\\mathbb{C}^4)^* = \\langle x_3,x_4 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* = \\langle x_i \\wedge x_j\\ :\\ j \\in \\{3,4\\} \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2)} (\\mathbb{C}^4)^* = \\langle x_i x_j\\ :\\ j \\in \\{3,4\\} \\rangle$$\nusing the maps $\\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\otimes \\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{\\mu}(\\mathbb{C}^4)^*$, where $\\mu = (2), (1,1)$, restricted to $\\Sym^1 W_1^{\\perp}$. \n\n\\noindent Consider now $\\mu = (2,1)$. 
We have that $I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^4)^*$ is given as the span of the images of the maps $\\mathcal{M}^{(1),(2)}_{(2,1)}$ and $\\mathcal{M}^{(1),(1,1)}_{(2,1)}$ restricted to $I(p)$ in one of the factors in the domain. For instance the map $\\mathcal{M}^{(1),(1,1)}_{(2,1)}$ restricted to $\\langle x_3, x_4 \\rangle \\otimes \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^*$ has as image\n\n\\begin{align*} \\langle x_i \\wedge \\alpha \\otimes \\beta + \\beta \\wedge \\alpha \\otimes x_i -&x_i \\wedge \\beta \\otimes \\alpha - \\alpha \\wedge \\beta \\otimes x_i,\\ \\\\ \n&\\text{for all}\\ \\alpha \\wedge \\beta \\in \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^*,\\ i = 3,\\ 4 \\rangle. \\end{align*}\nOn the other hand, if we consider the same map restricted to $\\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\otimes (I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* )$, we get as image \n\n$$ \\langle \\alpha \\wedge \\beta \\otimes \\gamma - \\alpha \\wedge \\gamma \\otimes \\beta,\\ \\text{for all}\\ \\alpha \\in V^*,\\ \\beta \\wedge \\gamma \\in I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* \\rangle.$$\nIf one considers the span of all such images together with the ones obtained from the map $\\mathcal{M}^{(1),(2)}_{(2,1)}$, one gets that\n\n$$ I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^4)^* = \\langle c_{(2,1)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3 or 4} \\rangle.$$\nAnalogously one can compute that\n\n$$ I(p) \\cap \\mathbb{S}_{(2,2)} (\\mathbb{C}^4)^* = \\langle c_{(2,2)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3 or 4} \\rangle.$$\nNote that all the elements in such spaces kill $p$ via the Schur apolarity action.\n\\end{example}\n\\smallskip\n\n\\begin{example}\nLet $ V = \\mathbb{C}^3$ and $\\lambda = (2,1)$. The minimal orbit inside $\\mathbb{P}(\\mathbb{S}_{(2,1)} \\mathbb{C}^3)$ is the Flag variety $X = (\\mathbb{F}(1,2;\\mathbb{C}^3),\\mathcal{O}(1,1))$. 
Let $\\{v_1,\\dots,v_3 \\}$ be a basis of $\\mathbb{C}^3$ and $\\{x_1,\\dots,x_3\\}$ be the respective dual basis of $(\\mathbb{C}^3)^*$. Let $p = v_1 \\wedge v_2 \\otimes v_1 \\in \\mathbb{S}_{(2,1)} \\mathbb{C}^3$ be a point of $(2,1)$-rank $1$. It is the element associated to the sstd tableau\n\\begin{center} \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} \\end{center}\nand it represents the line $W_1$ generated by $v_1$ contained in the subspace $W_2$ spanned by $v_1$ and $v_2$. Hence the annihilators are $W_1^{\\perp} = \\langle x_2,\\ x_3 \\rangle$ and $W_2^{\\perp} = \\langle x_3 \\rangle$. In this case the generators are\n$$\\Sym^1 W_2^{\\perp} = \\langle x_3 \\rangle\\ \\text{and}\\ \\Sym^2 W_1^{\\perp} = \\langle x_2^2, x_2x_3, x_3^2 \\rangle. $$\nFor any $\\mu \\subset (2,1)$ we can check that the subspace $I(p)$ associated to $p$ is such that\n\n$$ I(p) \\cap \\mathbb{S}_{(1)} (\\mathbb{C}^3)^* = \\langle x_3 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^3)^* = \\langle x_1 \\wedge x_3, x_2 \\wedge x_3 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2)} (\\mathbb{C}^3)^* = \\langle x_1 x_3, x_2 x_3, x_3^2, x_2^2 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^3)^* = \\langle c_{(2,1)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3} \\rangle.$$\n\\end{example} \\smallskip\n\nAs recalled in the Introduction, in the classic (respectively skew-symmetric) apolarity theory, a result called the {\\it apolarity lemma} (respectively the {\\it skew-symmetric apolarity lemma}) links the symmetric (respectively skew-symmetric) rank of a point to the equivalent condition of the inclusion of the ideal of points of rank $1$ inside the apolar set of the point. We now come to an analogous result, called the {\\it lemma of Schur apolarity}, cf.~Theorem \\ref{LemmaSchurApo}. At first we need a preparatory lemma.\n\n\\begin{lemma} \\label{topdeg}\nLet $\\lambda$ be a partition of length less than $n$. 
Let $p \\in X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda} V)$ be a point of $\\lambda$-rank $1$. Then we have the equality \n\n$$ I(p)_{\\lambda} = (p^{\\perp})_{\\lambda} $$\n\n\\noindent where $p^{\\perp}$ is introduced in Definition \\ref{schurcatdef}, and where $I(p)_{\\lambda}$ and $(p^{\\perp})_{\\lambda}$ denotes $I(p) \\cap \\mathbb{S}_{\\lambda} V^*$ and $(p^{\\perp})\\cap \\mathbb{S}_{\\lambda} V^*$ respectively. \n\\end{lemma}\n\n\\begin{proof}\nWe present here a proof only for the case in which $\\lambda$ is a partition such that its diagram is union of two rectangles, being the general case identical but more cumbersome in terms of notation. \n\n\\noindent Assume that $\\lambda$ is given by the union of two rectangles. This means that $\\lambda = ((d+e)^k,e^{h-k})$, where $d,e >0$ and $0 \\mathbb{S}_{(b^k)} \\mathbb{C}^n$ if $a > b$. Hence the most square catalecticant map is the one given by $h = \\lceil \\frac{d}{2} \\rceil$. \n\\end{remark} \\smallskip\n\nWe discuss now the case of any flag variety. The situation here is a bit different as the following example shows.\n\n\\begin{example} \\label{EsempioFakeRk2}\nConsider the complete flag variety $\\mathbb{F}(1,2,3;\\mathbb{C}^4)$ embedded with $\\mathcal{O}(1,1,1)$ in $\\mathbb{P}(\\mathbb{S}_{(3,2,1)} \\mathbb{C}^4)$. Consider the element $ t$\n\n\\begin{equation} \\label{FakeRk2} t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_3 \\otimes v_2 \\wedge v_3 \\otimes v_3 \\in \\mathbb{P}(\\mathbb{S}_{(3,2,1)} \\mathbb{C}^4). \\end{equation}\nBy \\eqref{FakeRk2} the element is written as a sum of two $(3,2,1)$-rank $1$ elements representing the flags \n\n$$\\langle v_1 \\rangle \\subset \\langle v_1, v_2 \\rangle \\subset \\langle v_1,v_2,v_3 \\rangle, \\quad \\langle v_3 \\rangle \\subset \\langle v_2, v_3 \\rangle \\subset \\langle v_1,v_2,v_3 \\rangle.$$\nHence the $\\lambda$-rank of $t$ is at most $2$. 
Consider for a moment only the first addend\n\n$$ t_1 = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 $$\nand consider the catalecticant map $\\mathcal{C}^{(3,2,1),(2)}_{t_1} : \\mathbb{S}_{(2)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(3,2,1)\/(2)} \\mathbb{C}^4$. From Example \\ref{esschurapoflag} we have that\n\n\\begin{align*}\n&\\mathcal{C}^{(3,2,1),(2)}_{t_1}(\\alpha \\beta) = \\\\\n&= \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_1) + \\alpha(v_1)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_2 \\otimes v_1 + \\\\\n&- \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_2) + \\alpha(v_2)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_1 \\otimes v_1.\n\\end{align*}\nIt is easy to see that\n\n$$\\ker \\mathcal{C}^{(3,2,1),(2)}_{t_1} = \\langle x_1x_4, x_2x_4,x_3x_4,x_4^2,x_3^2 \\rangle$$\nand hence $ \\operatorname{rk}(\\mathcal{C}^{(3,2,1),(2)}_{t_1}) = 5$. Clearly this equality holds for any $t_1 \\in \\mathbb{F}(1,2,3;\\mathbb{C}^4)$. Coming back to $t$, if we consider the same catalecticant map we get\n\n$$\\ker \\mathcal{C}^{(3,2,1),(2)}_t = \\langle x_1x_4, x_2x_4,x_3x_4,x_4^2 \\rangle$$\nwhich means that $ \\operatorname{rk}(\\mathcal{C}^{(3,2,1),(2)}_{t}) = 6$. This implies that $t$ cannot have $(3,2,1)$-rank $1$ and hence the decomposition \\eqref{FakeRk2} is minimal. \n\n\\noindent Differently from the case of Grassmann varieties, chopping a column of the diagram of $(3,2,1)$ does not give the right information about the $(3,2,1)$-rank $1$ tensors. 
Indeed if we consider the partition $(1,1,1) \\subset (3,2,1)$ together with the respective catalecticant map we have that\n\n$$ \\ker \\mathcal{C}^{(3,2,1),(1,1,1)}_t = \\langle x_1 \\wedge x_2 \\wedge x_4, x_1 \\wedge x_3\\wedge x_4, x_2 \\wedge x_3 \\wedge x_4 \\rangle$$\nand hence $\\operatorname{rk}(\\mathcal{C}^{(3,2,1),(1,1,1)}_{t}) = 1$ which happens also for any $t \\in \\mathbb{F}(1,2,3;\\mathbb{C}^4)$. This is because the two flags appearing in the decomposition share the same biggest subspace $\\langle v_1,v_2,v_3 \\rangle$.\n\\end{example}\n\nEven though it seems that there is no connection between the rank of catalecticant maps and the $\\lambda$-rank of tensors, we can give a lower bound on the $\\lambda$-rank of a tensor. Let $X= \\left(\\mathbb{F}(n_1,\\dots,n_s;\\mathbb{C}^n),\\mathcal{O}(d_1,\\dots,d_s)\\right)$ be the minimal orbit in $\\mathbb{P}(\\mathbb{S}_{\\lambda} V)$. With this notation we have that\n\n$$ \\lambda =\\left ((d_1+\\dots+d_s)^{n_1},(d_2+\\dots+d_s)^{n_2-n_1},\\dots,d_s^{n_s-n_{s-1}} \\right). $$\nHence the Young diagram of $\\lambda$ is given by $d_s$ columns of length $n_s$, then $d_{s-1}$ columns of length $n_{s-1}$ and so on up to $d_1$ columns of length $n_1$. For example if $X=\\left(\\mathbb{F}(1,2,4;\\mathbb{C}^5),\\mathcal{O}(1,2,2)\\right)$, then $\\lambda =(5,4,2^2)$ and its Young diagram is \n\n$$ \\ydiagram{5,4,2,2}.$$\nLet $\\lambda$ be a partition such that it has $d_i$ columns of length $n_i$, with $1 \\leq i \\leq s$. Consider the catalecticant map determined by $(e^{n_s}) \\subset \\lambda$ with $e \\leq d_s$, i.e. the one that removes the first $e$ columns of length $n_s$ from the diagram of $\\lambda$. 
Then it is easy to see that $\\mathbb{S}_{\\lambda \/ (e^{n_s})} V \\simeq \\mathbb{S}_{\\mu_e} V$\nwhere\n\n\\begin{align}\\label{FormulambdaU}\\mu_e = ((d_1+\\dots+d_{s-1}+(d_s-e))^{n_1}&,\\ (d_2+\\dots+d_{s-1}+(d_s-e))^{n_2-n_1},\\dots \\\\ &\\dots, (d_{s-1}+(d_s-e))^{n_{s-1}-n_{s-2}},(d_s-e)^{n_s}) \\nonumber\n\\end{align}\ni.e. $\\lambda$ with the first $e$ columns of length $n_s$ removed. \n\n\n\n\\begin{algorithm} \\label{algoritmo2} In order to get a bound on the $\\lambda$-rank using catalecticants, we provide an algorithm that after the computation of a sequence of ranks of ``consecutive'' catalecticant maps returns a lower bound on the $\\lambda$-rank of the given tensor. The last registered rank will be a lower bound on the $\\lambda$-rank of $t$.\n\n\\noindent Before proceeding with the procedure, we need a preparatory fact.\n\n\\begin{prop} \\label{PropCheServe}\nLet $\\lambda$ be a partition with $d_i$ columns of length $n_i$, with $i= 1,\\dots,s$, and let $t \\in \\mathbb{S}_{\\lambda} V$. If \n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,\\left ( \\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t = 1,\\ \\text{then for any}\\ 1 \\leq e \\leq d_s,\\ \\text{we have that}\\ \\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1.$$\n\\end{prop}\n\\begin{proof}\nLet us prove the contraposition, i.e. if there exists an $e \\in \\{1,\\dots,d_s\\}$ such that \n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t > 1,\\ \\text{then}\\ \\operatorname{rk} \\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t > 1. $$\nAssume that for a certain $e \\in \\{1,\\dots,d_s\\}$ it happens that $\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t > 1$. From the hypothesis we can assume that the $\\lambda$-rank of $t$ is at least $2$, otherwise it can be easily seen that the rank of $\\mathcal{C}^{\\lambda,(e^{n_s})}_t$ would be equal to $1$ which is against the hypothesis. 
Assume then that $t = t_1 + \\dots + t_r$ has $\\lambda$-rank $r \\geq 2$, where every $t_i$ has $\\lambda$-rank $1$ and it is written as\n\n$$ t_i = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor some vectors $v_{1,i},\\dots,v_{n_s,i} \\in V$, for all $i = 1,\\dots,r$. The catalecticant map $\\mathcal{C}^{\\lambda,(e^{n_s})}_t$ clearly acts on the first products $(v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s}$ of every $t_i$. Since the rank of such map is at least $2$, we can find at least two points $t_i$ and $t_j$ such that\n\n$$(v_{1,i} \\wedge \\dots \\wedge v_{n_s,i}) \\neq (v_{1,j} \\wedge \\dots \\wedge v_{n_s,j})$$\nand also such that the images via the catalecticant map of the respective duals $(x_{1,i} \\wedge \\dots \\wedge x_{n_s,i})^{\\otimes e}$ and $(x_{1,j} \\wedge \\dots \\wedge x_{n_s,j})^{\\otimes e}$ are linearly independent. \nConsider now the catalecticant map \n\n$$\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t : \\mathbb{S}_{\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}} V^* \\longrightarrow \\mathbb{S}_{\\mu_{\\lceil \\frac{d_s}{2} \\rceil}} V$$\nand the elements $(x_{1,i} \\wedge \\dots \\wedge x_{n_s,i})^{\\otimes \\lceil \\frac{d_s}{2} \\rceil}$ and $(x_{1,j} \\wedge \\dots \\wedge x_{n_s,j})^{\\otimes \\lceil \\frac{d_s}{2} \\rceil}$. It is clear that they are linearly independent and that their images via $\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t$ are also linearly independent. Therefore we get that the rank of $\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t$ is at least $2$. 
By contraposition we get that if $\\operatorname{rk} \\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t = 1,$ then $\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1$ for any $1 \\leq e \\leq d_s$.\nThis concludes the proof.\n\\end{proof}\n\n\\noindent Now we describe the algorithm, which provides a lower bound on the $\\lambda$-rank. Let $t \\in \\mathbb{S}_{\\lambda} V$ and consider\nthe catalecticant map that removes half of the $d_s$ columns of maximal length from the diagram of $\\lambda$, rounded up to the next integer if needed. Compute the rank of such a catalecticant map \n\n$$\\mathcal{C}^{\\lambda,(\\lceil \\frac{d_s}{2} \\rceil^{n_s})}_t : \\mathbb{S}_{(\\lceil \\frac{d_s}{2} \\rceil^{n_s})} V^* \\longrightarrow \\mathbb{S}_{\\mu_{\\lceil \\frac{d_s}{2} \\rceil}} V,$$\nwhere $\\mu_{\\lceil \\frac{d_s}{2} \\rceil}$ denotes the partition as in \\eqref{FormulambdaU}. If the rank of the catalecticant is strictly greater than $1$, the algorithm stops and outputs this number. Such a number is a lower bound on the $\\lambda$-rank of $t$. Otherwise, by Proposition \\ref{PropCheServe} we get that\n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1,$$\nfor any $1 \\leq e \\leq d_s$. Hence we can consider the catalecticant with $e = d_s$, whose image will be generated by a unique element up to scalar multiplication. Such an image is contained in $\\mathbb{S}_{\\mu_{d_s}} V$, where $\\mu_{d_s}$ denotes the partition with $d_i$ columns of length $n_i$, with $i = 1,\\dots,s-1$, in accordance with the notation used in \\eqref{FormulambdaU}. Choose a generator $t_1$ of the image of the chosen catalecticant map. Then set $\\lambda = \\mu_{d_s}$ and $t = t_1$ and repeat the previous steps. 
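In the simplest instance of this rank test, the classical symmetric case recalled in the remark on the restriction to the known apolarities (a Veronese variety, so a single step with $s = 1$), the catalecticant of a binary form is the usual Hankel matrix, and the ``most square'' choice is exactly $e = \lceil d/2 \rceil$. The following Python sketch illustrates the test under these assumptions; the helper names `matrix_rank` and `catalecticant` are ours and not part of the text:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix with rational entries, via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot in this column below the rows already used
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def catalecticant(a, e):
    """Hankel catalecticant Cat_{e, d-e} of the binary form
    f = sum_i binom(d, i) * a[i] * x^(d-i) * y^i: an (e+1) x (d-e+1) matrix."""
    d = len(a) - 1
    return [[a[j + k] for k in range(d - e + 1)] for j in range(e + 1)]

d = 3
e = (d + 1) // 2            # the most square catalecticant, e = ceil(d/2)
f1 = [1, 0, 0, 0]           # f = x^3, a power of a linear form (rank-1 point)
f2 = [1, 0, 0, 1]           # f = x^3 + y^3
print(matrix_rank(catalecticant(f1, e)))  # 1: the test does not rule out rank 1
print(matrix_rank(catalecticant(f2, e)))  # 2: a lower bound on the symmetric rank
```

As in the algorithm above, a rank strictly greater than $1$ certifies that the form is not a power of a linear form, and the computed rank is a lower bound on its symmetric rank.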
\\smallskip\n\n\\noindent Generalizing, at the $i$-th step we set $\\lambda = \\lambda_i$, where $\\lambda_i$ is the diagram with $d_j$ columns of length $n_j$ for $j = 1,\\dots,s-i+1$, and we set $t = t_i$, where $t_i \\in \\mathbb{S}_{\\lambda_i} V$ is the generator of the one-dimensional image obtained from the previous step of the algorithm. Then compute \n\n$$\\operatorname{rk} \\left ( \\mathcal{C}^{\\lambda_i,\\left( \\left \\lceil \\frac{d_{s-i+1}}{2} \\right \\rceil^{n_{s-i+1}} \\right)}_{t_i} \\right ).$$ \nIf it is strictly greater than $1$ the algorithm outputs this number and stops. Otherwise consider the catalecticant map that removes all the $d_{s-i+1}$ columns of length $n_{s-i+1}$. Compute the generator $t_{i+1} \\in \\mathbb{S}_{\\lambda_{i+1}} V$ of the image of this last catalecticant map, set $\\lambda = \\lambda_{i+1}$ and $t = t_{i+1}$, and move to the $(i+1)$-th step. \n\n\\noindent If the rank of every catalecticant map we compute along the procedure is $1$, the algorithm outputs $1$. Obviously the output of the algorithm is an integer greater than or equal to $1$ and it is a lower bound on the $\\lambda$-rank of $t$. Indeed, preliminarily we have\n\n\\begin{prop} \\label{RanghiConsecutivi}\nConsider the flag variety $\\mathbb{F}(n_1,\\dots,n_s;V)$ embedded with $\\mathcal{O}(d_1,\\dots,d_s)$ in $\\mathbb{P}(\\mathbb{S}_{\\lambda} V)$. Let $t \\in \\mathbb{S}_{\\lambda} V$ be any point. If $t$ has $\\lambda$-rank $1$, then the Algorithm \\ref{algoritmo2} outputs a $1$. The converse is true if $d_i \\geq 2$ for any $i = 1,\\dots,s$. \n\\end{prop}\n\n\\begin{proof}\nAssume that $t$ has $\\lambda$-rank $1$, i.e.\n\n$$ t = (v_1 \\wedge \\dots \\wedge v_{n_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1} $$\nfor some $v_i \\in V$. The image of the first catalecticant map of the algorithm, i.e. 
the one determined by $e=1$ and the partition $(1^{n_s})$, is the span of\n\n$$ (v_1 \\wedge \\dots \\wedge v_{n_{s}})^{\\otimes d_{s}-1} \\otimes (v_1 \\wedge \\dots \\wedge v_{n_{s-1}})^{\\otimes d_{s-1}} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1}$$\nand hence the map has rank $1$ since $t$ is nonzero. The same happens for the next steps until $e = \\lfloor \\frac{d_s}{2} \\rfloor$. Therefore consider the catalecticant map that removes all the first $d_s$ columns and consider the only generator $t_{s-1}$ of its image\n\n$$ t_{s-1} = (v_1 \\wedge \\dots \\wedge v_{n_{s-1}})^{\\otimes d_{s-1}} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1}. $$\nAt this point set $t = t_{s-1}$ and $\\lambda = \\mu_{d_s}$ and repeat the previous steps. It is obvious that the algorithm will not stop when computing the rank of any catalecticant map since every such rank is equal to $1$. Therefore it eventually outputs $1$.\n\n\\noindent For the converse part, assume that $d_i \\geq 2$ for any $i$ and suppose that the output of the algorithm is $1$. Suppose that $t$ has a $\\lambda$-rank $r$ decomposition $t = t_1 + \\dots + t_r$, with $t_i$ of $\\lambda$-rank $1$. We may assume that\n\n$$ t_i = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor some vectors $v_{1,i},\\dots,v_{n_s,i} \\in V$, for all $i = 1,\\dots,r$. For any $1 \\leq e \\leq d_s$, the image of the catalecticant map is one-dimensional by Proposition \\ref{PropCheServe} and it is contained in $\\langle t_1^e,\\dots,t_r^e \\rangle$, where $t_i^e$ denotes\n\n$$ t_i^e = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s-e} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor all $i = 1,\\dots, r$. 
We can assume that the element $(x_{1,1} \\wedge \\dots \\wedge x_{n_s,1})^{\\otimes e}$, defined by taking the elements $x_{i,j}$ dual to the vectors appearing in the biggest subspace associated to $t_1$, is not apolar to $t$. Hence its image is a generator of the one-dimensional image of the respective catalecticant map. If such an element of $\\mathbb{S}_{(e^{n_s})} V^*$ is the only one with this property, then we can already conclude that the biggest subspace must be the same for any $t_i$. Otherwise, if we could find another element defined in the same way as $(x_{1,1} \\wedge \\dots \\wedge x_{n_s,1})^{\\otimes e}$ but using this time one of the other $t_i$'s, then its image must be a scalar multiple of the image we have already obtained. Hence a certain linear combination of these two elements is apolar to $t$. On the other hand, in the respective images of the two selected elements of $\\mathbb{S}_{(e^{n_s})} V^*$ there appear also the tensors $t_1^e$ and $t_i^e$, which are linearly independent unless $e = d_s$ and $t_1^{d_s} = t_i^{d_s}$, which happens only if the points $t_1$ and $t_i$ share the same remaining part of the flag. However, since we are assuming $d_s > 1$, and since the algorithm is set to pick $e = \\lceil \\frac{d_s}{2} \\rceil$, we are always considering $e < d_s$, which allows us to avoid such a problem. Hence, if the rank of the catalecticant is $1$, we get that all the subspaces are the same. Therefore the image of the catalecticant map is one-dimensional and is generated by $t' = t_1^{d_s} + \\dots + t_r^{d_s}$. At this point the proof is just a repetition of the previous reasoning until one gets $t_1 = \\dots = t_r$, i.e. $t$ has $\\lambda$-rank $1$. This concludes the proof. \n\\end{proof}\n\n\\begin{cor} \\label{RanghiConsecutivi2}\nLet $t \\in \\mathbb{S}_{\\lambda} V$ and suppose that the output of the Algorithm \\ref{algoritmo2} applied to $t$ is $r$. 
Then $t$ has $\\lambda$-rank greater than or equal to $r$.\n\\end{cor}\n\n\n\n\n\\noindent We briefly describe the Algorithm \\ref{algoritmo2} \\index{algorithm!that computes a lower bound on the $\\lambda$-rank} in general for any $t \\in \\mathbb{S}_{\\lambda} V$. \\medskip\n\n\\hrule \\noindent {\\bf Algorithm \\ref{algoritmo2}.} \\hrule \\medskip\n\n\\noindent {\\bf Input}: A partition $\\lambda$ and an element $t \\in \\mathbb{S}_{\\lambda} V$, where the related minimal orbit inside the projectivization of the space is $X = \\left(\\mathbb{F}(n_1,\\dots,n_s;V),\\mathcal{O}(d_1,\\dots,d_s)\\right)$.\n\n\\noindent {\\bf Output}: A lower bound on the $\\lambda$-rank of $t$.\n\\smallskip\n\n\\begin{enumerate}[nosep, label = \\arabic*)]\n\\item set $i =s$;\n\\item set $r = 0$;\n\\item \\label{Sicomincia} {\\bf if} $i = 0$, {\\bf then}\n\\item $\\quad$ {\\bf print} $1$ {\\bf and exit};\n\\item set $r = \\operatorname{rk}\\mathcal{C}^{\\lambda,(\\lceil \\frac{d_i}{2} \\rceil^{n_i})}_t$;\n\\item {\\bf if} $r >1$, {\\bf then}\n\\item $\\quad$ {\\bf print} $r$ and {\\bf exit};\n\\item consider the map $\\mathcal{C}^{\\lambda,(d_i^{n_i})}_t$ and compute the only generator $t'$ of the image;\n\\item set $i = i-1$, $t = t'$ and $\\lambda = \\lambda_{i-1}$ where this last partition is the one with $d_j$ columns of length $n_j$, for $j = 1,\\dots,i-1$. Go back to \\ref{Sicomincia};\n\\end{enumerate}\n\\end{algorithm}\n{\\hrule \\medskip}\n\n\\begin{remark} \\label{CounterAlg}\nLet us highlight that if the Algorithm \\ref{algoritmo2} outputs $1$, this obviously does not imply that $t$ has $\\lambda$-rank $1$. Indeed, consider as an example of this phenomenon the partition $\\lambda = (5,3,1,1)$ and the tensor $t \\in \\mathbb{S}_{\\lambda} V$\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\wedge v_4 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_5 \\wedge v_6 \\otimes v_1 \\wedge v_2 \\otimes v_1. 
$$\nThe output of the Algorithm \\ref{algoritmo2} in this case is $1$. This is due to the fact that the two points share the same partial flag $\\langle v_1 \\rangle \\subset \\langle v_1,\\ v_2\\rangle$. Hence the kernel of the first catalecticant map of the algorithm contains in particular the element $x_1 \\wedge x_2 \\wedge x_3 \\wedge x_4 - x_1 \\wedge x_2 \\wedge x_5 \\wedge x_6$. Nonetheless, it is obvious that $t$ has $\\lambda$-rank $2$. \n\\end{remark}\n\n\n\\begin{example}[Example \\ref{EsempioFakeRk2} reprise]\nConsider the tensor\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_3 \\otimes v_2 \\wedge v_3 \\otimes v_3 \\in \\mathbb{P}(\\mathbb{S}_{(3,2,1)} \\mathbb{C}^4). $$\nFollowing the notation of the Algorithm \\ref{algoritmo2}, when $i=3$, then $\\lambda_0 = (3,2,1)$ and the first catalecticant map is \n\n$$ \\mathcal{C}^{(3,2,1),(1,1,1)}_t : \\mathbb{S}_{(1,1,1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(2,1)} \\mathbb{C}^4. $$\nAs we have already remarked this map has rank $1$, so the algorithm can continue. The generator of the image is \n\n$$ t_3 = v_1 \\wedge v_2 \\otimes v_1 + v_2 \\wedge v_3 \\otimes v_3. $$\nSet $\\lambda_1 = (2,1)$, $i=2$ and $t=t_3$ and restart the algorithm. This time we consider the catalecticant map\n\n$$ \\mathcal{C}^{(2,1),(1,1)}_t : \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(1)} \\mathbb{C}^4.$$\nIt is easy to see that this time the rank is $2$. Hence the algorithm stops and the output is $(r_0,r_1) = (1,2)$. As Proposition \\ref{RanghiConsecutivi} is going to tell us in a moment, we get that $t$ has $\\lambda$-rank at least $2$. \n\\end{example}\n\nWe now use the procedure to investigate the $\\lambda$-rank of a tensor. \n\n\\begin{prop} \\label{RanghiConsecutivi}\nConsider the flag variety $\\mathbb{F}(k_1,\\dots,k_s;n)$ embedded with $\\mathcal{O}(d_1,\\dots,d_s)$ in $\\mathbb{P}(\\mathbb{S}_{\\lambda} \\mathbb{C}^n)$. 
Let $t \\in \\mathbb{S}_{\\lambda} \\mathbb{C}^n$ be any point. Then $t$ has $\\lambda$-rank $1$ if and only if the output of the Algorithm \\ref{algoritmo2} is a sequence of $1'$s of length $s$. As a consequence, if the last integer of the sequence obtained with the Algorithm \\ref{algoritmo2} is $r \\neq 1$, then $t$ has $\\lambda$-rank at least $r$.\n\\end{prop}\n\n\\begin{proof}\nAssume that $t$ has $\\lambda$-rank $1$, i.e.\n\n$$ t = (v_1 \\wedge \\dots \\wedge v_{k_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1}. $$\nThen it is easy to see that the catalecticant map computed by the algorithm has rank $1$, where at every step $t$ is set equal to \n\n$$t_i = (v_1 \\wedge \\dots \\wedge v_{k_i})^{\\otimes d_i} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1}$$\nfor all $i $ starting from $s$ to $1$. Hence the output will be a sequence of $1$'s of length $s$. \n\\noindent On the other hand assume that the output of the Algorithm \\ref{algoritmo2} is a sequence of $1$'s of length $s$. Suppose $t = p_1 + \\dots + p_r$, where every $p_i$ has $\\lambda$-rank $1$. Hence each $p_i$ represents a flag of subspaces, each of them repeated a certain number of times. Assume that $r > 1$. Then there are two alternatives: either all the $p_i$'s share the same biggest subspace, which is coherent with the hypothesis, or there are at least two summands $p_i$ and $p_j$ whose biggest subspaces are different. In the latter case, up to choosing a suitable basis, it is easy to see that the rank of the map will be at least $2$. This contradicts the hypothesis and hence all the $p_i$ must share the same biggest subspace. Now repeating this argument at every step of the algorithm one obtains that $p_1 = \\dots = p_r$ up to scalars, i.e. 
$t$ has $\\lambda$-rank $1$.\n\\end{proof}\n\n\n\\section{Secant varieties of Flag varieties} \\label{sestasez} \\setcounter{equation}{0}\\medskip\n\nThis section is devoted to the study of the $\\lambda$-ranks appearing on the second secant variety of a Flag variety. An algorithm collecting all the results obtained will follow.\n\\medskip\n\n\\subsection{The case $\\lambda = (2,1^{k-1})$.} \\setcounter{equation}{0}\\medskip\n\nThe first case we consider is given by Flag varieties $\\mathbb{F} = \\mathbb{F}(1,k;\\mathbb{C}^n)$ embedded in $\\mathbb{P}(\\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n)$. We will see that such varieties are related to the adjoint varieties $\\mathbb{F}(1,k;\\mathbb{C}^{k+1})$. In this section only, we will refer to the $(2,1^{k-1})$-rank of a tensor simply as rank.\n\n\\begin{definition} Given a non-degenerate variety $X \\subset \\mathbb{P}^N$, we use the following definition for the {\\it tangential variety} of $X$\n\n$$\\tau (X) = \\bigcup_{p \\in X} T_p X$$\n\n\\noindent where $T_p X$ denotes the tangent space to $X$ at $p$. \n\\end{definition}\nIn order to study the elements appearing on the second secant variety of $\\mathbb{F}$, we have to understand which ranks appear on the tangential variety of $\\mathbb{F}$. Since $\\mathbb{F}$ is homogeneous, we may reduce to studying what happens on a single tangent space. By virtue of this fact, choose $p$ as the highest weight vector of this representation, i.e.\n\n\\begin{equation} \\label{hwtang} p = v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 \\end{equation}\n\n\\noindent for some $v_i \\in \\mathbb{C}^n$. This element may be represented by the sstd tableau\n\n\\begin{center} \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau} .\\end{center}\n\n\\noindent Recall the following classic result.\nLet $p=v_1 \\wedge \\dots \\wedge v_k$ be a point of $\\mathbb{G}(k, V) \\subset \\mathbb{P} (\\superwedge^k V)$. 
Then \n\n\\begin{equation} \\label{TangGrass} \\widehat{T_p \\mathbb{G}(k, V)} = \\sum_{i=1}^k v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge V \\wedge v_{i+1} \\wedge \\dots \\wedge v_k. \\end{equation}\n\n\\begin{prop}\nLet $p \\in \\mathbb{F}$ be the highest weight vector in \\eqref{hwtang}. The cone over the tangent space $T_p \\mathbb{F}$ to $\\mathbb{F}$ at $p$ is the subspace \n\\begin{align*}\n\\langle &v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ i \\in \\{2,\\dots,k\\}, \\\\ &v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{1,\\dots,n \\}\\rangle \\subset \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n.\n\\end{align*}\n\\end{prop}\n\n\\begin{proof}\n\\noindent By definition we have the inclusion $\\mathbb{F} \\subset \\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)$. By \\cite[Prop. 4.4]{freire2019secant}, we have the equality\n\n$$ \\widehat{T_p \\mathbb{F}} = \\left ( \\widehat{T_p \\left(\\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)\\right)} \\right ) \\cap \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n$$\n\n\\noindent where $\\hat{Y}$ denotes the affine cone over the projective variety $Y$. 
Applying Formula \\eqref{TangGrass} we see that $\\widehat{T_p \\left(\\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)\\right)}$ is the subspace \n\n\\begin{align*}\n\\langle v_1 &\\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_k \\otimes v_2,\\dots, v_1 \\wedge \\dots \\wedge v_k \\otimes v_n,\n\\\\ &v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{k+1,\\dots,n\\},\\ i \\in \\{1,\\dots,k\\} \\rangle.\n\\end{align*}\n\n\\noindent It is easy to see that if $i \\neq 1$, the elements \n$$v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1$$\nwith $h \\in \\{k+1,\\dots,n\\}$ satisfy the relations \\eqref{PluckRel}. Indeed, they are, up to sign, the elements of the Schur module determined by the sstd tableaux\n\n\\begin{equation} \\label{TangFlag1} \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau}\\quad \\text{and}\\quad \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ \\hat{i} \\\\ \\vdots \\\\ k \\\\ h\n\\end{ytableau}\\end{equation}\n\n\\noindent respectively, where $\\hat{i}$ means that $i$ does not appear in the list. We can also see that the elements\n\n$$ v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1$$\n\n\\noindent satisfy the equations \\eqref{PluckRel} for any $h=2,\\dots,n$ and hence they belong to the module. Indeed they are the elements associated to the sstd tableaux\n\n\\begin{equation} \\label{TangFlag2} \\begin{ytableau}\n1 & h \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau}\\ . \\end{equation}\n\n\\noindent Consider then the span of the elements of the Schur module whose associated sstd tableau is either in \\eqref{TangFlag1} or in \\eqref{TangFlag2}. Since they are all different, the respective elements of the module are linearly independent. 
Moreover the number of elements in \\eqref{TangFlag1} is $(k-1)(n-k)+1$, while the number of those in \\eqref{TangFlag2} is $n-1$, for a total of $-k^2 + kn + k$. Since the variety $\\mathbb{F}$ is smooth of dimension $-k^2+kn+k-1$, we can conclude that $\\widehat{T_p \\mathbb{F}}$ is the subspace\n\\begin{align*}\n\\langle &v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ i \\in \\{2,\\dots,k\\},\\ h \\in \\{k+1,\\dots,n\\}, \\\\ &v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{1,\\dots,n \\}\\rangle.\n\\end{align*}\\vskip-0.6cm\\end{proof}\n\n\\noindent In order to understand which ranks appear in this space, we split the generators of $T_p \\mathbb{F}$ into the following three sets:\\begin{enumerate}[label = (\\arabic*)]\n\\item \\label{item1} $v_1 \\wedge \\dots \\wedge v_k \\otimes v_1$,\n\\item \\label{item2} $v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1$, for $i = 2,\\dots,k$ and $h = k+1,\\dots,n$,\n\\item \\label{item3} $v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1$ for $h = 2,\\dots,n$.\n\\end{enumerate}\nThe elements from \\ref{item1} and \\ref{item2} have rank $1$ and they represent the flags \n$$ \\langle v_1 \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle $$ \n\\noindent and\n$$ \\langle v_1 \\rangle \\subset \\langle v_1,\\dots,v_{i-1},v_h,v_{i+1},\\dots,v_k \\rangle $$\n\\noindent respectively. \n\n\\begin{prop} \\label{famiglia3} The elements $t$ from \\ref{item3} all have rank $1$ for $h = 2,\\dots,k$ and they represent the flags\n$$\\langle v_h \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle. 
$$ \nIf $h = k+1,\\dots,n$ then the corresponding element has rank $2$ and it decomposes as\n $$ t = -\\frac{1}{2} (v_1-v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\frac{1}{2} (v_1+v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h).$$\n\\end{prop}\n\\begin{proof} Suppose that $h = 2,\\dots,k$. Then $t$ has the form \n$$ v_1 \\wedge \\dots \\wedge v_h \\wedge \\dots \\wedge v_k \\otimes v_h$$\nand hence it has rank $1$ and it represents the flag\n$$\\langle v_h \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle. $$ \nIf $h=k+1,\\dots,n$, we can compute the kernel\n$$ \\ker \\mathcal{C}_t^{(2,1^{k-1}),(1)} = \\langle x_{k+1},\\dots,\\hat{x_h},\\dots,x_n \\rangle $$\nwhere $\\hat{x_h}$ means that $x_h$ does not appear among the generators. Remark that in general if $t$ has rank $1$, say for instance $t = p$ in \\eqref{hwtang}, then the catalecticant map $\\mathcal{C}^{(2,1^{k-1}),(1)}_t$ has rank $k$. This implies that if $t = t_1 +\\dots + t_r$, where every $t_i$ has rank $1$, we get the inequality\n\n\\begin{equation} \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_t = \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_{t_1 + \\dots + t_r} \\leq \\sum_{i=1}^r \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_{t_i} = r \\cdot k. \\end{equation}\n\n\\noindent Since in this case $\\operatorname{rk}(\\mathcal{C}^{(2,1^{k-1}),(1)}_t) = k+1$, we can already conclude that $t$ does not have rank $1$. Hence the decomposition \n$$ t = -\\frac{1}{2} (v_1-v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\frac{1}{2} (v_1+v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h)$$\nis minimal and $t$ has rank $2$.\n\\end{proof}\n\n\\begin{remark} \\label{primosistema}\nThe fact that $\\operatorname{rk}(\\mathcal{C}^{(2,1^{k-1}),(1)}_t) = k+1$ suggests that we can restrict our study to the flag variety $\\mathbb{F}(1,k;\\mathbb{C}^{k+1})$, which is an adjoint variety. 
In this restricted case, in order to find a rank $2$ decomposition of the tensor, we should find at least one product of two distinct linear forms inside $\\ker\\mathcal{C}_t^{(2,1^{k-1}),(2)}$. Such linear forms are the equations of the two $k$-dimensional linear spaces in $\\mathbb{C}^{k+1}$ of the two flags associated to the decomposition. We can see that\n$$ (x_1-x_h)(x_1+x_h) \\in \\ker \\mathcal{C}_t^{(2,1^{k-1}),(2)} $$\nis such a product, and the respective $k$-dimensional linear spaces are\n$$ v(x_1-x_h) = \\langle v_1 + v_h,v_2,\\dots,v_k \\rangle $$\nand\n$$ v(x_1+x_h) = \\langle v_1 - v_h,v_2,\\dots,v_k \\rangle. $$\n\\end{remark} \n\\smallskip\n\n\\noindent Now we have to study the possible sums of elements from the three different sets. \n\n\\begin{remark} The sum of two elements from \\ref{item1} and \\ref{item2} gives\n\\begin{align*} a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + &b\\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 = \\\\ &v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge (a \\cdot v_i + b \\cdot v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1\n\\end{align*}\nwhich has rank $1$. 
\n\\end{remark}\n\n\\begin{remark}\nThe sum of two elements from \\ref{item1} and \\ref{item3} with $h = 2,\\dots,k$ turns out to be\n\\begin{align*} a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + b \\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_h = v_1 \\wedge \\dots \\wedge v_k \\otimes (a \\cdot v_1 + b \\cdot v_h)\n\\end{align*}\nwhich has rank $1$, while if $h = k+1,\\dots,n$ it is\n\\begin{align*} & a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 +b \\cdot ( v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1) = \\\\ & v_1 \\wedge \\dots \\wedge v_k \\otimes (\\frac{a}{2} \\cdot v_1 + b \\cdot v_h) + (\\frac{a}{2} \\cdot v_1 + b \\cdot v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1\n\\end{align*}\nwhich is again an element of the set \\ref{item3}. \n\\end{remark}\n\n\\noindent Now we focus on the case \\ref{item2} $+$ \\ref{item3}.\n\\begin{prop}\nFor any $v_j \\in \\mathbb{C}^n$, the sum of two elements from \\ref{item2} and \\ref{item3}, i.e. the tensor \n\\begin{align} \\label{2+3}\nt = a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge &v_j \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + \\\\ &+ b \\cdot (v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1), \\nonumber\n\\end{align}\nhas \n\\begin{enumerate}[label = (\\roman*)]\n\\item \\label{item11} rank $2$ if $v_j$ and $v_h$ are not proportional and $v_h \\in \\langle v_2,\\dots,v_k \\rangle$,\n\\item \\label{item12} rank $3$ if $v_j$ and $v_h$ are not proportional and $v_h \\not\\in \\langle v_2,\\dots,v_k \\rangle$,\n\\item \\label{item13} rank $2$ if $v_j$ and $v_h$ are proportional.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nAssume first that $v_j$ and $v_h$ are not proportional. 
If $v_h \\in \\langle v_2,\\dots,v_k \\rangle$, then $v_h \\wedge v_2 \\wedge \\dots \\wedge v_k = 0$ and the sum \\eqref{2+3} reduces to\n\\begin{equation} \\label{2+3.1}a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_j \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + b \\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_h. \\end{equation}\nWe claim that this element has rank $2$. Indeed if the tensor in \\eqref{2+3.1} had rank $1$, then the catalecticant map \n$$\\mathcal{C}_t^{(2,1^{k-1}),(1)} : \\mathbb{S}_{(1)} (\\mathbb{C}^n)^* \\rightarrow \\mathbb{S}_{(2,1^{k-1})\/(1)} \\mathbb{C}^n$$\n would have rank $k$, as already discussed in the proof of Proposition \\ref{famiglia3}. Since the catalecticant map in this case has rank $k+1$, we can conclude that the decomposition in \\eqref{2+3.1} is minimal. This proves \\ref{item11}.\n\n\\noindent Assume now that $v_h \\not \\in \\langle v_2,\\dots,v_k \\rangle$. In this case we use the catalecticant map \n$$\\mathcal{C}_t^{(2,1^{k-1}),(2)} : \\mathbb{S}_{(2)} (\\mathbb{C}^n)^* \\rightarrow \\mathbb{S}_{(2,1^{k-1})\/(2)} \\mathbb{C}^n$$\nto compute the rank of the element. First remark that if $t$ is an element of rank $1$, then the rank of this catalecticant map is $k$. Indeed for instance if $t = p$, then the only elements of $\\mathbb{S}_{(2)} (\\mathbb{C}^n)^*$ which do not kill $p$ and whose images via the catalecticant map are linearly independent are\n$$ x_1^2, x_1x_2, \\dots,x_1x_k$$\nwhich are exactly $k$. This implies that if $t = t_1 + \\dots + t_r$ has rank $r$, then\n\\begin{equation} \\label{rankbound} \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_t = \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_{t_1 + \\dots + t_r} \\leq \\sum_{i=1}^r \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_{t_i} = r \\cdot k. 
\\end{equation}\nFor a tensor $t$ as in \\eqref{2+3} one gets that the kernel of $\\mathcal{C}^{(2,1^{k-1}),(2)}_t$ is the subspace\n$$\\ker \\mathcal{C}^{(2,1^{k-1}),(2)}_t = \\langle x_mx_n,\\ \\text{where either}\\ (m,n)=(h,h)\\ \\text{or}\\ m,n \\neq 1,h \\rangle $$\ni.e. the elements of $\\mathbb{S}_{(2)} (\\mathbb{C}^n)^*$ not killing $t$ are all the ones in the span\n$$ \\langle x_1x_h,\\dots,x_kx_h,x_1x_2,\\dots,x_1x_k,x_1x_j,x_1^2\\rangle. $$\nThis means that $\\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_t = 2k+1$. By \\eqref{rankbound} this implies that $t$ has rank at least $3$. Moreover, by Proposition \\ref{famiglia3}, the element $t$ can be written as a sum of $3$ rank $1$ elements and hence its rank is exactly $3$. \n\n\\noindent Finally assume that $v_j$ and $v_h$ are proportional. The element in \\eqref{2+3} reduces to \n\\begin{align*} \nt = a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge &v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + \\\\ &+ b \\cdot (v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1). \\nonumber\n\\end{align*}\nWe obtain again that $\\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_t = k+1$ and hence $t$ does not have rank $1$. One can see that, for $a = b = 1$ (the general case is analogous), $t$ can be written as\n\\begin{align*} \nt = -\\frac{1}{2} &(v_1-v_h) \\wedge \\dots \\wedge v_{i-1} \\wedge (v_i-v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\\\ &+ \\frac{1}{2} (v_1+v_h) \\wedge \\dots \\wedge v_{i-1} \\wedge (v_i+v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h)\n\\end{align*}\nand hence $t$ has rank $2$. Note that up to change of coordinates this is the tensor described in Remark \\ref{primosistema}. 
This concludes the proof.\n\\end{proof}\n\n\\noindent We collect the elements we found in a table.\n\n\\begin{table}[ht]\\begin{center}\n\\begin{tabular}{| c | c | c | c |}\n\\hline\n$(2,1^{k-1})$-rank & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)}$ & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)}$ & Notes \\\\ \\hline\n1 & $k$ & $k$ & sets $(1)$, $(2)$ and $(3)$ \\\\ \\hline\n2 & $k+1$ & $2k-1$ & set $(3)$ \\\\ \\hline\n3 & $k+2$ & $2k+1$ & $(2) + (3)$ \\\\ \\hline\n\\end{tabular}\n\\caption{The ranks appearing on the tangential variety to $\\mathbb{F}$.} \\label{tabella1}\\end{center}\n\\end{table}\n\n\\noindent Let us now study the elements of $\\mathbb{S}_{(2,1^{k-1})} V$ lying on a secant line to $\\mathbb{F}$. Such elements can be written as\n$$ v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + w_1 \\wedge \\dots \\wedge w_k \\otimes w_1 $$\nso that their rank is at most $2$. Remark that letting the group $SL(n)$ act on this element, the rank and the numbers $\\dim \\langle v_1,\\dots,v_k \\rangle \\cap \\langle w_1,\\dots,w_k \\rangle$ and $\\dim \\langle v_1,\\dots,v_k \\rangle + \\langle w_1,\\dots,w_k \\rangle$ are preserved. In particular we may pick as a representative of the orbit of the element in the previous formula \n\n\\begin{equation} \\label{rango2} t=v_1 \\wedge \\dots \\wedge v_h \\wedge v_{h+1} \\wedge \\dots \\wedge v_k \\otimes v_i + v_1 \\wedge \\dots \\wedge v_h \\wedge v_{k+1} \\wedge \\dots \\wedge v_{2k-h} \\otimes v_j \\end{equation}\nin which the intersection of the $k$-dimensional spaces of the flags is explicit, i.e.\n\n$$\\langle v_1,\\dots,v_k \\rangle \\cap \\langle v_1,\\dots,v_h,v_{k+1},\\dots,v_{2k-h} \\rangle = \\langle v_1,\\dots,v_h \\rangle .$$ \nThe vectors $v_i$ and $v_j$ appearing after the tensor products in the first and second addends are generators of the spaces $\\langle v_1,\\dots,v_k \\rangle$ and $\\langle v_1, \\dots,v_h,v_{k+1},\\dots,v_{2k-h}\\rangle$ respectively. 
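For concreteness, the following small worked instance (our own illustration; the specific choices $k=2$, $h=1$, $v_i = v_2$, $v_j = v_3$ are just an example) shows these invariants at work.

```latex
% Example: k = 2, h = 1 in \eqref{rango2}, choosing v_i = v_2 and v_j = v_3:
$$ t = v_1 \wedge v_2 \otimes v_2 + v_1 \wedge v_3 \otimes v_3,
   \qquad
   \langle v_1,v_2 \rangle \cap \langle v_1,v_3 \rangle = \langle v_1 \rangle. $$
% Every x_m with m \geq 4 kills t, while x_1, x_2, x_3 have linearly
% independent images, so
$$ \operatorname{rk} \mathcal{C}_t^{(2,1),(1)} = 3 = 2k - h
   = \dim \left( \langle v_1,v_2 \rangle + \langle v_1,v_3 \rangle \right). $$
```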
Note that if both $v_i$ and $v_j$ belong to the intersection of the $k$-dimensional subspaces of the flags, then they can be the same vector up to scalar multiplication. Hence we may distinguish the elements on secant lines to $\\mathbb{F}$ using only two invariants: the dimension of the intersection of the $k$-dimensional subspaces of the two flags and whether the equality $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds or not. In terms of the Schur apolarity action, we can use the catalecticant maps to determine which orbit we are studying. Specifically the map\n\n$$ \\mathcal{C}_t^{(2,1^{k-1}),(1)} : \\mathbb{S}_{(1)} (\\mathbb{C}^n)^* \\longrightarrow \\mathbb{S}_{(2,1^{k-1})\/(1)} \\mathbb{C}^n $$\nwill give us information about the dimension of the intersection. Indeed consider a rank $2$ element $t$ as in \\eqref{rango2} and denote by $\\{x_i,\\ i=1,\\dots,n\\}$ the dual basis of $\\{v_i,\\ i=1,\\dots,n\\}$. Then the image of $x_i$ is either $0$ if $x_i = x_{2k-h+1},\\dots,x_n$, or nonzero if $x_i = x_1,\\dots,x_{2k-h}$. It is easy to see that in this latter case all the images that we get are linearly independent as elements of $\\mathbb{S}_{(2,1^{k-1})\/(1)}\\mathbb{C}^n$. Hence the rank of the catalecticant map is equal to the dimension of the sum of the two $k$-dimensional subspaces of the two flags involved. Once this number is fixed, the rank of the catalecticant\n$$ \\mathcal{C}_t^{(2,1^{k-1}),(2)} : \\mathbb{S}_{(2)} (\\mathbb{C}^n)^* \\longrightarrow \\mathbb{S}_{(2,1^{k-1})\/(2)} \\mathbb{C}^n $$\nwill help us discriminate whether $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds or not. 
Indeed by the definition of the Schur apolarity action, the element $x_p x_q \\in \\mathbb{S}_{(2)}(\\mathbb{C}^n)^*$, which can be written as $x_p \\otimes x_q + x_q \\otimes x_p$, is applied to both factors of the tensor product $\\superwedge^k \\mathbb{C}^n \\otimes \\superwedge^1 \\mathbb{C}^n$ in which $\\mathbb{S}_{(2,1^{k-1})}\\mathbb{C}^n$ is contained. Hence whether or not $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds changes the rank of this catalecticant map.\n\n\\noindent We give in the following table a classification of all the possible orbits depending on the invariants we have mentioned. Let us briefly denote $\\dim \\langle v_1,\\dots,v_k \\rangle \\cap \\langle w_1,\\dots,w_k \\rangle$ by $\\dim V \\cap W$. \n\n\n\\begin{table}[ht] \\begin{center}\n\\begin{tabular}{| c | c | c | c | c |}\n\\hline\n$\\dim V \\cap W$ & $\\langle v_1 \\rangle = \\langle w_1 \\rangle$ & $(2,1^{k-1})$-rank & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)}$ & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)}$ \\\\ \\hline\n$k$ & True\/False & 1 & $k$ & $k$ \\\\ \\hline\n$k-1$ & True & 1 & $k$ & $k$ \\\\ \\hline\n$k-1$ & False & 2 & $k+1$ & $2k-1$ \\\\ \\hline\n$\\vdots$ & & & & \\\\ \\hline\n$h$ & False & 2 & $2k-h$ & $2k$ \\\\ \\hline\n$h$ & True & 2 & $2k-h$ & $2k-h$ \\\\ \\hline\n$\\vdots$ & & & & \\\\ \\hline\n$0$ & False & 2 & $2k$ & $2k$ \\\\ \\hline\n\\end{tabular}\n\\caption{Orbits of points on a secant line to $\\mathbb{F}$, where $h = 0,\\dots,k$.} \\end{center}\n\\end{table}\n\\vskip-0.5cm\n\n\\noindent The results obtained so far can be collected in the following algorithm.\n\n\\begin{algorithm} The following algorithm determines the rank of the elements of border rank at most $2$.\n\\vskip0.5cm\n\n\\noindent {\\bf Input}: An element $t \\in \\widehat{\\sigma_2(\\mathbb{F}(1,k;\\mathbb{C}^n))} \\subset \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n$.\n\n\\noindent {\\bf Output}: If the border rank of $t$ is less than or equal to 
$2$, it returns the rank of $t$.\n\n\\begin{enumerate}[nosep]\n\\item[1:] compute $(r_1,r_2,r_3) = \\left (\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1^k)},\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)},\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)} \\right)$\n\\item[2:] {\\bf if} $r_1 = 1$ {\\bf then}\n\\item[3:] $\\quad$ $t$ has both border rank and rank equal to $1$, {\\bf exit};\n\\item[4:] {\\bf else if} $r_1 \\geq 3$ {\\bf then}\n\\item[5:] $\\quad$ $t$ has border rank at least $3$, {\\bf exit};\n\\item[6:] {\\bf else if} $r_1 = 2$ {\\bf then}\n\\item[7:] $\\quad$ $t$ has border rank $2$ and\n\\item[8:] $\\quad$ {\\bf if} $(r_1,r_2,r_3) = (2,k+2,2k+1)$ {\\bf then}\n\\item[9:] $\\quad$ $t$ has rank $3$ and it is the element given by \\ref{item2}$+$\\ref{item3} in Table \\ref{tabella1};\n\\item[10:] $\\quad$ {\\bf else if} $(r_1,r_2,r_3) = (2,2k-h,2k)$ {\\bf then}\n\\item[11:] $\\quad$ $t$ has rank $2$ and it is in the orbit with $\\dim V \\cap W = h$ and $\\langle v_i \\rangle \\neq \\langle v_j \\rangle$;\n\\item[12:] $\\quad$ {\\bf else if} $(r_1,r_2,r_3) = (2,2k-h,2k-h)$ {\\bf then}\n\\item[13:] $\\quad$ $t$ has rank $2$ and it is in the orbit with $\\dim V \\cap W = h$ and $\\langle v_i \\rangle = \\langle v_j \\rangle$;\n\\item[14:] $\\quad$ {\\bf end if}\n\\item[15:] {\\bf end if}\n\\item[16:] {\\bf end}\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{remark}\nNote that this is not a complete classification of the orbits appearing on $\\sigma_2(\\mathbb{F})$ under the action of $SL(V)$. For this purpose one has to make a more specific distinction of the orbits related to secant lines to $\\mathbb{F}$. 
In particular one has to discriminate whether the lines $\\langle v_i \\rangle$ and $\\langle v_j \\rangle$ both belong to the intersection of the two $k$-dimensional subspaces, whether only one of them belongs to this intersection, or whether neither of them does.\n\\end{remark}\n\n\\noindent As a conclusion of the discussion of this section, we obtain the following result. For a given non-degenerate irreducible projective variety $X \\subset \\mathbb{P}^N$, for $s \\geq r$ we use the notation\n\n$$ \\sigma_{r,s} (X) := \\{ p \\in \\sigma_r(X): r_X(p)=s \\}. $$\n\\smallskip\n\n\\begin{cor}\nLet $\\mathbb{F} = \\mathbb{F}(1,k;\\mathbb{C}^n) $ be embedded with $\\mathcal{O}(1,1)$ in $\\mathbb{P}(\\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n)$. Then we have\n\n$$ \\sigma_2(\\mathbb{F}) \\setminus \\mathbb{F} = \\sigma_{2,2} (\\mathbb{F}) \\cup \\sigma_{2,3} (\\mathbb{F}). $$\n\\end{cor}\n\\bigskip\n\n\\section*{Acknowledgements}\nI thank Giorgio Ottaviani for his help and useful comments, and Alessandra Bernardi for suggesting the topic to me and for her support. I would also like to thank Jan Draisma and the referees for their useful comments and remarks. The author is partly supported by GNSAGA of INDAM. \\bigskip\n\n\n\\noindent Contact: reynaldo.staffolani@unitn.it, Ph.D. student at University of Trento.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn open online communities, such as Free, Libre, and Open Source (FLOSS) projects \\cite{ArafatRiehle09}, it has been shown that the number of articles and edits per author follows a power law \\cite{Voss05,JohnsonFarajKudaravalli14}, as in scientific publication \\cite{Maillartetal08}. 
Even for Wikipedia, which claims that it is 'the encyclopedia that everybody can edit', this skewed distribution exists \\cite{Kitturetal07b,Ortegaetal09,Zhangetal10}, and 'the top 10\\% of editors by number of edits contributed 86\\% of the PWVs {[}persistent word views{]}, and top 0.1\\% contributed 44\\% - nearly half! The domination of these very top contributors is increasing over time.' \\cite[p. 5]{Priedhorskyetal07}. \nThis apparent paradox is easy to understand: contributing to an online community is not only about having something to say, but more\nand more about knowing how to say it \\cite{FordGeiger2012}.\nAnd, as Wikipedia has become bigger, the editing tasks have increased in complexity (see \\cite{FongBiuk-Aghai10} for a proposed classification of these various types of edits in terms of semantic complexity), and the proportion of non-editing tasks has also increased. In other words, participants' types of activity have multiplied. \nBeyond the writing, which can be seen as the visible part of the iceberg, and also, for an encyclopedia, the most important part, there are the actions leading to the writing (coordination tasks, discussions on the topic of the project, etc.), but also the actions of maintaining the existing content, which take on a growing importance as the articles mature \\cite{KaneJohnsonMajchrzak14}.\n\nOne consequence is that over time, the amount of effort needed to add new content increases, since new edits are more likely to be rejected, making the work less rewarding \\cite{RansbothamKane11,AaltonenSeitler16}.\nThis may also explain, in part, the contributor turnover \\cite{Farajetal16,KaneJohnsonMajchrzak14}: once a project is finished, or at least mature, some people, namely those interested in content addition, drop out. 
As a consequence, there is a constant need for these projects to recruit new contributors, and to turn them into 'big' contributors, to guarantee the survival of the project in the long run.\n\nThere have been many experiments aiming to slowly engage people in contribution (from simple edits to more complex tasks), based on the concept of legitimate peripheral participation \\cite{LaveWenger91,Wengeretal02}. For example, and still on Wikipedia, some experiments show that readers or contributors can be asked to perform small tasks, which they do, and then keep participating \\cite{HalfakerKeyesTaraborelli13}. Acknowledging the newcomers' contributions with moral rewards ('barnstars') increases their investment and their retention, at least over the first year \\cite{Gallus15}.\nBut it may not be a very sustainable activity, as those who respond the most to these kinds of initiatives seem to be those who are already willing to participate \\cite{Narayanetal17}.\nAnd, statistically speaking, big contributors seem to have been so from the beginning, and if there is a path to contribution, it concerns the learning of the rules more than the level of contribution \\cite{PancieraHalfakerTerveen09,DejeanJullien15}.\n\nIn a nutshell, beyond the statistical fact that most big contributors have been so from the beginning, what these experiments seem to indicate is the existence of different profiles of contributors regarding their involvement. And one may want to know more about these different profiles and whether they appear in the same proportions across projects.\nThis is important for the managers of such projects. It would allow them to better adapt their responses to newcomers' contributions, and to improve their retention rate.\nRecent studies \\cite{Weietal15,Yangetal16, Arazyetal16} have strongly improved our knowledge of the different types or profiles of contributors, ranging from casual to very involved ones, including focused contributors. 
However they do so by using very complex methodologies (a qualitative-quantitative mix, with a high workload to manually codify\/characterize the edits), making their replication by practitioners limited. These studies are on the English Wikipedia only. The objective of this paper is to highlight different profiles of contributors with clustering techniques. The originality is to show that using only the edits, and their distribution over time, allows these contributor profiles to be built with good accuracy and stability across languages. These profiles are identifiable early in the history of involvement, suggesting that light monitoring of newcomers may be sufficient to adapt the interaction with them and increase the retention rate.\n\n\nThe paper continues as follows: the next section reviews our theoretical background to develop our hypotheses regarding the profiles of the contributors, and the right balance between the simplicity of the variables and the accuracy of the results. Then, we describe our data collection strategy (choice of Wikipedia, data and variables), before presenting the methodology and the results. Finally, we discuss our findings and highlight their implications for both theory and practice, before concluding.\n\n\\section{Research hypotheses: Contributors' Behaviors and Roles Detection}\n\nIt is no longer a matter of debate that regular contributors vary in the tasks they perform, leading to various 'careers' within the projects. 
For instance, \\cite{OkoliOh07}, looking at English Wikipedia contributors, showed that people participating a lot in various\narticles (and thus collaborating with a lot of people, but not in a sustained manner, something they assimilate to 'weak links', in Granovetter's perspective \\cite{Granovetter85}) are more likely to become administrators (to have administrative rights) than those more focused on a subset of articles and talking repeatedly with a small subset of people (and thus developing strong(er) links). This leads \\cite{Zhuetal11}, relying on \\cite{Bryantetal05}'s study, to propose two main careers for the Wikipedia contributors, consistent with \\cite{OkoliOh07}'s findings: from non-administrators to administrators and from non-members to Wiki-project regular members to Wiki-project core members (Figure 1, page 3433). On that aspect, \\cite{AntinCheshireNov12} confirmed that people involved from the beginning in more diverse revision activities are more likely to take on administrative responsibilities.\n\nQualitative research has refined our understanding of people's interests and focus: in their in-depth analysis of one English Wikipedia article (autism), \\cite{KaneJohnsonMajchrzak14} showed that over the lifetime of an article, different tasks were required (content editing, article structuring, knowledge stability protection), requiring different skills and centers of interest, and consequently taken on by different persons, with different levels of edits.\n\n\\cite{Huvila10}, using a grounded theory approach via an online open-question survey of contributors, proposed a classification of the contributors into five types, according to their activities and to the way they find their information (from in-depth research to personal\/professional areas of expertise, through browsing the net). 
These profiles illustrate how diverse even contributing knowledge can be, between the topics, but also between the sources of information people rely on; still, the contributing profiles remain: some people focus on an area of expertise, others contribute a lot on a lot of subjects, others are more casual, etc.\n\nInformed by these findings, several authors proposed quantitative techniques to retrieve and quantify the different roles that the qualitative research had identified. \nBeing able to do quantitative identification makes its automation possible, which can decrease the supervision burden, in addition to increasing its accuracy and its rapidity.\nHowever, as for article quality identification\\footnote{It is well known, since \\cite{WilkinsonHuberman07}, that there is a strong correlation between the number of edits and the probability for an (English) Wikipedia article to be of best quality. Nevertheless, as detailed by \\cite{DangIgnat17}, if one wants to refine this finding, more costly methods are needed in terms of data collection and analytic techniques}, there is a debate between the simplicity and the accuracy of the methods used.\nWhat is directly observable, in most of the open, online projects, is the number of contributions (edits, commits) over time. What is less accessible, requiring more data preparation and, most of the time, allowing only ex-post analyses, is the content, and the quality, of such contributions. \n\\cite{Yangetal16}, in a defense of the second strand of research, summarized this trade-off as follows: \n'While classification based on edit histories can be constructed for most active editors, current approaches focus on simple edit counts and access privileges fail to provide a finer grained description of the work actually performed in an edit'. \n\n\nAnd it is to be acknowledged that, as far as the English Wikipedia is concerned, research has made tremendous progress. 
Via a mix of unsupervised and supervised techniques \\cite{Yangetal16, Arazyetal16,Weietal15}, scholars identified and characterized the edits, and then constructed editor roles based on their characterized edits.\nLooking at the English Wikipedia, \\cite{Yangetal16} proposed a two-step methodology. First, to enrich the description of the edits, they used a multi-class classifier to assign edit types to edits, based on a training data set, called \"the Annotated Edit Category Corpus\", which they annotated themselves. Then they applied an LDA graphical model, in order, in a second step, to identify editors' repeating patterns of activity, and to cluster the editors according to their editing behaviors. Afterwards, the authors tried to link these behaviors to the improvement of article quality.\n\\cite{Arazyetal16} clustered, on a stratified subset of a thousand English Wikipedia articles, the contributors according to their edits, the edits being classified using supervised learning techniques. They confirmed and refined the above qualitative results. They also showed, in \\cite{Arazyetal17}, that some people take on different roles over time, while others stick to the same behavior in the various articles they contribute to. \n\nIn citizen science, \\cite{Jacksonetal16} used a similar approach to study the newcomers' activities (contributing sessions) and clustered their behavior in a Zooniverse project (a citizen science contributive platform), Planet Hunters, \"an online astronomy citizen science project, in which astronomers seek the help of volunteers to filter data collected from the Kepler space telescope\". Based on mixed qualitative-quantitative methods, they first observed and interviewed participants regarding their contributing behaviors, in order to define the tasks to be observed to characterize a contributing pattern. 
Then, they aggregated page view data and server logs containing the annotations and comments of each participant, and regrouped the data by activity and by 'session' (a session was defined as \"a submission of an annotation where no more than 30 minutes exists between the current and next annotation\"). They clustered the sessions based on counts along several dimensions (e.g., number of contributions to \/object, \/discussion, annotations), using a k-means clustering algorithm to define types of sessions, and, finally, they described the people by their history of participation (the types of sessions they did). Interestingly, the types of sessions and the contributor profiles they found are very similar to those found in Wikipedia\\footnote{Their principal findings are 1) that many newcomers remained in a single session type (so they can be detected quite early in their participation journey); 2) that the contributor patterns can be regrouped into three types: Casual Workers, Community Workers, and Focused Workers.}. \n\nEven if these studies can be extended to other case studies than English-speaking projects, it is not certain that they could go farther in terms of precision in the description of the different profiles, nor that the people involved in those projects would be willing to invest their time in manually creating the dataset of coded contributions these methods require.\nWe argue that there is still some work to do on the detection of these profiles, especially amongst newcomers, but more on the simplification of the detection methods rather than on their over-sophistication. \nWhat our discussion shows is that practitioners and researchers have the two extremities of the story: the newcomers seem to engage themselves in a contributing profile very early in their contributing history, and they converge toward different contributing profiles.\nBut how much data do we need to connect the dots, and how early is it possible to do so? 
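The kind of session clustering described above can be sketched in a few lines of code. The following is a minimal illustration (not the implementation used in the cited studies): it runs a plain k-means (Lloyd's algorithm) on made-up per-session activity counts, with an illustrative $k=3$; the feature names and values are hypothetical.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: cluster feature vectors into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        # (an empty cluster keeps its previous center).
        new_centers = [tuple(sum(d) / len(cl) for d in zip(*cl)) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# Hypothetical per-session counts: (annotations, comments, discussion posts).
sessions = [
    (40, 0, 0), (35, 1, 0), (50, 0, 1),   # long, classification-heavy sessions
    (5, 8, 12), (3, 10, 9),               # discussion-heavy sessions
    (2, 0, 0), (1, 1, 0),                 # short, casual sessions
]
centers, clusters = kmeans(sessions, k=3)
```

A contributor can then be summarized by the sequence of session types they go through, which is the representation clustered in the cited study.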
There are strong managerial reasons for advocating detection as early as possible, without too much apparatus.\n\nTo be able to respond quickly to contributors and newcomers, community managers do not need ex-post data analyses (which are very good at describing the behaviors), but tools to identify people along the way, so as to adapt the interactions as soon as possible.\nWhile the development of complex artificial intelligence tools is very advanced for the 'big' Wikipedias, it is slower for the smaller ones, and their tuning is based on the cultural and organizational principles of those big Wikipedias, and especially of the English one\\footnote{Such as the ORES project, which is quite developed across the different Wikipedias for detecting the quality of an edit, but not very much at the article level \\url{https:\/\/www.mediawiki.org\/wiki\/ORES}.}.\n\n\n\nThis calls for a temporal description of the contributors, with a minimum of data extraction or qualification (and possibly using the data which are already available for consultation). As stressed by all these studies, the minimal information is the contribution (commit, edit), and all the studies cited are based on the analysis of the edits (sometimes the enriched edits).\nAs a consequence, we ask, in this article, whether, by observing only the edit behavior over time, it is possible to distinguish different profiles, and, if so, to link these profiles with the ones detailed by more complex methods. We put ourselves in the path of \\cite{Welseretal11} and of other sociological studies \\cite{GusevaRona-Tas01}, wondering if it is possible to find 'structural signatures of social attributes of actors'.\n\n\n\\section{Research methodology}\nThis section describes the different steps that compose our research methodology, from raw data to the interpretation of the 4 clusters in terms of contributor activity and roles. 
Figure \\ref{flowchart} gives the global picture of the methodology\ndetailed in the next subsections.\\footnote{Figure inspired by \\cite{Jacksonetal16}.}\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=12cm]{flowchart.png}\n\\end{center}\n\\caption{Methodological flowchart.}\n\\label{flowchart}\n\\end{figure}\n\\subsection{Data collection strategy}\nOne of the most useful things about Wikipedia is that much of its data is publicly available for downloading and analyzing. This includes information about Wikipedia content, discussion pages, contributor pages, editing activity (who, what and when), administrative tasks (reviewing content, blocking users, deleting or moving pages), and many other details\\footnote{It has to be understood that when speaking of a \"user\" page, Wikipedia means the users of the wiki, or, more simply, the contributors. The simple readers are called readers. In this article we use the terms contributor and user interchangeably.}. There are many different ways of retrieving data from Wikipedia, such as web crawlers, the available APIs, etc. \nWe used the database dump files, which are publicly available for every language and can be downloaded from the Wikimedia Downloads center. An important advantage of retrieving information from these dump files is that researchers have complete flexibility as to the type and format of the information they want to obtain. These dump files are usually available in XML and SQL formats. An important remark about these dump files is that every new file again includes all data already stored in prior versions, plus the new changes performed in the system since the last dump process, and excludes all information and meta-data pertaining to pages that have been deleted in that interval.\n\n\n\nIn our research, we studied the Danish and Romanian Wikipedias to show how our methodology can be implemented on mid-size language projects. 
The required data for our analysis was present in the \"pages-meta-history\" dump file, which was completed on 1st January, 2018. This dump file contains the complete Wikipedia text and meta-data for every change in the Wikipedia from the launch of that Wikipedia until December 2017. After getting the dump file, we used WikiDAT\\footnote{\\url{http:\/\/glimmerphoenix.github.io\/WikiDAT\/}} to extract the data from the dumps. The Wikipedia Data Analysis Toolkit, abbreviated WikiDAT, is a tool that automates the extraction and preparation of Wikipedia data into 5 different tables of a MySQL database (page, people, revision, revision hash, logging). WikiDAT uses Python and a MySQL database and was developed with the aim of creating an extensible toolkit for Wikipedia data analysis.\n\n\\subsection{Construction of the variables}\nIn the field of pattern recognition, it is very important to have features that are informative and discriminative and that explain the variability present in the data. As a primary data filtering step, the study has been limited to those contributors who have contributed more than 100 edits (irrespective of whether the edits made by them were minor or major) on the respective Wikipedias\\footnote{The definition of what a contributor is, is still a matter of debate. \\cite{PancieraHalfakerTerveen09}, studying the English Wikipedia, defined \"Wikipedians\", or regular, really involved contributors, as people having made at least 250 edits in their lifetime. We chose a smaller figure because we wanted to capture the behavior of the not-so-involved, in a nutshell, all those who have been active for several months. An \"editor\", for the Wikimedia Foundation, is somebody who has contributed 5 edits or more in a month. We also wanted to have a big enough number of contributors.}. We removed those contributors who were robots, who contributed only in a single month, or who contributed anonymously. 
There were 171 such contributors in the Romanian Wikipedia and 274 contributors in the Danish Wikipedia. \nAs said, our goal was to use simple activity measures based only on the edits and their distribution over time. With respect to the state of the art, contributors are likely to be grouped in terms of volume, intensity (focus) or duration of activity. Starting from a \"brainstorming\" list of 12 initial features, a short list of 6 features was obtained after studying the correlation matrix. The features that were dropped were the number of edits\/contributions made, the number of days a user has been on Wikipedia, the minimum and the median gap between two consecutive posts, the median number of edits\/contributions made during different months, and the number of different months a user has contributed in. Indeed, redundancy has been reduced by removing these heavily correlated features. The final features used for the statistical analysis are described in Table \\ref{tab:Variable-description}. \n\n\n\n\n\n\\begin{table}[!h]\n\\caption{\\label{tab:Variable-description}Description of the variables}\n\\begin{tabular}{|c|m{6 cm}|}\n\\hline \nVariable & Description\\tabularnewline\n\\hline \n\\hline \nRatio & The ratio between the number of edits and the number of days a contributor has been on Wikipedia from the very first edit \\tabularnewline\n\\hline \nMean\\textunderscore gap & The average gap between two consecutive posts measured in months\\tabularnewline\n\\hline \nMax\\textunderscore gap & The maximum gap between any two consecutive posts measured in months. 
\\tabularnewline\n\\hline \nNum\\textunderscore cons & The number of pairs of consecutive months with contributions \\tabularnewline\n\\hline \nMean\\textunderscore Month & Average number of edits made per month \\tabularnewline\n\\hline \n\nSD & Standard deviation of the monthly edit counts \\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\n\nRatio is a measure of how massively the contributors have contributed during their entire period of contribution, and it captures the relationship between the number of edits and the number of days. Mean\\textunderscore Month provides information about the average number of edits made in a month, and SD tells us about the variations in the contributions made during these months. Collectively, we can say that the features Ratio, Mean\\textunderscore Month and SD evaluate the quantity and deviation of the contributions made by the contributors. Mean\\textunderscore gap describes the average time gap between two consecutive posts, and Max\\textunderscore gap measures the longest period of inactivity between two successive posts. Both features give us information about how often the contributors get active and how long they can leave the community before coming back. The feature Num\\textunderscore cons tells us how many times the contributors have contributed in two consecutive months in a row. For example, if a contributor made edits in January 2011 and February 2011, the count is increased by 1. In other words, Num\\textunderscore cons is a measure of the regularity of contributors' edits over time. \n\n\n\\subsection{Statistical methods}\nClustering techniques were used to group contributors into similar clusters, highlighting various patterns in terms of activity and roles. In order to draw robust conclusions, the Romanian Wikipedia was used to calibrate the methods and come up with a first interpretation of the groups. 
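A minimal sketch of how the six features of Table \\ref{tab:Variable-description} could be computed from a contributor's monthly edit counts (the helper name and data layout are our assumptions; month indices count months since the first edit, and the sketch assumes at least two distinct active months):

```python
import statistics

def activity_features(monthly_edits, days_active):
    """Sketch of the Table 1 features (illustrative helper).
    `monthly_edits` maps a month index (months since the first edit) to
    the number of edits made that month; `days_active` counts the days
    since the first edit."""
    months = sorted(monthly_edits)
    counts = [monthly_edits[m] for m in months]
    gaps = [b - a for a, b in zip(months, months[1:])]  # gaps in months
    return {
        "Ratio": sum(counts) / days_active,
        "Mean_gap": statistics.mean(gaps),
        "Max_gap": max(gaps),
        "Num_cons": sum(1 for g in gaps if g == 1),  # consecutive-month pairs
        "Mean_Month": statistics.mean(counts),
        "SD": statistics.pstdev(counts),
    }

# edits in months 0, 1 and 4 after the first edit, over 150 days
f = activity_features({0: 10, 1: 5, 4: 20}, days_active=150)
```

The point of the sketch is that every feature reduces to a one-pass computation over a per-month edit histogram, i.e., data that every MediaWiki dump already provides.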
Then, the Danish Wikipedia was used as a validation dataset to check the group correspondence across different datasets. A contribution of this article is to provide this double checking in terms of cluster validation. Regarding the methods, a two-stage cluster analysis was performed:\n\\begin{enumerate}\n\\item A hierarchical clustering was done based on the features described in Table \\ref{tab:Variable-description} with the \\textit{hclust} function of the R platform for statistical computing \\cite{RRR}. The metric used was the Ward distance, adapted to quantitative features \\cite{duda2012pattern}. The resulting dendrogram can suggest a first trend about the optimal number of clusters, in terms of loss of inertia.\n\n\n\\item Partitioning algorithms were used as alternative clustering methods in order to select the final typology. In our research, the contributors were clustered using a k-medoids clustering algorithm called PAM (Partitioning Around Medoids), from the R package \\textit{cluster}. The PAM algorithm is based on the search for $k$ representative objects or medoids among the observations of the data set. It is known to be more robust than the k-means algorithm, especially with respect to the initialization \\cite{kaufman2009finding}.\n\nIn this work, different typologies have been formed for $k$ ranging in the interval selected in step 1. Results of the PAM algorithm were consistent with those obtained from step 1. Then, the optimal number of clusters has been selected with cluster validation techniques such as the silhouette index \\cite{halkidi2002cluster}, which measures how well contributors are clustered into their groups (intra vs. inter cluster inertia). \n\\end{enumerate}\n\nTo assist the interpretation of the resulting clusters, Principal Component Analysis (PCA) has been carried out in order to project the data onto a small number of dimensions that are combinations of the initial variables \\cite{saporta2006probabilites}. 
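To make the second stage concrete, here is a minimal, self-contained sketch of a PAM-style k-medoids iteration (the study itself used the R \\textit{cluster} package; this toy version uses a naive initialization instead of PAM's BUILD phase, and the synthetic points stand in for contributor feature vectors):

```python
def pam(points, k, iters=20):
    """Minimal PAM-style k-medoids sketch: alternately assign each point
    to its nearest medoid, then re-choose, inside each cluster, the point
    minimising the total distance to the cluster's members."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    medoids = list(points[:k])  # naive init; real PAM uses a BUILD phase
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, medoids[i]))].append(p)
        new_medoids = [min(c, key=lambda m: sum(dist(m, q) for q in c))
                       if c else medoids[i] for i, c in enumerate(clusters)]
        if new_medoids == medoids:  # converged: medoids are stable
            break
        medoids = new_medoids
    labels = [min(range(k), key=lambda i: dist(p, medoids[i])) for p in points]
    return medoids, labels

# two well-separated groups of synthetic "contributors" (2-D feature vectors)
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
medoids, labels = pam(pts, k=2)
```

Unlike a k-means centroid, each medoid is an actual observation, which is what makes the resulting cluster representatives directly interpretable as real contributors.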
In this article, three dimensions were enough to explain almost $90\\%$ of the data variability. In addition to PCA, an ANOVA analysis and Tukey statistical tests have helped to determine the significant variables within each cluster. Altogether, these methods ensure a full and robust interpretation of the clustering results. \n\n\n\\section{Results}\nHierarchical clustering gives a first visualization of the data structure (Figure \\ref{den}). As for the optimal number of clusters $k$, the dendrogram suggests an interval between 2 and 10 clusters that could be investigated by further evaluation.\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=10cm]{Dendrogram.png}\n\\end{center}\n\\caption{Cluster dendrogram of the Romanian Wikipedia}\n\\label{den}\n\\end{figure}\nAs mentioned in the previous section, the evaluation has been made with cluster validity indexes such as the average silhouette width and the total within sum of squares. Figure \\ref{validity} depicts the evaluation results; they point to four clusters of contribution behavior in the Romanian Wikipedia, this number being validated afterwards with the Danish Wikipedia.\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=8cm]{Rowiki.png}\n\\end{center}\n\\caption{Cluster Validation Plots}\n\\label{validity}\n\\end{figure}\n One cluster in our analysis contains the smallest number of contributors in both cases. The distribution of the contributors in the clusters for both Wikipedias is given in Table \\ref{Table:Size_of_Clusters}. \n\\begin{table}[h] \n\\centering\n\\caption{Size of Clusters}\n\\label{Table:Size_of_Clusters}\n\\begin{tabular}{l c c c c }\n\\hline\nWikipedia & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 \\\\\\\n\\hline \nRomanian& 25 & 92 & 48 & 6 \\\\\\\nDanish& 45 & 144 & 61 & 24\\\\\\\n\\hline\n\\end{tabular}\n\\end{table}\nWith respect to cluster interpretation, a PCA with three principal components explains almost 90\\% of the total dataset variance. 
Figure \\ref{PCA} depicts the projection of the labeled contributors onto these first three dimensions. Analyzing the loadings for both wikis, it turns out that the first dimension (PC1) is correlated with the volume of the activity (ratio, mean number of edits), with a relative intra-cluster variability. Dimension 2 (PC2) relates to the periods of inactivity (the gaps); the correlation is negative in Figure \\ref{PCA}. Dimension 3 (PC3) mainly refers to the variable Num\\textunderscore cons; it relates to the notion of regularity. Please note that due to computational details, this correlation is positive for the Romanian Wikipedia and negative for the Danish Wikipedia. \n\n\\begin{figure*}[!h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Rplot.png}\n\\end{center}\n\\caption{PCA Analysis with projection of the four clusters}\n\\label{PCA}\n\\end{figure*}\n\n\nThe interpretation enables the extraction of the following contributor profiles:\n\\begin{itemize}\n\\item Cluster 1: contributors \"on a mission\". After joining the community or editing their first articles, these contributors left Wikipedia for long periods. When they did contribute, they did it with a high, but short-term, activity.\n\\item Cluster 2: basic, or 'casual' contributors. This group shows no significant markers, besides the fact that their activity is neither particularly intense, nor particularly focused (in terms of period, or of number of articles addressed).\n\\item Cluster 3: regular contributors. The activity is above the average (even if not by much) and the most regular among all groups, especially in terms of the number of consecutive months of presence. \n\\item Cluster 4: top contributors. These contributors present a huge activity ratio; they are the core, or very active, contributors found in other research articles. 
Nevertheless, this cluster contains higher variability than the others.\n\\end{itemize}\n\n\nThese interpretations are confirmed by the unidimensional boxplot distributions (Figure \\ref{dawiki_boxplots} and Figure \\ref{rowiki_boxplots}, in Appendix).\n\nGenerally, boxplots give a fine picture of the feature distributions within each cluster, with a focus on the intra-cluster variability. An illustrative variable was added to the analysis: the number of different articles a contributor has contributed to. This external feature confirms the analysis above.\n\n\n\n\\section{Discussion}\n\n\\subsection{Simple methods = solid conclusions}\nOur goal was to evaluate whether, with simple measures of contributing activity over time, it was possible to detect the different profiles of contributors with data reduction techniques. At least on the Wikipedia example, we have been able to detect the focused workers (cluster 1), the casual workers (cluster 2), and the regular workers (clusters 3 and 4), and even to single out, among the latter, the very involved (the top, or very top contributors \\cite{Priedhorskyetal07}). As far as the objective is to identify contributor profiles, our article shows that following the edits is quite enough. The number of articles involved has been added as an illustrative variable, in order to better link our findings to the descriptions given by \\cite{Balestraetal16,Arazyetal17}. In terms of methodology, it is noticeable that simple data reduction techniques such as clustering and PCA reach a level of information comparable to more refined approaches, such as that of \\cite{Weietal15}, who applied non-parametric hidden Markov clustering models of profiles.\n\n\\subsection{Limitations and future research}\nHowever, this work suffers from some limitations that should be discussed, while opening future research directions. First, a strong hypothesis has been made by focusing only on contributors with more than 100 edits. 
If a potential application of such a clustering approach is to increase the user retention rate, it would be relevant to pay special attention to the small contributors with fewer than 100 edits, and to design retention strategies for them. However, dealing with such a population would lead to more data quality issues and uncertainty. A deeper analysis of the clusters also revealed the presence of peripheral participation periods, but mainly for the people on a mission (Cluster 1), so the first edits are of paramount importance and may need special treatment to distinguish between those learners and the casual contributors (Cluster 2), for instance.\nThe second limitation relates to the volume of data analyzed: the results should be generalized to bigger datasets like the English or the French Wikipedia. Nevertheless, our research methodology gives some guarantees about the work's generalization capabilities since the methods have been first calibrated on the Romanian Wikipedia and then validated on the Danish Wikipedia (with very good consistency). However, those two are occidental Wikipedias, and it would be just as interesting to run the same analysis on the Arabic, Thai, or Hindi Wikipedias, in a word, on any other non-occidental, medium-size Wikipedia.\nAnother weakness is related to the limited number of features used to detect the profiles. It would be relevant to consider other characteristics that would add variety. For instance, a first step would consist in using the number of different articles as an explanatory variable instead of just an illustrative one. Other variables could be added as well, as long as they remain simple and easily observable (and computable) by the project 'managers' in all Wikipedias. \nThe highlighted profiles are identifiable early in the history of involvement, suggesting that light monitoring of newcomers may be sufficient to adapt the interaction with them and increase the retention rate. 
\n\nBut above all, further research will deal with the extension of this offline clustering towards dynamic techniques. The principle is to dynamically adapt the clusters as new contributors join the community. Online clustering methods (such as Growing Neural Gas) could be adapted in order to develop a dynamic decision support tool for online contributors assistance. \n\n\n\n\n\n \n\n\n\n\n\n\n\\bibliographystyle{SIGCHI}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUltracold atoms in optical lattices provide tunable experimental\nrealizations of correlated many-particle systems. Even the dimensionality\nof these systems can be chosen between one and three.\nRecently, the transition between superfluid and Mott insulating (MI) phases\nhas been observed by tuning the ratio between the kinetic and the interaction\nenergy \\cite{grein02,pared04}. Dynamical aspects can also be addressed \n experimentally \\cite{stofe04,kohl05}.\n\n\nThere are many theoretical treatments of these issues, ranging from \nmean-field approaches \\cite{jaksc98} to more sophisticated \ntechniques \\cite{fishe89,pai96,kashu96,kuhne98,elstn99,kuhne00}. \nThe transition at zero temperature ($T=0$)\nhas been analyzed in one-dimensional \nsystems \\cite{pai96,kashu96,kuhne98,elstn99,kuhne00}. \nBut there is still a multitude of open questions.\nOur aim is to explain the dynamics of the excitations\n which has moved in the focus recently\n\\cite{stofe04,kohl05,batro05}.\n\nThe present work is intended to clarify the significance of various\nexcitation processes depending on the kinetic and the \ninteraction energy and on the temperature. To this end, we will\nanalyze the spectral weights of the\none-dimensional MI phase with one boson per site $n=1$.\n\nThe low-energy excitations are either double occupancies ($n=2$),\nwhich we will call `particles', or they are holes ($n=0$). Both excitations \nare gapped as long as the system is insulating. 
They become soft at the \nquantum critical point which separates the insulator from the superfluid.\nWe use a continuous unitary transformation (CUT)\n\\cite{wegne94,knett00a} to obtain an effective Hamiltonian \nin terms of the elementary excitations `particle' and `hole'. \nThis effective Hamiltonian\nconserves the number of particles and of holes \\cite{knett03a}.\nThe CUT is realized in real space in close analogy to the \nderivation of the generalized $t$-$J$ model from the Hubbard model\n\\cite{reisc04}. The strongly correlated many-boson problem\nis reduced to a problem involving a small number of particles and holes.\nThis simplification enables us to calculate \nkinetic properties like the dispersions and\nspectral properties like spectral weights. \n\n\nThe article is set up as follows. First, the model and the relevant\nobservable are introduced (Sect.\\ II). \nThen, the method used is presented and described (Sect.\\ III).\nThe spectral weights are computed in Sect.\\ IV. In Sect.\\ V, the\nexperiment is analyzed, which requires calculations at finite temperatures\nas well. Finally, the results are discussed in Sect.\\ VI.\n \n \n\\section{Model}\nTo be specific, we study the Bose-Hubbard model \n$H = t H_t+ U H_U$ in one dimension\n\\begin{equation}\n\\label{hamiltonian}\n H = -t\\sum\\limits_i( b^\\dagger_i b^{\\phantom{\\dagger}}_{i+1}+b^\\dagger_{i+1}\n b^{\\phantom{\\dagger}}_i ) + (U\/2)\\sum_i \\hat{n}_i ( \\hat{n}_i-1) \n\\end{equation}\nwhere the first term is the kinetic part $t H_t$ and the second term the \nrepulsive interaction $U H_U$ with $U>0$. The \n bosonic annihilation (creation) operators are denoted by \n$b^{(\\dagger)}_i$, the number of bosons by \n$\\hat{n}_i =b^\\dagger_i b^{\\phantom{\\dagger}}_i$.\nIf needed, the term $H_\\mu= -\\mu\\sum_i \\hat{n}_i$ is added to $H$\nto control the particle number.\nFor numerical simplicity, we truncate the local bosonic Hilbert space \nto four states. 
This does not change the relevant physics significantly\n\\cite{pai96,kashu96,kuhne98}.\n\nBesides the Hamiltonian we need to specify the excitation operator $R$. In the\nset-up of Refs.~\\onlinecite{stofe04} and \\onlinecite{kohl05} the depth\nof the optical lattices is changed periodically to excite the system. \nIn terms of the tight binding model (\\ref{hamiltonian}) this amounts\nto a periodic change of $t$ and of $U$\nleading to \n\\begin{equation}\nR \\propto \\delta t H_t +\\delta U H_U\\ .\n\\end{equation} \nSince multiples of $H$ do not induce excitations we consider\n\\begin{equation}\nR \\to \\tilde R = R- (\\delta U\/U) H\\ .\n\\end{equation} \nBoth operators $R$ and $\\tilde R$ induce the same transitions\n$\\langle n | R | m \\rangle = \\langle n | \\tilde R | m \\rangle$\nwhere $| n \\rangle \\neq | m \\rangle$ are eigen states of $H$.\nEventually, the relevant part of $R$ is proportional to $H_t$.\nFor simplicity, we set the factor of proportionality to one.\n\n\nIf the interaction dominates ($U\/t \\to \\infty$) the ground state of\n(\\ref{hamiltonian}) is the product state of precisely one boson per site\n$|\\text{ref}\\rangle=|1\\rangle_1\\otimes |1\\rangle_2 \\ldots\\otimes |1\\rangle_N$,\nwhere $|n\\rangle_i$ denotes the local state at site $i$ with $n$ bosons.\nWe take $|\\text{ref}\\rangle$ as our reference state; all deviations\nfrom $|\\text{ref}\\rangle$ are considered as elementary excitations. \nSince we restrict\nour calculation to four states per site we define three creation operators:\n$h^\\dagger_i |1\\rangle_i= |0\\rangle_i$ induces a hole at site $i$,\n$p^\\dagger_i |1\\rangle_i= |2\\rangle_i$ induces a particle at site $i$,\nand $d^\\dagger_i |1\\rangle_i= |3\\rangle_i$ induces a double-particle\nat site $i$. The operators $h$, $p$ and $d$ obey the \ncommutation relations for hardcore bosons.\n\n\\section{CUT}\nThe Hamiltonian (\\ref{hamiltonian}) conserves the number of \nbosons $b$. 
But if it is rewritten in terms of $h,p$, and $d$ it is no longer\nparticle-conserving, e.g., the application of $H_t$ to $|\\text{ref}\\rangle$\ngenerates a particle-hole pair. We use a CUT defined by\n\\begin{equation}\n \\label{eq:fleq}\n \\partial_l H (l) = [\\eta (l),H(l)]\n\\end{equation}\nto transform $H(l=0)=H$ from its form in \n(\\ref{hamiltonian}) to an effective Hamiltonian\n$H_\\text{eff}:=H(l=\\infty)$ which \\emph{conserves} the number\nof elementary excitations, i.e., $[H_U,H_\\text{eff}]=0$. An appropriate\nchoice of the infinitesimal generator $\\eta$ is defined by the matrix\nelements \n\\begin{equation}\n\\label{eq:generator}\n\\eta_{i,j} (l)={\\rm sgn}\\left(q_i-q_j\\right)H_{i,j}(l)\n\\end{equation}\nin an eigen basis of $H_U$; $q_i$ is the corresponding eigenvalue\n\\cite{knett00a}. The \nstructure of the resulting $H_\\text{eff}$ and the\nimplementation of the flow equation in second quantization in real space\nis described in detail in Refs.\\ \\onlinecite{knett03a} and \n\\onlinecite{reisc04}. \n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig1.eps}\n \\end{center}\n \\caption{Residual off-diagonality for values of $t\/U\\in \\{0.1,0.2\\}$ for\n maximal $r=2$ and $r=3$. {\\em Inset:} Magnification for small values of\n the flow parameter $l$. The ROD for $r=3$ and $t\/U=0.2$\n displays non-monotonic behavior. In this case, we stop \n the flow at the first minimum of the ROD. 
\n Its position is indicated by a vertical line.}\n \\label{fig_ROD}\n\\end{figure}\nThe flow equation (\\ref{eq:fleq}) generates a \nproliferating number of terms.\nFor increasing $l$, more and more hardcore\nbosons are involved, e.g., annihilation and creation of three bosons \n$p^\\dagger_i p^\\dagger_{i+1} p^\\dagger_{i+2}\np^{\\phantom\\dagger}_i p^{\\phantom\\dagger}_{i+1} p^{\\phantom\\dagger}_{i+2}$, \nand processes over a larger and larger range occur,\ne.g., hopping over $j$ sites $p^\\dagger_{i+j} p^{\\phantom\\dagger}_i$.\nTo keep the number of terms finite we omit normal-ordered terms\nbeyond a certain extension $r$.\nNormal-ordering means that the creation operators of the elementary\nexcitations appear to the left of the annihilation operators. If\nterms appear which are not normal-ordered the commutation relations\nare applied to rewrite them in normal-ordered form. The normal-ordering\nis important since it ensures that only less important terms are omitted.\nGenerically, such terms involve more particles \\cite{knett03a,reisc04}.\n\nWe define the extension $r$ as the distance between\nthe rightmost and the leftmost creation or annihilation\noperator in a term \\cite{knett03a,reisc04}. The extension $r$ of a term \nmeasures the range of the physical process which is described by\nthis term. So our restriction to a finite extension restricts the\nrange of processes kept in the description. This is the most\nserious restriction in our approach.\nNote that the extension $r$ implies for hardcore bosons\nthat at maximum $r+1$ bosons are involved.\n\n\n\n\nFor $l<\\infty$, $H(l)$ still contains terms\nlike $p^\\dagger_i h^\\dagger_{i+1}$ which do not conserve the number\nof particles. To measure the extent to which $H(l)$ deviates from the\ndesired particle-conserving form, we introduce the\nresidual off-diagonality (ROD) as the sum of the \nmoduli squared of the coefficients of all terms which change the number of\nelementary excitations. 
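As a toy numerical illustration (ours, not part of the original calculation) of how a flow $\\partial_l H=[\\eta,H]$ with the sign generator of Eq.~(\\ref{eq:generator}) suppresses off-diagonal matrix elements, one can integrate the flow for a real symmetric $2\\times 2$ matrix:

```python
# Toy Euler integration of dH/dl = [eta, H] for a symmetric 2x2 matrix,
# with eta_ij = sgn(q_i - q_j) H_ij (q_i the diagonal entries). The
# off-diagonal entry decays exponentially in l while the trace is conserved.
def commutator(a, b):
    return [[sum(a[i][k] * b[k][j] - b[i][k] * a[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

def sign(x):
    return (x > 0) - (x < 0)

H = [[1.0, 0.4], [0.4, 2.0]]
dl = 0.001
for _ in range(10000):            # simple Euler steps up to l = 10
    q = (H[0][0], H[1][1])
    eta = [[sign(q[i] - q[j]) * H[i][j] for j in range(2)] for i in range(2)]
    dH = commutator(eta, H)
    H = [[H[i][j] + dl * dH[i][j] for j in range(2)] for i in range(2)]
# H is now (numerically) diagonal; its diagonal holds the eigenvalues
```

In this miniature setting the residual off-diagonality is just $|H_{01}|^2$, and its monotonic decay mirrors the well-behaved RODs in Fig.~\\ref{fig_ROD}; the proliferation of terms discussed above has no analogue for a single matrix, which is precisely why the operator-level CUT requires truncation.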
The ROD for maximal extensions $r=2$ and $r=3$ is \nshown in Fig.~\\ref{fig_ROD}. \nIdeally, the ROD decreases monotonically with $l$. \nThe calculation with $r=2$ indeed shows the desired\nmonotonic behavior of the ROD for all values of $t\/U$.\n \n \nBut non-monotonic behavior can also occur \\cite{reisc04}. It is observed in \nFig.\\ \\ref{fig_ROD} for extension $r=3$. \nWhile for small $t\/U$ the ROD still decays monotonically it displays\nnon-monotonic behavior for larger $t\/U$, see e.g.\\ the ROD for $t\/U=0.2$ and \n$r=3$ in Fig.~\\ref{fig_ROD}. \nAn uprise at larger values $l \\gtrapprox 1\/t$ signals that the \nintended transformation cannot be performed in a well-controlled way.\nThe uprise stems from matrix elements which do not decrease but increase,\nat least in an intermediate interval of the running variable $l$.\nSuch an increase occurs if the number of excitations is not correlated\nwith the total energy of the states. This means that two states are linked by\nan off-diagonal matrix element of which the state with low\nnumber of excitations is \\emph{higher} in total energy than the state with\na higher number of excitations. The increase of such a matrix element\nbears the risk that the unavoidable\ntruncations imply a too severe approximation.\n\nThe situation that the number of excitations is not correlated with the\ntotal energy can occur where the lower band edge of a continuum of more \nparticles falls below the upper band edge of a continuum of less particles or\nbelow the dispersion of a single excitation,\nsee, e.g., Ref.\\ \\onlinecite{schmi05b}. This situation implies additional\nlife-time effects since the states with less particles can decay\ninto the states with more particles. 
Even if the decay rates are very\nsmall the CUT can be spoilt by them because the CUT in its form defined \nby Eq.\\ (\\ref{eq:generator}) correlates the particle number and the energy of\nthe states.\n\nIn order to proceed in a controlled way, we neglect the small life-time \neffects if they occur at all. This is done by stopping the CUT at\n$l_\\text{min}$ at the first minimum of the ROD. \nThe position of $l_\\text{min}$ for $t\/U=0.2$ and $r=3$ is indicated in the\ninset of Fig.~\\ref{fig_ROD} by a vertical line. It is found that the remaining \nvalues of the ROD are small and thus negligible. Hence we omit the remaining\noff-diagonal terms. \nOnly close to the critical point, where\nthe MI phase vanishes, the present approach\nbecomes insufficient.\n\n\n\\begin{figure}[thbp]\n \\begin{center}\n \\includegraphics[width=\\columnwidth,height=\\columnwidth]\n\t\t {.\/fig2.eps}\n \\end{center}\n \\caption{(color online) Phase diagram of the Mott insulating (MI) and the\n superfluid (SF) phase in the $(t\/U,\\mu\/U)$ plane.\n Dotted (dashed) lines show CUT results for maximal extension $r=2$ \n ($r=3$). Solid lines (symbols) show series (DMRG) results from Refs.\\\n \\onlinecite{elstn99} and \\onlinecite{kuhne98}.}\n \\label{fig_PD}\n\\end{figure}\nWe check the reliability of our approach by comparing its results to those\nof other methods\nfor the phase diagram in Fig.~\\ref{fig_PD}. The upper\nboundary of the MI phase is given by the particle gap\n$\\Delta^\\text{p}:=\\text{min}_k \\omega^\\text{p}(k)$. \nThe lower boundary is given by the negative hole gap\n$-\\Delta^\\text{h}$ where\n$\\Delta^\\text{h}:=\\text{min}_k \\omega^\\text{h}(k)$.\nThe dotted (dashed) curves result from CUTs for $r=2$ ($r=3$).\nFor $r=2$, the CUT can be performed till $l=\\infty$;\nfor $r=3$, the flow is stopped at $l_\\text{min}$.\nSolid curves depict the findings by series\nexpansion, the symbols those obtained by DMRG \\cite{elstn99,kuhne98}. 
\nThe agreement is very good in view of the truncation of the \nHilbert space and in view of the low value of $r$. Note that the $r=3$\nresult agrees better with the series and DMRG results\nthan the $r=2$ result. As expected, the deviations\nincrease for larger values of $t$ because longer-range processes become\nmore important. Yet the values obtained by the CUT for the critical ratio \n$x_c:=t\/U$, where the MI phase vanishes, are reasonable.\nWe find $x_c^{(r=2)}=0.271$ and $x_c^{(r=3)}=0.258$. By high accuracy \ndensity-matrix renormalization $x_c=0.297\\pm0.01$ was found, see\nRef.\\ \\onlinecite{kuhne00} and references therein. Series expansion\nprovides $x_c=0.26\\pm0.01$ which is very close to our value $x_c^{(r=3)}$.\nThis fact underlines the similarity between series expansions and the\nreal space CUT as employed in the present work.\n\nWe conclude from the above findings for the phase diagram (Fig.\\\n\\ref{fig_PD}) that the mapping to the particle-conserving $H_\\text{eff}$\nworks very well for large parts of the phase diagram. It\ndoes not, however, capture the Kosterlitz-Thouless nature of the transition\nitself \\cite{kuhne00}.\nThe CUT yields reliable results within the MI phase for \n$t\\lessapprox 0.2U$. Henceforth, results for this regime\nwill be shown which were obtained in the $r=3$ scheme.\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth,height=\\columnwidth]\n\t\t {.\/fig3.eps}\n \\end{center}\n \\caption{(color online) The dispersions of a particle $\\omega^{\\rm p}(k)$ \n (black) and a hole $\\omega^{\\rm h}(k)$ (green\/grey) for $t=0.1U$ (solid),\n $t=0.15U$ (dotted) and $t=0.2U$ (dashed).}\n \\label{fig_disp}\n\\end{figure}\nIn Fig.~\\ref{fig_disp}, the single-particle dispersion $\\omega^{\\rm p}(k)$ \n(one-hole dispersion $\\omega^{\\rm h}(k)$) is shown as black \n(green or grey, resp.) curves.\nBoth dispersions increase with $t$; the particle dispersion \nalways exceeds the hole dispersion. 
\nOn increasing $t$, the center of the hole dispersion shifts\nto higher energies while the center of the particle dispersion \nremains fairly constant in energy.\n\n\\section{Observable}\nAt the beginning of the flow ($l=0$), the observable $R$ is proportional to \n$H_t$. The dynamic structure factor $S_R(k,\\omega)$ encodes the response of the\nsystem to the application of the observable $R$. The observable $R$ transfers \nno momentum because it is invariant with respect to translations.\nTherefore, the Bragg spectroscopy \\cite{stofe04,kohl05} measures the response \n$S_R(k=0,\\omega)$ at momentum $k=0$ and energy $\\omega$. \nA sketch of the spectral density $S_R(k=0,\\omega)$ for this observable is shown\nin Fig.~\\ref{fig_sketch}. We assume that the average energy \nis mainly determined by $H_U$ in the MI regime.\nThen the first continuum is located at $U$ and a \nsecond one at $2U$. For small $t\/U$ the continua will be well separated. \nThe energy-integrated spectral density in the first continuum is the spectral\nweight $S_1$. Correspondingly, $S_2$ stands for \nthe spectral weight in the second continuum. \n\n\nTo analyze the spectral weights in the effective model obtained by the CUT\nthe observable $R$ must be transformed as well.\nBefore the CUT, the observable is $R(l=0)=H_t$. It is a sum of \nlocal terms \n\\begin{equation}\n R(l=0)=H_t= \\sum_{{\\mathbf r}} \n b^\\dagger_{{\\mathbf r}-1\/2} b^{\\protect\\phantom\\dagger}_{{\\mathbf r}+1\/2}\n +b^\\dagger_{{\\mathbf r}+1\/2} b^{\\protect\\phantom\\dagger}_{{\\mathbf r}-1\/2}\\ ,\n\\end{equation}\nwhere we have rewritten the sum as a sum over the bonds ${\\mathbf r}$. The bond\npositions are in the centers of two neighboring sites. This notation emphasizes\nthat the observable acts locally on bonds. 
The observable is\ntransformed by the CUT to\n\\begin{equation}\n\\label{eq:Reff}\n {R}^{\\rm eff}=R(l=\\infty)=\\sum_{{\\mathbf r}} R(l=\\infty,{\\mathbf r})\\ .\n\\end{equation}\nIt is the sum over the transformed local observables $R(l=\\infty,{\\mathbf r})$ \nwhich are centered at the bonds ${\\mathbf r}$. \n\nThe local observable is sketched schematically in Fig.~\\ref{fig_obsaction}. \nThe sites on which the local observable acts are shown as filled circles. \nThe state of the sites shown as empty circles is not altered by the observable.\nAt $l=0$, the observable is $H_t$. It acts only locally on adjacent sites of \nthe lattice as shown in Fig.~\\ref{fig_obsaction}a. \n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig4.eps}\n \\end{center}\n \\caption{Sketch of the distribution of spectral weight $S(k=0,\\omega)$. \n The weight centered around $\\omega=U$ is given by\n $S_1$, the weight around $\\omega=2U$ by $S_2$.}\n \\label{fig_sketch}\n\\end{figure}\n\\begin{figure}[h]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig5.eps}\n \\end{center}\n \\caption{Terms of the local observable $R(l,{\\mathbf r})$. \n The arrow indicates the bond on\n which a term acts. Filled circles: sites on which a non-trivial\n operator acts; `non-trivial' means that the operator is different from\n the identity. Open circles: sites on which the term under study does not \n act, i.e., the term acts as identity on these sites.\n (a) At $l=0$ the observable is $H_t$. \n It is composed of local terms that act only on adjacent sites of the\n lattice. \n (b) and (c) More complicated terms appear during the flow. The sum\n of the distances of all operators in a term is our measure \n $r_{\\mathcal O}$ for the extension of a term. \n It is $2$ for (b) and $4$ for (c). 
Terms beyond\n a certain extension are omitted.}\n \\label{fig_obsaction}\n\\end{figure}\n\nWe compute the matrix elements of the excitation\noperator after the CUT, i.e., of ${R}^{\\rm eff}$. This operator\nconsists of terms of the form \n$c_{(\\bar{n};\\bar{m})} (h^\\dagger)^{n_h}(p^\\dagger)^{n_p} (d^\\dagger)^{n_d}\nh^{m_h}p^{m_p} d^{m_d}$ where we omitted all spatial subscripts\ndenoting only the powers \n$(\\bar{n};\\bar{m}) = (n_h\\; n_p\\; n_d; m_h\\; m_p\\; m_d)$ of the particular \noperator type. \nFor the full expression we refer the reader to Appendix~\\ref{sec:app}. \nThe coefficient $c_{(\\bar{n};\\bar{m})}$ is the corresponding prefactor.\nThe spectral weight $I^\\text{eff}_{(\\bar{n};\\bar{m})}$ \nis the integral of $S(k=0,\\omega)$ over all frequencies for momentum \ntransfer $k=0$. It stands for\n the excitation process starting from the states with $m_d$ double-particles,\n$m_p$ particles, and $m_h$ holes and leading to \nthe states with $n_d$ double-particles, $n_p$ particles, and $n_h$ holes. \n\nIn the course of the flow, contributions to the observable appear which\ndo not act on the bond on which the initial observable is centered.\nThe initial local process spreads out in real space for $l\\to \\infty$.\nExamples are sketched in Fig.~\\ref{fig_obsaction}b-c. \nIn order to avoid proliferation, also the terms in the observable have to be \ntruncated. Like for the terms in the Hamiltonian we introduce a measure\nfor the extension of the terms in the observable. This measure, however,\nis slightly different: It is the sum $r_{\\mathcal O}$ of the distances \nof all its local creation or annihilation operators to ${\\mathbf r}$.\nIf the value $r_{\\mathcal O}$ of a certain term\nexceeds a preset truncation criterion, this term is neglected.\n\nFigures \\ref{fig_obsaction}b-c illustrate the truncation criterion for the\nobservable. 
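As a minimal numerical illustration of this truncation measure (a sketch; the function name and site coordinates are our own and not part of the paper's implementation), $r_{\mathcal O}$ is simply the summed distance of all operator positions to the bond position ${\mathbf r}$:

```python
def extension(positions, r):
    """Truncation measure r_O for one term of the observable:
    the sum of the distances of all its local creation/annihilation
    operators to the bond position r (sites sit at integer positions,
    bonds at half-integer positions)."""
    return sum(abs(p - r) for p in positions)

# A term acting on the two sites adjacent to the bond at r = 0
# has r_O = 0.5 + 0.5 = 1; more extended terms have larger r_O.
r_O = extension([-0.5, 0.5], 0.0)
```

A term is kept whenever its `extension` value does not exceed the preset truncation $r_{\mathcal O}$; otherwise it is discarded.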
An operator that acts on the sites shown in \nFigure~\\ref{fig_obsaction}b meets the truncation criterion for \n$r_{\\mathcal O}=3$. It is kept in a $r_{\\mathcal O}=3$ calculation. \nAn operator acting on the sites in Figure~\\ref{fig_obsaction}c \nis kept in the calculation with $r_{\\mathcal O}=4$. But it does not meet\nthe truncation criterion for $r_{\\mathcal O}=3$; in a calculation with \n$r_{\\mathcal O}=3$ it is discarded. \n\nOnce the local effective observables $R(l=\\infty,{\\mathbf r})$ are known,\nthe total effective expression $R^\\text{eff}$ is given by the sum over \nall bonds (\\ref{eq:Reff}).\n\n\\begin{figure}[thbp]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]\n {.\/fig6a.eps}\n \\includegraphics[width=\\columnwidth]\n {.\/fig6b.eps}\n \\end{center}\n \\caption{(color online) Spectral weights of various processes for \n truncations $r_{\\mathcal O}\\in \\{2,3,4\\}$. \n (a) \n $I^\\text{eff}_{(\\text{hp;0})}$ (black curves) and \n $I^\\text{eff}_{(2\\text{h2p};0)}$ (green\/grey). \n (b) \n $I^\\text{eff}_{(\\text{h2p;p})}+I^\\text{eff}_{(\\text{2hp;h})}$ (black)\n and $I^\\text{eff}_{(\\text{hd;p})}$ (green\/grey)}\n \\label{fig_sw}\n\\end{figure}\nThe results for the observable truncations\n$r_{\\mathcal O}\\in \\{2, 3, 4\\}$ are shown in Fig.~\\ref{fig_sw}. At zero temperature, only the processes starting from the ground state\nare relevant. These processes are the $(\\bar{n};0)$ processes \nwhich start from the reference state $|\\text{ref}\\rangle$ because\nthe CUT is constructed such that\nthe reference state, which is the vacuum of excitations,\nbecomes the ground state after the transformation\n\\cite{knett00a,mielk98,knett03a}.\nThe weights of the $(\\bar{n};0)$ processes are shown \nin Fig.~\\ref{fig_sw}a. 
By far the dominant weight is\n $I^\\text{eff}_{(\\text{hp};0)}$; the process \n$I^\\text{eff}_{(\\text{2h2p};0)}$ is lower by orders of magnitude.\nThe agreement between results for various truncations is very good.\n\n\nThe particle-hole pair excited in $I^\\text{eff}_{(\\text{hp};0)}$ \nhas an energy of about $U$ for low values of $t\/U$, \nwhile $I^\\text{eff}_{(\\text{2h2p};0)}$ leads to a response at $2U$.\nHence we find noticeable weight only around $U$ in accordance with\nrecent quantum Monte Carlo (QMC) data \\cite{batro05}.\n\n\nAt finite temperature, excitations are present before $R^\\text{eff}$\nis applied. At not too high temperatures, independent particles $p^\\dagger$\nand holes $h^\\dagger$ are the prevailing excitations. Other excitations,\nfor instance $d^\\dagger$ or correlated states $(p^\\dagger)^2$\nof two $p$-particles, are higher in energy and thus much less likely.\nSo the processes starting from a particle or a hole are the important ones\nwhich come into play for $T>0$. Hence we focus on\n $I^\\text{eff}_{(\\text{h2p;p})}$, $I^\\text{eff}_{(\\text{2hp;h})}$,\nand $I^\\text{eff}_{(\\text{hd;p})}$. \nThese weights are shown in Fig.~\\ref{fig_sw}b. The results \nfor $I^\\text{eff}_{(\\text{hd;p})}$ depend only very little on the truncation. \nA larger dependence on the truncation is found for\n$I^\\text{eff}_{(\\text{h2p;p})}+I^\\text{eff}_{(\\text{2hp;h})}$. But the \nagreement is still satisfactory. \n\nThe processes \n$I^\\text{eff}_{(\\text{h2p;p})}$ and $I^\\text{eff}_{(\\text{2hp;h})}$\nincrease the energy by about $U$ because they \ncreate an additional particle-hole pair. The \n$I^\\text{eff}_{(\\text{hd;p})}$ process increments the energy\nby about $2U$ because a hole and a double-particle are generated.\n\n\n\\section{Approaching Experiment}\n\n\nLet us address the question of what causes the high energy peak in\nRefs.~\\cite{grein02,stofe04,kohl05}. 
It was suggested that\ncertain defects, namely an adjacent pair of a singly and a doubly\noccupied site, are at the origin of the high energy peak.\nThe weight of such processes for a given particle state is\nquantified by $I^\\text{eff}_{(\\text{hd;p})}$. Its \nrelatively high value, see Fig.\\ \\ref{fig_sw}b, puts the presumption\nthat such defects cause the peak at $2U$ on a quantitative basis.\n\n\\subsection{Zero Temperature}\n\nBut what generates such defects? At zero or at very low temperature,\nthe inhomogeneity of the parabolic trap can imply the existence\nof plateaus of various occupations $\\langle n\\rangle \\in \\{0,1,2,\\ldots\\}$\ndepending on the total filling \\cite{batro02,kolla04,batro05}.\nIn the transition region from one integer value of $\\langle n\\rangle$\nto the next, defects occur which lead to excitations at $2U$.\n\nYet it is unlikely that this mechanism explains the experimental\nfinding since the transition region is fairly short at low value of\n$T\/U$ and $t\/U$, i.e., the plateaus prevail. The high energy peak at $2U$, \nhowever, has less weight by only a factor $2$ to $5$ compared to the weight \nin the low energy peak at $U$ \\cite{stofe04}. So we conclude that the \ninhomogeneity of the traps alone cannot be sufficient to\naccount for the experimental findings.\n\n\\subsection{Finite Temperature}\nAt higher temperatures, thermal fluctuations are a likely candidate\nfor the origin of the defects. Hence we estimate their effect in the following\nway. 
Thermally induced triply occupied $d$ states are\nneglected because they are very rare due to their high energy of\n$\\omega^\\text{d}(k)-2\\mu\\approx 2U$ above the vacuum $|\\text{ref}\\rangle$.\nWe focus on the particle $p$ and the holes $h$.\nThe average occupation of these states is estimated by a previously introduced\napproximate hardcore boson statistics \\cite{troye94}\n\\begin{subequations}\n\\label{eq:statistik}\n\\begin{eqnarray}\n \\langle n^{\\sigma}_k \\rangle &=& \n \\exp\\left({-\\beta( \\omega^{\\sigma}(k) -\\mu^{\\sigma} )}\\right)\n \\langle n^{\\rm vac} \\rangle\n\\\\\n \\langle n^{\\rm vac} \\rangle &=& \\left(1+z^{\\rm p}(\\beta) + z^{\\rm\n h}(\\beta)\\right)^{-1}\\ ,\n\\end{eqnarray}\n\\end{subequations}\nwhere \n$z^\\sigma = (2\\pi)^{-1}\\int_0 ^{2\\pi}dk e^{-\\beta(\\omega^{\\sigma}(k)\n-\\mu^{\\sigma})}$, $\\mu=\\mu^\\text{p}=-\\mu^\\text{h}$. The chemical potential\nchanges sign for the holes because a hole stands for the absence of\none of the original bosons $b$. Equation (\\ref{eq:statistik}) is obtained\nfrom the statistics of bosons without any interaction by correcting globally\nfor the overcounting of states. The fact that the overcounting is \nremedied on an average level implies that (\\ref{eq:statistik}) \nrepresents only an approximate, classical description \\cite{schmi05c}.\nThe chemical potential $\\mu$ is determined self-consistently such that \nas many particles as holes are excited, i.e.,\nthe average number of bosons $b$ per site remains one. 
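The self-consistency condition above can be sketched numerically: for given dispersions $\omega^\text{p}(k)$ and $\omega^\text{h}(k)$, the chemical potential $\mu$ is tuned until $z^\text{p}(\mu)=z^\text{h}(\mu)$, i.e., until as many particles as holes are excited. The following is a minimal sketch under stated assumptions (function names, the bisection bracket, and the toy dispersions used in the example are ours, not the paper's actual code):

```python
import numpy as np

def occupations(omega_p, omega_h, beta, tol=1e-10):
    """Self-consistent chemical potential and average occupations for the
    approximate hardcore-boson statistics given above.
    omega_p, omega_h: particle/hole dispersions sampled on a uniform k-grid;
    in the paper these come from the CUT, here they are generic arrays."""
    def z(omega, mu):
        # Brillouin-zone average of the Boltzmann factor z^sigma
        return np.mean(np.exp(-beta * (omega - mu)))
    # Bisection on mu: z^p(mu) increases and z^h(-mu) decreases with mu,
    # so the condition z^p = z^h (equal particle and hole numbers)
    # has a unique root inside a wide bracket.
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if z(omega_p, mu) < z(omega_h, -mu):
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    n_vac = 1.0 / (1.0 + z(omega_p, mu) + z(omega_h, -mu))
    n_p = np.exp(-beta * (omega_p - mu)) * n_vac   # <n^p_k>
    n_h = np.exp(-beta * (omega_h + mu)) * n_vac   # <n^h_k>
    return mu, n_vac, n_p, n_h
```

By construction the returned occupations satisfy the constraint that the average number of bosons $b$ per site remains one.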
\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig7.eps}\n \\end{center}\n \\caption{(color online) Temperature dependence of the relative \n spectral weight $S_2\/S_1$ for various values of $t\/U$.\n {\\it Inset:} \n $S_2\/S_1$ as a function of $t\/U$ for various temperatures.}\n \\label{fig_sw_T}\n\\end{figure}\nNext, we turn to the computation of spectral weights at finite temperatures.\nAt not too high temperatures,\nthe relevant channels are $(\\text{hp};0)$, $(\\text{h2p;p})$, $(\\text{2hp;h})$,\nwhich excite at around $U$, and $(\\text{2h2p};0)$, $(\\text{hd;p})$,\nwhich excite at around $2U$.\nThe spectral weights at finite $T$ are calculated by Fermi's golden rule\nas before. The essential additional ingredient is the probability of finding\na particle or a hole, respectively, to start from and of finding\nsites which can be excited, i.e., which are not occupied by a particle\nor a hole. This leads us to the equations\n\\begin{subequations}\n\\begin{eqnarray}\nI^{\\text{eff},T}_{(\\bar{n};0)} \n&=& I^{\\rm eff}_{(\\bar{n};0)}\\langle n^{\\rm vac} \\rangle^{n_h+n_p+n_d}\n\\\\\n I^{\\text{eff},T}_{(\\bar{n};\\sigma)} &=& \n\\frac{\\langle n^{\\rm vac} \\rangle^{n_h+n_p+n_d}}{2\\pi}\n\\int_0^{2\\pi} I^{k,{\\rm eff}}_{(\\bar{n};\\sigma)} \\langle n^{\\sigma}_k\\rangle dk\n\\end{eqnarray}\n\\end{subequations}\nwith $\\sigma\\in\\{\\text{p,h}\\}$. The powers of $\\langle n^{\\rm vac} \\rangle$\naccount for the probability that the sites to be excited are not blocked\nby other excitations; the factor\n$\\langle n^{\\sigma}_k\\rangle$ accounts for the probability \nthat the excitation necessary for the particular process\nis present.\nThe momentum dependence in $I^{k,{\\rm eff}}$ stems from the momentum\ndependence of the annihilated particle or hole. It is computed by the\nsum of the moduli squared of the \nFourier transform of the matrix elements $c_{(\\bar{n};\\bar{m})}$\nin the real space coordinate of the annihilated excitation. 
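The two thermal-weight formulas above translate directly into a short numerical sketch (the function names are ours, and the $1/(2\pi)\int dk$ is approximated by a mean over a uniform $k$-grid, which is an assumption about the discretization rather than the paper's implementation):

```python
import numpy as np

def weight_T_vacuum(I_eff, n_vac, n_created):
    """Thermal weight of an (nbar;0) process: the T=0 weight reduced by
    the probability n_vac**(n_h+n_p+n_d) that all created excitations
    land on unblocked sites."""
    return I_eff * n_vac ** n_created

def weight_T_thermal(I_eff_k, n_sigma_k, n_vac, n_created):
    """Thermal weight of an (nbar;sigma) process, sigma in {p, h}:
    the k-average of the momentum-resolved weight I^{k,eff} times the
    occupation <n^sigma_k> of the annihilated particle or hole,
    multiplied by the same blocking factor n_vac**(n_h+n_p+n_d)."""
    return n_vac ** n_created * np.mean(I_eff_k * n_sigma_k)
```

For instance, an $(\text{hd;p})$ process creates two excitations ($n_h+n_p+n_d=2$) and annihilates one particle, so its thermal weight follows from `weight_T_thermal` with `n_created=2`.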
\n\nThe spectral weight in the high energy peak (at $2U$) relative to the weight in\nthe low energy peak (at $U$) is given by the ratio $S_2\/S_1$ where \n$S_2=I^{\\text{eff},T}_{(\\text{2h2p};0)}+I^{\\text{eff},T}_{(\\text{hd;p})}$ and \n$S_1=I^{\\text{eff},T}_{(\\text{hp};0)}+I^{\\text{eff},T}_{(\\text{h2p;p})}+\nI^{\\text{eff},T}_{(\\text{2hp;h})}$. Fig.~\\ref{fig_sw_T}\ndisplays this key quantity as a function of $t\/U$ and $T\/U$ for \n$r_{\\mathcal O}=4$. The difference from the result for $r_{\\mathcal O}=3$ is \nless than $0.005$. \n\nTwo regimes can be distinguished. For low values of\n$T\\lessapprox 0.19U$, the ratio $S_2\/S_1$ increases on increasing $t\/U$\nbecause the particle gap $\\Delta^\\text{p}$ decreases so that\n$\\langle n^\\text{p}_{k\\approx 0}\\rangle$ grows.\nFor higher values of $T\\gtrapprox 0.19U$, the increase of \n$\\langle n^\\text{p}_{k\\approx 0}\\rangle$ \nis overcompensated by the decrease of the weights\n$I^{\\text{eff},T}_{(\\text{2h2p};0)}+I^{\\text{eff},T}_{(\\text{hd;p})}$\nand the increase of the weights $I^{\\text{eff},T}_{(\\text{hp};0)}+\nI^{\\text{eff},T}_{(\\text{h2p;p})}+ I^{\\text{eff},T}_{(\\text{2hp;h})}$, cf.\\\nFig.~\\ref{fig_sw}, so that the relative spectral weight\ndecreases on increasing $t\/U$. Around $T=0.19U$, the ratio is fairly\nindependent of $t\/U$.\n\n\nThe experimental value of $S_2\/S_1$ \\cite{stofe04} is about $0.2-0.5$ \nfor small values of $t\/U$ ($t\\leq 0.03U$). It increases\non approaching the superfluid phase. Hence, our estimate for\n$S_2\/S_1$ implies a \\emph{significant} temperature $T\\approx U\/3$\nin the MI phase, which was not expected. 
\nThis is the main result of our analysis of the spectral weights.\n\n\\section{Discussion}\nThe analysis of the spectral weights leads us to the conclusion that\nthe temperature of the Mott-insulating phases must be quite considerable.\nAt first sight, this comes as a surprise because other experiments\non cold atoms imply very low temperatures, see for instance Ref.\\\n\\onlinecite{pared04}. But the seeming contradiction can be resolved.\n\nOne has to consider the entropies involved \\cite{blaki04,rey04}.\nThe entropy per boson $S\/N$ in a three-dimensional harmonic trap\nis given by $\\approx 3.6(1-f_0)$ where $f_0$ is the fraction\nof the Bose-Einstein condensate \\cite{schmi05c}. \nAdiabatic loading into the optical lattice\nkeeps $S\/N$ constant so that we can estimate the temperature of\nthe MI phase from the derivative of the free energy \\cite{troye94}. \nFor $f_0=0.95; 0.9; 0.8$ we obtain significant temperatures\n$T\/U=0.12; 0.17; 0.27$ in satisfactory\nagreement with the analysis of the spectral weights \\cite{schmi05c}.\nThe analogous estimate in the case of the Tonks-Girardeau limit (large $U$ and\naverage filling $n$ below unity, see Ref.\\ \\onlinecite{pared04})\nleads to temperatures of the order of the hopping $t$:\n$T\/t= 0.17; 0.32; 0.61$ (assuming $n=1\/2$) \\cite{schmi05c}.\nThese values are again in good agreement with\nexperiment. \n\nThe physical interpretation is the following.\nOn approaching the Mott insulator by changing the filling to the\ncommensurate value $n\\to 1$, the temperature has to rise because\nthe available phase space decreases. For $n<1$ there is phase space \nwithout occupying any site with two or more bosons $b$ because\none can choose between occupation 0 or 1. But at $n=1$ the state\nwithout any doubly occupied site is unique. Hence no entropy \ncan be generated without inducing doubly occupied and empty sites, i.e., \nexcitations of $p$- and of $h$-type. 
This in turn requires that\nthe temperature is of the order of the gap, which agrees well with\nour analysis of the spectral weights.\n\nWe would like to draw the reader's attention to the fact that our result\nof a fairly large temperature ($T\\approx U\/3$) also provides a \npossible explanation of why no response at $\\approx 2U$ was found\nby QMC at low temperatures \\cite{batro05}. \nFurther investigations will certainly be fruitful, e.g.,\nit would be interesting to obtain QMC results for the \nexcitation operator $R$ that we used in our present work.\nThe excitation operator used in Ref.\\ \\onlinecite{batro05}\nvanishes for vanishing momentum so that it is less suited to\ndescribe the experimental Bragg spectroscopy \\cite{stofe04}.\n\n\nIn the attempt to provide quantitative numbers for the temperature\nin the MI Bose systems one must be cautious because the bosonic systems\nare shaken fairly strongly in experiment \\cite{stofe04,kohl05}. \nThough it was ascertained that the experiment was conducted\nin the linear regime, it might be that the systems are heated by\nthe probe procedure so that the temperatures seen in the \nspectroscopic investigations are higher than those seen by other\nexperimental investigations.\n\nIt would be rewarding to clarify the precision of the\n spectroscopic investigations. Then the pronounced $T$ dependence of the\nrelative spectral weight in Fig.\\ \\ref{fig_sw_T} could be used as\na thermometer for Mott-insulating bosons in optical lattices.\n\nIn summary, we have studied spectral properties of bosons in one-dimensional\noptical lattices using particle-conserving continuous\nunitary transformations (CUTs). At $T=0$ and for small $t\/U$ \nspectral weight is only present at energies $\\approx U$. \nRecent experimental peaks at $\\approx 2U$ \\cite{stofe04} can\nbe explained assuming $T\\approx U\/3$. 
Our results suggest to investigate the\neffects of finite $T$ on bosons in optical lattices much more thoroughly.\n\n\\begin{acknowledgments}\nWe thank T. St\\\"oferle for providing the experimental data.\nFruitful discussions are acknowledged with A. L\\\"auchli, \nT. St\\\"oferle, I. Bloch,\nS. Dusuel, D. Khomskii, and E. M\\\"uller-Hartmann. \nThis work was supported by the DFG via SFB 608 and SP 1073.\n\\end{acknowledgments}\n\n\\bigskip\n\n\\begin{appendix}\n\\section{Explicit expression for the observable}\\label{sec:app}\nThe local observable depending on the flow parameter $l$ reads \n\\begin{eqnarray}\n \\label{eq:bose:obsformel2}\n R(l,\\mathbf r)&=&\\sum_{\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\}}\nc(l,\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\})\\times\\nonumber\\\\\n&&\\quad h_{i_1+r}^{\\dagger}\\cdot ... \\cdot h_{i_{n_h}+r}^{\\dagger} h_{i^\\prime_1+r}^{\\phantom{\\dagger}} \\cdot ... \\cdot h_{i^\\prime_{m_h}+r}^{\\phantom{\\dagger}}\\times\\nonumber\\\\\n&&\\quad p_{j_1+r}^{\\dagger}\\cdot ... \\cdot p_{j_{n_p}+r}^{\\dagger} p_{j^\\prime_1+r}^{\\phantom{\\dagger}} \\cdot ... \\cdot p_{j^\\prime_{m_p}+r}^{\\phantom{\\dagger}}\\times\\nonumber\\\\\n&&\\quad d_{k_1+r}^{\\dagger}\\cdot ... \\cdot d_{k_{n_d}+r}^{\\dagger}\nd_{k^\\prime_1+r}^{\\phantom{\\dagger}} \\cdot ... \\cdot\nd_{k^\\prime_{m_d}+r}^{\\phantom{\\dagger}}\\nonumber. \\\\\n\\end{eqnarray}\nThe numbers $(n_h\\; n_p\\; n_d; m_h\\; m_p\\; m_d)$ are defined as the number of \noperators involved in a term in Eq.~\\ref{eq:bose:obsformel2}. The number of\ncreation operators $h^\\dagger$, $p^\\dagger$, and $d^\\dagger$ is \ngiven by $n_h$, $n_p$, and $n_d$, respectively. \nThe number of annihilation operators $h$, $p$, and $d$ \nis given by $m_h$, $m_p$, and $m_d$, respectively. \nA set of these six numbers defines the type\n$(\\bar{n};\\bar{m})$ of a process. 
\nThe variables $\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\}$\nare multi-indices, e.\\ g.\\ $i=\\{i_1,...,i_{n_h}\\}$ which give the position of\nthe operator. The coefficients $c(l,\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\})$\nkeep track of the amplitudes of these processes during the flow. Their value\nat $l=\\infty$ defines the effective observable $R^\\text{eff}$. \n\nThe total observable is the sum \n\\begin{equation}\n R(l)= \\sum_{{\\mathbf r}} R(l,{\\mathbf r}).\n\\end{equation}\nNo phase factors occur so that the observable is invariant with respect\nto translations. Hence no momentum transfer takes place and the\nresponse at $k=0$ is the relevant one. \n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAdversarial images are perturbed visual stimuli that can fool a high performing image classifier with carefully chosen noise that is often imperceptible to humans~\\citep{szegedy2013intriguing,goodfellow2014explaining}. These images are synthesized using an optimization procedure that maximizes the wrong output class of a model observer, while minimizing any noticeable differences in the image for a reference observer. Understanding why adversarial images exist has been studied extensively in machine learning as a way to explore gaps in generalization~\\citep{gilmer2018adversarial,yuan2019adversarial,ilyas2019adversarial}, computer vision with applications to real-world robustness~\\citep{dubey2019defense,yin2019fourier,richardson2020bayes}, and recently in vision science to understand similar and divergent visual representations with humans ~\\citep{zhou2019humans,feather2019metamers,golan2019controversial,reddy2020biologically,dapello2020simulating}. Thus far there have been gaps in the literature on how natural image distributions and classification task impact the robustness of a model to adversarial images. 
\n\nThis paper addresses whether training on a specific natural image distribution or task plays a role in the adversarial robustness of a model. \\textit{Natural images} are images that are representative of the real world. MNIST, CIFAR-10, and ImageNet are examples of natural image datasets. The goal is to understand what it means for a model to be inherently more adversarially robust to objects vs scenes or objects vs digits, where the latter is addressed in this paper. The thesis of this paper is that both the natural image distribution and task (independently and jointly) play a role in the adversarial robustness of a model trained on them. \n\nAnswering questions about the role of the image distribution and task in adversarial robustness could be critical for applications where an adversarial image can be detrimental (e.g. self-driving cars~\\citep{lu2017no}, radiology~\\citep{hirano2020vulnerability} and military surveillance~\\citep{ortiz2018defense,deza2019assessment}). These applications are often models of natural image distributions. Understanding if and how the dataset and task play a role in the adversarial robustness of a model could lead to better adversarial defenses for the aforementioned applications and a better understanding of the existence of adversarial images.\n\nThe works most closely related to the thesis of this paper are the following: \\citet{ilyas2019adversarial} found that adversarial vulnerability is not necessarily tied to the training scheme, but rather is a property of the dataset. Similarly,~\\citet{ding2019on} finds that semantic-preserving shifts on the image distribution could result in drastically different adversarial robustness even for adversarially trained models.\n\n All work in this paper studying the role of natural image distribution and classification task in the adversarial robustness of a model is empirical. The experiments presented require important performance comparisons. 
Therefore, we propose an unbiased metric to measure the adversarial robustness of a model for a particular set of images over an interval of perturbation strengths. Using this metric, we compare MNIST and CIFAR-10 models and find that MNIST models are inherently more adversarially robust than CIFAR-10 models. We then create a Fusion dataset ($\\alpha$-blend of MNIST and CIFAR-10 images) to determine whether the image distribution or task is causing this difference in adversarial robustness and discover that both play a role. Finally, we examine whether pretraining on one dataset (CIFAR-10 or MNIST), then training on the other results in a more adversarially robust learned representation of the dataset the model is trained on and find that this impacts robustness in unexpected ways. \n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure1_Cartoon_Final.pdf}\n \\caption{(A) After using the same hyperparameters and training scheme (SGD) for both models, MNIST achieves around $99\\%$ accuracy, while CIFAR-10 peaks around $80\\%$ with ResNet50 (both without data-augmentation). In cases like these it may be obvious to say that better performing models will be more adversarially robust -- but this is not always the case, in some cases it is the opposite when fixing the image distribution~\\citep{zhang2019theoretically}; (B) One solution: example graphs showing the area under the curve of $f(\\epsilon)$ and $g(\\epsilon$), functions outputting the accuracy of an adversarial attack for a given $\\epsilon$ of two models before (top) and after (bottom) accuracy normalization. 
This shows how at $\\epsilon_0$, models go from an unmatched accuracy to a matched upper-bounded score of 1, allowing an unbiased computation of area under the curve.}\n \\label{fig:normalized}\n\\end{figure*}\n\n\n\\section{Proposed Adversarial Robustness Metric}\n\n\nIn order to make comparisons of how robust a model is to adversarial perturbations, a proper metric for adversarial robustness must be defined. \nWe define the adversarial robustness $R$ as a measure of the rate at which accuracy of a model changes as $\\epsilon$ (adversarial perturbation strength) increases over a particular $\\epsilon$-interval of interest. The faster the accuracy of a model decreases as $\\epsilon$ increases, the lower the adversarial robustness is for that model. We propose an adaptation of area under the curve (AUC) to measure adversarial robustness. A good measure of how much change of accuracy is occurring in an $\\epsilon$-interval for a model is the AUC of a function that outputs the accuracy for an adversarial attack given an $\\epsilon$ for a model. This AUC provides a total measure of model performance for an $\\epsilon$-interval. If the accuracy decreases quickly as $\\epsilon$ increases, then the AUC will be smaller. \n\nDespite how intuitive the previous notion may sound, we immediately run into a problem: Some datasets are more discriminable than others independent of model observers, as shown in Figure~\\ref{fig:normalized}(A). This must be taken into account when computing the area under the curve. It could be possible that under unequal initial performances, one model seems more `adversarially robust' over the other by virtue purely of the initial offset in the better performance.\n\nFigure~\\ref{fig:normalized}(B) shows that one simple solution to solve the differences in accuracy between two model systems is by normalizing them with respect to their accuracy under non-adversarial ($\\epsilon_0=0$) inputs. 
This yields the following expression: \\begin{equation}\n\\label{eq:Normalization}\n R=\\frac{1}{f(\\epsilon_0)(\\epsilon_1-\\epsilon_0)}\\int_{\\epsilon_0}^{\\epsilon_1}f(\\epsilon)d\\epsilon\n\\end{equation}\nwhich can be interpreted as the normalized area under the curve of a function $f(\\epsilon)$ that outputs the accuracy of a model for an adversarial attack of strength $\\epsilon$ over an $\\epsilon$-interval (i.e. $[\\epsilon_0, \\epsilon_1]$). Note that $f(\\epsilon_0)>0$ and $\\epsilon_1 > \\epsilon_0$. Computing $R$ is the same as integrating relative change (shown in supplementary material). Therefore, $R$ is an aggregate measure of relative change in accuracy over an $\\epsilon$-interval. The division by $f(\\epsilon_0)$ normalizes the function because the function now represents the change in accuracy with respect to no adversarial perturbations (i.e. it is now a relative change). Further, the accuracy at $f(\\epsilon_0)$ can be considered an \\textit{`oracle'} for the adversarial attacks of the model (i.e. the likely optimal or best performance for that $\\epsilon$-interval). The term $\\frac{1}{\\epsilon_1-\\epsilon_0}$ of Eq.~\\ref{eq:Normalization} puts the area under the curve of the normalized accuracy between $(0,1]$. This is so that it is easier to interpret and so that the metric is normalized for different $\\epsilon$-intervals (i.e. the maximum value is not $\\epsilon_1 - \\epsilon_0$, but instead is 1). Note that the metric is valid independent of the adversarial attack method.\n\nIf for a particular model, $R=1$, this implies that $f(\\epsilon)$ is constant over $[\\epsilon_0, \\epsilon_1]$. If for a model, $R\\approx 0$, that means that for all $\\epsilon$ in the interval, the model classifies nearly all the perturbed images of a given set incorrectly. 
$R$ can be arbitrarily close to 0.\n\nTo guarantee that $R\\leq 1$, the following constraint must be satisfied: \n\\begin{equation}\n \\int_{\\epsilon_0}^{\\epsilon_1}f(\\epsilon)d\\epsilon \\leq f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)\n\\end{equation}\nThis is a reasonable constraint to make. The quantity $f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)$ is the maximum possible AUC for $f(\\epsilon)$. This maximum occurs when $f(\\epsilon) = f(\\epsilon_0)$ for all $\\epsilon\\in[\\epsilon_0, \\epsilon_1]$. In other words, as $\\epsilon$ increases, the classification performance on the adversarial images does not change. An AUC greater than $f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)$ would imply that the accuracy rises above the starting accuracy (i.e. $f(\\epsilon_0)$). This behavior would contradict what it means to perform an adversarial attack.\n\nTo measure the impact $C$ that adversarial attacks have on a model between two specific $\\epsilon$ points instead of an interval, the following can be used: \\begin{equation}\n\\label{eq:relative}\n C =\\frac{f(\\epsilon)-f(\\epsilon_0)}{f(\\epsilon_0)}\n\\end{equation}\n\nwhere $C$ is the relative change between the performance of a model for two different $\\epsilon$'s of adversarial attacks. Normalizing to compute $R$ by taking the relative change in accuracy with respect to a reference or optimal value $f(\\epsilon_0)$ (i.e. Eq. \\ref{eq:relative}) results in a less biased measure of adversarial robustness than other normalization schemes, such as taking the difference (i.e. $f(\\epsilon) - f(\\epsilon_0)$). This is because the other schemes are unable to properly account for differences in performance of models on a particular dataset or task. 
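As an illustrative sketch (not the paper's actual code; the function names and sample accuracy curves are our own), $R$ from Eq.~\ref{eq:Normalization} and $C$ from Eq.~\ref{eq:relative} can be computed as:

```python
import numpy as np

def robustness(eps, acc):
    """Normalized AUC of the accuracy curve f(eps) over [eps_0, eps_1]:
    trapezoid estimate of the integral, divided by f(eps_0) * (eps_1 - eps_0)."""
    eps, acc = np.asarray(eps, float), np.asarray(acc, float)
    auc = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(eps))  # trapezoid rule
    return auc / (acc[0] * (eps[-1] - eps[0]))

def relative_change(f_eps, f_eps0):
    """Relative change in accuracy between two perturbation strengths."""
    return (f_eps - f_eps0) / f_eps0

eps_grid = [0.0, 0.1, 0.2, 0.3]
flat = robustness(eps_grid, [0.9, 0.9, 0.9, 0.9])  # constant accuracy -> R = 1
drop = robustness(eps_grid, [0.9, 0.6, 0.3, 0.0])  # linear fall to zero -> R = 0.5
```

A curve that never moves attains the upper bound $R=1$, while any decay pushes $R$ toward 0, matching the interpretation above.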
Broadly, we are not interested in how much the performance differs overall, but how much it differs relative to where it started.\n\nThere are two methods to find $f(\\epsilon)$: 1) to empirically evaluate $f$ at multiple values of $\\epsilon$ and estimate the area under the curve using integral approximations, such as the trapezoid method; 2) to find the closed form expression of $f(\\circ)$ as one would do for psychometric functions~\\citep{wichmann2001psychometric} and integrate. In this paper, we do the former (evaluate $f$ at multiple values of $\\epsilon$ and estimate the integral using the trapezoid method), although this method is extendable to the latter.\n\nPicking $\\epsilon_0$ and $\\epsilon_1$ is an experimental choice. Choosing $\\epsilon_0 = 0$ measures the adversarial robustness starting from no perturbations, yet $\\epsilon_0 > 0$ can also be used. For too high a choice of $\\epsilon_1$, the image can saturate and the performance will likely approach chance. This rebounding effect can be seen in some of the CIFAR-10 curves in our experiments.\n\nThere are certain assumptions for this normalization scheme to hold. For example, in both of our experiments MNIST and CIFAR-10 are equalized to have 10 classes, and we assume an independent and identically distributed testing distribution such that chance performance for any model observer is the same at $10\\%$. One could see how the normalization scheme would give a misleading result if one dataset had 2 i.i.d. classes (50\\% chance) and another had 10 i.i.d. classes (10\\% chance). 
In this case, proportions correct are not comparable, and a more principled way of equalizing performance -- likely using $d'$ (a generalized form of Proportion Correct used in Signal Detection Theory) -- would be required~\\citep{green1966signal}.\n\nOverall, this robustness metric can be used to get a sense of whether a model is adversarially robust over a particular $\\epsilon$-interval, or to measure how adversarially robust a model is compared to other models over that interval for a particular set of inputs. Note that this metric is not intended to be used to certify the adversarial robustness of an artificial neural network since it is an approximation of the change of accuracy of a model over an $\\epsilon$-interval for, in this paper, a specific set of images. \n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/experiment_figure.pdf}\n \\caption{(A) 20 models are trained for each dataset\/task (MNIST, CIFAR-10, and later a Fusion Dataset) and network architecture (LeNet, ResNet50, FullyConnectedNet), using a different set of randomly initialized weights (i.e. 60 models per dataset); (B) The models are then tested on adversarial images generated using FGSM \\cite{goodfellow2014explaining} of various perturbation strengths. Testing yields graphs similar to Figure~\\ref{fig:normalized}(A). Using these results, the adversarial robustness is computed using Eq. \\ref{eq:Normalization}. The average adversarial robustness across each set of models is then compared to determine which model is more adversarially robust.}\n \\label{fig:setup}\n\\end{figure*}\n\n\\section{Experimental Design}\n\\label{sec:Methods}\n\nFigure~\\ref{fig:setup} visualizes the general experimental design, where models are trained on either MNIST or CIFAR-10 images, and later Fusion images. 
The architecture, optimization and learning scheme, and initial random weights between each MNIST and CIFAR-10 model are the same, allowing us to draw comparisons between the adversarial robustness of the models after attacking the trained models. \n\n\\subsection{Architectures}\nAll experiments used 3 networks: LeNet \\cite{lenet}, ResNet50 \\cite{resnet}, and a fully connected network (FullyConnectedNet) where we explored adversarial robustness over 20 paired network runs and their learning dynamics. FullyConnectedNet has 1 hidden layer with 7500 hidden units. This number of hidden units was chosen so the number of parameters for the FullyConnectedNet has the same order of magnitude as the number of parameters for ResNet50. FullyConnectedNet has only 1 hidden layer so that the network is not biased to approximate a hierarchical function the way a convolutional neural network is (See~\\cite{mhaskar2016deep,poggio2017and} and recently \\cite{neyshabur2020towards,deza2020hierarchically}).\n\n\\subsection{Datasets}\nThe datasets used were MNIST, CIFAR-10, and a Fusion Dataset. To use the exact same architectures with the datasets, MNIST was upscaled to $32\\times32$ and converted to 3 channels to match the dimensions of CIFAR-10 (i.e. $32\\times 32 \\times 3$). MNIST was changed instead of CIFAR-10 because the low image complexity of MNIST images -- mainly their low spatial frequency structure, which lends itself to upscaling -- makes it less likely that the change affects the accuracy of models trained on that dataset. Preliminary results showed that there is a difference (insignificant in comparison to the other differences in results in this paper) in the adversarial robustness of models trained on the scaled and 3 color channel version and the regular version, with the scaled and 3 color channel version being less robust. 
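The $28{\times}28{\times}1 \to 32{\times}32{\times}3$ conversion described above can be sketched as follows (a nearest-neighbour upscale for illustration only; the paper does not specify which interpolation was used, and the function name is ours):

```python
import numpy as np

def mnist_to_cifar_shape(img28):
    """Upscale a 28x28 grayscale digit to 32x32 (nearest neighbour) and
    replicate it across 3 channels to match CIFAR-10's 32x32x3 layout."""
    idx = np.arange(32) * 28 // 32         # source row/column for each target pixel
    img32 = img28[idx][:, idx]             # (32, 32)
    return np.stack([img32] * 3, axis=-1)  # (32, 32, 3)

digit = np.random.rand(28, 28)
print(mnist_to_cifar_shape(digit).shape)   # (32, 32, 3)
```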
Whether the changes to MNIST entirely caused the difference was not determined due to the differences between architectures that were used for each version. No other changes to the datasets were made (such as color normalization, which is typically used for CIFAR-10) in order to preserve the natural image distribution.\n\nThe Fusion dataset that is used in the experiments is not a natural image distribution. It was created with the purpose of better understanding the inherent adversarial robustness properties of natural image distribution models. Each fusion image in the dataset is generated with the following $\\alpha$-blending procedure: \n\\begin{equation}\n\\label{eq:fusion}\n F = 0.5M + 0.5C,\n\\end{equation} where $F$ is a new fusion image, $M$ is an MNIST image modified to $32\\times32\\times3$ (by upscaling and increasing the number of color channels), and $C$ is a CIFAR-10 image. Example fusion images can be found in Figure~\\ref{fig:fusion}. This dataset is similar to \\textit{Texture~shiftMNIST} from~\\citet{jacobsen2018excessive}.\n\nThe Fusion dataset was created online, during each mini-batch of training or testing, via Eq.~\\ref{eq:fusion}. The fusion image training set was constructed using the MNIST and CIFAR-10 training sets and the fusion image test set was constructed using the MNIST and CIFAR-10 test sets. During training, the MNIST and CIFAR-10 datasets are shuffled at the start of every epoch. Therefore, it is likely that no fusion image is shown to the model more than once. This was done to ensure that the model cannot learn any correlation between any CIFAR-10 object and any MNIST digit, as well as to improve generalization of the model. Additionally, it is important to note that no two models were trained on the exact same set of fusion images, but all were evaluated on the same test images. 
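A minimal sketch of the per-epoch shuffling and $\alpha$-blending of Eq.~\ref{eq:fusion} (the arrays and names below are illustrative stand-ins; the actual pipeline operates on framework tensors within each mini-batch):

```python
import numpy as np

def fusion_epoch(mnist, cifar, rng):
    """One epoch of on-the-fly fusion images, F = 0.5*M + 0.5*C.
    Both datasets are re-shuffled so digit/object pairings are never fixed."""
    m = mnist[rng.permutation(len(mnist))]
    c = cifar[rng.permutation(len(cifar))]
    return 0.5 * m + 0.5 * c

rng = np.random.default_rng(0)
mnist = np.ones((6, 32, 32, 3))   # stand-ins for 3-channel 32x32 image batches
cifar = np.zeros((6, 32, 32, 3))
fused = fusion_epoch(mnist, cifar, rng)
print(fused.shape)                # (6, 32, 32, 3); every pixel equals 0.5 here
```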
Since we train 20 random models, this should average out any possible noise to a certain degree; strictly speaking the images were different, but the statistics were approximately matched.\n\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{figures\/Figure_Fusion_Dataset_Only.pdf}\n \\caption{The Fusion dataset was created to tease apart the causes and effects of the inherent adversarial robustness of models trained on natural image distributions. Here, we show sample images from the Fusion dataset consisting of alpha-blended MNIST + CIFAR-10 stimuli.}\n \\label{fig:fusion}\n\\end{figure}\n \n\\subsection{Hyperparameters, Optimization Scheme, and Initialization}\nIt is important to note that all hyperparameters are held constant, including the $\\epsilon$-interval. The only difference between the models using a certain architecture is the dataset\/task they are trained and tested on (just the task in the case of the Fusion dataset). In the experiments presented, the independent variables are the dataset and task, while the dependent variable being measured is the adversarial robustness of the model. Since all other variables are held fixed, if the adversarial robustness of the models trained on the different datasets\/tasks is different, then this change is due to the dataset\/task itself (i.e. the image distribution and classification task). If the $\\epsilon$-interval used to attack the two models were different, we could not directly conclude that any differences are due to the image distribution and task, because the difference could also be due to the differences in the strengths of the adversarial attacks on each model. Experiments using the Fusion dataset are presented in this paper to investigate which of the independent variables (i.e. 
whether image distribution or task) is playing a role in the differences in adversarial robustness.\n\nThe loss function used for all models was cross-entropy loss and the optimizer used was stochastic gradient descent (SGD) with weight decay $5\\times10^{-4}$, momentum $0.9$, and with an initial learning rate 0.01 for the FullyConnectedNet and LeNet models and an initial learning rate 0.1 for the ResNet50 models. The learning rate was divided by 10 at 50\\% of the training. The FullyConnectedNet and LeNet models were trained to 300 epochs and the ResNet50 models were trained to 125 epochs. ResNet50 models required fewer epochs during training because those models reached high levels of performance sooner than the other architectures. A batch size of 125 was used, since this is the closest number to the more typical batch size of 128 that divides both the number of CIFAR-10 images and the number of MNIST images. This was needed to ensure that the batches align properly when creating the fusion images. These hyperparameters and optimization scheme were chosen since they resulted in the best performance of those tested in preliminary experiments. \n\nFor all experiments, each model was trained 20 times with matched initial random weights across different datasets. For example, in the case of LeNet, 20 different LeNet models all with different initial random weights:~$\\{w_1,w_2,...,w_{20}\\}$ were used to train for CIFAR-10 in our first experiment, and these same initial random weights were used to train for MNIST. This removed the variance induced by a particular initialization (\\textit{e.g.} a lucky\/unlucky noise seed) that could bias the comparisons by arriving at a better solution via SGD. 
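The matched-initialization procedure can be sketched as follows (toy layer shapes for illustration; in the paper the weights come from each architecture's standard initializer):

```python
import numpy as np

def initial_weights(seed, shapes):
    """One set of initial random weights, fully determined by the seed."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal(s) * 0.01 for s in shapes]

# 20 paired runs: run k for MNIST and run k for CIFAR-10 start from identical w_k.
shapes = [(8, 16), (16, 10)]  # toy shapes standing in for the real layer sizes
pairs = [(initial_weights(k, shapes), initial_weights(k, shapes))
         for k in range(20)]
w_mnist, w_cifar = pairs[0]
print(all((a == b).all() for a, b in zip(w_mnist, w_cifar)))  # True
```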
This procedure was possible because our MNIST dataset was resized to a 3-channeled version with a new size of $32\\times32\\times3$ instead of $28\\times28\\times1$ (original MNIST).\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure_Regular_Fusion_Tree.pdf}\n \\caption{MNIST-trained networks (bottom left) across all architectures show greater adversarial robustness after accuracy normalization than CIFAR-10 trained networks (top left for each architecture). Notice too that ResNet50 appears to be the more adversarially robust network across network architectures (LeNet and FullyConnectedNet) independent of learning dynamics. Graphs of the normalized accuracy of the Fusion dataset on the object recognition task (top right) and digit recognition task (bottom right) for LeNet, ResNet50, and FullyConnectedNet. Generally, models trained on the digit task were more adversarially robust than those trained on the object task, showing the role that task plays in the adversarial robustness of a model. Additionally, these models were generally less adversarially robust than their MNIST and CIFAR-10 model counterparts. In combination, these results imply that both task and image distribution play distinct roles in the adversarial robustness of a model. The gold lines represent chance performance in the graphs.}\n \\label{fig:MNISTvsCIFAR_Original_and_Fusion}\n\\end{figure*}\n\n\\subsection{Adversarial Attacks}\n\nThe method used for generating adversarial images in the experiments presented in this paper is the Fast Gradient Sign Method (FGSM) presented in \\citet{goodfellow2014explaining}. The focus of the attacks was to create images that cause the model to misclassify in general, rather than misclassifying an image to a particular class. 
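The untargeted FGSM step, $x_{\mathrm{adv}} = \mathrm{clip}(x + \epsilon \, \mathrm{sign}(\nabla_x \mathcal{L}))$, can be sketched as follows (in practice the loss gradient comes from the autodiff framework; here it is simply passed in as an array):

```python
import numpy as np

def fgsm(x, grad_loss_x, eps):
    """Untargeted FGSM: move every pixel by eps in the direction that
    increases the loss, then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad_loss_x), 0.0, 1.0)

x = np.array([0.20, 0.50, 0.95])   # toy "image"
g = np.array([-1.3, 2.0, 0.4])     # pretend loss gradient w.r.t. x
adv = fgsm(x, g, 0.1)              # approx. [0.1, 0.6, 1.0]; last entry clipped
```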
FGSM was chosen over other optimization-based attacks such as Projected Gradient Descent (PGD) \\cite{madry2019deep} based on preliminary results, as FGSM was sufficient to successfully adversarially attack the model. FGSM also has a lower computational cost than PGD, allowing us to run more experiments and train more models. Adversarial training or other data-augmentation schemes that may bias the outcome were not performed. Importantly, given that an adversarial defense mechanism is not being proposed or used, strong adversarial attack methods, such as PGD, are not necessary in this first work -- contrary to, though justified in light of, the advice from \\citet{carlini2019evaluating}.\n\nThe $\\epsilon$-interval used in the experiments is $[0,0.3]$ (i.e. $\\epsilon_0 = 0, \\epsilon_1 = 0.3$). The upper bound of $0.3$ was chosen because adversarial images at that magnitude are difficult for many undefended classifiers to classify correctly. The trained models were adversarially attacked with $\\epsilon\\in\\{0, 0.0125, 0.025, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3\\}$ to approximate $f(\\epsilon)$. Models using the LeNet and FullyConnectedNet architectures were adversarially attacked at 1, 10, 25, 50, 150, and 300 epochs. Models using the ResNet50 architecture were adversarially attacked at 1, 10, 25, 50, 100, and 125 epochs. Different epochs were adversarially attacked to determine whether the results differed at different stages of learning.\n\n\\section{Experimental Results}\n\nThe following experiments provide a glimpse into the role of classification task and image distribution in the adversarial robustness of models.\n\nAll differences in robustness that are mentioned are statistically significant using a Welch's t-test with significance level $\\alpha=0.05$. 
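The unequal-variance test used for these comparisons can be sketched as follows (equivalent in intent to `scipy.stats.ttest_ind(a, b, equal_var=False)`; the robustness samples below are illustrative, not the paper's numbers):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two unpaired samples with unequal variances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

r_mnist = [0.91, 0.88, 0.93, 0.90, 0.92]  # illustrative per-run R values
r_cifar = [0.55, 0.60, 0.52, 0.58, 0.54]
t, df = welch_t(r_mnist, r_cifar)         # t > 0: MNIST runs are more robust
```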
This test was used because the models are unpaired and do not have equal variance since the models are trained on different datasets.\n\n\\subsection{Comparing MNIST vs CIFAR-10 Adversarial Robustness}\nThis experiment investigates whether MNIST models are inherently more adversarially robust than CIFAR-10 models, by comparing the adversarial robustness of the CIFAR-10 models and the MNIST models for the three architectures. Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} (top left for each architecture) shows normalized accuracy graphs for the CIFAR-10 trained models and Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} (bottom left) shows graphs of normalized accuracy for MNIST trained models. For both LeNet and FullyConnectedNet, the MNIST models were more adversarially robust than the CIFAR-10 models, for each epoch we examined. The same pattern of results held for ResNet50 models except for the first epoch, where there was no difference between the MNIST and CIFAR-10 models.\n\n\\underline{Result 1:} For the three network architectures tested (that all vary in approximation power and architectural constraints), MNIST trained models are inherently more adversarially robust than CIFAR-10 models. This implies that the task and\/or image distribution play a role in the adversarial robustness of the model.\n\n\\subsection{Comparing Object vs Digit Classification in the Fusion (MNIST + CIFAR-10) dataset}\n\nThe previous results suggested that after taking into account different measures of accuracy normalization, MNIST (both dataset and digit recognition task) models are intrinsically more adversarially robust than CIFAR-10 models. 
This implies that it is harder to fool an MNIST model than a CIFAR-10 model, likely in part because digits are highly selective to shape and show less perceptual variance than objects.\n\nNaturally, the next question that arises is whether the task itself is somehow making each perceptual system less adversarially robust. To test this hypothesis the Fusion dataset was used. Models were trained to perform either digit recognition or object recognition on these fusion images -- thus we have approximately fixed the image distribution but varied the approximation task~\\citep{deza2020hierarchically}. The distributions are only approximately matched because no model is trained on the exact same images; the image distribution is the same on average given the random sampling procedure. The goal with this new hybrid dataset is to re-run the same set of previous experiments, testing adversarial robustness for both the digit recognition task and the object recognition task, and thus to probe the role of the classification task while the dataset is fixed and all other variables remain constant.\n\nObservation: When examining the first epoch for the fusion trained models, the standard deviation of the curves in Figure~\\ref{fig:fusion}(B) is generally high. This is likely due to the design choice of avoiding showing the same fusion image twice. This does not occur in later stages of training.\n\n\\underline{Result 2a:} Task plays a critical role in the adversarial robustness of a model. Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} contains the normalized curves of the results for the digit and object recognition tasks on the fusion dataset for each of the architectures. The models were evaluated on fusion images constructed from the MNIST and CIFAR-10 test sets. 
The fusion image models were more adversarially robust on the digit recognition task than on the object recognition task: for the FullyConnectedNet at all epochs examined, and for ResNet50 and LeNet at all epochs examined excluding the first. This suggests that even if the image distribution is approximately equalized at training, the representation learned varies given the task, and impacts adversarial robustness differently.\n\n\\underline{Result 2b:} Image distribution also plays a role in the adversarial robustness of a model. Comparing the three architectures trained on the Fusion Dataset vs their regular image-distribution trained models shows that increasing the image complexity (by adding a conflicting image with the hope of increasing invariance) in fact decreases adversarial robustness when compared to regularly trained networks. Comparing fusion image models trained on the digit task and MNIST models: for the FullyConnectedNet and LeNet architectures, the MNIST models were more robust. The same holds for the ResNet50 MNIST models except at the first epoch, where there was no difference. CIFAR-10 models using the FullyConnectedNet architecture were more adversarially robust than the fusion image models trained on the object recognition task for all epochs tested. The same was true for the LeNet and ResNet50 architectures, except that there were no differences in adversarial robustness between the CIFAR-10 models and the fusion image models trained on the object task at 1 and 50 epochs. \n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure_Regular_PreTraining_Tree.pdf}\n \\caption{\\textit{Visual Hysteresis}: \n FullyConnectedNet and LeNet networks seem to carry over the learned representation and adversarial vulnerability from the pretrained system. 
However, only LeNet experiences a clear visual hysteresis where pretraining on CIFAR-10 for MNIST is worse (less adversarially robust) than only training on MNIST, yet pretraining on MNIST for CIFAR-10 is better (more adversarially robust) than only training on CIFAR-10 (See supplementary material). The gold lines represent chance performance in the graphs.}\n \\label{fig:MNISTvsCIFAR_Original_and_Pretraining}\n\\end{figure*}\n\n\\subsection{Impact of Pretraining on Out-Of-Distribution (o.o.d) image datasets}\n\nThis experiment investigates whether pretraining on one dataset (CIFAR-10 or MNIST), then training on the other results in a more adversarially robust learned representation of the dataset the model is trained on. \n\nThe pretraining procedure was done by using the existing fully trained CIFAR-10 or MNIST FullyConnectedNet, LeNet, and ResNet50 models as bases and then training\/fine-tuning them using the same training scheme but with MNIST or CIFAR-10 respectively. These models were then tested using the test sets of the datasets the models were fine-tuned on.\n\nFor the FullyConnectedNet, the MNIST models were more adversarially robust than the MNIST pretrained on CIFAR-10 model during early stages of learning, but the pretrained models were more robust when examined at 150 and 300 epochs of fine-tuning. The MNIST LeNet models were more adversarially robust for all stages of learning than the pretrained model. The pretrained ResNet50 models had no differences in robustness compared to the MNIST ResNet50 models, except for the first epoch where the pretrained models were more robust. This result is unexpected as this does not occur for the other architectures. These results would seem to suggest that architecture plays a role in the adversarial robustness of the learned representation contingent on the given dataset\/task and potentially compositional nature. 
\n\nPretraining on CIFAR-10 and then training on MNIST generally does not lead to more adversarially robust models. Next we investigate whether pretraining on MNIST and then training on CIFAR-10 has this same effect. We find that this is not always the case. Pretraining on MNIST then training on CIFAR-10 led to marginal improvements in adversarial robustness for LeNet, except for the first epoch (Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Pretraining}). For ResNet50, pretraining resulted in less adversarially robust models at the start and end of training (1 and 125 epochs); otherwise there was no difference compared to not pretraining. The FullyConnectedNet pretrained models were more adversarially robust in earlier stages of learning, but were less robust in later stages. Tables of the robustness metrics for the CIFAR-10 models pretrained on MNIST (as well as for other experiments) can be found in the supplementary material. These findings require further investigation. \n\nFor the ResNet50, LeNet, and FullyConnectedNet architectures, the models pretrained on CIFAR-10 then trained on MNIST were statistically significantly more adversarially robust than models pretrained on MNIST then trained on CIFAR-10 for all epochs examined.\n\n\\underline{Result 3}: Pretraining on CIFAR-10 followed by training on MNIST does not generally produce a more adversarially robust model than training on MNIST alone, with any of the tested architectures. This is counterintuitive given that humans typically base their learned representations on objects rather than figures~\\citep{janini2019shape}. On the other hand, pretraining on MNIST, then training on CIFAR-10 only aided LeNet; for FullyConnectedNet it helped in earlier stages of learning, while it decreased robustness later. Generally, however, ResNet50 models were not affected in terms of carried-over robustness at any intermediate stages of learning. 
Investigating the origins of this visual hysteresis (an asymmetry in learned representation visible through robustness given the pretraining scheme)~\\citep{sadr2004object} and how it may relate to shape\/texture bias~\\citep{geirhos2018imagenet,hermann2019exploring}, spatial frequency sensitivity~\\citep{dapello2020simulating,deza2020emergent}, or common perturbations~\\citep{hendrycks2018benchmarking} is a subject of on-going work.\n\n\\section{Discussion}\nThis work verified that both the image distribution and task (independently or jointly) can impact the adversarial robustness of a model under FGSM. The next step is to investigate why, and what specific factors of the image statistics and task play a role. It is likely that MNIST trained networks are intrinsically more adversarially robust than CIFAR-10 trained networks in part due to the lower-dimensional subspace in which they live given their image structure~\\citep{henaff2014local} compared to CIFAR-10 (\\textit{i.e.} MNIST has fewer non-zero singular values than CIFAR-10, allowing for greater compression for a fixed number of principal components). Additionally, in future work we want to know whether these observations hold with other optimization-based attacks and gradient-free attacks, such as PGD \\cite{madry2019deep} and NES \\cite{ilyas2018blackbox} respectively. Given that FGSM is not considered a strong attack, would a stronger attack exacerbate these results? Based on the noticeable differences in adversarial robustness between the models when testing only with FGSM, this is a promising direction. \n\nIndeed, this paper has only scratched the surface of the role of natural image distribution and task in the adversarial robustness of a model by comparing two well-known candidate datasets over their learning dynamics: MNIST and CIFAR-10. 
Continuing this line of work onto exploring the role of the image distribution on adversarial robustness for other natural image distributions such as textures or scenes is another promising next step. Finally, future experiments should continue to investigate the effect of the learning objective on the learned representation induced from the image distribution. We have already seen how the task affects the adversarial robustness of a model even when image statistics are approximately matched under a supervised training paradigm. With the advent of self-supervised~\\citep{konkle2020instance,geirhos2020on,purushwalkam2020demystifying} and unsupervised~\\citep{zhuang2020unsupervised} objectives that may be predictive of human visual coding, it may be relevant to investigate the changes in adversarial robustness for the current (objects, digits) and new (texture, scenes) image distributions with the proposed adversarial robustness metric for these new learning objectives. \n\n\\section{Acknowledgements}\nThe authors thank Dr. Christian Bueno and Dr. Susan Epstein for their helpful feedback on this paper. This work was supported in part by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF \u2013 1231216.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFeynman integrals \\brk{FIs} are a corner stone of the perturbative\nQuantum Field Theory \\brk{pQFT}, at least in its contemporary formulation.\nWithin that branch of research one fruitful discovery were the\nintegration-by-parts identities \\brk{IBPs}~\\cite{Tkachov:1981wb,\nChetyrkin:1981qh}, i.e. 
linear identities among FIs.\nFor a given pQFT problem \\brk{such as, for example, scattering amplitudes}\nIBPs allow one to reduce an infinite set of contributing FIs to a linear\ncombination of a finite number of basic objects known as the master integrals\n\\brk{MIs}.\n\nRecently a novel framework~\\cite{\n Mastrolia:2018uzb,\n Frellesvig:2019kgj,\n Mizera:2019gea,\n Frellesvig:2019uqt,\n Frellesvig:2020qot%\n} based on the twisted cohomology theory \\cite{\n matsumoto:1994,\n cho:1995,\n matsumoto:1998,\n ohara:1998,\n OST:2003,\n \n goto:2013,\n \n goto:2015,\n goto:2015b,\n goto:2015c,\n Mizera:2017rqa,\n matsubaraheo:2019%\n} was proposed to describe relations among FIs.\nIt was shown that \\brk{for fixed topology of the corresponding Feynman graphs}\nFIs form a finite dimensional vector space endowed with a scalar product called\nthe intersection number. Among other applications, this structure then helped\nto derive novel algorithms for the direct projection of FIs onto the basis of MIs.\n\nIn \\secref{sec:cohom} we review the basics of twisted cohomology theory and\nthe computation of intersection numbers.\nThen in \\secref{sec:second} we present another algorithm\\footnote{\n This section is based on the joint work with\n Federico Gasparotto,\n Manoj K. Mandal,\n Pierpaolo Mastrolia,\n Saiei J. Matsubara-Heo,\n Henrik J. 
Munch,\n Nobuki Takayama.\n}~\\cite{Chestnov:2022alh}\nfor the reduction of FIs exploiting the connection with the\nGel'fand-Kapranov-Zelevinsky \\brk{GKZ} hypergeometric systems and the secondary\nequation~\\cite{matsubaraheo:2019}.\n\n\\section{Twisted cohomology}\n\\label{sec:cohom}\nHere we review some aspects of the twisted cohomology and intersection theory;\nsee also \\cite{Mastrolia:2022tww, Mizera:2019ose, Frellesvig:2021vem,\nMandal:2022vok} and \\cite{Weinzierl:2022eaz}.\nOur central subject of study is going to be generalized hypergeometric\nintegrals of the form:\n\\begin{align}\n \\mathcal{I} = \\int_\\mathrm{\\Gamma} u\\brk{x} \\, \\varphi\\brk{x}\n \\ ,\n \n \n \\label{eq:FI-def}\n\\end{align}\nwhere $u\\brk{x} = \\prod_i \\mathcal{B}_i^{\\gamma_i}$ is a multivalued function,\n$\\mathrm{\\Gamma}$ is an $n$-dimensional integration contour such that $\\prod_i\n\\mathcal{B}_i\\brk{\\partial \\mathrm{\\Gamma}} = 0$, and $\\varphi \\equiv \\hat{\\varphi} \\> \\mathrm{d} x_1\n\\wedge \\ldots \\wedge \\mathrm{d} x_n$ is a holomorphic $n$-form \\brk{meaning that the\ncoefficient $\\hat{\\varphi}\\brk{x}$ is a rational function}.\n\nIntegrals such as~\\eqref{eq:FI-def} often appear as parametric representations of FIs. For\nexample, in the Baikov representation of a FI with $\\ell$ loops and $E$\nexternal legs in $d$ dimensions, the multivalued function $u\\brk{x}$\ncontains a single factor $u\\brk{x} = \\mathcal{B}\\brk{x}^\\gamma$, where $\\mathcal{B}$ is\nthe Baikov polynomial~\\cite{Baikov:1996iu}, and the exponent\n$\\gamma = \\brk{d - \\ell - E - 1} \/ 2$~. 
Hence in the following we will\nrefer to the integrals~\\eqref{eq:FI-def} as generalized Feynman Integrals\n\\brk{GFI}.\n\nFIs satisfy a linear equivalence relation:\n\\begin{align}\n \\mathcal{I} =\n \\int_\\mathrm{\\Gamma} u \\> \\varphi \\equiv \\int_\\mathrm{\\Gamma} u \\> \\brk{\\varphi\n + \\nabla_\\omega \\xi}\\ ,\n \\label{eq:FI-equiv}\n\\end{align}\nwhere we introduced the covariant derivative: $\\nabla_\\omega := \\mathrm{d} +\n\\omega \\wedge$ and the $1$-form\n\\begin{align}\n \\omega := \\mathrm{d} \\log{u}\\ ,\n \\label{eq:omega-def}\n\\end{align}\nwhich will be very useful in the following.\nThe equivalence relation~\\eqref{eq:FI-equiv} follows from the Stokes theorem:\n$\n 0\n = \\int_{\\partial \\mathrm{\\Gamma}} u \\> \\xi\n \n \n = \\int_\\mathrm{\\Gamma} u \\> \\nabla_\\omega \\xi\n \n \n \\ ,\n$\n where the second equality uses $\\mathrm{d} \\brk{u \\, \\xi} = u \\> \\nabla_\\omega \\xi$,\n which holds since $\\mathrm{d} u = u \\, \\omega$.\n\nFixing the contour of integration $\\mathrm{\\Gamma}$ allows us to interpret\nrelation~\\eqref{eq:FI-equiv} as an equivalence of integrands.\nNamely, we collect $n$-forms $\\varphi$ into equivalence classes\n$\\bra{\\varphi}: \\varphi \\sim \\varphi + \\nabla_\\omega \\xi$ generated by adding\ncovariant derivatives of $\\brk{n - 1}$-forms.\nTheir totality forms the twisted cohomology group:\n\\begin{align}\n \\bra{\\varphi} \\in \\mathbb{H}^n_\\omega\n :=\n \\bigbrc{\n \\text{$n$-forms $\\varphi$} \\> | \\>\n \\nabla_\\omega \\varphi = 0\n }\n \\Big\/\n \\bigbrc{\n \\nabla_\\omega \\xi\n }\\ ,\n \\label{eq:cohom-def}\n\\end{align}\nwhich can be thought of as the space of linearly independent FIs \\brk{of a\ngiven topology}.\n\nAnalogously we can introduce the dual integrals $\\mathcal{I}^\\vee$, whose definition\nmimics~\\eqref{eq:FI-def} up to $u \\mapsto u^{-1}$ and $\\nabla_\\omega\n\\mapsto \\nabla_{-\\omega}$\\ . 
Elements of the dual twisted\ncohomology group will be denoted by kets $\\ket{\\psi}$.\n\n\n\n\\subsection{Counting the number of Master integrals}\nThe framework of twisted cohomology unites several seemingly independent\nmethods for computation of the number of MIs $r$:\n\\begin{enumerate}\n \\item Number of unreduced integrals produced by the Laporta algorithm \\cite{Laporta:2000dsw}.\n \\item Number of critical points, i.e. solutions of $\\dd \\log u\\brk{x} = 0$\\ \\cite{Baikov:2005nv, Lee:2013hzt}.\n \\item Number of independent integration contours $\\mathrm{\\Gamma}_\\lambda$ \\cite{Bosma:2017ens, Primo:2017ipr}.\n \\item Number of independent $n$-forms, i.e. $\\mathop{\\mathrm{dim}}\\bigbrk{\\mathbb{H}^n_{\\pm \\omega}}$\n \\cite{Mastrolia:2018uzb, Frellesvig:2020qot}.\n \\item Holonomic rank of GKZ system \\brk{volumes of certain polytopes}\n \\cite{Chestnov:2022alh, Henrik:2022}.\n\\end{enumerate}\n\n\\subsection{Scalar product between Feynman integrals}\nThe twisted cohomology theory allows us to view the set of FIs\n\\brk{of a given topology} as a finite dimensional vector space.\nA set of MIs $\\bra{e_\\lambda}$ for $\\lambda \\in \\brc{1, \\ldots, r}$\nthen forms a basis in that space.\n\nThe dual FIs really form a dual vector space to FIs due to the\nexistence of a scalar product:\n\\begin{align}\n \\vev{\\varphi | \\psi}\n = \\frac{1}{\\brk{2 \\pi \\mathrm{i}}^n}\n \\int \\iota\\brk{\\varphi} \\wedge \\psi\n \\ ,\n \\label{eq:interx-def}\n\\end{align}\ncalled the intersection number.\nThis scalar product allows to directly\ndecompose a given integral $\\mathcal{I}$\nin a basis of MIs $\\ensuremath{\\mathcal{J}}_\\lambda := \\int_\\mathrm{\\Gamma} u \\> e_\\lambda$,\nnamely $\\mathcal{I} = \\sum_{\\lambda = 1}^r c_\\lambda \\, \\ensuremath{\\mathcal{J}}_\\lambda$\\ .\nLinear algebra leads us to the master decomposition\nformula~\\cite{Mastrolia:2018uzb, Frellesvig:2020qot}:\n\\begin{align}\n &\\bra{\\varphi} = \\sum_{\\lambda = 1}^r c_\\lambda \\,\n 
\\bra{e_\\lambda}\\ ,\n \\quad\n c_\\lambda = \\sum_{\\mu = 1}^r \\vev{\\varphi | h_\\mu}\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\ ,\n \\\\\n &C_{\\lambda \\mu} := \\vev{e_\\lambda |\n h_\\mu}\n \\ ,\n \\label{eq:cmat-def}\n\\end{align}\nfor any choice of the dual basis $\\ket{h_\\mu}$.\nTherefore the intersection numbers~\\eqref{eq:interx-def} completely determine the\ndecomposition coefficients. Let's see now how they can be computed.\n\n\\subsection{Univariate intersection numbers}\n\\label{ssec:uni}\nIn the $n = 1$ case, intersection numbers~\\eqref{eq:interx-def}\nturn into a sum of residues~\\cite{cho:1995, matsumoto:1998, Frellesvig:2021vem}:\n\\begin{align}\n \\vev{\\varphi | \\psi}\n \n \n \\equiv \\frac{1}{2 \\pi \\mathrm{i}} \\int_X \\lrbrk{\n \\varphi - \\sum_{p \\in \\mathbb{P}_\\omega}\n \\nabla_\\omega \\bigbrk{\\theta_p\\brk{x, \\bar{x}} f_p}\n }\\wedge \\psi\n = \\sum_{p \\in \\mathbb{P}_\\omega} \\res{x = p}\\lrsbrk{f_p \\, \\psi}\\ ,\n\\end{align}\nwhere\n\\begin{itemize}\n \\item Integration goes over $X = \\mathrm{\\Complex P}^1$.\n \n \\item $\\mathbb{P}_\\omega := \\bigbrc{p \\>\\big|\\> \\text{poles of $\\omega$}}$, including\n the $\\infty$\\ .\n \\item Terms with Heaviside $\\theta$-functions regulate the integral with\n the help of a local potential $f_p$, which satisfies\n $\\nabla_\\omega f_p \\equiv \\brk{\\mathrm{d} + \\omega \\wedge} f_p = \\varphi$\n around the pole $p$\\ .\n %\n This differential equation can be solved via an Ansatz: $f_p =\nf_{p, \\, \\mathrm{min}} \\brk{x - p}^{\\mathrm{min}} +\nf_{p, \\, \\mathrm{min} + 1} \\brk{x - p}^{\\mathrm{min} + 1} + \\ldots +\nf_{p, \\, \\mathrm{max}} \\brk{x - p}^{\\mathrm{max}}\n$\\ .\n\\end{itemize}\n\n\\subsection{Multivariate intersection numbers}\nOne strategy for dealing with the intersection numbers of multivariate FIs\nis to apply the univariate procedure recursively one variable at a\ntime~\\cite{ohara:1998, Mizera:2017rqa, Mastrolia:2018uzb, Frellesvig:2019uqt, 
Frellesvig:2020qot}.\nConsider a 2 variable problem: given two 2-forms $\\varphi\\brk{x_1, x_2}$ and $\\psi\\brk{x_1, x_2}$\nwe would like to compute $\\vev{\\varphi | \\psi}$ by first integrating out $x_1$ and then $x_2$\\ .\nTo do that we pick a basis $\\bra{e_\\lambda}$ and its dual\n$\\ket{h_\\mu}$ for the internal $x_1$-integration and project $\\varphi$,\n$\\psi$ onto them \\brk{omitting the summation signs}:\n\\begin{alignat}{3}\n \\bra{\\varphi} &= \\bra{e_\\lambda} \\wedge \\bra{\\varphi_{\\lambda}}\n \\ ,\n \\quad\n &&\n \\bra{\\varphi_{\\lambda}} = \\vev{\\varphi | h_\\mu}\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\ ,\n &&\n \\\\\n \\ket{\\psi} &= \\ket{h_\\mu} \\wedge \\ket{\\psi_{\\mu}}\n \\ ,\n \\quad\n &&\n \\ket{\\psi_{\\mu}} = \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\vev{e_\\lambda| \\psi}\n \\ ,\n &&\n \\label{eq:psi-proj}\n\\end{alignat}\nThe internal $x_1$-integration can be seen as the insertion of the identity operator\n$\n \\mathbb{I}_{\\mathrm{c}}\n = \\ket{h_\\mu} \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\bra{e_\\lambda}\\ ,\n \n$\nwhich consequently allows us to write the remaining integral in $x_2$ as a sum over residues:\n\\begin{align}\n \\vev{\\varphi | \\psi} =\n \\langle \\varphi \\underbrace{\n | h_\\mu \\rangle\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\langle e_\\lambda |\n }_{\\mathbb{I}_{\\mathrm{c}}}\n \\psi \\rangle\n =\n \\sum_{p \\in \\mathbb{P}_P} {\\rm Res}_{x_2 = p}\\lrsbrk{\n f_{p, \\lambda} \\, C_{\\lambda \\mu} \\, \\psi_{\\mu}\n }\n \\ .\n \\label{eq:interx-res}\n\\end{align}\nSimilar to~\\secref{ssec:uni}, this formula requires the knowledge of a local vector potential\n$f_{p, \\lambda}$ near each pole $x_2 = p$.\nThe potential is fixed by the following system of first order differential\nequations \\brk{omitting the $p$ subscript}:\n\\begin{align}\n \\partial_{x_2} f_{\\lambda} + f_{\\mu} \\> P_{\\mu\n \\lambda} = \\varphi_\\lambda\n \\quad \\text{near $x_2 = p$}\\ .\n\\end{align}\nThe differential equation matrix $P$ and it's dual 
version $P^\\vee$\nare made out of $x_1$-intersection numbers:\n\\begin{align}\n P_{\\lambda \\nu} :=\n \\vev{\n \\brk{\\partial_{x_2} + \\omega_2} e_\\lambda | h_\\mu\n }\n \\bigbrk{C^{-1}}_{\\mu \\nu}\n \\ ,\\quad\n P^\\vee_{\\mu \\xi} :=\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\vev{\n e_\\lambda |\n \\brk{\\partial_{x_2} - \\omega_2} h_\\xi\n }\n \\ ,\n \\label{eq:pfaff-def}\n\\end{align}\nso they can be computed using the univariate method of~\\secref{ssec:uni}. The\nset $\\mathbb{P}_P$ in eq.~\\eqref{eq:interx-res} is defined as $\\mathbb{P}_P\n:= \\bigbrc{p \\> \\big| \\> \\text{poles of $P$}}$\\ .\n\nIn practice, to compute the residue at, say, $x_2 = 0$ we solve for $\\rho$ the\nfollowing system:\n\\begin{align}\n \\begin{cases}\n \n \\lrsbrk{x_2 \\, \\partial_{x_2} + P\\brk{x_2}} \\vv{f} = \\vv{\\varphi}\n \\\\[5pt]\n \\rho = \\res{x_2 = 0} \\lrsbrk{\n \\vv{f} \\, \\cdot \\, \\vv{\\psi}\n }\n \n \\end{cases}\n \\ ,\n \\label{eq:res-sys}\n\\end{align}\nwhere we rescaled $\\vv{\\varphi}\\brk{x_2} \\mapsto 1 \/ x_2 \\> \\vv{\\varphi}\\brk{x_2}$ and\n$P\\brk{x_2} \\mapsto 1 \/ x_2 \\> P\\brk{x_2}$,\nand canceled the $C$ matrix in the residue~\\eqref{eq:interx-res} against the\n$C^{-1}$ coming from eq.~\\eqref{eq:psi-proj}.\nThe series expansion of the system~\\eqref{eq:res-sys} is build from:\n\\begin{align}\n P\\brk{x_2} = \\sum_{i \\ge 0} x_2^i \\> P_i\n \\ , \\quad\n \\vv{\\varphi} = \\sum_{i \\ge k} x_2^i \\> \\vv{\\varphi}_i\n \\ , \\quad\n \\vv{\\psi} = \\sum_{i \\ge m} x_2^i \\> \\vv{\\psi}_i\n \\ ,\n \n\\end{align}\nfor integer $k, m \\in \\mathbb{Z}$.\nInserting an Ansatz\n$\\vv{f} = \\sum_i x_2^i \\> \\vv{f}_i$\\ ,\nand matching the powers of $x_2$ order by order, we obtain the linear system\n\\brk{here dots $\\gr{0}$ denote zeros}:\n\\begin{align}\n \n \\lrsbrk{\n \\begin{array}{c|ccccc|c}\n -1\n & \\vv{\\psi}_{1} & \\vv{\\psi}_{0} & \\vv{\\psi}_{-1} & \\vv{\\psi}_{-2} & \\vv{\\psi}_{-3}\n \n & \\cellcolor{gr1}\\gr{0}\n \\\\\n \\gr{0}\n & P_0 - 2 & 
\\gr{0} & \\gr{0} & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{-2}\n \\\\\n \\gr{0}\n & \\gr{P_1} & P_0 - 1 & \\gr{0} & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{-1}\n \\\\\n \\gr{0}\n & \\gr{P_2} & \\gr{P_1} & P_0 & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{0}\n \\\\\n \\gr{0}\n & \\gr{P_3} & \\gr{P_2} & \\gr{P_1} & P_0 + 1 & \\gr{0}\n & \\vv{\\varphi}_{1}\n \\\\\n \\gr{0}\n & \\gr{P_4} & \\gr{P_3} & \\gr{P_2} & \\gr{P_1} & P_0 + 2\n & \\vv{\\varphi}_{2}\n \\end{array}\n }\n \\cdot\n \\lrsbrk{\n \\def1{1.2}\n \\begin{array}{c}\n \\rho\n \\\\\n \\vv{f}_{-2}\n \\\\\n \\vdots\n \\\\\n \\vv{f}_2\n \\\\\n -1\n \\end{array}\n } = 0\n \\ .\n\\end{align}\nThis equation has to be solved only for $\\rho$\\ .\nRow reduction of this matrix can be carried out only until the first row is\nfilled with zeros except for the element in the last column \\brk{highlighted\nwith grey}, which will contain the needed residue.\nOther poles of eq.~\\eqref{eq:interx-res} are treated in the same manner and\nthe sum of their residues produces the intersection number $\\vev{\\varphi | \\psi}$\\ .\n\n\n\\section{Decomposition via the secondary equation}\n\\label{sec:second}\nAs was observed in \\cite{Chestnov:2022alh} \\brk{see also \\cite{Henrik:2022}},\nthe twisted cohomology framework provides another method for computation of\nthe decomposition coefficients~\\eqref{eq:cmat-def}.\nThe first key idea is the so-called secondary equation\n\\cite{matsubaraheo:2019, Frellesvig:2020qot, Weinzierl:2020xyy}, which is a\nmatrix differential equation satisfied by the intersection matrix\n$C$:\n\\begin{align}\n \\begin{cases}\n \\partial_{z_i} \\, \\bra{e_\\lambda} = \\bigbrk{P_i}_{\\lambda \\nu}\n \\, \\bra{e_\\nu}\n \\\\\n \\partial_{z_i} \\, \\ket{h_\\mu} = \\ket{h_\\xi} \\, \\bigbrk{P^\\vee_i}_{\\xi \\mu}\n \\end{cases}\n \\Longrightarrow\n \\partial_{z_i} \\, C =\n P_i \\cdot C + C \\cdot \\lrbrk{P_i^\\vee}^\\mathrm{T}\n \\ ,\n \\label{eq:second}\n\\end{align}\nwhere $z_i$ are some external kinematical 
variables.\nThe other key step is computation of the differential equation\nmatrices $P$ and $P^{\\mathrm{aux}}$ made available thanks to the connection\nof the twisted cohomology theory, the GKZ formalism, and $D$-module theory.\nWe assume that this step is completed and refer the\ninterested reader to~\\cite{Chestnov:2022alh, Henrik:2022} for the full story.\nOnce the secondary equation~\\eqref{eq:second} is written down, we employ the known\nalgorithms for finding rational solutions of such systems, e.g.\nthe \\soft{Maple} package \\soft{IntegrableConnections}~\\cite{Barkatou:2012}.\n\nFinally, to determine the decomposition coefficients~\\eqref{eq:cmat-def} we\nrepeat the above procedure for an auxiliary basis $e^{\\mathrm{aux}} :=\n\\brc{e_1, \\ldots, e_{r - 1}, \\varphi}$, i.e. we compute an\nauxiliary $P^{\\mathrm{aux}}$ and then $C^{\\mathrm{aux}}$\\ . The FI decomposition is then\nencoded in the following matrix product:\n\\begin{align}\n \\lrsbrk{\n \n \\arraycolsep = -1pt \\def 1{0.9}\n \\begin{array}{c}\n e_1\\\\\n \\vdots\\\\\n e_{r-1}\\\\\n \\varphi\n \\end{array}\n \n }\n =\n C^{\\mathrm{aux}} \\cdot C^{-1}\n \\lrsbrk{\n \n \\arraycolsep = -1pt \\def 1{0.9}\n \\begin{array}{c}\n e_1\\\\\n \\vdots\\\\\n e_{r-1}\\\\\n e_r\n \\end{array}\n \n }\n \\quad\n \\Longrightarrow\n \\quad\n C^{\\mathrm{aux}} \\cdot C^{-1}\n = \\lrsbrk{\n \n \\arraycolsep = .7pt \\def 1{0.9}\n \\begin{array}{ccc|c}\n & & & 0\n \\\\\n & {\\mathrm{id}_{r-1}} & & \\vdots\n \\\\\n & & & 0\n \\\\\n \\hline\n \\rowcolor{gr1}\n c_1 & \\cdots & c_{r-1} & c_r\n \\end{array}\n \n }\n \\ ,\n \\label{eq:CauxCinv}\n\\end{align}\nwhere $\\mathrm{id}_{r - 1}$ denotes an identity matrix of size $\\brk{r - 1}$,\nand the decomposition coefficients $c_\\lambda$ are collected in the last row\nhighlighted with grey.\n\n\\subsection{A simple example}\nLet us briefly showcase how the secondary equation can produce the reduction\ncoefficients of a box diagram with a single dot $\\varphi = \\boxd$ in 
terms of the\nbasis $\n \\brk{e_1, e_2, e_3} = \\lrbrk{\n \\tbub,\n \\sbub,\n \\boxx\\\n }\n$\\ .\nThis topology has\n$u = \\brk{x_1 + x_2 + x_3 + x_4 + x_1 x_3 + t \/ s \\> x_2 x_4}^\\gamma$\\ .\nUsing the algorithm of\n\\cite{Chestnov:2022alh} and the \\soft{Asir} computer algebra system\n\\cite{url-asir} we generate the differential equation matrices:\n\\begin{align}\n P = \\lrsbrk{\n \\arraycolsep = 2pt \\def 1{1.5}\n \\begin{array}{ccc}\n - \\frac{ \\epsilon \\left(\\delta ^2 (12 z+11)+7 \\delta (z+1)+z+1 \\right)}{(3 \\delta\n +1) z (z+1)} & -\\frac{\\delta ^2 \\epsilon }{(3 \\delta+1)(z+1)} & \\frac{\\delta ^2\n \\epsilon (\\delta (z+2)+1)}{2 (3 \\delta +1) z\n (z+1) (\\delta \\epsilon +1)} \\\\\n \\frac{\\delta ^2 \\epsilon }{(3 \\delta +1)\n z \\left(z+1\\right)} & -\\frac{\\delta ^2\n \\epsilon }{(3\\delta+1) (z+1)} &\n -\\frac{\\delta ^2 \\epsilon (\\delta +2 \\delta\n z+z)}{2 (3 \\delta +1) z (z+1) (\\delta\n \\epsilon +1)} \\\\\n -\\frac{2 (2 \\delta +1) \\epsilon (\\delta\n \\epsilon +1)}{(3 \\delta +1) z (z+1)} & \\frac{2\n (2 \\delta +1) \\epsilon (\\delta \\epsilon\n +1)}{(3 \\delta +1) (z+1)} & -\\frac{\\epsilon\n \\left(\\delta ^2 (5 z+7)+\\delta (2\n z+5)+1\\right)}{(3 \\delta +1) z (z+1)}\n \\end{array}\n }\\ ,\n\\end{align}\nwhere $z = t \/ s$ is the ratio of the Mandelstam invariants, $\\delta$ is an\nadditional regularization parameter which should be set $\\delta \\to 0$ at the\nend of the computation, and $P^\\vee = P \\big|_{\\epsilon \\to\n-\\epsilon}$\n\\brk{see~\\cite{Chestnov:2022alh,Henrik:2022} for further details}.\nThe rational solution to the secondary equation~\\eqref{eq:second}\nlooks like this:\n\\begin{align}\n C =\n \\lrsbrk{\n \\arraycolsep = 1pt \\def 1{1}\n \\begin{array}{ccc}\n -\\frac{(2 \\delta +1) (4 \\delta +1)}{\\delta } &\n \\delta & -2 (\\delta \\epsilon -1) \\\\\n \\delta & -\\frac{(2 \\delta +1) (4 \\delta\n +1)}{\\delta } & -2 (\\delta \\epsilon -1) \\\\\n 2 (\\delta \\epsilon +1) & 2 (\\delta \\epsilon\n +1) & 
-\\frac{4 \\left(10 \\delta ^2+6 \\delta\n +1\\right) (\\delta \\epsilon -1) (\\delta\n \\epsilon +1)}{\\delta ^3}\n \\end{array}\n }\n \\ .\n\\end{align}\nWe repeat the same procedure for the auxiliary $C^{\\mathrm{aux}}$ and mount everything\ninto eq.~\\eqref{eq:CauxCinv} to produce:\n\\begin{align}\n \\boxd\n =\n -\\frac{2 \\varepsilon \\brk{2 \\varepsilon + 1}}{z \\brk{\\varepsilon + 1}} \\cdot\n \\tbub\n + 0 \\cdot\n \\sbub\n + \\brk{2 \\varepsilon + 1} \\cdot\n \\boxx\n \\ .\n\\end{align}\nTherefore the secondary equation method allows us to decompose FIs in terms of MIs via\nsolving a first order matrix differential equation~\\cite{Chestnov:2022alh}!\n\n\\section{Conclusion}\nWe reviewed the connection between FIs and the twisted cohomology theory,\nfocusing on the algorithms for computation of the uni- and multivariate\nintersection numbers, that is the scalar products between FIs.\n\nFurthermore, following~\\cite{Chestnov:2022alh}, we showed how the twisted\ncohomology together with the theory of GKZ hypergeometric system provide a\nway to compute the IBP reduction coefficients via essentially finding rational\nsolutions to a system of PDEs~\\eqref{eq:second} called the secondary equation.\nIn the future it would be interesting to further develop this connection and\napply it to other problems and processes within pQFT.\n\nFigures were made with \\soft{AxoDraw2}~\\cite{Collins:2016aya}.\n\n\\bibliographystyle{JHEP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzftgv b/data_all_eng_slimpj/shuffled/split2/finalzzftgv new file mode 100644 index 0000000000000000000000000000000000000000..60a1956dc5999e124d7bb6cfe45055ffb23b312e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzftgv @@ -0,0 +1,5 @@ +{"text":"\\subsubsection{\\textbf{Fundamentals and Applications of Isotope Effect in\nModern Technology.}}\n\n\\textbf{V.G. 
Plekhanov.}\n\n\\bigskip\n\n\\textit{Fonoriton Science Lab., Garon Ltd., P.O. Box 2632, Tallinn, 13802,\nESTONIA}\n\n\\TEXTsymbol{<}e-mail\\TEXTsymbol{>} vgplekhanov@hotmail.com\n\n\\bigskip\n\nDifferent crystals (semiconductors and insulators) with varying isotopic\ncomposition have recently been grown. I discuss here the effect of isotopic\nmass and isotopic disorder on the properties (vibrational, elastic, thermal\nand optical) of different crystals. The main applications of stable\nisotopes include self-diffusion, neutron transmutative doping (NTD) of\ndifferent semiconductors, optical fibers, isotope-based quantum computers,\netc. Because of space limitations this discussion will not be exhaustive. I\nhope, however, to give sufficient references to published work so that the\ninterested reader can easily find the primary literature sources in this\nrapidly expanding field of solid state physics.\n\n\\textbf{\\bigskip }\n\n\\textbf{Phonons, excitons, isotope-mixed crystals, laser materials, quantum\ninformation, isotope-based quantum computers. }\n\n\\bigskip\n\nIt is well-known that the presence of randomly distributed impurities in a\ncrystal can give rise to significant variations of its mechanical,\nelectrical, thermal, and optical properties with respect to those of the\npure solid. All these properties are, more or less, directly related to the\nstructure of the manifold of phonon states, and any variation induced in this\nstructure by the presence of the impurities will produce a corresponding\nalteration of the physical properties of the material. Of particular\ninterest is the case in which the impurity species is of the same chemical\nnature, but with a different mass, i.e. the case of isotopic impurities. The\nmechanisms by which the impurities (isotopes) perturb the phonon\ndistribution will depend on the mass difference between the host and guest\nspecies [1-3]. Phonons are the crystal excitations most directly related to\nthe isotopic masses. 
In monatomic crystals (like C, Ge, Si, etc.), and\nwithin the harmonic approximation, all phonon frequencies scale like the\ninverse square root of the average isotopic mass. Namely, this feature can be used\nfor the nondestructive isotopic characterization of the investigated materials.\nThe isotopic effect can be classified into two categories: 1) The first type\nis caused by the variation of the phonon frequencies with the average\nisotopic mass. To this type belongs the isotope effect in superconductors,\nwhich plays an important role in the search for the mechanism of high T$_{c}$\nsuperconductivity (see, e.g. [4]). The effect of changing the atomic mass M\nis to change the phonon frequencies $\\omega $ according to:\n\n$\\omega $ = $\\sqrt{\\frac{\\alpha }{\\text{M}}}$, \\hfill (1)\n\nwhere $\\alpha $ is a force constant characteristic of the phonon under\nconsideration. The change in atomic mass implies, at low temperatures (see\nbelow), a change in the average atomic displacement for each phonon mode. In\nthe case of one atom per primitive cell the mean squared phonon amplitude $\\langle $u$^{2}\\rangle $ is given by [1, 2]:\n\n$\\langle $u$^{2}\\rangle $ = $\\langle \\frac{\\hbar }{\\text{4M}\\omega }\\left[ 1\\text{ + 2n}_{B}\\text{(}\\omega \\text{)}\\right] \\rangle $ = $\\langle \\frac{\\hbar }{\\text{4M}^{1\/2}\\alpha ^{1\/2}}\\left[ 1\\text{ + 2n}_{B}\\text{(}\\omega \\text{)}\\right] \\rangle $, \\hfill (2)\n\nwhere n$_{B}$($\\omega $) is the Bose - Einstein statistical factor, $\\omega $\nis the frequency of a given phonon and $\\langle $...$\\rangle $ represents an\naverage over all phonon modes. The average in the r.h.s. 
of (2) is often\nsimplified by taking the value inside $\\langle $...$\\rangle $ at an average\nfrequency $\\omega _{D}$, which usually turns out to be close to the Debye\nfrequency. We should distinguish between the low-temperature ($\\hbar \\omega $\n\\TEXTsymbol{>}\\TEXTsymbol{>} k$_{B}$T) and the high-temperature ($\\hbar\n\\omega $ \\TEXTsymbol{<}\\TEXTsymbol{<} k$_{B}$T) limits:\n\n($\\hbar \\omega $ \\TEXTsymbol{>}\\TEXTsymbol{>} k$_{B}$T): \\ $\\langle $u$^{2}\\rangle $ = $\\frac{\\hbar }{\\text{4M}\\omega _{D}}$ $\\sim $ M$^{-1\/2}$, independent of T;\n\n($\\hbar \\omega $ \\TEXTsymbol{<}\\TEXTsymbol{<} k$_{B}$T): \\ $\\langle $u$^{2}\\rangle $ = $\\frac{\\text{k}_{B}\\text{T}}{\\text{2M}\\omega ^{2}}\\sim $ T, independent of M. \\hfill (3)\n\nUsing Eq. (1) we find from the last equations that $\\langle $u$^{2}\\rangle $, the zero-point vibrational amplitude, is proportional to M$^{-1\/2}$ at\nlow temperatures: it thus decreases with increasing M and vanishes for M$\\longrightarrow \\infty $. For high T, however, we find that $\\langle $u$^{2}\\rangle $ is independent of M and linear in T (for details see [3] and\nreferences therein).\n\nAnother type of isotope effect is produced by the isotopic mass\nfluctuations about the average mass $\\langle $M$\\rangle $. These\nfluctuations perturb the translational invariance of a crystal and lift, at\nleast in part, \\textbf{k} - vector conservation. The most striking effect of\nthis type is observed in the thermal conductivity, which has a maximum at a\ntemperature T$_{M}$ \\TEXTsymbol{<}\\TEXTsymbol{<} $\\Theta _{D}$ (here $\\Theta\n_{D}$ is the Debye temperature); T$_{M}$ = 80 K for diamond and T$_{M}$ = 20 K for\nsilicon (see also Figs. 64 - 66 in [3]). 
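Both limits in (3) are easy to verify numerically. The sketch below uses dimensionless units ($\hbar$ = k$_{B}$ = $\alpha$ = 1, an assumption of the sketch), so that $\omega$ = M$^{-1/2}$, and compares $\langle$u$^{2}\rangle$ for masses M and 2M:

```python
import math

def u2(M, T, alpha=1.0):
    """Mean squared phonon amplitude, eq. (2), for a single mode (hbar = k_B = 1)."""
    w = math.sqrt(alpha / M)       # omega = sqrt(alpha / M), eq. (1)
    n_B = 1.0 / math.expm1(w / T)  # Bose-Einstein statistical factor
    return (1.0 + 2.0 * n_B) / (4.0 * M * w)

# Low temperature: u2 ~ M**(-1/2), so masses 1 and 2 differ by sqrt(2)
low = u2(1.0, 0.05) / u2(2.0, 0.05)
# High temperature: u2 is independent of M
high = u2(1.0, 1e3) / u2(2.0, 1e3)

print(round(low, 3), round(high, 3))  # 1.414 1.0
```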
Reduction of the concentration of $^{13}$C from the standard 1\\% (against 99\\% of $^{12}$C) by a factor of ten\nincreases the thermal conductivity of diamond by about a factor of two, a\nfact that has important applications in situations where a large amount of\ngenerated heat has to be driven away (e.g. as substrates for high power\nelectronic devices [5]). As is well-known, this maximum represents the\ntransition from the boundary scattering to the phonon umklapp scattering regime,\nand its value K$_{m}$ is determined by the isotopic fluctuation parameter g\n(mass variance):\n\ng = $\\frac{\\left\\langle \\text{M}^{2}\\right\\rangle }{\\left\\langle\nM\\right\\rangle ^{2}}$ - 1; \\hfill (4)\n\nthe larger g, the smaller K$_{m}$ [6].\n\nIt is known that materials having a diamond structure are characterized by\ntriply degenerate phonon states in the $\\Gamma $ - point of the\nBrillouin zone (\\textbf{k} = 0). These phonons are active in the Raman\nscattering (RS) spectra, but not in the IR absorption ones (see, e.g.\n[7]). The first-order Raman light-scattering spectrum in diamond crystals\nincludes one line with the maximum at $\\omega _{LTO}$($\\Gamma $) = 1332.5 cm$^{-1}$. In Fig. 1$^{a}$ the first-order scattering spectrum in diamond\ncrystals with different isotope concentrations is shown [8]. As was shown,\nthe maximum and the width of the first-order scattering line in\nisotopically-mixed diamond crystals depend nonlinearly on the\nconcentration of isotopes x (see also [7]). The maximum shift of this line\nis 52.3 cm$^{-1}$, corresponding to the limiting values of x = 0 and x = 1.\n\nFig. 
1$^{b}$ demonstrates the dependence of the shape and position of the\nfirst-order line of optical phonons in germanium crystal on the isotope\ncomposition at liquid nitrogen temperatures [9]. The coordinate of the\ncenter of the scattering line is proportional to the square root of the\nreduced mass of the unit cell, i.e. M$^{-1\/2}$. It is precisely this\ndependence that is expected in the harmonic approximation (details see [3]).\nAn additional frequency shift of the line is observed for the natural and\nenriched germanium specimens and is equal, as shown in Refs. [7, 9] to 0.34$%\n\\pm $0.04 and 1.06$\\pm $0.04 cm$^{-1}$, respectively (see also Fig. 7 in\nChap. 4 of Ref. [10]). Detailed calculation of the shape of the lines in RS\nof semiconductors have been performed by Spitzer et al. [11]. In their paper\na quantitative agreement with the experimental data on diamond and germanium\nhas been obtained. Comparing the half-widths of the scattering lines in\nfirst-order RS in diamond and germanium (see Fig. 1), it is easy to see that\nthe observed line broadening due to isotopic disorder in diamond is much\ngreater than that in germanium. The reason for this is that the \\textbf{k} =\n0 point is not the highest point in the diamond dispersion curve (see Fig. 10%\n$^{b}$ in Ref. [7]), whereas in the case of germanium it is the highest\npoint [12]. This shift of the maximum from the $\\Gamma $ - point (\\textbf{k}\n= 0) leads to a much larger density of states in the vicinity of $\\omega\n_{LTO}$ in comparison with the normal one calculated by the formula:\n\nN$_{d}$ $\\sim $ Re($\\omega _{LTO}$ - $\\omega $ + i$\\left[ \\frac{\\Delta\n\\omega _{LTO}}{\\text{2}}\\right] $)$^{1\/2}$ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (5).\n\n(for more details see Ref. [12]). The density of states in diamond is\nasymmetric with respect to $\\omega _{LTO}$, causing asymmetry in the shape\nof the scattering line [7]. 
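Both the 1332.5 cm$^{-1}$ line position and the 52.3 cm$^{-1}$ shift quoted above are consistent with the harmonic M$^{-1/2}$ scaling, and the mass variance g of eq. (4) is easy to evaluate as well; a quick estimate (standard atomic masses and the natural $\approx$1.1\% abundance of $^{13}$C are inputs of this sketch, not data from the text):

```python
import math

M12, M13 = 12.000, 13.003   # atomic masses of 12C and 13C (amu)
w12 = 1332.5                # omega_LTO of isotopically pure 12C diamond (cm^-1)

# Harmonic scaling omega ~ M**(-1/2): shift between the limits x = 0 and x = 1
w13 = w12 * math.sqrt(M12 / M13)
print(round(w12 - w13, 1))  # ~52.4, close to the quoted 52.3 cm^-1

# Mass variance g, eq. (4), for natural diamond (~1.1% 13C)
c13 = 0.011
M_avg = (1 - c13) * M12 + c13 * M13
M2_avg = (1 - c13) * M12**2 + c13 * M13**2
g = M2_avg / M_avg**2 - 1
print(f"{g:.1e}")           # ~7.6e-05
```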
This asymmetry of the density of states also leads to the asymmetric\nconcentration dependence of the half-width of the scattering line. As was\nshown earlier (see, e.g. [3] and references therein), in the case of a weak\npotential of isotopic scattering of phonons, their self-energy $\\varepsilon\n\\left( \\omega \\right) $ does not depend on \\textbf{q} (the phonon\nquasimomentum). This is precisely the situation observed for C and Ge. Indeed,\nif we express the mass fluctuation $\\Delta $M\/$\\overline{\\text{M}}$ ($\\overline{\\text{M}}$ is the mean mass of all isotopes) in the form of the\nvariation of the phonon band width $\\Delta \\omega _{0}$ = 12 cm$^{-1}$ at \n\\textbf{q} = 0 and compare it with the width of the band of optical phonons\nin Ge, which equals $\\approx $ 100 cm$^{-1}$, we will see that the variation is\nvery small. Under these conditions the localization of optical phonons in Ge\nis, naturally, absent and, as observed in experiment, they stay delocalized\n(see below, however, the opposite case of LiH$_{x}$D$_{1-x}$ crystals). Moreover,\ndirect measurements of the phonon lifetime in Ge show that, in the case of\nanharmonic decay, it is two orders of magnitude shorter than the lifetime\nthat is due to the additional scattering by isotopes, i.e. $\\tau _{anharm}$\n= $\\tau _{disord}\\cdot $ 10$^{-2}$ [13]. Therefore, the contribution of\nanharmonicity to the half-width of the first-order light scattering line in\nGe is two orders of magnitude greater than that caused by the isotopic\ndisorder in the crystal lattice. In conclusion of this part of our report we\nshould mention that an analogous structure of the first-order RS spectra and its\ndependence on isotope composition has by now been observed many times, not\nonly in elementary Si and $\\alpha $-Sn, but also in the compound semiconductors CuCl, CuBr,\nZnSe, and GaN (for details see Ref. [3]).\n\nIn Fig. 2 (curve 1) the spectrum of second-order RS of light in a pure LiD\ncrystal is shown [7]. 
In spite of the fact that, according to the nomogram of\nexciton states [14], the studied crystal should be considered pure,\nits RS spectrum contains a clear high-frequency peak around 1850 cm$^{-1}$.\nThe observed peak does not have an analogue in the RS of pure LiH (Fig. 2,\ncurve 4); it has already been observed earlier in the second-order RS and\nhas been interpreted (see [7] and references therein) as a local vibration\nof the hydrogen in LiD crystals. We further note that as the concentration\ngrows (x \\TEXTsymbol{>} 0.15) one observes in the spectra a\ndecreasing intensity in the maximum of 2LO($\\Gamma $) phonons in the LiD crystal\nwith a simultaneous growth in intensity of the highest frequency peak in\nmixed LiH$_{x}$D$_{1-x}$ crystals (Fig. 2, curve 3). The origin of the latter\nis in the renormalization of LO($\\Gamma $) vibrations in mixed crystals\n[7]. Comparison of the structure of the RS spectra (curves 1 and 2 in Fig. 2)\nallows us, therefore, to conclude that in the concentration range of 0.1 \n\\TEXTsymbol{<} x \\TEXTsymbol{<} 0.45 the RS spectra simultaneously contain\npeaks of the LO($\\Gamma $) phonon of pure LiD and the LO($\\Gamma $) phonon\nof the mixed LiH$_{x}$D$_{1-x}$ crystal. Thus, the second-order RS spectra of\nLiH$_{x}$D$_{1-x}$ crystals have one- and two-mode character for LO($\\Gamma $%\n) phonons, and also contain a contribution from the local excitation at\nsmall values of x. Moreover, we should add that an additional structure in\nthe RS spectra on the short side of the 2LO($\\Gamma $) peak (see Fig. 21 in Ref.\n[7]) was observed relatively long ago in mixed LiH$_{x}$D$_{1-x}$ crystals and,\nvery recently, in isotopically mixed crystals of diamond, germanium and $\\alpha $-Sn (for details see [3, 11]). These effects are caused by isotopic disorder\nin the crystal lattice of isotopically mixed crystals [3]. 
The observation\nof two-mode behavior of the LO($\\Gamma $) phonons in RS spectra of LiH$_{x}$D$_{1-x}$ crystals contradicts the prediction of the CPA [15], according to\nwhich the width W of the optical vibration band should be smaller than the\nfrequency shift ($\\Delta $) of the transverse optical phonon. However, as was\nshown earlier (see, e.g. [7] and references therein), in LiH$_{x}$D$_{1-x}$\nmixed crystals the reverse inequality is valid, i.e. W \\TEXTsymbol{>} $\\left\\vert \\Delta \\right\\vert $. According to [16], this discrepancy between\nexperimental results and the theory based on the CPA [15] is mainly explained by the\nstrong potential of phonon scattering, caused by the large change in the\nmass upon substitution of deuterium for hydrogen. One more reason for the\ndiscrepancy between theory and experiment may be that the theory does not\ntake into account the change of the force constant upon the isotope\nsubstitution of the smaller (in size) D ion by H. We should stress\nonce more that among the various possible isotope substitutions, by far the\nmost important in vibrational spectroscopy is the substitution of hydrogen\nby deuterium. As is well-known, in the limit of the Born-Oppenheimer\napproximation the force constant calculated at the minimum of the total\nenergy depends upon the electronic structure and not upon the mass of the\natoms. It is usually assumed that the theoretical values of the phonon\nfrequencies depend upon the force constants determined at the minimum of the\nadiabatic potential energy surface. This leads to a theoretical ratio $\\omega \\left( \\text{H}\\right) $\/$\\omega \\left( \\text{D}\\right) $ of the\nphonon frequencies that always exceeds the experimental data. Very often\nanharmonicity has been proposed to be responsible for the lower value of this\nratio. In the isotope effect, two different isotopes of the same atom will have\ndifferent vibrational frequencies only because of the difference in isotopic\nmasses. 
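A quick numerical sketch of this purely mass-driven effect for LiH versus LiD, comparing the crude light-atom limit with a diatomic reduced-mass estimate (the atomic masses are standard values assumed here, not data from the text):

```python
import math

M_Li, M_H, M_D = 6.941, 1.008, 2.014   # atomic masses (amu), assumed inputs

# Light-atom limit M_Li -> infinity: only the H/D ion moves
p_light = math.sqrt(M_D / M_H)          # ~1.414, i.e. ~sqrt(2)

# Diatomic-lattice estimate with the reduced mass mu = M_Li*M_X / (M_Li + M_X)
mu = lambda M_X: M_Li * M_X / (M_Li + M_X)
p_reduced = math.sqrt(mu(M_D) / mu(M_H))

print(round(p_light, 3), round(p_reduced, 3))  # 1.414 1.332
```

The crude $\sqrt{2}$ estimate exceeds the experimental values, in line with the statement that the theoretical harmonic ratio always exceeds the data, while the reduced-mass refinement already brings the number down toward the observed range.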
The ratio p of the optical phonon frequencies for LiH and LiD\ncrystals is given in the harmonic approximation by:\n\np = $\\frac{\\omega \\left( \\text{H}\\right) }{\\omega \\left( \\text{D}\\right) }$\n= $\\sqrt{\\frac{\\text{M}\\left( \\text{LiD}\\right) }{\\text{M}\\left( \\text{LiH}\\right) }}\\simeq $ $\\sqrt{\\text{2}}$, \\hfill (6)\n\nwhile the experimental value (which includes anharmonic effects) is 1.288 $\\div $ 1.396 (see the Table in Ref. [17]). This Table lists the\nexperimental and theoretical values of p according to formula (6), as well\nas the deviation $\\delta $ = $\\frac{\\text{p}_{theory}\\text{ - p}_{\\exp }}{\\text{p}_{theory}}$ of the experimental values from the theoretical ones. Using the least-squares\nmethod, the empirical formula ln($\\delta $\\%) $\\sim $\nf(ln[$\\frac{\\partial \\text{E}}{\\partial \\text{M}}]$) was found, which is depicted in\nFig. 3. As can be seen, the indicated dependence has, in the first\napproximation, a linear character:\n\nln($\\delta $\\%) = -7.5 + 2ln($\\frac{\\partial \\text{E}}{\\partial \\text{M}}$). \\hfill (7)\n\nFrom the results of Fig. 3 it can be concluded that only for hydrogen compounds\n(and their isotope analog, deuterium) must the force-constant changes be taken\ninto account in the isotope effect. It is also seen that for\nsemiconductor compounds (the points below the Ox line in Fig. 3) the\nisotope effect involves only the change of the isotope mass (for details see [3, 7]).\n\nThe dependence of the band gap energy on isotopic composition (via the mechanism\nof electron-phonon interaction) has already been observed for insulators\n(Fig. 4) and for the lowest (indirect and direct) gaps of different semiconductors ([3]\nand references therein). 
It has been shown to result primarily from the effect of the average isotopic mass on the electron-phonon interaction, with a smaller contribution from the change in lattice constant. Ref. [19] was the first paper in which the exciton binding energy E$_{B}$ was found to depend on the isotopic composition. It was further shown that this change in E$_{B}$ is attributable to the exciton-phonon interaction (originally with LO phonons) (see also [3]). At present, such a dependence E$_{B}$ $\sim $ f(x) (x is the isotope concentration) has been found for different bound excitons in semiconductors [20, 21]. The simplest approximation, in which crystals of mixed isotopic composition are treated as crystals of identical atoms having the average isotopic mass, is referred to as the virtual crystal approximation (VCA) [15]. Going beyond the VCA, in isotopically mixed crystals one would also expect local fluctuations in the band-gap energy arising from statistical fluctuations in the local isotopic composition within some effective volume, such as that of an exciton (see, e.g. Fig. 2 of Ref. [18]). Using the least-squares method, the empirical dependence ln$\left( \frac{\partial \text{E}_{g}}{\partial \text{M}}\right) $ $\sim $ f$\left( \text{lnE}_{g}\right) $ was found, which is presented in Fig. 5. As can be seen, the mentioned dependence has a parabolic character:

ln$\left( \frac{\partial \text{E}_{g}}{\partial \text{M}}\right) $ = 6.105$\left( \text{lnE}_{g}\right) ^{2}$ - 7.870$\left( \text{lnE}_{g}\right) $ + 0.565. \hspace{2cm} (8)

From this figure it can also be concluded that a small variation of the nuclear mass (semiconductors) causes only small changes in E$_{g}$, whereas a large variation of the nuclear mass causes large changes in E$_{g}$ (C, LiH, CsH, etc.) (for details, see [18, 3]).

A detailed analysis of the process of self-diffusion in isotopically pure materials and hetero-structures was given in [5].
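For illustration, the parabolic fit (8) can be evaluated directly. The sample band-gap value below is hypothetical and only shows how the fit is used; the units of dE$_{g}$/dM follow those of Fig. 5.

```python
from math import exp, log

def ln_dEg_dM(Eg):
    """Empirical parabolic fit (8): ln(dE_g/dM) as a function of ln(E_g)."""
    x = log(Eg)
    return 6.105 * x**2 - 7.870 * x + 0.565

# Hypothetical band gap E_g = e (so that ln E_g = 1):
# the fit gives 6.105 - 7.870 + 0.565 = -1.2
print(ln_dEg_dM(exp(1.0)))
```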
Interest in diffusion in solids is as old as metallurgy or ceramics, but the scientific study of the phenomenon dates back only some six to seven decades. As is well known, the measured diffusion coefficients depend on the chemistry and structure of the sample on which they are measured. The cited paper [5] showed how stable isotopes can be used for the study of diffusion processes in different semiconducting structures (bulk, hetero-structures, etc.).

Chapter 6 of the book [5] describes the new reactor technology of neutron transmutation doping (NTD). Capture of thermal neutrons by isotope nuclei followed by nuclear decay produces new elements, resulting in a very large number of possibilities for isotope-selective doping of solids. The importance of NTD technology for studies of semiconductor doping, as well as of metal-insulator transitions and neutral-impurity scattering, is underlined. The low-temperature mobility of free carriers in semiconductors is mainly determined by ionized- and neutral-impurity scattering. The ionized-impurity scattering mechanism has been extensively studied (see e.g. [5] and references therein), and various aspects of this process are now quite well understood. Scattering by neutral impurities is much weaker than that by ionized centers, i.e., its contribution is significant only in crystals with low compensation and at very low temperatures, where most of the free carriers are frozen out on the impurity sites. The availability of highly enriched isotopes of Ge, which can be purified to residual dopant levels $<$ 10$^{12}$ cm$^{-3}$, has provided the first opportunity to measure neutral-impurity scattering over a wide temperature range.
In paper [22] three Ge isotopes transmute into shallow acceptors (Ga), shallow donors (As) and double donors (Se) (see also above):

$_{32}^{70}$Ge + n $\rightarrow $ $_{32}^{71}$Ge $_{EC\,(t_{1/2}=11.2\,\text{days})}\rightarrow $ $_{31}^{71}$Ga + $\nu _{e}$,

$_{32}^{74}$Ge + n $\rightarrow $ $_{32}^{75}$Ge $_{\beta ^{-}(t_{1/2}=82.2\,\text{min})}\rightarrow $ $_{33}^{75}$As + $\beta ^{-}$ + $\bar{\nu}_{e}$,

$_{32}^{76}$Ge + n $\rightarrow $ $_{32}^{77}$Ge $_{\beta ^{-}(t_{1/2}=11.3\,\text{h})}\rightarrow $ $_{33}^{77}$As + $\beta ^{-}$ + $\bar{\nu}_{e}$; \ $_{33}^{77}$As $_{\beta ^{-}(t_{1/2}=38.8\,\text{h})}\rightarrow $ $_{34}^{77}$Se + $\beta ^{-}$ + $\bar{\nu}_{e}$. \hspace{1cm} (9)

The isotopes $^{72}$Ge and $^{73}$Ge are transmuted into the stable $^{73}$Ge and $^{74}$Ge, respectively. Controlling the ratio of $^{70}$Ge and $^{74}$Ge in bulk Ge crystals allows fine tuning of the majority- as well as the minority-carrier concentration. Currently, this is the best method to vary the free-carrier concentration independently of the compensation ratio. As opposed to other doping methods, NTD yields a very homogeneous, perfectly random distribution of the dopants down to the atomic level [5]. Thus isotopically controlled crystals offer a unique possibility to study systematically the scattering mechanisms of the charge carriers in semiconductors. Extensive Hall-effect and resistivity measurements from room temperature down to 4.2 K, reported in paper [5], yielded very accurate free-carrier concentrations and mobilities as functions of temperature and doping level. Itoh et al. [22] have performed temperature-dependent Hall measurements on four different p-type and two different n-type Ge crystals (Fig. 6). Fig. 6 shows the relative strength of the scattering from the ionized and the neutral impurities.
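The half-lives entering the transmutation chain (9) set the time scale on which the new dopants appear. A minimal decay sketch (plain exponential decay of one species, ignoring the preceding neutron-capture stage) is:

```python
from math import exp, log

# Half-lives quoted in the transmutation chain (9)
T_HALF_71GE_DAYS = 11.2   # 71Ge, electron capture
T_HALF_75GE_MIN  = 82.2   # 75Ge, beta-minus decay

def remaining_fraction(t, t_half):
    """Fraction of a decaying species left after time t (same units as t_half)."""
    return exp(-log(2.0) * t / t_half)

print(remaining_fraction(T_HALF_71GE_DAYS, T_HALF_71GE_DAYS))    # 0.5 after one half-life
print(remaining_fraction(3 * T_HALF_75GE_MIN, T_HALF_75GE_MIN))  # 0.125 after three
```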
There is only a relatively small temperature region in which the scattering from the neutral impurities dominates. This range extends to higher temperatures as the free-carrier concentration is increased. The calculated ``transition temperatures'' above which the ionized impurities are the main scattering centres compare very well with the experimental results of Itoh et al. [22] (see also Fig. 6.31 in Ref. [5]). In order to demonstrate the importance of the homogeneous dopant distribution, Itoh et al. have performed the same study on samples cut from Ge:Ga crystals grown by the conventional Czochralski method, where Ga impurities were introduced into the Ge melt during the crystal growth. These authors observed deviations of the measured mobility from the theoretical calculations, which are most likely due to inhomogeneous Ga impurity distributions in melt-doped Ge. Only the use of NTD semiconductors with randomly distributed dopants allows for an accurate test of the neutral impurity-scattering models (for details, see [5]).

Another application of isotopically pure and isotopically mixed crystals that will be discussed here is related to the possibility of using an isotopically mixed medium (e.g. LiH$_{x}$D$_{1-x}$ or $^{12}$C$_{x}$ $^{13}$C$_{1-x}$) as an oscillator of coherent radiation in the ultraviolet spectral range. To achieve this, the use of indirect electron transitions involving, say, LO phonons was planned [23]. The detection of LO phonon replicas of free-exciton luminescence in wide-gap insulators has attracted considerable attention to these crystals (see e.g. [10; 23]). At the same time, it allows one to pose the question of the possibility of obtaining stimulated emission in the UV (VUV) region (4 -- 6 eV) of the spectrum, where no solid-state sources of coherent radiation exist yet.
In the first place, this relates to emitters working on transitions of the intrinsic electronic excitations (excitons). The latter provide a high energy yield of the coherent emission per unit volume of the substance.

In this part we will discuss the results of investigations of the influence of the excitation light density on the resonant secondary emission spectra of the free exciton in wide-gap insulating LiH$_{x}$D$_{1-x}$ (LiH$_{1-x}$F$_{x}$) crystals. Cubic LiH crystals are typical wide-gap ionic insulators with E$_{g}$ = 4.992 eV [10] and a relatively weak exciton-phonon interaction: E$_{B}$/$\hbar \omega _{LO}$ = 0.29, where E$_{B}$ and $\hbar \omega _{LO}$ are the exciton binding energy and the longitudinal optical phonon energy, respectively. For comparison, the analogous ratio for CdS, diamond and NaI is 0.73, 0.45 and 12.7, respectively. The inset of Fig. 7 depicts the luminescence of the 1LO and 2LO phonon replicas in LiH crystals. An increase in the density of the exciting light causes a burst of the radiation energy in the long-wave wing of the emission of the 1LO and 2LO repetitions (see Fig. 7), at a rate that is higher for the 1LO replica line [23]. A detailed dependence of the luminescence intensity and the shape of the 2LO phonon replica line are presented in Fig. 7. Further investigations have shown [5] that with increasing excitation light intensity a certain narrowing can be observed at first, followed by a widening of the 2LO phonon replica line with the simultaneous appearance of a characteristic, probably mode, structure (see Fig. 8.11 in Ref. [5]). From this figure it can be seen that the relation between the long-wavelength luminescence intensity and the excitation light intensity is not simply linear but, in fact, of a threshold character, as in the case of other crystals.
The proximity of the exciton parameters of LiH and CdS (ZnO) crystals allows the density effects in LiH to be interpreted by analogy with these semiconducting compounds. On this basis it was shown in paper [23] that an exciton density of about 10$^{15}$ cm$^{-3}$ is sufficient for the exciton-phonon mechanism of light generation [5] to account for the experimental picture observed in LiH crystals. This is a reasonable value if the high quality of the resonator mirror -- the crystal cleaved ``in situ'' -- and the relatively large exciton radius (r = 40 \AA\ [10]) are taken into account. This light-generation mechanism should also be promoted by the large value of the LO phonon energy $\left( \hbar \omega _{LO}\text{ = 140 meV}\right)$. Owing to this, the radiative transition is realized in a spectral region with a small value of the absorption coefficient, and thus with small losses in the resonator (for details see [5]).

In conclusion of this section we should underline that if the observed mode structure is really caused by laser generation, it may be smoothly tuned in the energy range 4.5 -- 5.1 eV owing to the smooth shift of the emission line energy in LiH$_{x}$D$_{1-x}$ (LiH$_{x}$F$_{1-x}$; LiD$_{x}$F$_{1-x}$) mixed crystals, as well as in the range 5.35 -- 5.10 eV in $^{12}$C$_{x}$ $^{13}$C$_{1-x}$ mixed crystals (see also [10]).

Concluding our report, we should draw your attention to the reports of Professors Schoven, Weston and Wendt, as well as of Dr. Chai, at our conference, which are devoted in the first place to applications of radioactive isotopes.

\bigskip

Figure Captions.

Fig. 1. a) First-order Raman spectra of $^{12}$C$_{x}^{13}$C$_{1-x}$ diamonds with different isotope compositions. The labels A, B, C, D, E and F correspond to x = 0.989; 0.90; 0.60; 0.50; 0.30 and 0.01, respectively.
The intensity is normalized at each peak (after [8]); b) First-order Raman scattering spectra in Ge with different isotope contents (after [13]).

Fig. 2. Second-order Raman spectra of LiH$_{x}$D$_{1-x}$ crystals at room temperature: (1), (2), (3) and (4) correspond to x = 0, 0.42, 0.76 and 1, respectively (after [7]).

Fig. 3. The dependence ln($\delta $\%) $\sim $ f(ln[$\frac{\partial \text{E}}{\partial \text{M}}$]): points are experimental values and the continuous line is a calculation using formula (7) (after [17]).

Fig. 4. Mirror reflection spectra of crystals: LiH, curve 1; LiH$_{x}$D$_{1-x}$, curve 2; and LiD, curve 3, at 4.2 K. Light source without crystals, curve 4 (after [18]).

Fig. 5. The dependence ln$\left( \frac{\partial \text{E}_{g}}{\partial \text{M}}\right) $ $\sim $ f$\left( \text{lnE}_{g}\right) $: points are experimental data and the continuous line is a calculation using formula (8) (after [18]).

Fig. 6. Temperature dependence of the carrier mobility of a) p-type and b) n-type NTD Ge crystals. c) Temperature dependence of the relative contributions to the mobility. Note that the mobility is dominated by neutral-impurity scattering below 20 K ($^{70}$Ge:Ga $\sharp $2 crystal) (after [22]).

Fig. 7. The dependence of the intensity at the maximum (1) and on the long-wavelength side (2) of the 2LO replica emission line of LiH crystals on the excitation light intensity. In the inset: luminescence spectra of free excitons in LiH crystals in the region of the emission lines of the 1LO and 2LO phonon repetitions at 4.2 K for low (1) and high (2) density of excitation by 4.99 eV photons (after [23]).

\bigskip

References.

1. I.M. Lifshitz, \textit{Physics of Real Crystals and Disordered Systems}, Selected Works (Eds. M.I. Kaganov, A.M. Kosevich, Science, Moscow, 1987) (in Russian).

2. A.A. Maradudin, E.W. Montroll, G.H. Weiss and I.P.
Ipatova, \\textit{%\nTheory of Lattice Dynamics in the Harmonic Approximation}, Solid State\nPhysics, Vol.3, (Eds. F. Seitz, D. Turnbull and H. Ehrenreich, Academic, New\nYork, 1971).\n\n3. V.G. Plekhanov, Elementary Excitations in Isotope-Mixed Crystals, \\textit{%\nPhysics Reports}, \\textbf{410} [1-3] 1 (2005).\n\n4. J.P. Franck, in:\\textit{\\ Physical Properties of High T}$_{c}$\\textit{\\\nSuperconductors} (ed. D.M. Ginsberg, Vol. 4., World Scientific, Singapore,\n1984) p. 189.\n\n5. For a review, see, V.G. Plekhanov,\\textit{\\ Applications of the Isotopic\nEffect in Solids}, Springer, Berlin - Heidelberg, 2004.\n\n6. See, for example, R. Berman, \\textit{Thermal Conduction of Solids}\n(Clarendon Press, Oxford, 1976); T.M. Tritt, \\textit{Thermal Conductivity}\n(Springer, Berlin - Heidelberg, 2005).\n\n7. V.G. Plekhanov, Isotopic Effects in Lattice Dynamics, \\textit{Physics -\nUspekhi} (Moscow) 46 [7] 689 (2003).\n\n8. H. Hanzawa, N. Umemura, Y. Nisida et al., Disorder Effects of Nitrogen\nImpurities, Irradiation - Induced Defects, and $^{13}$C Isotope Composition\non the Raman Spectrum in Synthetic I$^{b}$ Diamond, \\textit{Phys. Rev.} \n\\textbf{B54} [6] 3793 (1996).\n\n9.M. Cardona, Semiconductor Crystals with Tailor - Made Isotopic\nCompositions, in: \\textit{Festkorperprobleme\/Advances in Solid State Physics}\n(ed. R. Helbig, Vieweg, Braunschweig, Wiesbaden, Vol. 34, 1994) p. 35.\n\n10. V.G. Plekhanov, \\textit{Isotope Effects in Solid State Physics}\n(Academic, New York, 2001).\n\n11. J. Spitzer, P. Etchegoin, M. Cardona et al., Isotopic - Disorder Induced\nRaman Scattering in Diamond, \\textit{Solid State Commun. }\\textbf{88} [6]\n509 (1993).\n\n12. M. Cardona, Isotopic Effects in the Phonon and Electron Dispersion\nRelations of Crystals, \\textit{Phys. Stat. Sol. }(b) \\textbf{220} [1] 5\n(2000).\n\n13. M. Cardona, P. Etchegoin, H.D. 
Fuchs et al., Effect of Isotopic Disorder\nand Mass on the Electronic and Vibronic Properties of Three-, Two- and One -\nDimensional Solids, \\textit{J. Phys.: Condens. Matter} \\textbf{5} [1] A61\n(1993).\n\n14. V.G. Plekhanov, Phonon renormalization of Interband Transition Energy in\nthe Mixed Crystals, \\textit{Solid State Commun.} \\textbf{76} [1] 51 (1990).\n\n15. R.J. Elliott, J.A. Krumhansl and P.L. Leath, The Theory and Properties\nof Randomly Disordered Crystals and Related Physical Systems,\\textit{\\ Rev.\nMod. Phys.} \\textbf{46} [3] 465 (1974).\n\n16. V.G. Plekhanov, Experimental Evidence of Strong Phonon Scattering in\nIsotope Disordered Systems: The Case of LiH$_{x}$D$_{1-x}$, \\textit{Phys.\nRev.} \\textbf{B51} [14] 8874 (1995).\n\n17. V.G. Plekhanov (unpublished results, 2004)\n\n18. V.G. Plekhanov, N.V. Plekhanov, Isotope Dependence of Band - Gap Energy, \n\\textit{Phys. Lett.}, \\textbf{A313} [3] 231 (2003).\n\n19. V.G. Plekhanov, T.A. Betenekova, V.A. Pustovarov, Excitons and\nPeculiarities of Exciton-Phonon Interaction in LiH and LiD, \\textit{Sov.\nPhys. Solid State} \\textbf{18} [8] 1422 (1976).\n\n20. M. Cardona, Dependence of the Excitation Energies of Boron in Diamond on\nIsotopic Mass, \\textit{Solid State Commun. }\\textbf{121} [1] 7 (2002).\n\n21. D. Karaiskaj, T.A. Meyer, M.L.W. Thewalt et al., Dependence of the\nIonization Energy of Shallow Donors and Acceptors in Silicon on the Host\nIsotopic Mass, \\textit{Phys. Rev.}\\textbf{\\ B68} [2] 121201 (2003).\n\n22. H.D. Fuchs, K.M. Itoh and E.E. Haller, Isotopically Controlled\nGermanium: A New Medium for the Study of Carrier Scattering by Neutral\nImpurities, \\textit{Philos. Mag.} \\textbf{B70} [2] 662 (1994); K.M. Itoh,\nE.E. Haller, V.I. Ozogin, Neutral - Impurity Scattering in Isotopically\nEngineering Ge, \\textit{Phys. Rev.} \\textbf{B50} [23] 16995 (1994).\n\n23. V.G. 
Plekhanov, Wide-Gap Insulators Excitonic Nonlinearity and Its Potential Applications in Solid State Lasers, in: \textit{Proc. Int. Conf. Advances Solid State Lasers}, USA, SOQUE, 1990.

\textbf{Table.} Values of the coefficients dE/dM (meV, cm$^{-1}$) for the optical phonons, and the experimental and theoretical values of p, as well as the deviation $\delta $\% of these values from the theoretical ones.

\begin{tabular}{lllll}
Substances & Frequencies & p$_{\exp }$ & p$_{\text{theory}}$ & $\delta $\% = $\frac{\text{p}_{\text{theory}}\text{ - p}_{\text{exp}}}{\text{p}_{\text{theory}}}$ \\ 
LiH/LiD & 140 (meV)/104 (meV) [10,24] & 1.288 - 1.346 & $\sqrt{\text{2}}$ = 1.414 & 4.8 - 8.9 \\ 
SiH$_{4}$/SiD$_{4}$ & 2186.87/1563.3 (cm$^{-1}$) [10] & 1.399 & $\sqrt{\text{2}}$ = 1.414 & 1.5 \\ 
$^{12}$C/$^{13}$C & 1332.5/1280 (cm$^{-1}$) [10;25] & 1.041 & $\sqrt{\frac{\text{13}}{\text{12}}}$ = 1.041 & 0.001 \\ 
$^{70}$Ge/$^{76}$Ge & 309.8/297.7 (cm$^{-1}$) [10,26,27] & 1.041 & $\sqrt{\frac{\text{76}}{\text{70}}}$ = 1.042 & 0.096 \\ 
$^{28}$Si/$^{30}$Si & 524.8/509.8 (cm$^{-1}$) [28] & 1.029 & $\sqrt{\frac{\text{30}}{\text{28}}}$ = 1.035 & 0.58 \\ 
$^{64}$Zn$^{76}$Se/$^{68}$Zn$^{80}$Se & 213.2/207.4 (cm$^{-1}$) [10] & 1.028 & $\sqrt{\frac{\mu _{1}}{\mu _{2}}}$ = 1.029 & 0.097 \\ 
$\alpha $-$^{112}$Sn/$\alpha $-$^{124}$Sn & 206.5/196.8 (cm$^{-1}$) [10,30] & 1.049 & $\sqrt{\frac{\text{124}}{\text{112}}}$ = 1.052 & 0.30 \\ 
Ga$^{14}$N/Ga$^{15}$N & 535/518 (cm$^{-1}$) [10,31] & 1.033 & $\sqrt{\frac{\text{15}}{\text{14}}}$ = 1.035 & 0.19 \\ 
$^{63}$Cu$^{35}$Cl/$^{65}$Cu$^{37}$Cl & 174.4/171.6 (cm$^{-1}$) [10,32] & 1.016 & $\sqrt{\frac{\mu _{1}}{\mu _{2}}}$ = 1.022 & 0.59
\end{tabular}

\end{document}
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{INTRODUCTION} 
\label{sec:introduction}

For particle acceleration in astrophysical sources,
such as jets and\nshock waves, two kinds of Fermi acceleration mechanisms have typically\nbeen considered: the first order Fermi acceleration at the shock\nfronts, and the second order Fermi acceleration (often referred to as\nstochastic acceleration) in the turbulent plasma. The first order\nmechanism is well known to produce a power-law particle spectrum\n$N(E)\\propto E^{-s}$ with spectral index being a function $s =\n(r+2)\/(r-1)$ of the compression ratio $r$ for non-relativistic shocks\n\\citep[e.g.,][]{Drury1983} and in relativistic shocks approaching the\nvalue $s\\approx2.2$ at the ultra-relativistic limit\n\\citep[e.g.,][]{KirkEtAl2000,AchterbergEtAl2001,LP2003,VV2003a}, and\nit has been successfully applied in explaining the synchrotron\nproperties of, e.g., some active galactic nuclei (AGN)\n\\citep[e.g.,][]{TurlerEtAl2000}. For flatter spectra ($s\\lesssim2$)\nthe first order mechanism is not, however, as attractive. Although it\ncan produce hard spectra with spectral index $s\\to1$ depending, for\ninstance, on the injection model or the scattering center compression\nratio \\citep[e.g., \\citeauthor{EllisonEtAl1990}\n\\citeyear{EllisonEtAl1990}; Vainio, Virtanen \\& Schlickeiser 2003,\nhereafter referred to as][]{VVS}, it is not able to produce spectral\nindices flatter than $s=1$. Stochastic acceleration, on the other\nhand, has been known to exist and be present in the turbulent\ndownstream of shocks, but because it works on much longer time scales\nthan the first order mechanism \\citep[e.g.,][]{CS1992,VS1998} it\nhas been frequently neglected in studies of relativistic particle\nacceleration \\citep[however, note e.g.,][]{Ostrowski1993}.\n\nThe argument of longer acceleration time scale renders the\nsecond order mechanism less important than the first order one, when\nthe two mechanisms operate concurrently and, thus, the particle\nspectrum at the shock front is considered. 
When discussing non-thermal\nparticle distributions radiating in astrophysical objects, however, it\nis important to acknowledge that the bulk of radiation is emitted by\nthe particles that have already left the shock front towards the\ndownstream. Thus, the second order mechanism has a much longer time\navailable to accelerate the particles than the first order\nmechanism. Although one could justify the neglect of stochastic\nacceleration when calculating the particle spectrum right at the shock\nfront, it is not possible to neglect its effect on the spectrum of\nradiating particles in astrophysical shock waves in general as these\nobjects, especially those believed to host relativistic shock\nwaves, are often spatially unresolved.\n\nIn this paper we have studied the possible effects of stochastic\nelectron acceleration in parallel relativistic shock waves. We\napproach the subject via numerical test-particle \nsimulations and present our model\nincluding both the first and the second order Fermi acceleration. We\nemploy the model to study the effect of stochastic acceleration on (i)\nparticles injected at the shock front and subsequently accelerated by\nthe first order mechanism and (ii) particles drawn from the heated\n(but not shock accelerated) particle population of the downstream\nregion of the shock. We focus on shock waves that, in addition to\nbeing parallel, have small-to-intermediate Alfv\\'enic Mach\nnumbers. Low Mach-number relativistic shocks could prevail in\nmagnetically dominated jets that are lighter than their surroundings,\ne.g., because of having a pair-plasma composition.\n\nThe structure of this paper is as follows. In \\S\\ref{sec:model} we\npresent our model, state the limiting assumptions we have made, and\nbriefly discuss the limitations caused by the assumptions and\nsimplifications; the description of the numerical code, as well as the\nimplementations and mathematical details of the underlying physics are\ndescribed in Appendix A. 
In \S\ref{sec:Results} we present the results obtained with the simulation code. We begin by showing that, when compared to relevant previous studies -- both analytical and numerical -- our results are in very good agreement with those obtained previously by many authors. We then present the results for stochastic acceleration in various cases, and in \S\ref{sec:Discussion} we discuss our results and their relationship to previous studies and possible future applications, and list the conclusions of our study. The limitations of the test-particle approach are also briefly discussed.


\section{MODEL} 
\label{sec:model}

In this section we describe the properties of our model in general, including the employed assumptions and the physics related to them. The numerical Monte-Carlo approach is described in detail in Appendix A, where the implementation of the model properties is also discussed.

\subsection{Coordinate systems}

Before proceeding to the model, we define the coordinate systems employed in our study: the frame in which the shock front is at rest is called \emph{the shock frame}. We consider parallel shocks, i.e., shocks where the mean magnetic field and the plasma flow are directed along the shock normal. The frame in which the bulk plasma flow is at rest is called \emph{the local plasma frame}; this frame moves with the local flow speed, $V$, with respect to the shock frame. Finally, we consider \emph{the wave frame}, which moves with the phase speed $V_\phi$ of the plasma waves with respect to the local plasma frame and, thus, at speed $(V+V_\phi)/(1+VV_\phi/c^2)$ with respect to the shock frame, denoting, as usual, the speed of light by $c$.
If the scattering centers are\ntaken to be fluctuations frozen-in to the plasma then the speed of the\nwaves with respect to the underlying plasma flow is $V_\\phi=0$ and the\nplasma frame is also the rest frame of the scattering centers.\n\n\n\n\\subsection{Shock Structure}\n\\label{sub:shock_structure}\n\nWe use the hyperbolic tangent profile of Schneider \\& Kirk (1989) to\nmodel the flow velocities at different distances from the shock. \nThe width of the transition from the far upstream flow speed, $V_{1}$, to\nthat of the far downstream, $V_{2}$, takes place over a distance of\n$0.01\\lambda_{\\rm e}(\\Gamma_1)$ %\n(for which the shock can still be considered almost step-like; \nsee, e.g., \\citealt{VV2003a}), where $\\lambda_{\\rm e}(\\gamma)$\ndenotes the mean free path of the electrons as a function of Lorentz\nfactor\n\\begin{equation}\n \\gamma=1\/\\sqrt{1-(v\/c)^{2}},\n\\end{equation}\n$v$ is the electron speed, and $\\Gamma_1$ is the Lorentz factor of the\nupstream bulk flow. (We use the standard notation of subscript 1[2] \ndenoting the upstream [downstream] values.) The ratio of the flow speeds\non both sides of the shock gives the gas compression ratio\n\\begin{equation}\n r=\\frac{V_{1}}{V_{2}}. \\label{eq:compression_ratio}\n\\end{equation}\nWe fix the shock speed in the upstream rest frame and, thus, the\n(equal) upstream bulk speed $V_{1}$ in the shock frame. Using this we\ncompute the corresponding gas compression ratio for a given shock\nproper speed\n\\begin{equation}\n u_{1}=c\\sqrt{\\Gamma_{1}^{2}-1}\n\\end{equation}\nfollowing a scheme described by \\citet{VVS} and shown in Figure\n\\ref{fig:compression_ratio}. 
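The kinematic relations above (the definition of $\gamma$, the proper speed $u_1$, and the compression ratio of equation [\ref{eq:compression_ratio}]) can be sketched numerically; the choice of units with $c=1$ is ours, not the paper's:

```python
from math import sqrt

C = 1.0  # speed of light (units with c = 1; our convention)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

def proper_speed(gamma):
    """u = c * sqrt(gamma^2 - 1), as in the definition of u_1."""
    return C * sqrt(gamma ** 2 - 1.0)

def compression_ratio(V1, V2):
    """Gas compression ratio r = V1 / V2 of the far up- and downstream flows."""
    return V1 / V2

# Consistency check: u = gamma * v, so lorentz_factor(u / gamma) recovers gamma.
g = 2.0
u = proper_speed(g)           # sqrt(3) for gamma = 2
print(lorentz_factor(u / g))  # ~ 2.0
```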
Finally, the flow speed in the far downstream $V_2$ is given by equation (\ref{eq:compression_ratio}).
\begin{figure}[t]
  \plotone{f1.eps}
  \caption{Gas compression ratio $r$ in a parallel shock propagating into
    a cold medium with proper speed $u_1$ \protect\citep{VVS}.}
  \label{fig:compression_ratio}
\end{figure}


\subsection{Magnetic Field and Scattering} 
\label{sub:magnetic_field}

For modeling the magnetic field structure we adopt the quasilinear approach, where the field $B$ is considered to consist of two parts: the static, large-scale background field $B_{0}$, and smaller-scale fluctuations with amplitude $\delta B < B_{0}$. We model the turbulence as being composed of a wide-band spectrum of Alfv\'en waves propagating both parallel and anti-parallel to the flow. In the local plasma frame the waves propagate at the Alfv\'en speed
\begin{equation}
  v_{\rm A}=\frac{B_{0} c}{\sqrt{4\pi hn + B_{0}^{2}}}, \label{eq:alfven_speed}
\end{equation}
where $h=(\rho+P)/n\equiv w/n$, $n$, $\rho$, and $P$ are the specific enthalpy, the number density, the total energy density, and the gas pressure -- all measured in the local plasma frame. The wave intensity as a function of wavenumber, $I(k)$, is assumed to have a power-law form for wavenumbers above an inverse correlation length $k_{0}$. We write the power law as
\begin{equation}
  I(k)=I_{0}(k_{0}/k)^{q}, \; k>k_0,
  \label{eq:fluctuation_spectrum}
\end{equation}
where $q$ is the spectral index of the waves. For wavenumbers smaller than $k_{0}$ the wave intensity per logarithmic bandwidth is assumed to be equal to the background field intensity, i.e., $I(k)=B_0^2 k^{-1}$ for $k<k_0$. Because of this flattening of the spectrum, the resonant scattering of electrons with Lorentz factors
\begin{equation}
  \gamma > \gamma_{0} \equiv \frac{\Omega_{{\rm e,0}}}{k_{0}c}\gg1
  \label{eq:gamma_0}
\end{equation}
becomes less efficient. Thus, we expect the electron acceleration efficiency to decrease at $\gamma > \gamma_0$.
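A minimal numerical sketch of the Alfv\'en speed (\ref{eq:alfven_speed}), written in terms of the enthalpy density $w = hn$ and in units with $c = 1$ (our convention, not the paper's):

```python
from math import pi, sqrt

C = 1.0  # speed of light (units with c = 1; our convention)

def alfven_speed(B0, w):
    """Relativistic Alfven speed v_A = B0 c / sqrt(4 pi w + B0^2),
    where w = h n is the enthalpy density of the plasma (Gaussian units)."""
    return B0 * C / sqrt(4.0 * pi * w + B0 ** 2)

# v_A tends to c for vanishing enthalpy density and is subluminal otherwise:
print(alfven_speed(1.0, 0.0))      # -> 1.0 (i.e. c)
print(alfven_speed(1.0, 1.0) < C)  # -> True
```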
Instead of trying to fix the value of $k_0$, we use a constant value $\gamma_{0}=10^{6}$, which corresponds to the observed maximum Lorentz factors of electrons in some AGN jets \citep{MeisenheimerEtAl1996}.

In addition to scattering, the particles are also assumed to lose energy via synchrotron emission. The average rate of energy loss for an electron with Lorentz factor $\gamma$ in the frame co-moving with the plasma is given by
\begin{equation}
  \frac{dE}{dt} = -\frac{4}{3}\sigma_{{\rm T}}c\gamma^{2}U_{B},
  \label{eq:s_loss}
\end{equation}
where $\sigma_{{\rm T}}$ is the Thomson cross-section and \mbox{$U_{B}=B_{0}^{2}/(8\pi)$} is the magnetic field energy density. We calculate the latter in all simulations by assuming a hydrogen composition of the plasma.


\subsection{Alfv\'en-Wave Transmission} \label{sub:transmission}

Downstream Alfv\'en-wave intensities can be calculated from known upstream parameters \citep[e.g.,][]{VS1998,VVS}. Regardless of the cross helicity of the upstream wave field (only parallel or anti-parallel waves, or both), both wave modes are always present in the downstream region.
The transmission coefficients for the magnetic field intensities of equation (\ref{eq:fluctuation_spectrum}) of forward $(+)$ and backward $(-)$ waves at constant wavenumber $\tilde{k}$, measured in the wave frame, can be written \citep[see][equations (22) and (23)]{VVS} as
\begin{equation}
  \tilde{T}_{\tilde{k}\pm}^{2} \equiv 
  \frac{\tilde{I}_{2}^{\pm}(\tilde{k})}{\tilde{I}_{1}^{\pm}(\tilde{k})}
  \hspace{0.5cm} {\rm and} \hspace{0.5cm}
  \tilde{R}_{\tilde{k}\pm}^{2} \equiv
  \frac{\tilde{I}_{2}^{\mp}(\tilde{k})}{\tilde{I}_{1}^{\pm}(\tilde{k})}.
  \label{eq:T_wk}
\end{equation}
Using these, we can solve for the amplification factor of the wave intensity of both wave modes at constant wave-frame wavenumber $\tilde{k}$:
\begin{equation}
  \tilde{W}_{\tilde{k}\pm} = 
  \tilde{T}_{\tilde{k}\pm}^{2}+\tilde{R}_{\tilde{k}\mp}^{2}.
  \label{eq:amplification_factor}
\end{equation}
The amplification factor depends on the strength of the magnetic field as well as on the form of the turbulence spectrum. The intensity ratio of the backward waves to that of the forward waves as a function of the quasi-Newtonian Alfv\'enic Mach number, 
\begin{equation}
  M=u_{1}/u_{{\rm A,1}},
\end{equation}
is plotted in Figure \ref{fig:amplification} for three values of the Alfv\'en proper speed 
\begin{equation}
  u_{{\rm A,1}}=v_{{\rm A,1}}/\sqrt{1-\beta_{{\rm A,1}}^{2}},
\end{equation}
where $\beta_{{\rm A,1}}=v_{{\rm A,1}}/c$. The waves are seen to propagate predominantly backward in relatively low-Mach-number shocks, as shown by \citet{VVS} in the case of relativistic shocks, and by \citet{VS1998} for non-relativistic shocks. This enables the scattering-center compression ratio $r_{k}$ to grow larger than the gas compression ratio $r$ and, thus, to produce significantly harder particle spectra compared to the predictions of theories relying on fluctuations frozen-in to the plasma flow.
As the Mach number increases, the downstream wave intensities approach equipartition at the ultra-relativistic limit.
\begin{figure}[t]
  \plotone{f2.eps}
  \caption{Ratio of the amplified wave intensities as a function of
    Alfv\'enic Mach number. Cases corresponding to proper Alfv\'en
    speeds $u_{{\rm A,1}}$ of $10.0\,c$, $1.0\,c$, and $0.1\,c$ are
    plotted with solid, dotted, and dashed lines, respectively, for
    Kolmogorov turbulence, $q=5/3$.}
  \label{fig:amplification}
\end{figure}

Waves conserve their shock-frame frequencies during the shock crossing \citep{VS1998}, and for given upstream wave-frame wavenumbers $\tilde{k}_{1\pm}$ (here equipartition of the upstream waves is assumed) the downstream wave-frame values are obtained from \citep{VVS}
\begin{equation}
  \tilde{k}_{2+} = \frac{ \Gamma_{1\pm} V_{1\pm} }{ \Gamma_{2+} V_{2+} }
  \tilde{k}_{1\pm} ; \quad
  \tilde{k}_{2-} = \frac{ \Gamma_{1\pm} V_{1\pm} }{ \Gamma_{2-} V_{2-} }
  \tilde{k}_{1\pm}
\end{equation}
for both the forward ($\tilde{k}_{2+}$) and the backward ($\tilde{k}_{2-}$) waves. Here $V_{j\pm}$ and $\Gamma_{j\pm}$ refer to the wave speeds of forward (+) and backward ($-$) waves in the upstream ($j=1$) and downstream ($j=2$) regions and to the respective Lorentz factors. The functional form of the spectrum does not change on shock crossing, and the spectral index $q$ in equation (\ref{eq:fluctuation_spectrum}) is the same on both sides of the shock.


\subsection{Testing the Model}

We have tested the ability of our model to reproduce results expected from previous numerical studies and theory. To test the model in the case of first-order acceleration we ran numerous test-particle simulations with different injection energies and shock widths \citep{VV2003a,VV2003b}.
We found our results to be in very good\nagreement with the semi-analytical results for modified shocks of\n\\citet{SK1989}, as well as with the numerical results of, e.g.,\n\\citet{EllisonEtAl1990} for the corresponding parts of the studies.\nFor the step-shock approximation, spectra with indices close to the\npredicted value of $\\sim2.2$ were obtained. An example of the\ntest-runs is shown in Figure \\ref{fig:FI-spectrum} with the shock proper\nspeed $u_{1}=10\\, c$ and compression ratio $r\\approx3$, assuming\nscattering centers frozen into the plasma and turbulence with\na spectral index of $q=2$.\n\\begin{figure}[t]\n \\plotone{f3.eps}\n \\caption{Spectrum of the particles accelerated at the shock front\n in the step-shock approximation. The resulting spectral index \n of the power law -- extending over five orders of magnitude in energy -- \n is $s\\simeq2.2$. }\n \\label{fig:FI-spectrum}\n\\end{figure}\n\nFor testing the second order acceleration we chose an analytically\nwell-known case, namely that of a uniform flow with waves streaming in\nparallel and anti-parallel directions with equal intensities. 
In this\ncase the momentum diffusion coefficient of charged particles can be\ngiven as \\citep{Schlickeiser1989}\n\\begin{equation}\n A_{2}(x,p) \\simeq \\frac{2\\pi\\Omega_{{\\rm e}}^{2-q}k_0^qI_{0}}{q(q+2)B_{0}^{2}}\n \\frac{p^{2}v_{{\\rm A}}^{2}}{v^{3-q}},\n \\label{eq:mom_diff_coefficient}\n\\end{equation}\nwhich in the relativistic case ($v\\approx c$ and $p\\gg m_{{\\rm e}}c$)\nis found to depend on the momentum $p$ as \n\\begin{equation}\n A_{2}\\propto p^{q}.\\label{eq:mom_diff_coeff_relation}\n\\end{equation}\nFor a constant particle injection at low momenta, this leads to a\nsimple relation between the spectral index of the volume-integrated\nparticle energy spectrum, $s$, and that of the magnetic field fluctuations,\n$q$:\n\\begin{equation}\n s = q - 1.\n \\label{eq:index_relation}\n\\end{equation}\nAs a test case, we ran simulations involving only stochastic\nacceleration, and found that for values \\mbox{$q\\in[1,2]$} the model\nproduces exactly those indices expected from the analytical\ncalculation. The spectral indices obtained from the simulation are\nplotted together with the theoretical prediction in\nFigure~\\ref{fig:stochastic_test}.\n\\begin{figure}[t]\n \\plotone{f4.eps}\n \\caption{Comparison of the simulation model and the theory. \n Spectral index of the energy spectrum of particles accelerated \n stochastically for different spectral indices of the magnetic \n field fluctuations. 
Data from the simulations are plotted with crosses \n (the statistical error of each point is $0.02$ or less), and the\n theoretical prediction of $s=q-1$ is plotted as a straight line.}\n \\label{fig:stochastic_test}\n\\end{figure}\n\n\n\n\\section{RESULTS} \\label{sec:Results}\n\nIn this section we apply our model to stochastic particle acceleration\nin the downstream region of a relativistic parallel shock, using a \ntest-particle approach.\n\nFirst we\nshow how the stochastic process affects a non-thermal particle\npopulation, already accelerated at the shock via the first order\nmechanism. Then we consider particles injected throughout the\ndownstream region. Finally we present an example of the combination of\nboth injection schemes. Simulations were run separately for low,\nintermediate and high Alfv\\'enic-Mach-number shocks ($M=3$, $M=10$,\nand $M=1000$, respectively -- see Fig. \\ref{fig:amplification} for\nthe corresponding wave intensity ratios), and for four cases of\ndownstream turbulence: for turbulence spectral indices $q=2$ and $q=5\/3$,\nwith the downstream wave field calculated using the wave transmission analysis\ndescribed in \\S \\ref{sub:transmission}, and with the downstream forward\nand backward waves in equipartition.%\n\\footnote{In the case of $M=1000$ the effects were -- at best --\n barely visible, as expected. For this reason these results are not\n included in this paper, but are available in electronic form at\n \\url{http:\/\/www.astro.utu.fi\/red\/qshock.html}.} The proper speed of\nthe shock is set to $u_1 = 10\\, c$ in all simulations.\n\n\n\n\\subsection{Electrons Injected and Accelerated at the Shock}\n\nWe have studied the effect of stochastic acceleration on particles\nthat have already been accelerated at the shock. This was done by\ninjecting particles into the shock and the first order mechanism, and\nallowing them to continue accelerating via the stochastic process in\nthe downstream region. 
Injection of the particles took place in the\ndownstream region immediately after the shock, and particles were given an\ninitial energy of a few times the energy of the thermal upstream\nparticles as seen from the downstream region. This kind of injection\nsimulates some already-energized downstream particles returning into\nthe shock, but without the need to process the time-consuming bulk\nof non-accelerating thermal particles. The high-energy part of the\nparticle energy distribution -- in which we are interested in this\nstudy -- is similar regardless of the injection energy.\n\\begin{figure*}\n \\begin{center} \\begin{minipage}{13.0cm}\n \\plotone{f5_lowres.eps}\n \\end{minipage}\\end{center}\n \\caption{Evolution of the particle spectra in the turbulent\n downstream region due to stochastic acceleration. Particles are\n initially injected and accelerated at the shock front, located at $x=0$\n near the left edge of the plots. Contours of $\\log(E\\frac{dN}{dE})$\n show the steady-state particle energy distribution. On the\n left-hand panels the wave transmission calculation of\n \\protect\\citet{VVS} is applied, whereas on the right-hand panels\n the downstream wave modes are assumed to be in equipartition. In\n the two uppermost rows $q=2$, and in the two lowermost rows\n $q=5\/3$. Results for Alfv\\'enic Mach numbers $M=3$ (first and\n third row) and $10$ (second and fourth row) are shown for each\n case. The physical sizes of the downstream region are\n approximately $10^{12}$ cm (first row), and $10^{13}$ cm (second\n row) for the $q=2$ cases, and $10^{11}$ cm (third row), and\n $10^{12}$ cm (fourth row) for the $q=5\/3$ cases. 
}\n \\label{fig:distributions_shock}\n\\end{figure*}\n\nWe found that in the case of high Alfv\\'enic Mach number ($M=1000$,\ncorresponding to a magnetic field $B_0 \\simeq 1.4$ mG in a hydrogen\nplasma) the contribution of the stochastic process to the energy\ndistribution of the particles is, indeed, insignificant compared\nto that of the first order acceleration at the shock; the energy\nspectrum maintains its shape and energy range unaltered at least for\ntens of thousands of thermal electron mean free paths. This is the case\nregardless of the applied turbulence spectral index, and because the\nanalytically calculated wave intensities are very close to\nequipartition for high-Mach-number shocks (see\nFig. \\ref{fig:amplification}), the difference between the analytically\ncalculated downstream wave field and the explicitly assumed\nequipartition case was, as expected, minimal.\n\nFor stronger magnetic fields ($M=10$ and $M=3$, corresponding to\n\\mbox{$0.14$~G} and \\mbox{$0.46$~G}, respectively, in a hydrogen\nplasma, and to $4.6$~mG and $15$~mG in a pair plasma) the effect of\nstochastic acceleration is, in contrast to the high-$M$ case, very\npronounced. 
The stochastic process begins to re-accelerate particles\nimmediately after the shock front, and the whole spectrum slowly\nshifts to higher energies as shown in Figure\n\\ref{fig:distributions_shock}, where the results obtained using wave\ntransmission analysis are presented in the left-hand column, while the\nright-hand panels show the results of the equipartition assumption.\nThe acceleration rate depends on the wave spectrum: for\n$q=2$, for which particles of all energies have the same mean free\npath, the stochastic process accelerates particles to higher energies\nat a constant rate, whereas for Kolmogorov turbulence the\nacceleration rate (like the mean free path of the particles) decreases\nas the energy increases, and the energization gradually slows down as\nthe bulk of the particle energy distribution rises to higher energies.\nA different composition of the downstream wave field also leads to\nslightly different results: in the ``transmitted'' case (left-hand\npanels) the particle population immediately behind the shock front\nextends to slightly higher energies than in the equipartition case\n(right-hand panels), but for the latter the stochastic acceleration is\nclearly faster. This can be seen best in the uppermost panels of\nFigure \\ref{fig:distributions_shock}, where the calculated transmitted\nwave field differed from the equipartition assumption the most.\n\\begin{figure}[t]\n \\plotone{f6.eps}\n \\caption{Energy distribution of the particles escaping the shock region \n and getting absorbed in the downstream region. The Alfv\\'enic Mach number of \n the shock is $M=10$, the proper speed is $u_1 = 10 \\, c$, and the \n turbulence corresponds to $q=5\/3$. Note that the distribution is \n plotted as $\\log (dN \/ dE)$. }\n \\label{fig:spectrum_qkM10}\n\\end{figure}\n\nThe ``turnover'' of the scattering rate at energy $\\gamma_0 = 10^6$\ncauses the rate of energization to decrease for energies greater than\n$\\gamma_0$. 
This is because the energy dependence of the mean free\npath of the particle changes when the particle's resonant wavenumber\n(eq. (\\ref{eq:resonant_wavenumber})) decreases below $k_0$, which was\nused to set the lower limit for the $I(k) \\propto k^{-q}$ power law.\nFor particles with $\\gamma > \\gamma_0$ the mean free path starts to\nincrease much faster, leading not only to a decrease of the\nstochastic acceleration efficiency, but also to another notable\neffect: as its scattering mean free path increases at\n$\\gamma>\\gamma_0$, the particle suddenly becomes able to move much\nmore easily in the downstream region and may even return back\nto the shock. Particles already energized first at the shock by the\nfirst order mechanism and then in the downstream by the second order\nacceleration, and returning into the shock again, get\n``re-injected'' into the first order acceleration process. This\neffect, in general, is seen in all simulations with $M=3$ as a bending\nof the contours to the left at $\\gamma>\\gamma_0$ close to the shock\n(at $x=0$), but it is also visible in $M=10$ shocks in the spectra of\nparticles collected at the downstream free-escape boundary. An\nexample of the latter case is shown in Figure\n\\ref{fig:spectrum_qkM10}.\n\n\n\\subsection{Electrons Injected from the Downstream Bulk Plasma}%\n\\label{sub:downstream_inj}\n\nOur second approach was to assume that a constant injection mechanism\nexists throughout the downstream region and to investigate the stochastic\nacceleration process. (Physically, this mimics a case where turbulent\nfluctuations cascade to higher wavenumbers and inject a fraction of\nthe thermal electrons into the stochastic acceleration process.) We\ninjected particles at a constant energy -- equal to the energy\nof upstream electrons as seen from the downstream\nregion -- uniformly and isotropically within the whole downstream\nregion. 
The results for different cases are shown in Figure\n\\ref{fig:distributions_downstream}.\n\\begin{figure*}\n \\begin{center} \\begin{minipage}{13.0cm}\n \\plotone{f7_lowres.eps}\n \\end{minipage} \\end{center}\n \\caption{Same as Fig. \\ref{fig:distributions_shock} but for\n particles injected uniformly and isotropically across the\n downstream region. }\n \\label{fig:distributions_downstream}\n\\end{figure*}\n\nThe figure shows behavior similar to that in the case of particles injected\nat the shock: significant acceleration was seen only for shocks with\na strong magnetic field, while in the $M=1000$ case practically no\nacceleration was seen. For the cases with a strong magnetic field the\nacceleration works similarly to the previous\nshock-related injection case, energizing particles at a constant rate\nfor $q=2$ turbulence, and at a decreasing rate for $q=5\/3$. Again the\nacceleration rate slows down when energies corresponding to $\\gamma_0$\nare reached. Above this energy particles start to pile up and form a\nbump in the distribution immediately behind $\\gamma_0$. Also here\n(at least for $M=3$) some particles with energy $\\gamma > \\gamma_0$\nare able to return to the shock and get re-injected into the first order\nprocess (see, e.g., the right-hand panel of Fig.\n\\ref{fig:slices_downstream}).\n\nIn the energy range between the injection energy and $\\gamma_0$,\nparticles begin to form a power-law distribution with a spectral index\ndepending on the spectrum of the magnetic field fluctuations, as expected from\nequation (\\ref{eq:index_relation}). For $q=2$, the particle spectral\nindex $s \\simeq 1$, and for $q=5\/3$, $s \\simeq 0.6$. The formation of\nthe power laws as a function of distance from the shock is shown in the\nleft-hand panel of Figure \\ref{fig:slices_downstream} for $q=2$, and\nin the right-hand panel for $q=5\/3$. 
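The scaling behind equation (\ref{eq:index_relation}), $A_2 \propto p^{q}$, can be checked numerically from equation (\ref{eq:mom_diff_coefficient}) once the $1/\gamma$ dependence of the gyrofrequency $\Omega_{\rm e}$ is made explicit. The following sketch uses purely illustrative normalisations (all numerical constants are hypothetical):

```python
from math import log, sqrt

def momentum_diffusion(p, q, v_a=0.1, omega0=1.0):
    """Sketch of A_2 ~ Omega^(2-q) p^2 v_A^2 / v^(3-q), with the
    relativistic gyrofrequency Omega = omega0 / gamma and momentum p in
    units of m_e c; the overall normalisation is arbitrary here."""
    gamma = sqrt(1.0 + p ** 2)
    v = p / gamma           # particle speed in units of c
    omega = omega0 / gamma  # gyrofrequency falls off as 1/gamma
    return omega ** (2.0 - q) * p ** 2 * v_a ** 2 / v ** (3.0 - q)

# The log-log slope between two relativistic momenta recovers A_2 ~ p^q.
q = 5.0 / 3.0
slope = log(momentum_diffusion(100.0, q) / momentum_diffusion(10.0, q)) / log(10.0)
# slope is close to q = 5/3
```

Combined with constant low-momentum injection, this scaling gives the quoted indices $s = q - 1$, i.e. $s \simeq 1$ for $q = 2$ and $s \simeq 0.6$ for $q = 5/3$.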
In the $q=5\/3$ case the\nformation of a high-energy bump at the shock by the returning\naccelerated particles is also seen.\n\\begin{figure*}\n \\begin{center} \\begin{minipage}{0.9\\linewidth}\n \\plotone{f8.eps}\n \\end{minipage} \\end{center}\n \\caption{Energy spectrum of the particles at different distances\n from the shock. The particles are injected in the downstream\n region and the distributions are from the simulation with $q=2$\n and $M=10$ (wave field transmitted) for the left-hand panel and\n from $q=5\/3$ and $M=3$ (wave field in equipartition) for the\n right-hand panel (corresponding to the left-hand panel of the\n second row and the top right panel of Fig.\n \\ref{fig:distributions_downstream}). Slices are from locations\n $x=0$ (solid line), $x=100$ (dashed), $x=400$ (dotted), $x=1000$\n (dash-dotted), and $x=2000$ (dash-dot-dot-dotted) for the\n left-hand panel, and from $x=0$ (solid line), $x=300$ (dashed),\n $x=600$ (dotted), $x=1500$ (dash-dotted), and $x=3000$\n (dash-dot-dot-dotted) for the right-hand panel.}\n \\label{fig:slices_downstream}\n\\end{figure*}\n\n\n\n\\subsection{Example of Combination}\n\nWe have investigated what kind of particle energy spectra the two\ndiscussed injection schemes -- one operating at the shock, and another\noperating uniformly throughout the downstream region -- are able to\ncreate. Next we will present an example of a combination of these. In\nthe simulation these two cases were kept separate for the sake of\nsimplicity, but there is no reason to assume that this separation is\nalso present in nature. The relative amounts of shock-injected\nand downstream-injected particles are also not fixed here, but are instead\ntreated as a more or less free parameter.\n\nDepending on the assumed ratio of the injected particle populations, the resulting \nspectrum can be very different. 
An example of a combination of the \ntwo injection schemes in the case of a shock with $u_1=10\\,c$ and $M=10$, and \nwith turbulence corresponding to $q=5\/3$, is shown in Figure \n\\ref{fig:qk_M10_combination}. \n\\begin{figure}[t]\n \\plotone{f9.eps}\n \\caption{Example of the combined (solid line) energy spectrum of\n particles injected into the acceleration process at the shock\n (dashed) and throughout the downstream region (dotted). (Model\n parameters are $q=5\/3$ and $M=10$.) The particles are collected\n at the downstream free-escape boundary, $\\sim 4\\times10^{12}$ cm\n away from the shock, and the number of particles injected at\n the shock is larger than the number of uniformly injected\n particles by a factor of $100$.}\n \\label{fig:qk_M10_combination}\n\\end{figure}\n\n\n\n\\section{DISCUSSION AND CONCLUSIONS}\n\\label{sec:Discussion}\n\nWe have studied stochastic particle acceleration in the downstream\nregion of a relativistic parallel shock. Applying the wave\ntransmission calculations of \\citet{VVS} and assuming the cross\nhelicity to vanish in the upstream, we have modeled the turbulence of\nthe downstream region as a superposition of Alfv\\'en waves propagating\nparallel and anti-parallel to the plasma flow. Using a kinetic\nMonte-Carlo simulation we have modeled the second order Fermi\nacceleration of electrons in the shock environment, and considered\ncases of acceleration of downstream-injected particles, as well as\nthat of particles injected at the shock. We have shown that\nstochastic acceleration can, indeed, have remarkable effects in both\ncases. 
This result is even more pronounced if the two downstream\nAlfv\\'en wave fields are assumed to be in equipartition.\n\nThe behavior of the particle energy distribution in the stochastic\nprocess depends heavily on the strength of the background magnetic\nfield; in the cases of a weak magnetic field and a quasi-Newtonian\nAlfv\\'enic Mach number much larger than the critical Mach number ($M\n\\gg M_{\\rm c} = \\sqrt{r}$) the effects of stochastic acceleration are\nwashed out by the much stronger first order acceleration. The\nmagnetic field turbulence spectrum also affects the acceleration\nefficiency: for Kolmogorov turbulence with $q=5\/3$ the spatial scales\nare up to an order of magnitude shorter than in the case of $q=2$\nturbulence. Although the spatial scales in the simulations presented here\nare enormous compared to those associated with shock acceleration (the\nfirst order process in the immediate vicinity of the shock front), in\nthe case of blazars and other AGN the scales are still orders of\nmagnitude too small to be resolved even in VLBI pictures --\nregardless of the turbulence and the magnetic field strength used. The\nacceleration time scales are also very short: the time required to\nshift the whole spectrum from the initial energy range to $\\gamma_{\\rm\nbulk} \\gtrsim 10^6$ ranged from $10$ to $50$ minutes in the $M=10$\ncase, and for $M=3$ the times were $\\lesssim 1$ minute, as measured in\nthe shock frame.\n\nIn addition to the magnetic field strength and turbulence, the\ncomposition of the downstream wave field was also seen to affect the\nresulting particle population. 
When comparing otherwise similar cases\nthat differ only in the downstream cross helicity (i.e., whether the\nwave field results from the wave transmission calculations of\n\\citet{VVS} or an equipartition of parallel and anti-parallel waves is\nassumed), the calculated wave-transmission cases with more\nanti-parallel waves \\citep[][and Fig. \\ref{fig:amplification}]{VVS}\nshowed stronger first order acceleration, but weaker stochastic\nacceleration. This is because of the larger scattering center\ncompression ratio in the wave-transmission case, leading to more\nefficient first order acceleration \\citep{VVS}, and, on the other\nhand, the faster momentum diffusion rate in the equipartition case, leading\nto more efficient stochastic acceleration.\n\nIn the cases where the stochastic acceleration was quick enough for\nparticles to reach the energy $\\gamma_0$ while still\nsufficiently close to the shock to be able to make their way\nback to the upstream region, owing to their prolonged mean free path, the\nfirst order mechanism was able to re-accelerate the returning\nhigh-energy particles to even higher energies. This led to the formation of\na new (quasi-)power law at energies $\\gamma \\gtrsim 10^7$ in some\ncases.\n\nOne notable feature of the present model is that in the case of a\nuniform injection process in the downstream region, power-law spectra\nwith high- and low-energy cut-offs are formed. Depending on the\nturbulence, particle energy spectra have power-law spectral indices of\n$0.5$--$1$ with lower and upper energy cut-offs at $\\gamma_1 \\approx\n10^1 \\simeq \\gamma_{\\rm injection}$ and $\\gamma_2 \\sim 10^6 =\n\\gamma_0$, respectively. These particles would produce synchrotron\nspectra with photon spectral indices $-0.5 < \\alpha < 0$ in the\nGHz--THz regime for various initial parameters. 
These properties are\nquite similar to those of flat-spectrum sources, for which typical\nspectra with $\\alpha \\gtrsim -0.5$ in the GHz region and flare spectra\nwith $\\alpha \\approx -0.2$ in the optically thin region of the\nspectrum are seen \\citep[e.g.,][and references\ntherein]{ValtaojaEtAl1988}. \n\nCombining the resulting energy distributions of both the particles\ninjected at the shock and the particles injected uniformly in the\ndownstream region can lead to very different results depending on the\nrelative amounts of particles injected in both cases. Because\ndifferent distributions produce various observable spectra, one might,\nin principle, be able to set constraints on different injection\nmechanisms, as well as on the physical size of the turbulent downstream\nregion of the shock, by comparing the evolution of the observed radiated\nspectra with predictions based on several composite\ndistributions. In particular, the maximum Lorentz factor of the electrons\ncould be used to set limits on $\\gamma_0$, while the\nlowest-energy parts of the non-thermal power-law spectra could give\nhints about the injection process.\n\nOur simulations are based on the test-particle approximation, \ni.e., the effects of the particles on the turbulent wave spectrum \nand on the shock structure are neglected. \nIncluding wave-particle interactions in a self-consistent\nmanner may modify the cross-helicities and wave intensities in the\ndownstream region and lead to notable effects on the accelerated\nparticle spectrum \\citep[e.g.,][]{Vainio2001}. One should note,\nhowever, that the wave-particle interactions compete with\nwave-wave interactions in the turbulent downstream region, which may\nmodify the turbulence parameters in a different manner. 
Including\nthese effects in our model is, however, beyond the scope of\nthe present simulations.\n\nTo conclude, the main results of this paper are: \n(i) Stochastic acceleration can be a very efficient mechanism in the\ndownstream region of parallel relativistic shocks, provided that the\nmagnetic field strength is large enough to make the\nAlfv\\'enic Mach number approach the critical Mach number ($M_{\\rm c} =\n\\sqrt{r}$) of the shock, i.e., to increase the downstream Alfv\\'en\nspeed enough to allow for the difference in the speeds of parallel\nand anti-parallel Alfv\\'en waves required for rapid stochastic\nacceleration.\n(ii) In the case of a continuous injection mechanism in the downstream\nregion, a power law with a very hard particle energy spectrum forms\nbetween the injection energy and $\\gamma_0$. The resulting\nparticle populations could produce synchrotron spectra very similar to\nthose of flat-spectrum sources.\n(iii) The interplay between the first and second order Fermi\nacceleration at relativistic shocks can produce a variety of spectral\nforms not limited to single power laws.\n\n\\acknowledgements\nJ.J.P.V. thanks the Finnish Cultural Foun\\-dation for financial support.\n\n\n\n\n\n\n\\section*{Appendix A. The Monte Carlo Code}\n\nIn this appendix we review the structure and implementation of our\nsimulation code. The code employs a kinetic test-particle approach; it\nfollows individual particles in a pre-defined and simplified shock\nenvironment, based on the assumptions and simplifications presented in\n\\S~\\ref{sec:model}. In short, the simulation works as follows: we\ntrace test particles under the guiding center approximation in a\nhomogeneous background magnetic field with superposed magnetic\nfluctuations. 
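In pseudo-Python, the per-particle loop just outlined might look as follows. This is a schematic sketch only: all names, step sizes, and the loss coefficient are hypothetical illustrations, not the actual code.

```python
import math
import random

class Particle:
    """Minimal particle state: position, Lorentz factor, pitch-angle cosine."""
    def __init__(self, x, gamma, mu):
        self.x, self.gamma, self.mu = x, gamma, mu

def scatter(p, nu_dt, rng):
    """Crude pitch-angle step standing in for diffusion with D_mumu ~ 1 - mu^2."""
    dmu = math.sqrt(max(0.0, 1.0 - p.mu ** 2) * nu_dt) * rng.choice((-1.0, 1.0))
    p.mu = max(-0.99, min(0.99, p.mu + dmu))  # keep away from mu = +-1

def synchrotron_loss(p, b_loss, dt):
    """Synchrotron-type loss, dgamma/dt ~ -b gamma^2 (b_loss is illustrative)."""
    p.gamma = max(1.0, p.gamma - b_loss * p.gamma ** 2 * dt)

def trace(p, v_flow, nu, b_loss, dt, x_escape, rng):
    """Move-scatter-lose loop until the downstream free-escape boundary."""
    while p.x < x_escape:
        v = math.sqrt(1.0 - 1.0 / p.gamma ** 2)  # particle speed in units of c
        p.x += (v_flow + v * p.mu) * dt          # advection plus parallel motion
        scatter(p, nu * dt, rng)
        synchrotron_loss(p, b_loss, dt)
    return p

rng = random.Random(1)
out = trace(Particle(0.0, 10.0, 0.0), v_flow=0.3, nu=1.0,
            b_loss=1.0e-6, dt=0.01, x_escape=5.0, rng=rng)
```

The real code described in this appendix scatters in the wave frames and transforms between frames; the sketch above only illustrates the overall move-scatter-lose structure of a time-step.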
The fluctuations are assumed to be either static\ndisturbances frozen into the plasma flow, or Alfv\\'en waves\npropagating along the large-scale magnetic field parallel and\nantiparallel to the flow (in this paper we apply only the Alfv\\'en\nwave case). In each time-step the particle is moved and scattered;\nscatterings are modeled as pitch-angle diffusion with an isotropic\ndiffusion coefficient $D_{\\mu\\mu}\\propto1-\\mu^{2}$, where\n$\\mu=\\cos\\theta$ is the pitch-angle cosine of the particle.\nThe energy losses due to synchrotron emission \n(see eq. (\\ref{eq:s_loss})) are also calculated at each simulation time-step.\n\nWhen the particle passes a free-escape boundary in the far downstream\nregion it is removed from the simulation. The injection of the\nparticles into the simulation is modeled using several methods. The\nfirst method involves a uniform and isotropic injection of particles\nwithin one mean free path of a thermal electron downstream of the\nshock front, simulating the already-energized supra-thermal particles\ncrossing and re-crossing the shock. This injection method allows us\nto concentrate on the non-thermal particles without having to spend\nmost of the computing time simulating the thermal body of the total\nparticle distribution. Other employed injection methods include the\ninjection of a cold distribution upstream, or the injection of\nparticles at thermal energies uniformly and isotropically in the\ndownstream region (the latter case is applied in \\S\n\\ref{sub:downstream_inj}).\n\nThe time unit of the simulation is chosen to be the inverse of the\nscattering frequency of an electron having energy $E_{\\Gamma_1} \\equiv\n\\Gamma_1 m_{\\rm e}c^2$, where $\\Gamma_{1}$ is the Lorentz factor of\nthe shock. The unit of velocity is chosen as $c$. With these choices\nthe unit of length equals the mean free path of an electron with a Lorentz\nfactor equal to that of the shock: $\\lambda_{\\rm e}(\\Gamma_1) = 1 \/ \\nu_{\\rm\ne}(\\Gamma_1)$. 
\nThe time-step is chosen to be a small fraction of the \nscattering time, $\\Delta t = a \\nu_{\\rm e}^{-1}(\\gamma)$, where \n$a \\lesssim 0.01$.\n\nFor a typical simulation $\\sim 10^5$ particles are injected. The\nnumber of high-energy particles is further increased by applying a\nsplitting technique; if the energy of a particle exceeds some\npre-defined value, the particle is replaced by two ``daughter\nparticles'' which are otherwise identical to their ``mother'', but\nhave their statistical weight halved. The number of these splitting\nboundaries is chosen so that for each simulation the balance between\nthe statistics and the simulation time is optimal.\n\n\n\n\\subsection*{A.1. The Shock and the Flow Profile}\n\nWe consider a shock wave propagating at speed $V_{1}$ into a cold\nambient medium, which is taken to be initially at rest in the\nobserver's frame. In the shock frame the shock is, by definition, at\nrest and the upstream plasma flows in with speed $V_{1}$. At the shock\nfront the plasma slows down, undergoes compression, and flows out at\ndownstream flow speed $V_2 < V_1$. The shocked downstream values of\nthe plasma parameters are determined by the compression ratio of the\nshock defined in equation (\\ref{eq:compression_ratio}).\n\nFor the dependence of the flow speed on the location $x$, we use the hyperbolic\ntangent form of \\cite*{SK1989}, \n\\begin{equation}\n V(x) = V_{1} - \\frac{V_{1}-V_{2}}{2} \\left[ 1 + \\tanh \\left(\n \\frac{x}{W\\lambda_{\\rm e}(\\Gamma_1)} \\right) \\right],\n \\label{eq:FlowProfile_original}\n\\end{equation}\nand fix the shock thickness to $W=0.01$, corresponding to a nearly \nstep-like shock. For thicker shocks the first order acceleration \nefficiency drops \nrapidly \\citep[e.g.][]{SK1989,VV2003b}, and the simple wave-transmission \ncalculations of \\citet{VVS} would also no longer be valid.\n\n\n\\subsection*{A.2. 
Magnetic Field}\n\nThe strength of the parallel magnetic field, $B_{0}$, in the\nsimulation is defined with respect to the ``critical strength'' for\nwhich the Alfv\\'en speed in the downstream region becomes equal to the\nlocal flow speed; for fields stronger than this the parallel shock\nbecomes non-evolutionary. The critical field, $B_{\\rm c}$, is\ncalculated from the equation\n\\begin{equation}\n B_{\\rm c}c \/ \\sqrt{4\\pi h_{2}n_{2}+B_{\\rm c}^{2}} = V_{2},\n\\end{equation}\nwhere we obtain the downstream particle density $n_{2}$ and the\nspecific enthalpy $h_{2}$ using the magnetohydrodynamical jump\nconditions \\cite[e.g.,][]{KD1999} and the upstream values $n_1 = 1\\\n{\\rm cm^{-3}}$ and $h_{1}=(\\rho_{1}+P_{1})\/n_{1}=mc^2$ with $m=m_{\\rm\ne}+m_{\\rm p}$ in a hydrogen plasma and $m=2m_{\\rm e}$ in a pair plasma\n(throughout this work the upstream plasma is assumed to be cold so\nthat its pressure may be neglected compared to its rest\nenergy). Applying the equations for conservation of both energy\n\\begin{equation}\n h_{1}\\Gamma_{1}V_{1}=h_{2}\\Gamma_{2}V_{2}\n\\end{equation}\nand mass\n\\begin{equation}\n \\Gamma_{1}V_{1}n_{1}=\\Gamma_{2}V_{2}n_{2},\n\\end{equation}\nand a bit of straightforward algebra, the critical field can be written as\n\\begin{equation}\n B_{\\rm c}=\\sqrt{4\\pi V_{1}V_{2}mn_{1}}\\,\\Gamma_{1}.\n\\end{equation}\nFor example, for a shock with compression ratio $r=3$ propagating into\na cold ambient medium of $n_{1}=1\\,{\\rm cm^{-3}}$ at proper speed\n$u_{1}=10\\,c$ the critical field is $B_{\\rm c}\\approx0.8\\,{\\rm G}$ for\na hydrogen plasma and $0.03$~G for a pair plasma.\n\n\n\\subsection*{A.3. Particle Transport and Scattering}\n\nDuring each Monte Carlo time-step scatterings off the magnetic\nfluctuations are simulated by making small random displacements of the\ntip of the particle's momentum vector, and the particle is transported\naccording to its parallel (to the flow) speed in the fixed shock\nframe. 
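The critical-field values quoted in the previous subsection can be reproduced with a few lines. In this sketch the compression ratio is interpreted as the shock-frame speed ratio $r = V_1/V_2$ (an assumption on our part, which recovers the quoted numbers):

```python
from math import sqrt, pi

M_E = 9.109e-28   # electron mass [g]
M_P = 1.673e-24   # proton mass [g]
C   = 2.998e10    # speed of light [cm/s]

def critical_field(u1_over_c, r, n1, m):
    """B_c = sqrt(4 pi V1 V2 m n1) * Gamma_1 in Gauss, for a cold upstream
    of number density n1 [cm^-3]; r is taken here to be V1 / V2."""
    gamma1 = sqrt(1.0 + u1_over_c ** 2)     # shock Lorentz factor
    v1 = u1_over_c / gamma1 * C             # upstream flow speed [cm/s]
    v2 = v1 / r                             # downstream flow speed [cm/s]
    return sqrt(4.0 * pi * v1 * v2 * m * n1) * gamma1

b_hydrogen = critical_field(10.0, 3.0, 1.0, M_E + M_P)  # ~0.8 G
b_pair     = critical_field(10.0, 3.0, 1.0, 2.0 * M_E)  # ~0.03 G
```

With $u_1 = 10\,c$, $r = 3$, and $n_1 = 1\,{\rm cm^{-3}}$ this yields $B_{\rm c} \approx 0.79$~G for a hydrogen plasma and $\approx 0.026$~G for a pair plasma, consistent with the values given above.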
In the case of stochastic acceleration, the particle is\nscattered twice in each Monte Carlo time-step. Instead of only one\nscattering off scattering centers frozen into the plasma, the\nscattering process is now divided into two parts: the particle is\nfirst scattered in the forward-wave frame, and then immediately again\nin the backward-wave frame. The scattering frequencies of both\nscatterings are adjusted so that the total statistical effect of the\nduplex scattering is consistent with the value of the momentary\nscattering mean free path. This is done by balancing the two\nscattering frequencies with weight values $\\omega_{\\pm}\\in[0,1]$, which\nare calculated from the ratio of the amplified waves (eq.\n(\\ref{eq:amplification_factor}) and Fig. \\ref{fig:amplification}), so\nthat they satisfy the relation $\\omega_{+}+\\omega_{-}=1$. %\nNeglecting, for simplicity of the simulations, the dependence of the\nscattering frequency on the particle's propagation direction, and\nassuming that the scattering is elastic in the rest frame of the\nscattering centers, we can use the quasilinear theory: the scattering\nfrequency of an electron as a function of its Lorentz factor (in the\nrest frame of the scattering centers, denoted here by prime) $\\gamma'$\nis\n\\begin{equation}\n \\nu(\\gamma') \\approx \\frac{\\pi}{2}\\frac{\\Omega_{0}}{\\gamma'}\n \\frac{k'I'(k')}{B^{2}} \\propto \n \\frac{\\left(\\gamma'^{2}-1\\right)^{(q-1)\/2}}{\\gamma'}.\n \\label{eq:ScatteringFrequency}\n\\end{equation}\nThis equation is applicable to particles scattering off waves with\nwavenumber larger than $k_{0}$; for particles with Lorentz factor\n$\\gamma' > \\gamma'_{0}$ the scattering frequency decreases as\n$\\nu\\propto\\gamma'^{-1}$, as at these wavenumbers $k'I(k')=B_0^2$ is\nassumed. 
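The piecewise energy dependence of the scattering frequency described here can be sketched directly; the normalisation $\nu_0$ below is illustrative:

```python
def scattering_frequency(gamma, q, gamma0=1.0e6, nu0=1.0):
    """Scattering frequency vs. Lorentz factor (scatterer frame), following
    eq. (ScatteringFrequency): nu ~ (gamma^2 - 1)^((q-1)/2) / gamma below
    the turnover gamma0, and nu ~ 1/gamma above it (nu0 is an arbitrary
    normalisation)."""
    if gamma <= gamma0:
        return nu0 * (gamma ** 2 - 1.0) ** ((q - 1.0) / 2.0) / gamma
    # Above gamma0 the resonant wavenumber falls below k0, where
    # k' I(k') = B0^2 is assumed, so nu decreases as 1/gamma.
    nu_turn = nu0 * (gamma0 ** 2 - 1.0) ** ((q - 1.0) / 2.0) / gamma0
    return nu_turn * gamma0 / gamma

# q = 2 gives nu -> const for relativistic particles (and hence an
# energy-independent mean free path), while q = 5/3 gives nu ~ gamma^(-1/3).
```

This reproduces the behavior discussed in \S \ref{sec:Results}: a constant acceleration rate for $q=2$, a decreasing rate for Kolmogorov turbulence, and a rapidly growing mean free path above $\gamma_0$.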
The scattering frequency as a function of the particle's Lorentz\nfactor is plotted in Figure \\ref{fig:scattering-frequency} for\nmagnetic field fluctuations with $q=2$ (leading to an energy-independent\nmean free path) and $q=5\/3$.\n\\begin{figure}[t]\n \\plotone{f10.eps}\n \\caption{Scattering frequency as a function of Lorentz factor. The\n turn-over energy (eq. (\\ref{eq:gamma_0}), see text for details)\n $\\gamma_{0}=10^{6}$ is marked with a vertical dotted line. The\n values of $\\nu(\\gamma')$ corresponding to $q=5\/3$ and $q=2$ are\n plotted with solid and dashed curves, respectively.}\n \\label{fig:scattering-frequency}\n\\end{figure}\n\nIn each scattering, the velocity vector of the particle is first\nLorentz-transformed into the frame of the scatterers, then the new\npitch-angle cosine (in the scatterer frame) is computed from the\nformula \\citep[e.g.,][]{EllisonEtAl1990}\n\\begin{equation}\n \\mu' \\gets \\mu'\\cos\\vartheta+\\sqrt{1-\\mu'^{2}}\\sin\\vartheta\\cos\\phi,\n\\end{equation}\nwhere the angle between the velocities before ($\\mathbf{v}_{0}$) and\nafter ($\\mathbf{v}$) the scattering, $\\vartheta\\in[0,\\pi)$, and the\nangle measured around the scattering axis, $\\phi\\in[0,2\\pi)$ (the angles\nand the geometry are sketched in\nFig. \\ref{fig:geom_of_the_scattering}), are picked via a random\ngenerator from exponential and uniform distributions, respectively\n\\citep[see][for details]{VainioEtAl2000}. The new velocity vector is\nthen Lorentz transformed back to the shock frame, and finally the\nparticle is moved according to its new parallel velocity.\n\\begin{figure}[t]\n \\plotone{f11.eps}\n \\caption{The geometry of the scattering event. The particle is initially\n moving at velocity $\\mathbf{v}_{0}$; the angle between the\n velocity and the magnetic field $\\mathbf{B}$ is $\\theta_{0}$. (Thus,\n $\\theta_0=\\arccos\\mu_0'$.) 
Scattering changes the direction of the\n particle by an angle $\\vartheta$ so that the new velocity\n $\\mathbf{v}$ has an angle $\\theta$ between it and the magnetic\n field. $\\phi$ is the azimuth angle of the new velocity measured\n from the plane defined by $\\mathbf{B}$ and $\\mathbf{v}_{0}$.}\n \\label{fig:geom_of_the_scattering}\n\\end{figure}\n\n\n\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter*{Abstract}\n\nThis thesis consists of two parts. In part I we consider a discrepancy in the derivation of the electromagnetic self force for a point charge. The self force is given by the Abraham-von Laue vector, which consists of the radiation reaction term proportional to the $4$-acceleration, and the Schott term proportional to the $4$-jerk. In the point charge framework the self force can be defined as an integral of the Li$\\text{\\'{e}}$nard-Wiechert stress 3-forms over a suitably defined worldtube. In order to define such a worldtube it is necessary to identify a map which associates a unique point along the worldline of the source with every field point off the worldline. One choice of map is the Dirac time, which gives rise to a spacelike displacement vector field and a Dirac tube with spacelike caps. Another choice is the retarded time, which gives rise to a null displacement vector field and a Bhabha tube with null caps. In previous calculations which use the Dirac time the integration yields the complete self force; however, in previous calculations which use the retarded time the integration produces only the radiation reaction term and the Schott term is absent. We show in this thesis that the Schott term may be obtained using a null displacement vector provided certain conditions are realized.\\\\[0.2cm]\n\n Part II comprises an investigation into a problem in accelerator physics. 
In a high energy accelerator the cross-section of the beampipe is not continuous and there exist geometric discontinuities\n such as collimators and cavities. When a relativistic bunch of particles passes such a discontinuity the field generated by a leading charge can interact with the wall and consequently affect the motion of trailing charges. The fields acting on the trailing charges are known as (geometric) wakefields. We model a bunch of particles as a one-dimensional continuum of point charges and by calculating the accumulated Li$\\text{\\'{e}}$nard-Wiechert fields we address the possibility of reducing wakefields at a collimator interface by altering the path of the\n beam prior to collimation. This approach is facilitated by the highly relativistic regime in which lepton accelerators operate, where the Coulomb field obtained from the Li$\\text{\\'{e}}$nard-Wiechert potential is highly collimated\n in the direction of motion. It will be seen that the possible reduction depends upon the ratio of the bunch length to the width of the collimator aperture as well as the relativistic factor and path of the beam. Given that the aperture of the collimator is generally on the order of millimetres we will see that for very short bunches, on the order of hundredths of a picosecond, a significant reduction is achieved.\n\n\n\\newpage\n\\chapter*{Author's declaration}\nI declare that the original ideas contained in this thesis are the result of my own work conducted in collaboration\nwith my supervisor Dr Jonathan Gratus. An article based on the ideas in Part I has been published in Journal of Mathematical Physics (JMP) \\cite{FerrisGratus11}. 
A letter describing the key results of Part II has been submitted to Europhysics Letters (EPL) \\cite{GratusFerris11}.\n\\newpage\n\\chapter*{Acknowledgements}\nI would like to express my very great appreciation to my supervisor Jonathan Gratus for his guidance\nthroughout my time at Lancaster and for his expertise and enthusiasm in our\n many discussions. I would like to thank other members of the Mathematical\nPhysics Group for their hospitality and willingness to discuss any ideas of interest. I would also like to thank the wider Department of Physics at Lancaster University for welcoming me and creating a friendly atmosphere\nin which to live and work.\n\n I would like to thank the Cockcroft Institute for the many interesting lectures and discussions on topics in accelerator physics, and I would especially like to thank STFC for providing the funding to make this PhD thesis possible.\n I would like to offer my special thanks to my examiners David Burton and Andy Wolski for their extremely useful comments. \n \n \n Finally, I would like to thank Uma Athale for her support and patience during the writing of this thesis, the many incredible people who have offered me their friendship during my time as a student, and my family, for their encouragement and continuous support in whatever I pursue. 
\n\n\\tableofcontents{\\markboth{\\slshape{Contents}}{}}\n\\newpage\n\\phantomsection\n\n\\listoffigures\n\\addcontentsline{toc}{chapter}{List of Figures and Tables}\n\\begingroup\n\\let\\clearpage\\relax\n\\listoftables\n\\endgroup\n\n\n\n\n\\end{frontmatter}\n\n\n\\renewcommand{\\headrulewidth}{0.0005cm}\n\\renewcommand{\\footrulewidth}{0.cm}\n\\fancyhfoffset[L,R]{\\marginparsep+\\marginparwidth}\n\\fancyhead[LE,RO]{\\slshape \\rightmark}\n\\fancyhead[RE, LO]{\\slshape \\leftmark}\n\\fancyfoot[C]{\\thepage}\n\\renewcommand{\\sectionmark}[1]{\\markboth{#1}{}}\n\\renewcommand{\\subsectionmark}[1]{\\markright{#1}{}}\n\n\n\\begin{mainmatter}\n\n\n\n\n\n\n\n\n\n\n\n\\chapter*{Guide to Notation}\n\\addcontentsline{toc}{chapter}{Guide to Notation}\nAll fields will be regarded as sections of tensor bundles over\nappropriate domains of Minkowski space $\\mathcal{M}$. Sections of the tangent bundle over $\\mathcal{M}$\nwill be denoted $\\Gamma \\textup{T} \\mathcal{M}$ while sections of the bundle of exterior\n$p$-forms will be denoted $\\Gamma \\Lambda^p \\mathcal{M}$. Given a single worldline $C$ in free space sections over the whole of spacetime\nexcluding the worldline will be written $\\Gamma \\textup{T} {(\\m\\backslashC)}$ and\n$\\Gamma\\Lambda^p {(\\m\\backslashC)} $. We use the SI unit convention. Appendix \\ref{app_dimensions} provides a brief summary of the dimensions of various mathematical objects.\n\n\n\n\n\n\\chapter{Introduction}\n\\label{maxlorentz}\nIn this chapter we introduce the defining characteristics of Minkowski space, namely the metric and the affine structure, and the fundamental equations of Maxwell-Lorentz electrodynamics. We use the term \\emph{Maxwell-Lorentz electrodynamics} to denote the microscopic vacuum Maxwell equations, first derived by Lorentz from the macroscopic Maxwell equations (see \\cite{deGroot72, Rohrlich65}) and often called the \\emph{Maxwell-Lorentz equations}, together with the Lorentz force equation. 
We introduce the general form of the electromagnetic stress $3$-forms and show that they give rise to a set of conservation laws. A brief introduction to the necessary mathematics can be found in Appendix \\ref{diffgeom}.\n\n\\section{Minkowski space}\n\\begin{definition}\n\\label{def_metric}\nMinkowski space is the pseudo-Euclidean space defined by the pair $(\\mathcal{M}, g)$, where $\\mathcal{M}$ is the four dimensional real vector space $\\mathbb{R} ^4$ and $g$ is the Minkowski metric.\nWith respect to a global Lorentzian coordinate basis $(y^0, y^1, y^2, y^3)$ on $\\mathcal{M}$ the Minkowski metric is defined by\n\\begin{align}\ng= - d\\xt\\otimes d\\xt+d\\xx \\otimes d\\xx +d\\xy \\otimes d\\xy +d\\xz \\otimes d\\xz.\n\\label{gmink_def}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n\\label{lem_za}\nGiven a new set of coordinates $(z^0, z^1, z^2, z^3)$ on $\\mathcal{M}$ we write $g$ in terms of the new basis using the transformations (\\ref{cov_trans}),\n\\begin{align}\ng=g_{ab}dy^a\\otimes dy^b&=g_{ab} \\frac{\\partial y^a}{\\partial z^c}\\frac{\\partial y^b}{\\partial z^d} d z^c \\otimes d z^d=g^{(z)}_{cd} d z^c \\otimes d z^d\n\\end{align}\nwhere\n\\begin{align}\ng^{(z)}_{cd}=g_{ab} \\frac{\\partial y^a}{\\partial z^c}\\frac{\\partial y^b}{\\partial z^d}.\n\\label{gmat}\n\\end{align}\n\\end{lemma}\n\\begin{lemma}\n\\label{lem_starone}\nThe coordinate basis $(y^0, y^1, y^2, y^3)$ on $\\mathcal{M}$ naturally gives rise to the basis $(d\\xt, d\\xx, d\\xy, d\\xz)$ of $1-$forms on $\\textup{T}^\\ast \\mathcal{M}$. This forms a $g$-orthonormal basis and from definition \\ref{def_starone} it follows that\n\\begin{align}\n\\star 1 = d\\xt\\wedge d\\xx \\wedge d\\xy \\wedge d\\xz\n\\end{align}\nis the volume form on $\\mathcal{M}$. 
In terms of a different coordinate basis $(z^0, z^1, z^2, z^3)$ it is given by\n\\begin{align}\n\\star 1= \\sqrt{|\\textup{det}(g^z)|}d z^0\\wedge d z^1 \\wedge dz^2 \\wedge d z^3\n\\label{starone_basis}\n\\end{align}\nwhere $g^z$ is the matrix of metric components $g^{(z)}_{ab}$ defined by (\\ref{gmat}).\n\\end{lemma}\n\n\\begin{definition}\n\\label{def_affine}\nLet $V$ be a vector space and let $M$ be a set of points. If the map\n\\begin{align}\nM \\times M \\longrightarrow V, \\qquad (x, y)\\longrightarrow x-y,\n\\end{align}\nexists and satisfies\n\\begin{align}\n1.&\\quad \\textup{For all}\\quad x \\in M,\\quad \\textup{for all}\\quad v \\in V \\notag\\\\\n&\\quad \\textup{there exists} \\quad y\\in M\\quad \\textup{such that} \\quad y-x= v,\\notag\\\\\n2.& \\quad \\textup{For all}\\quad x, y, z \\in M, \\quad (x-y) + (z-x) = z-y,\n\\end{align}\n then $M$ is an affine space.\n \\end{definition}\n\nFor any integer $n$ the space $\\mathbb{R} ^n$ is affine. It follows that Minkowski space is an affine space.\nIn some calculations it will be necessary to endow Minkowski space with an origin, thus transforming it into a vector space; however, the results of such calculations will not depend on the vector space structure but only the affine structure.\n\n\n\\section{Maxwell-Lorentz Equations}\n\n The equations which describe the interaction between matter and the electromagnetic field were first formulated by Maxwell in 1865 \\cite{Maxwell65}. Maxwell's equations form a continuum theory of electrodynamics due to their origins in macroscopic experiment. 
In this thesis we are interested in the interaction of point charges and their fields, therefore we need equations which are valid on the microscopic scale.\n\n\\begin{definition}\n\\label{max_lor}\nThe Maxwell-Lorentz equations, or microscopic vacuum Maxwell equations, are given by\n\\begin{align}\nd\\mathcal{F}&=0\\label{dF}\\\\\n\\epsilon_0 d\\star \\mathcal{F}&=\\mathcal{J}\\label{dstarF}\n\\end{align}\nwhere $\\mathcal{F}\\in\\Gamma\\Lambda^2\\mathcal{M}$ is the electromagnetic $2$-form, $\\mathcal{J}\\in\\Gamma\\Lambda^3\\mathcal{M}$ is the current $3$-form and $\\epsilon_0$ is the permittivity of free space.\nIf we introduce the $1$-form potential\n\\begin{align}\n\\mathcal{A} \\in\\Gamma\\Lambda^1\\mathcal{M} \\qquad \\textup{such that} \\qquad \\mathcal{F}=d\\mathcal{A},\n\\label{A_maxwell}\n\\end{align}\nthen (\\ref{dF}) and (\\ref{dstarF}) reduce to the single equation\n\\begin{align}\n\\qquad\\epsilon_0 d\\star d\\mathcal{A}&=\\mathcal{J},\n\\label{A_maxwelltwo}\n\\end{align}\nwhere (\\ref{dF}) is satisfied automatically because the double action of the exterior derivative is zero.\n\\end{definition}\n\n\\begin{definition}\n\\label{max_lor_dist}\nIn terms of distributional forms (see \\ref{dist_form_def})\n\\begin{align}\n\\mathcal{A}^D \\in\\Gamma_D\\Lambda^1\\mathcal{M}, \\qquad \\mathcal{F}^D=d\\mathcal{A}^D,\n\\label{dist_maxwell}\n\\end{align}\nMaxwell's second equation is given by\n\\begin{align}\n\\qquad\\epsilon_0 d\\star d\\mathcal{A}^D [\\varphi]&=\\mathcal{J}^D[\\varphi]=\\int_{\\mathcal{M}} \\varphi \\wedge \\mathcal{J},\n\\label{A_maxwell_dist}\n\\end{align}\nwhere $\\varphi\\in\\Gamma_0 \\Lambda^1\\mathcal{M}$ is any test $1$-form (see \\ref{def_test_form}).\n\\end{definition}\n\\subsection*{$3+1$ decomposition}\n\\begin{definition}\n\\label{def_split_one}\nGiven any velocity vector field $U \\in \\Gamma \\textup{T} \\mathcal{M}$ satisfying\\\\\n $g(U, U)=-1$, the electromagnetic $2$-form $\\mathcal{F}$ may be 
written\n\\begin{align}\n\\mathcal{F}=\\widetilde{\\e} \\wedge \\widetilde{U} + c\\mathbf{B},\n\\label{f_def}\n\\end{align}\nwhere $\\widetilde{\\e} \\in \\Gamma \\Lambda^1 \\mathcal{M}$ and $\\mathbf{B}\\in \\Gamma \\Lambda^2\\mathcal{M}$ are the electric $1$-form and magnetic $2$-form associated with $U$ and $\\mathcal{F}$, and satisfy\n\\begin{align}\ni_{U} \\widetilde{\\e}=i_{U} \\mathbf{B}=0.\n\\end{align}\nHere $\\quad\\widetilde{\\quad}\\quad$ is the metric dual operator defined by (\\ref{dual_def}) and $c$ is the speed of light in a vacuum.\n\\end{definition}\n\\begin{lemma}\nAccording to observers whose worldlines coincide with integral curves of $U$, the electric field $\\mathcal{E}\\in \\Gamma \\textup{T} \\mathcal{M}$ is given by\n\\begin{align}\n\\mathcal{E}=\\widetilde{i_{U}\\mathcal{F}}.\n\\label{eb_def}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThis follows trivially from (\\ref{f_def}).\n\\end{proof}\n\\begin{definition}\n\\label{def_gthree}\nWe may use the vector field $U$ to write the Minkowski metric $g$ in terms of a metric $\\underline{g}$ on the instantaneous 3-spaces\n\\begin{align}\ng=-\\widetilde{U} \\otimes \\widetilde{U} + \\underline{g}.\n\\end{align}\nLet $\\#$ be the Hodge map associated with the instantaneous 3-space such that for $\\alpha\\in \\Gamma\\Lambda^p \\mathcal{M}$\n \\begin{align}\n \\#: \\Gamma\\Lambda^p \\mathcal{M} \\rightarrow \\Gamma \\Lambda^{3-p}\\mathcal{M}, \\qquad \\alpha \\mapsto \\# \\alpha = (-1)^{p+1} i_U \\star \\alpha.\n \\label{starthree}\n \\end{align}\nThe Minkowski Hodge dual is then given by\n\\begin{align}\n\\star \\alpha=(-1)^p \\widetilde{U} \\wedge \\#\\alpha.\n\\label{starfour}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\nThe Hodge dual of $\\mathcal{F}$ is given by\n\\begin{align}\n\\star \\mathcal{F}= \\#\\widetilde{\\e}-c\\#\\mathbf{B}\\wedge\\widetilde{U}.\n\\label{star_f_three}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\\\\\n\\begin{align}\n\\star \\mathcal{F} =\\star( 
\\widetilde{\\e} \\wedge \\widetilde{U}) + c\\star\\mathbf{B},\n\\end{align}\nIt follows from (\\ref{starthree}) that if $\\alpha\\in \\Lambda^1\\mathcal{M}$ then $\\#\\alpha=i_{U}\\star\\alpha=\\star(\\alpha\\wedge\\widetilde{U})$. Thus $\\star( \\widetilde{\\e} \\wedge \\widetilde{U})= \\#\\widetilde{\\e}$. Similarly it follows from (\\ref{starfour}) that if $\\beta\\in \\Lambda^2\\mathcal{M}$ then $\\star\\beta=\\widetilde{U}\\wedge \\#\\beta$, thus $\\star \\mathbf{B}=\\widetilde{U}\\wedge \\#\\mathbf{B}$.\n\\end{proof}\n\\begin{lemma}\n\\label{mag_def}\nLet $\\widetilde{\\mathcal{B}}=-\\#\\mathbf{B}$, where $\\mathcal{B}\\in\\Gamma \\textup{T} \\mathcal{M}$ is the magnetic field; then according to observers whose worldlines coincide with integral curves of $U$ it is given by\n\\begin{align}\n\\mathcal{B}=\\frac{1}{c}\\widetilde{i_{U}\\star\\mathcal{F}}.\n\\label{bvecdef}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nConsider (\\ref{star_f_three}). Since $i_{U} \\#\\widetilde{\\e}=i_{U} \\#\\mathbf{B}=0$ it follows that $i_{U} \\star \\mathcal{F}=-c\\#\\mathbf{B}$.\n\\end{proof}\n\\begin{lemma}\nIn terms of $\\mathcal{E}$ and $\\mathcal{B}$ the 2-forms $\\mathcal{F}$ and $\\star \\mathcal{F}$ are given by\n\\begin{align}\n\\mathcal{F}=&\\widetilde{\\e} \\wedge \\widetilde{U} -c\\#\\widetilde{\\mathcal{B}}=\\widetilde{\\e} \\wedge \\widetilde{U} -c\\star(\\widetilde{\\mathcal{B}}\\wedge \\widetilde{U})\\label{f_plus},\\\\\n\\star \\mathcal{F}=&\\#\\widetilde{\\e}+c\\widetilde{\\mathcal{B}}\\wedge\\widetilde{U}= \\star(\\widetilde{\\e}\\wedge \\widetilde{U})+c\\widetilde{\\mathcal{B}}\\wedge\\widetilde{U}\\label{starf_plus}.\n\\end{align}\n \\end{lemma}\n\\begin{proof}\nSince $\\widetilde{\\mathcal{B}}=-\\#\\mathbf{B}$ and $\\#\\#\\mathbf{B}=\\mathbf{B}$ it follows that $\\#\\widetilde{\\mathcal{B}}=-\\mathbf{B}$. Substituting this into (\\ref{f_def}) yields (\\ref{f_plus}). 
Similarly substituting the first relation into\n(\\ref{star_f_three}) yields (\\ref{starf_plus}).\n\\end{proof}\n\\subsection*{The Lorentz Force}\n\\begin{definition}\n\\label{lorentz_def}\nLet $C:I\\subset\\mathbb{R} \\to\\mathcal{M}$ be the proper time parameterized inextendible worldline of a point particle with\nobserved rest mass $m$ and charge $q$. For $\\tau\\in I$\n\\begin{align}\n\\dot{\\c}=C_{\\ast}(d\/d \\tau),\\qquad \\ddot{\\c}=\\nabla_{\\dot{\\c}}\\dot{\\c},\\qquad \\textup{and} \\qquad \\dddot{\\c}=\\nabla_{\\dot{\\c}} \\nabla_{\\dot{\\c}} \\dot{\\c}\n\\end{align}\nare the velocity, acceleration and jerk of the particle, respectively. Here the pushforward map ${}_{\\ast}$ is defined by (\\ref{pushstart})-(\\ref{pushend}) and $\\nabla$ is the Levi-Civita connection (see \\ref{connection}).\nIn this introductory chapter and in Part II we assign the dimension of time to proper time $\\tau$ such that \n\\begin{align}\ng(\\dot{\\c},\\dot{\\c})=-c^2.\n\\label{gCdCd}\n\\end{align}\nHowever, the reader should note that in Part I we will find it convenient to assign the dimension of length to proper time so that\n\\begin{align}\ng(\\dot{\\c},\\dot{\\c})=-1.\n\\label{gCdCdtwo}\n\\end{align}\nFor further details about dimensions see appendix \\ref{app_dimensions}.\n\\end{definition}\n\\begin{lemma}\n\\label{orthog}\n\\begin{align}\n&g(\\dot{\\c}, \\ddot{\\c})=0,\\label{g_Cd_Cdd}\\\\\n\\qquad \\textup{and} \\qquad\n&g(\\dot{\\c}, \\dddot{\\c})=-g(\\ddot{\\c}, \\ddot{\\c}).\\label{g_Cd_Cddd}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nEquation (\\ref{g_Cd_Cdd}) follows by differentiating (\\ref{gCdCd}) with respect to $\\tau$. 
Similarly, equation (\\ref{g_Cd_Cddd}) follows by differentiating (\\ref{g_Cd_Cdd}).\n\\end{proof}\n\\begin{definition}\n\\label{lorentz_deftwo}\nThe force on a point particle with worldline $C(\\tau)$ due to an external field $\\flw_{\\textup{ext}}\\in\\Gamma\\Lambda^2\\mathcal{M}$ is given by the Lorentz force $f_{\\textup{L}}$, where\n\\begin{align}\nf_{\\textup{L}}\\in\\Gamma \\textup{T} \\mathcal{M},\\qquad f_{\\textup{L}}=\\frac{q}{c} \\widetilde{i_{\\dot{\\c}}\\flw_{\\textup{ext}}}.\n\\end{align}\n\\end{definition}\nIn 1916 Lorentz writes \\cite{Lorentz16}\n\\begin{quote}\nLike our former equations [Maxwell's equations], it is got by generalizing the results of electromagnetic experiments\n\\end{quote}\n\n\\section{Conservation Laws}\n\\label{chap_stress}\n\n\\begin{definition}\n\\label{kill_def}\nA vector field $V$ is a Killing field if it satisfies\n\\begin{align}\n\\mathcal{L}_{V} g=0.\n\\label{Killing_def}\n\\end{align}\nIn terms of a coordinate basis $\\{y^i\\}$ the metric may be written $g=g_{a b}(y^i) dy^a \\otimes dy^b$, thus for the coordinate vector field $V=\\partial_{\\textup{K}}$ the left hand side of (\\ref{Killing_def}) yields\n\\begin{align}\n\\mathcal{L}_{\\partial_{\\textup{K}}} g= \\frac{\\partial g_{a b}}{\\partial y^{\\textup{K}}} dy^a \\otimes dy^b,\n\\end{align}\nwhere $\\partial_\\textup{K}=\\tfrac{\\partial}{\\partial y^\\textup{K}}$. In Minkowski space $g_{0 0}=-1$ and $g_{a b} =\\delta_{ab}$ for $a, b=1, 2, 3$. Thus for the four translational vectors $\\frac{\\partial}{\\partial \\xt}, \\frac{\\partial}{\\partial \\xx}, \\frac{\\partial}{\\partial \\xy}, \\frac{\\partial}{\\partial \\xz}$ (\\ref{Killing_def}) is trivially satisfied. 
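The Killing condition (\ref{Killing_def}) can be checked numerically for the constant Minkowski metric using the component formula $(\mathcal{L}_V g)_{ab} = V^c\partial_c g_{ab} + g_{cb}\partial_a V^c + g_{ac}\partial_b V^c$. The sketch below is illustrative only; the example vector fields (a translation, a boost, and a non-isometric stretch) are chosen here and are not taken from the text.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-,+,+,+)

def lie_derivative_metric(V, y, h=1e-6):
    """(L_V g)_ab for the constant metric ETA: g_cb d_a V^c + g_ac d_b V^c.
    Partial derivatives of V are taken by central differences at the point y."""
    dV = np.zeros((4, 4))  # dV[a, c] = partial_a V^c
    for a in range(4):
        e = np.zeros(4)
        e[a] = h
        dV[a] = (V(y + e) - V(y - e)) / (2.0 * h)
    m = dV @ ETA          # m[a, b] = d_a V^c g_cb
    return m + m.T        # symmetrizing adds the g_ac d_b V^c term
```

A translation and a boost give a vanishing Lie derivative (both are among the 10 Killing fields of Minkowski space), while a stretch does not.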
In fact there are 10 Killing vector fields on Minkowski space.\n\nAnother property of Killing fields that we shall use is that for any Killing vector $V$\n\\begin{align}\n\\mathcal{L}_{V} \\star =\\star \\mathcal{L}_{V}.\n\\label{kill_star}\n\\end{align}\n\n\\end{definition}\n\\begin{definition}\n\\label{stressk_def}\nThe electromagnetic stress $3$-forms $\\mathcal{S}_{\\textup{K}} \\in \\Gamma \\Lambda^3 \\mathcal{M}$ are given by\n\\begin{align}\n\\mathcal{S}_{\\textup{K}}=&\\frac{\\epsilon_0}{2 c}\\big(i_{\\partial_{\\textup{K}} }\\mathcal{F}\\wedge\\star\\mathcal{F}-i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge\\mathcal{F}\\big)\n\\label{stress_def}\n\\end{align}\nwhere $\\partial_\\textup{K}=\\tfrac{\\partial}{\\partial y^\\textup{K}}$ are the four translational Killing vectors. These 3-forms can be obtained from the Lagrangian density for the electromagnetic field using Noether's theorem; see \\cite{Obukhov} for a detailed exposition.\n\\label{stresst_lem}\nThe stress forms are\nrelated to the symmetric stress-energy-momentum tensor $\\displaystyle{\\mathcal{T} \\in \\Gamma \\textstyle{\\bigotimes}^{[\\mathds{V}, \\mathds{V}]} \\mathcal{M}}$ by\n\\begin{align}\n\\mathcal{T}^{a \\textup{K}}= i_{\\frac{\\partial}{\\partial y^a}} \\star \\mathcal{S}_{\\textup{K}}, \\qquad \\mathcal{S}_{\\textup{K}}=\\star\\Big(\\big(\\mathcal{T}(d y^{\\textup{K}}, -)\\big)\\widetilde{\\,\\,\\,}\\Big)\n\\label{stresst_def}\n\\end{align}\nwhere $\\displaystyle{\\mathcal{T}=\\mathcal{T}^{a b} \\frac{\\partial}{\\partial y^a}\\otimes \\frac{\\partial}{\\partial y^b}}$.\n\\end{definition}\n\\begin{lemma}\nThe stress forms satisfy\n\\begin{align}\nd \\mathcal{S}_{\\textup{K}}=-\\frac{1}{c}i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge \\mathcal{J},\n\\label{dtauk_def}\n\\end{align}\nand thus for any source free region $N\\subset\\mathcal{M}$\n\\begin{align}\nd 
\\mathcal{S}_{\\textup{K}}=0.\n\\label{dtauk_0}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\\begin{align}\nd\\mathcal{S}_{\\textup{K}}=&\\frac{\\epsilon_0}{2c}d\\big(i_{\\partial_{\\textup{K}} }\\mathcal{F}\\wedge\\star\\mathcal{F}-i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge\\mathcal{F}\\big)\\notag\\\\\n=&\\frac{\\epsilon_0}{2c}\\big(d i_{\\partial_{\\textup{K}} }\\mathcal{F}\\wedge\\star\\mathcal{F}-i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}-d i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge\\mathcal{F}+i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge d\\mathcal{F}\\big)\\notag\\\\\n=&\\frac{\\epsilon_0}{2c}\\big(d i_{\\partial_{\\textup{K}} }\\mathcal{F}\\wedge\\star\\mathcal{F}-d i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge\\mathcal{F}\\big)-\\frac{\\epsilon_0}{2c}i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}.\\label{dtau_k}\n\\end{align}\nFrom (\\ref{dF}) and (\\ref{cartan}) it follows that\n\\begin{align}\n\\mathcal{L}_{\\partial_{\\textup{K}} }\\mathcal{F}= di_{\\partial_{\\textup{K}} }\\mathcal{F}.\\label{lief}\n\\end{align}\nUsing (\\ref{lief}), (\\ref{hodgeab}) and (\\ref{kill_star}) respectively yields\n\\begin{align}\nd i_{\\partial_{\\textup{K}} }\\mathcal{F}\\wedge\\star\\mathcal{F}= \\mathcal{F} \\wedge \\star d i_{\\partial_{\\textup{K}} }\\mathcal{F} = \\mathcal{F} \\wedge \\star \\mathcal{L}_{\\partial_{\\textup{K}}}\\mathcal{F} =\\mathcal{F} \\wedge \\mathcal{L}_{\\partial_{\\textup{K}}} \\star\\mathcal{F} = \\mathcal{F} \\wedge d i_{\\partial_{\\textup{K}} }\\star \\mathcal{F}+\\mathcal{F} \\wedge i_{\\partial_{\\textup{K}}} d \\star \\mathcal{F}\n\\label{dikf}\n\\end{align}\nSubstituting (\\ref{dikf}) into (\\ref{dtau_k}) yields\n\\begin{align}\nd\\mathcal{S}_{\\textup{K}}= &\\frac{\\epsilon_0}{2c}\\big(\\mathcal{F} \\wedge d i_{\\partial_{\\textup{K}} }\\star \\mathcal{F}+\\mathcal{F} \\wedge i_{\\partial_{\\textup{K}}} d \\star \\mathcal{F}-d 
i_{\\partial_{\\textup{K}} }\\star\\mathcal{F}\\wedge\\mathcal{F}\\big)-\\frac{\\epsilon_0}{2c}i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}\\notag\\\\\n=&\\frac{\\epsilon_0}{2c}\\big(\\mathcal{F} \\wedge i_{\\partial_{\\textup{K}}} d \\star \\mathcal{F}-i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}\\big)\\label{eqstress}\n\\end{align}\nSince $\\mathcal{F}$ is a 2-form and $d\\star\\mathcal{F}$ is a 3-form it follows that\n\\begin{align}\ni_{\\partial_{\\textup{K}} }(\\mathcal{F}\\wedge d\\star \\mathcal{F})= i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}+\\mathcal{F} \\wedge i_{\\partial_{\\textup{K}}} d \\star \\mathcal{F}=0.\n\\end{align}\nThus substituting $\\mathcal{F} \\wedge i_{\\partial_{\\textup{K}}} d \\star \\mathcal{F}=-i_{\\partial_{\\textup{K}} } \\mathcal{F}\\wedge d \\star\\mathcal{F}$ and (\\ref{dstarF}) into (\\ref{eqstress}) yields result.\n\\end{proof}\n\\begin{lemma}\nFor any source free region $N\\subset\\mathcal{M}$\n\\begin{align}\n\\int_{\\partial \\mathcal{N}} \\mathcal{S}_{\\textup{K}} = \\int_{\\mathcal{N}}d \\mathcal{S}_{\\textup{K}} =0.\n\\label{Stokes}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFollows trivially from (\\ref{dtauk_0}) and Stokes' theorem (\\ref{stokes_def}).\n\\end{proof}\n\\begin{lemma}\nIf $U$ is a timelike Killing vector then applying the 3+1 decomposition yields\n\\begin{align}\n\\mathcal{S}_{U}=\\epsilon_0\\widetilde{\\e}\\wedge\\widetilde{\\bvec}\\wedge \\widetilde{U} +\\frac{\\epsilon_0}{2c}(\\widetilde{\\e}\\wedge\\#\\widetilde{\\e}+c^2\\widetilde{\\bvec}\\wedge \\#\\widetilde{\\bvec}),\n\\end{align}\nwhere $\\epsilon_0\\widetilde{\\e}\\wedge\\widetilde{\\bvec}$ is the Poynting $2$-form, and $\\tfrac{\\epsilon_0}{2c}(\\widetilde{\\e}\\wedge\\#\\widetilde{\\e}+c^2\\widetilde{\\bvec}\\wedge \\#\\widetilde{\\bvec})$ the energy density 3-form.\n\\end{lemma}\n\\begin{proof}\\\\\nUsing definition 
\\ref{stressk_def}\n\\begin{align}\n\\mathcal{S}_{U}=&\\frac{\\epsilon_0}{2c}\\big(i_{U}\\mathcal{F}\\wedge\\star\\mathcal{F}-i_{U}\\star\\mathcal{F}\\wedge\\mathcal{F}\\big)\\notag\n\\end{align}\nSubstituting (\\ref{starf_plus}) and (\\ref{f_plus}) and using the relations (\\ref{eb_def}) and (\\ref{bvecdef}) yields\n\\begin{align}\n\\mathcal{S}_{U}=&\\frac{\\epsilon_0}{2c}\\big(\\widetilde{\\e}\\wedge(\\#\\widetilde{\\e}+c\\widetilde{\\bvec}\\wedge\\widetilde{U})-c\\widetilde{\\bvec}\\wedge(\\widetilde{\\e}\\wedge\\widetilde{U}-c\\# \\widetilde{\\bvec})\\big),\\notag\\\\\n=&\\frac{\\epsilon_0}{2c}\\big(\\widetilde{\\e}\\wedge\\#\\widetilde{\\e} +c^2\\widetilde{\\bvec}\\wedge\\#\\widetilde{\\bvec}\\big) +\\epsilon_0\\widetilde{\\e}\\wedge\\widetilde{\\bvec}\\wedge\\widetilde{U}.\\notag\n\\end{align}\n\n\n\\end{proof}\n\n\\section{The source $\\mathcal{J}$ for a point charge}\nWe now consider the particular form of the current $3$-form $\\mathcal{J} \\in \\Lambda^3\\mathcal{M}$ for a point charge. We use the notation $\\mathrm{J}=\\mathcal{J}_{\\textup{point charge}}$ in order to emphasize that $\\mathrm{J}$ is a particular choice for $\\mathcal{J}$. 
The source is located only on the worldline of the particle; therefore we expect the source distribution $\\mathrm{J}^D\\in \\Gamma_D \\Lambda^3 \\mathcal{M}$ to have the form of a Dirac delta distribution.\n\\begin{definition}\n\\label{def_jvec}\nGiven the four $0$-form distributions $j^a \\in \\Gamma_D \\Lambda^0 \\mathcal{M}$, where for $x\\in \\mathcal{M}$\n\\begin{align}\nj^a(x) = q \\int_\\tau \\dot{\\c}^a(\\tau) \\delta^{(4)}(x - C(\\tau))d \\tau,\n\\label{ja_def}\n\\end{align}\nwe define the distributional current vector field by\n\\begin{align}\n j= j^a(x) \\frac{\\partial}{\\partial y^a}.\n\\end{align}\nThe distributions $j^a(x)$ are non-zero only when $x=C(\\tau)$.\nThe $3$-form $\\mathrm{J}\\in \\Lambda^3 \\mathcal{M}$ is given by\n\\begin{align}\n \\mathrm{J}=\\star\\widetilde{j}.\n \\label{J_def}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n\\label{jd_def}\nThe current $3$-form distribution $\\mathrm{J}^D\\in\\Gamma_D \\Lambda^3 \\mathcal{M}$ is given by\n\\begin{align}\n\\mathrm{J}^{D}[\\varphi] &=q\\int_I C^\\ast \\varphi\n\\end{align}\nfor any test $1$-form $\\varphi\\in \\Gamma_0 \\Lambda^1 \\mathcal{M}$.\n\\end{lemma}\n\\begin{proof}\\\\\nFrom (\\ref{J_def}) the distribution $\\mathrm{J}^D$ is given by\n\\begin{align*}\n\\mathrm{J}^{D} [\\varphi] &=\\int_\\mathcal{M} \\star \\widetilde{j}\\wedge \\varphi ,\\\\\n&=\\int_\\mathcal{M} i_j \\star 1\\wedge \\varphi ,\\\\\n&= \\int_\\mathcal{M} j^a i_{\\frac{\\partial}{\\partial y^a}} \\varphi \\star 1,\\\\\n&=\\int_\\mathcal{M} j^a \\varphi_a \\star 1.\n\\end{align*}\nSubstitution of (\\ref{ja_def}) yields\n\\begin{align*}\n\\mathrm{J}^{D} [\\varphi] &= q\\int_\\mathcal{M} \\int_\\tau \\dot{\\c}^a (\\tau) \\delta^{(4)} (x - C(\\tau))d\\tau \\varphi_a \\star 1\\\\\n&= q \\int_\\tau \\dot{\\c}^a (\\tau)\\varphi_a (C(\\tau))d\\tau \\\\\n&=q\\int \\varphi_a (C(\\tau)) \\frac{d C^a}{d \\tau} d\\tau\\\\\n&= q\\int \\varphi_a (C(\\tau))d C^a\\\\\n&=q \\int \\varphi_a (C(\\tau))C^\\ast(d 
y^a)\\\\\n&=q\\int_I C^\\ast \\varphi\n\\end{align*}\nwhere $ C^\\ast(y^a) = y^a \\circ C = C^a$.\n\\end{proof}\n\n\n\\section{Worldline geometry}\n\\label{worldline_geom}\n\n Given the proper time parameterized inextendible worldline\n\n\\begin{align}\nC &: I \\subset \\mathbb{R} \\rightarrow \\mathcal{M}, \\quad \\tau \\mapsto C(\\tau),\n\\label{C_def}\n\\end{align}\nwe require a way to locally map each point $x\\in {(\\m\\backslashC)}$ to a unique point $C(\\tau'(x))$ along the worldline.\nConsider the region $N=\\widetilde{N}\\backslash C$ where $\\widetilde{N} \\subset \\mathcal{M}$ is a local neighborhood of the worldline.\nThe affine structure of $\\mathcal{M}$ permits the construction of a unique displacement vector $Z|_{x}$ defined as the difference between the two points (see figure \\ref{vecz}),\n \\begin{align}\n Z|_{x}=x-C(\\tau'(x)).\n \\end{align}\nNote that the definition of $Z$ only requires the affine structure of $\\mathcal{M}$. It does not require $\\mathcal{M}$ to be converted into a vector space by assigning an origin.\n \\newpage\n \\begin{figure}\n\\setlength{\\unitlength}{1cm}\n\\centerline{\n\\begin{picture}(10, 10)\n\\put(0, 0){\\includegraphics[ width=10\\unitlength]{radii2.pdf}}\n\\put(0.4, 0.7){$C(\\tau)$}\n\\put(7.6, 8.7){\\begin{turn}{45}null\\end{turn}}\n\\put(9.4, 8.3){$x$}\n\\put(3, 5){$\\dot{\\c}|_{\\tau'}$}\n\\put(4, 4){$\\tau'(x)$}\n\\put(10, 8.4){$V'|_{x}$}\n\\put(7, 7){$Z|_{x}$}\n\\put(9.5, 7.3){$Z_{||}$}\n\\put(7, 5.4){$Z_{\\perp}$}\n\\put(-1.7, 2.9){\\begin{turn}{20}plane perpendicular \\end{turn}}\n\\put(-1.5, 2.4){\\begin{turn}{20}to $\\dot{\\c}|_{\\tau'}$\\end{turn}}\n\\end{picture}\n}\n\\caption{Displacement vector $Z|_x$}\n\\label{vecz}\n\\end{figure}\n\nWe may construct a local vector field $Z \\in \\Gamma \\textup{T} N$ such that\n \\begin{align}\n Z=Z^a \\frac{\\partial}{\\partial y^a},\\qquad \\text{where}\\qquad Z^a= x^a-C^a(\\tau'(x)),\n \\label{zvec_def}\n \\end{align}\n for all $x\\in N$. 
Here $y^a (x)=x^a$.\n\n Since $Z$ is defined for every $x \\in N$, the only requirement needed to define $Z$ completely is to fix $\\tau'(x)$.\n We are free to choose $\\tau'(x)$ in any way we like; however, particular choices are beneficial for certain problems. In one choice the vector $Z$ lies in the plane perpendicular to $\\dot{\\c}(\\tau'(x))$ (see figure \\ref{vecY}). In this case we use the notation $\\tau'=\\tau_D$, where $\\tau_D$ is the \\emph{Dirac time}. The Dirac time associates each point $x \\in N$ with the time $\\tau_D(x)$ given by the solution to\n \\begin{align}\n g\\big(x-C(\\tau_D(x)), \\dot{\\c}(\\tau_D(x))\\big)=0.\n \\end{align}\n In this case $Z|_x=Z_{\\perp}=x-C(\\tau_D(x))$. We use the special notation\n \\begin{align}\n Y=x-C(\\tau_D(x)).\n \\end{align}\n\n\n \\begin{figure}\n\\setlength{\\unitlength}{1cm}\n\\centerline{\n\\begin{picture}(10, 10)\n\\put(0, 0){\\includegraphics[ width=10\\unitlength]{radiiY.pdf}}\n\\put(0.4, 0.7){$C(\\tau)$}\n\\put(7.6, 8.7){\\begin{turn}{45}null\\end{turn}}\n\\put(9, 6){$x$}\n\\put(3, 5){$\\dot{\\c}|_{\\tau_D}$}\n\\put(4, 4){$\\tau_D(x)$}\n\\put(9.5, 6.6){$V_D|_{x}$}\n\\put(6.5, 5){$Y|_x$}\n\\put(-1.7, 2.9){\\begin{turn}{20}plane perpendicular \\end{turn}}\n\\put(-1.5, 2.4){\\begin{turn}{20}to $\\dot{\\c}|_{\\tau_D}$\\end{turn}}\n\\put(3.5, 2){Let the norm $||.||$ be defined by \n$||Z||=\\sqrt{g(Z, Z)^{2}}$.}\n\\put(3.5, 1){Then $||Y_{||}||=0$, and $||Y||=||Y_{\\perp}||=g(Y, Y)$. }\n\\end{picture}\n}\n\\caption{Displacement vector $Y|_x$. }\n\\label{vecY}\n\\end{figure}\nThe map $\\tau_D:\\mathcal{M} \\rightarrow C$ is not unique for every $x\\in \\mathcal{M}$; for example, in figure \\ref{nontaud} we see that a single point can be mapped to multiple points along the worldline. However, for a sufficiently small neighborhood $N \\subset {(\\m\\backslashC)}$ uniqueness can be ensured. 
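For a concrete worldline the Dirac time can be found numerically as a root of $f(\tau)=g\big(x-C(\tau),\dot{C}(\tau)\big)$. The sketch below is purely illustrative (it assumes $c=1$ and a uniformly accelerated worldline chosen for the example, not one drawn from the text):

```python
import math

def minkowski(u, v):
    """g(u, v) with signature (-,+,+,+), c = 1."""
    return -u[0]*v[0] + u[1]*v[1] + u[2]*v[2] + u[3]*v[3]

# Example worldline: hyperbolic motion with proper acceleration A,
# C(tau) = (sinh(A tau)/A, cosh(A tau)/A, 0, 0), so g(Cdot, Cdot) = -1.
A = 1.0
C    = lambda t: (math.sinh(A*t)/A, math.cosh(A*t)/A, 0.0, 0.0)
Cdot = lambda t: (math.cosh(A*t), math.sinh(A*t), 0.0, 0.0)

def dirac_time(x, lo=-10.0, hi=10.0, tol=1e-12):
    """Bisect f(tau) = g(x - C(tau), Cdot(tau)) = 0 on [lo, hi]."""
    f = lambda t: minkowski([xi - ci for xi, ci in zip(x, C(t))], Cdot(t))
    assert f(lo) * f(hi) < 0, "bracket does not straddle a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The returned $\tau_D$ gives a displacement $Y=x-C(\tau_D)$ that is $g$-orthogonal to $\dot{C}(\tau_D)$ and spacelike, as in figure \ref{vecY}; for field points far from the worldline the bracket must be chosen with the non-uniqueness of figure \ref{nontaud} in mind.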
In appendix \\ref{dirac_apend} we explore this geometry further.\n\n \\begin{figure}\n\\setlength{\\unitlength}{1cm}\n\\centerline{\n\\begin{picture}(10, 10)\n\\put(0, 0){\\includegraphics[ width=10\\unitlength]{radiidirac.pdf}}\n\\put(2, 0.7){$C(\\tau)$}\n\\put(8, 9.3){\\begin{turn}{45}null\\end{turn}}\n\\put(8, 8.5){\\begin{turn}{45}null\\end{turn}}\n\\put(9, 6){$x$}\n\\put(2.4, 3.8){$\\dot{\\c}|_{\\tau_D}$}\n\\put(3.4, 3.3){$\\tau_D(x)$}\n\\put(3, 5.7){$\\dot{\\c}|_{\\tau_D'}$}\n\\put(4.2, 4.8){$\\tau_D'(x)$}\n\\put(6.5, 4){$Y|_x$}\n\\put(6.5, 5.7){$Y'|_x$}\n\\put(-1.7, 2){\\begin{turn}{20}plane perpendicular \\end{turn}}\n\\put(-1.5, 1.5){\\begin{turn}{20}to $\\dot{\\c}|_{\\tau_D}$\\end{turn}}\n\\put(-1.7, 4.7){\\begin{turn}{6}plane perpendicular \\end{turn}}\n\\put(-1.5, 4.2){\\begin{turn}{6}to $\\dot{\\c}|_{\\tau_D'}$\\end{turn}}\n\\end{picture}\n}\n\\caption{Globally the map $\\tau_D$ is non-unique.}\n\\label{nontaud}\n\\end{figure}\n\nIn another choice the vector $Z$ lies on the null cone (see figure \\ref{vecX}). In this case we use the notation $\\tau'=\\tau_r$, where $\\tau_r$ is the \\emph{retarded time}. The retarded time associates a point $x \\in {(\\m\\backslashC)}$ with the time $\\tau_r(x)$ given by the solution to\n \\begin{align}\n g\\big(x-C(\\tau_r(x)), x-C(\\tau_r(x))\\big)=0, \\qquad x^0>C^0(\\tau_r(x))\n \\label{nullx_def}\n \\end{align}\n In this case $Z|_x=x-C(\\tau_r(x))$. 
We use the special notation\n \\begin{align}\n X=x-C(\\tau_r(x)).\n \\label{xnull_def}\n \\end{align}\n\n \\begin{figure}\n\\setlength{\\unitlength}{1cm}\n\\centerline{\n\\begin{picture}(10, 10)\n\\put(0, 0){\\includegraphics[ width=10\\unitlength]{radiiX.pdf}}\n\\put(0.4, 0.7){$C(\\tau)$}\n\\put(8.9, 9.6){\\begin{turn}{45}null\\end{turn}}\n\\put(9, 8.8){$x$}\n\\put(3, 5){$\\dot{\\c}|_{\\tau_r}$}\n\\put(4, 4){$\\tau_r(x)$}\n\\put(7.8, 9.4){$V|_{x}$}\n\\put(6, 7.2){$X|_{x}$}\n\\put(8, 6.5){$X_{||}$}\n\\put(6.5, 5){$X_{\\perp}$}\n\\put(-1.7, 2.9){\\begin{turn}{20}plane perpendicular \\end{turn}}\n\\put(-1.5, 2.4){\\begin{turn}{20}to $\\dot{\\c}|_{\\tau_r}$\\end{turn}}\n\\put(3.5, 2){Let the norm $||.||$ be defined by \n$||Z||=\\sqrt{g(Z, Z)^{2}}$.}\n\\put(3.5, 1){Then $||X||=0$ and $||X_{||}||=||X_{\\perp}||=g(X, \\dot{\\c}|_{\\tau_r(x)})^2$. }\n\\end{picture}\n}\n\\caption{Displacement vector $X|_x$. }\n\\label{vecX}\n\\end{figure}\nThere is another possible choice in which $Z$ is a vector in the advanced null cone at $x$ (see figure \\ref{vecX}). In this case we use the notation $\\tau'=\\tau_a$, where $\\tau_a$ is the \\emph{advanced time}. 
The advanced time associates a point $x \\in {(\\m\\backslashC)}$ with the time $\\tau_a(x)$ given by the solution to\n \\begin{align}\n g\\big(x-C(\\tau_a(x)), x-C(\\tau_a(x))\\big)=0, \\qquad x^0<C^0(\\tau_a(x)).\n \\end{align}\n\nThe pole $z=\\alpha$ satisfies $|\\alpha|>1$; therefore it lies outside the contour.\nThe residue of $I$ at $z=\\beta$ is given by\n\\begin{align}\nres= \\frac{4 i \\sin(\\theta)(\\alpha + \\beta)}{(-\\beta+\\alpha)^3}.\n\\end{align}\nTherefore by the residue theorem (see for example \\cite{Howie}),\n\\begin{align}\nI=2\\pi c^2 \\int_0^\\pi \\frac{\\sin(\\theta) (\\dot{\\c}^3\\cos(\\theta) -\\dot{\\c}^0)}{((\\dot{\\c}^3\\cos(\\theta)-\\dot{\\c}^0)^2 +(\\dot{\\c}^{2^2} +\\dot{\\c}^{1^2})\\sin^2(\\theta))^{\\frac{3}{2}}}d\\theta\n\\end{align}\nand integration using a computer gives\n\\begin{align}\nI=\\frac{4 \\pi c^2}{-\\dot{\\c}^{0^2} + \\dot{\\c}^{1^2} + \\dot{\\c}^{2^2} + \\dot{\\c}^{3^2}} =4\\pi\\frac{c^2}{c^2}\n\\end{align}\nhence\n\\begin{align}\n\\int_{\\partial \\mathbb{B}} \\varphi \\wedge \\star \\flw_{\\textup{C}}=\\frac{q}{\\epsilon_0} \\int_{\\tau=-\\infty}^{\\infty}Y_\\tau^0 (\\tau)d\\tau=\\frac{q}{\\epsilon_0} \\int_{\\tau=-\\infty}^{\\infty}\\hat{\\varphi}_\\tau (\\tau)d\\tau= \\frac{q}{\\epsilon_0} \\int_{I} C^\\ast \\phi,\n\\end{align}\nand comparison with lemma \\ref{jd_def} yields\n\\begin{align}\n\\epsilon_0 d \\ast \\mathrm{F}^D [\\phi]= q \\int_{I} C^\\ast \\phi=\\mathrm{J}^D[\\phi]\n\\end{align}\n\\end{proof}\n\n\n\n\n\n\n\\newpage\n\n\\addcontentsline{toc}{chapter}{Part I - The self force and the Schott term discrepancy}\n\n\\vspace{3cm}\n\\begin{center}\n\\begin{minipage}{0.95\\textwidth}\n\\vspace{6cm}\n\\begin{center}{\\bf \\huge\n\nPART I\\\\\nThe self force and the Schott term discrepancy\n\n} \\vspace{3cm}\n\n\\end{center}\n\\end{minipage}\\vspace{10cm}\n\\end{center}\n\n\\chapter{Introduction}\n\\label{intro}\n\nIn chapter \\ref{maxlorentz} we assign the dimension of time to proper time $\\tau$ so that (\\ref{gCdCd}) is satisfied. 
We will return to this convention in Part II where we derive the electric and magnetic fields in the standard $3$-vector notation. In Part I it is convenient to assign the dimension of length to $\\tau$ so that (\\ref{gCdCdtwo}) is satisfied. For details see appendix \\ref{app_dimensions}.\n\n\\section{The self force, mass renormalization and the equation of motion}\n\nIt is a consequence of Maxwell-Lorentz electrodynamics that any source of an electromagnetic field will be subject to interaction with that field. The resulting force on the source is known as the \\emph{self force}. For a charged particle undergoing inertial motion the self force is zero; however, for an accelerating charge the force is non-zero and tends to act as a damping term \\cite{Landau80}. It is well known that an accelerating charge loses energy due to the emission of radiation, where the instantaneous loss of momentum due to radiation, $\\dot{\\textup{P}}_{\\textup{RAD}}\\in\\Gamma\\textup{T} \\mathcal{M}$, is given by the Larmor-Abraham formula \\cite{Rohrlich97}\n \\begin{align}\n \\dot{\\textup{P}}_{\\textup{RAD}}=\\frac{q^2}{6 \\pi \\epsilon_0 } g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}.\n \\label{larmor_abraham}\n \\end{align}\n The negative of this force is the \\emph{radiation reaction} force, which must be a contribution to the self force. This has led many authors to use the term \\emph{radiation reaction} synonymously with self force; however, in the fully relativistic case there is an extra term in the self force in addition to the negative of (\\ref{larmor_abraham}). This additional term is known as the \\emph{Schott term}, and has led to some controversy. 
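As a quick numerical consistency check on the invariant appearing in (\ref{larmor_abraham}), the sketch below differentiates a uniformly accelerated worldline with proper acceleration $a$, for which $g(\ddot{\c},\ddot{\c})=a^2$ exactly. It is illustrative only: signature $(-,+,+,+)$, $c=1$, and a two-dimensional $t$-$x$ slice are assumed, the prefactor $q^2/(6\pi\epsilon_0)$ is suppressed, and the helper names are ours.

```python
import math

def g(u, w):
    # Minkowski inner product restricted to the t-x plane;
    # signature (-,+) block of (-,+,+,+), c = 1 (assumptions)
    return -u[0]*w[0] + u[1]*w[1]

a = 2.0  # proper acceleration (illustrative value)

def C(tau):
    # uniformly accelerated (hyperbolic) worldline in the t-x plane
    return (math.sinh(a*tau)/a, math.cosh(a*tau)/a)

def second_derivative(f, tau, h=1e-4):
    # central second difference, applied componentwise
    p, c0, m = f(tau + h), f(tau), f(tau - h)
    return tuple((pi - 2.0*ci + mi)/(h*h) for pi, ci, mi in zip(p, c0, m))

tau0 = 0.3
Cdd = second_derivative(C, tau0)
invariant = g(Cdd, Cdd)   # analytically equal to a**2 for this worldline
```

The finite-difference value of $g(\ddot{\c},\ddot{\c})$ agrees with $a^2$ to the truncation error of the stencil, so the radiated rate in (\ref{larmor_abraham}) is constant along this worldline.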
The fully relativistic self force is given by the Abraham-von Laue vector \\cite{erber61}\n \\begin{align}\nf_{\\textup{self}}=\\frac{q^2}{6 \\pi \\epsilon_0 } (\\dddot{\\c}-g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}),\n\\label{abraham_laue}\n \\end{align}\n where the Schott term is third order with respect to the worldline.\n \n The zeroth component of the radiation reaction force is Larmor's equation for the rate of radiation. The spatial part is proportional to the negative of the Newtonian velocity and may be interpreted as the radiation reaction force of the particle.\n The physical nature of the Schott term has been a topic for debate.\n Its presence leads to two interesting results: i) the self force can vanish even when the radiation rate is non-zero, for example in the case of uniformly accelerated (hyperbolic) motion, and ii) the self force can be non-zero even when there is momentarily no radiation being emitted. Thus the identification of the whole of (\\ref{abraham_laue}) as a radiation reaction force would be misleading. The Schott term is a total derivative, so it does not\n correspond to an irreversible loss of momentum by the\n particle, but plays an important role in the momentum\n balance between the radiation and the particle \\cite{Rohrlich65}.\n \n With the self force given by (\\ref{abraham_laue}) the resulting equation of motion for a charged particle undergoing arbitrary motion is given by the Abraham-Lorentz-Dirac (ALD) equation\n \\begin{align}\nm \\nabla_{\\dot{\\c}}\\dot{\\c}=f_{\\textup{L}} +f_{\\textup{self}} +f_{\\textup{ext}},\n\\label{lorentz_dirac}\n \\end{align}\n where $m$ is the observed rest mass of the particle, $f_{\\textup{L}}\\in\\Gamma\\textup{T}\\mathcal{M}$ is the Lorentz force due to the external field, $f_{\\textup{ext}}\\in\\Gamma\\textup{T}\\mathcal{M}$ is the force due to non-electromagnetic effects\\footnote{In general these are not known but could include effects due to gravity or collision with neutral particles. 
It is common to assume $f_{\\textup{ext}}=0$.}, and $f_{\\textup{self}}\\in\\Gamma\\textup{T}\\mathcal{M}$ is given by (\\ref{abraham_laue}). All three forces on the right of (\\ref{lorentz_dirac}) are vector fields with support on finite closed regions of the worldline\\footnote{When looking at the solution of this equation it is often useful to consider the external force (EM or non-EM) to be a pulse \\cite{Dirac38, Eliezer, Bonnor}, however this is a mathematical idealization.} thus Rohrlich's dynamic asymptotic condition \\cite{Rohrlich65},\n \\begin{align}\n \\lim_{|\\tau|\\rightarrow \\infty} \\nabla_{\\dot{\\c}}\\dot{\\c}=0,\n \\end{align}\nis satisfied.\n The third order nature of the Schott term has instilled doubts about the validity of the ALD equation, since it leads to particular classes of solution which are foreign to classical physics. These solutions include \\emph{preacceleration}, where a particle may begin to accelerate before a force has been applied, and \\emph{runaway solutions}, where a particle may continue to accelerate exponentially even for a static force (see \\cite{Rohrlich65, Parrott86, Poisson99}). Further doubts about the validity of the ALD equation are raised by the fact that there remains to this day no derivation of (\\ref{lorentz_dirac}) which is completely free from ambiguity. The most widely known difficulty is that of mass renormalization.\n\n The origin of mass renormalization can be found in the early attempts to calculate the self force based on extended models for the electron. At the dawn of the twentieth century the limitations imposed by quantum physics were unknown and it was widely believed the dynamics of an electron could be established by supposing a classical model for the particle. The model was based on the idea of a macroscopic charged object reduced to the microscopic scale. 
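Returning briefly to the Abraham-von Laue vector (\ref{abraham_laue}): its vanishing for uniformly accelerated (hyperbolic) motion can be checked directly, since for that worldline $\dddot{\c}-g(\ddot{\c},\ddot{\c})\dot{\c}$ vanishes identically even though $g(\ddot{\c},\ddot{\c})\neq 0$, i.e. the charge radiates with no accompanying self force. The sketch below is illustrative only: it assumes signature $(-,+,+,\ldots)$, $c=1$, omits the prefactor $q^2/(6\pi\epsilon_0)$, and uses finite differences; all names are ours.

```python
import math

def g(u, w):
    # Minkowski inner product; signature (-,+,+,...) with c = 1 (assumptions)
    return -u[0]*w[0] + sum(ui*wi for ui, wi in zip(u[1:], w[1:]))

def deriv(f, tau, n, h=1e-3):
    # n-th proper-time derivative by nested central differences, componentwise
    if n == 0:
        return f(tau)
    upper = deriv(f, tau + h, n - 1, h)
    lower = deriv(f, tau - h, n - 1, h)
    return tuple((u - l)/(2.0*h) for u, l in zip(upper, lower))

def abraham_von_laue_direction(C, tau):
    # the bracket of (abraham_laue): Cddd - g(Cdd, Cdd)*Cd, prefactor omitted
    Cd   = deriv(C, tau, 1)
    Cdd  = deriv(C, tau, 2)
    Cddd = deriv(C, tau, 3)
    s = g(Cdd, Cdd)
    return tuple(ti - s*di for ti, di in zip(Cddd, Cd))

a = 1.5  # proper acceleration (illustrative)

def hyperbolic(tau):
    # uniformly accelerated worldline: radiates, yet the self force vanishes
    return (math.sinh(a*tau)/a, math.cosh(a*tau)/a, 0.0)

f_dir = abraham_von_laue_direction(hyperbolic, 0.2)
# every component of f_dir is numerically zero, while g(Cdd, Cdd) = a**2 != 0
```

Analytically $\dddot{\c}=a^2\dot{\c}$ and $g(\ddot{\c},\ddot{\c})=a^2$ for this worldline, so the cancellation is exact; the numerical residual is only finite-difference error.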
There is an inherent problem with this approach because macroscopic charged objects are stable only because of the intermolecular forces binding them together. As an elementary particle the electron is necessarily devoid of these forces, thus within such a model the particle would have a tendency to blow itself apart due to the mutual repulsion of its volume elements. The solution of this difficulty, proposed by Poincar\\'{e}, was to postulate the existence of an additional cohesive force which would exactly cancel the repulsion. This cohesive force would enable the electron to remain stable; however, it would by definition have no effect on the motion of the particle and its physical nature would remain unknown.\n\n If we accept Poincar\\'{e}'s hypothesis and assume an extended model for the electron, then the self force may be calculated using the Lorentz force law. It is possible to calculate the Lorentz force acting on a particular volume element due to the rest of the charge distribution. The self force is then given by the net force on the particle due to the respective Lorentz forces on each of the volume elements. In order to calculate this force it is necessary to postulate an additional condition on the model, that of rigidity. The most common notion of rigidity is that of Born rigidity, where the particle is rigid in its rest frame. In the early 1900s Lorentz \\cite{Lorentz} and Schott \\cite{Schott}, amongst others, were able to calculate the resulting force for a number of different charge distributions. 
Non-relativistically, for a Born rigid, spherically symmetric charge distribution instantaneously at rest, the calculation yields \\cite{Rohrlich65}\n\\begin{align}\n\\underline{f_{\\textup{self}}}\\approx -\\frac{2}{3c^2}q\\kappa U \\underline{\\ddot{x}} + \\frac{2}{3c^3}q\\kappa \\underline{\\dddot{x}} -\\frac{2}{3c^2}q\\kappa\\sum_{n=2}^\\infty \\frac{(-1)^n}{n!}\\frac{d^{n} \\underline{\\ddot{x}}}{c^n dt^{n}}\\mathcal{O}( r^{n-1}) ,\n\\end{align}\nwhere $\\underline{f_{\\textup{self}}}$ is a 3-vector, $\\underline{\\ddot{x}}$ is the 3-acceleration of the charge, and the dot denotes differentiation with respect to time. The constant $\\kappa$ is defined by (\\ref{def_kappa}) and $r$ denotes the radius of the distribution. The constant $U$ is given by\n\\begin{align}\nU= \\int \\int \\frac{n(\\underline{x})n(\\underline{x}')}{|\\underline{x}-\\underline{x}'|}d^3x d^3x'\n\\end{align}\nwhere $n(\\underline{x})\/q$ is the normalized charge distribution.\n In the limit $r\\rightarrow 0$, i.e. the point charge limit, the terms in the summation vanish. The resulting equation of motion for $r\\rightarrow 0$ is given by\n\\begin{align}\nm_0\\underline{\\ddot{x}} \\approx -\\frac{2}{3c^2}q\\kappa U \\underline{\\ddot{x}} + \\frac{2}{3c^3}q\\kappa \\underline{\\dddot{x}} +\\underline{f_{\\textup{L}}} + \\underline{f_{\\textup{ext}}},\n\\label{eq_early}\n\\end{align}\nwhere $m_0$ is the \\emph{bare mass} and $\\underline{f_{\\textup{L}}}$ and $\\underline{f_{\\textup{ext}}}$ are 3-vectors. We notice the first term on the right hand side is proportional to the acceleration of the electron. This led to the identification of the coefficient $m_e=\\frac{2}{3c^2}q\\kappa U $ as an electromagnetic contribution to the observed rest mass of the particle. 
This enables the term to be shifted to the left hand side of (\\ref{eq_early}), resulting in the equation of motion\n\\begin{align}\nm\\underline{\\ddot{x}} = \\frac{2}{3c^3}q\\kappa \\underline{\\dddot{x}} +\\underline{f_{\\textup{L}}}+\\underline{f_{\\textup{ext}}},\n\\label{eq_early2}\n\\end{align}\nwhere $m=m_0+m_e$ is the observed rest mass given by the sum of the electromagnetic mass and the bare mass. This is known as the Lorentz-Abraham equation, and is the non-relativistic limit of (\\ref{lorentz_dirac}). The process of shifting the term $m_e\\underline{\\ddot{x}}$ to the left hand side is known as \\emph{mass renormalization}. In the point charge approach mass renormalization is still required; however, the electromagnetic mass of the point particle is found to be infinite. This means the bare mass must be assumed to be negatively infinite in order to leave a finite observed mass and a meaningful equation of motion. This process of adding two infinite quantities to give a finite mass is undesirable and brings into question the validity of the resulting equation of motion.\\footnote{There have been attempts to eradicate mass renormalization, for example see \\cite{Norton}, where different mathematical techniques are used to cancel the singular terms; however, there remains no physical justification. The inability of classical physics to consistently treat field divergences has led to further needs for renormalization in quantum field theory.}\n\n With the advance of physics since the early twentieth century it is now clear that any notion of rigidity is incompatible with special relativity. It is also known that electrons and other charged elementary particles exhibit wave-particle duality and other quantum behavior. This has led to the almost complete abandonment of the macroscopic model in favor of other models which do not cling to the idea of miniature classical distributions of charge. The simplest such model is that of a point charge. 
However if we adopt the point charge model from the outset it is not obvious how to define the self force because the Li$\\text{\\'{e}}$nard-Wiechert field is singular at the position of the particle. In 1938 Dirac proposed a method by which the self force arises as an integral of the stress-energy-momentum tensor associated with the Li$\\text{\\'{e}}$nard-Wiechert field. In 1973 Rohrlich writes \\cite{Mehra73}\n \\begin{quote}\n Whatever one may think today of Dirac's reasons in developing a classical theory of a point electron, it is by many contemporary views (and I completely concur), the correct thing to do: if one does not wish to exceed the applicability limits of classical (i.e., non-quantum) physics one cannot explore the electron down to distances so short that its structure (whatever it might be) would become apparent. Thus for the classical physicist the electron is a point charge within his limits of observation.\n \\end{quote}\n\n\\section{The point charge approach and the Schott term discrepancy}\n\n\n\nWithin the point model framework the components of the instantaneous change in electromagnetic 4-momentum $\\dot{\\textup{P}}_{\\textup{EM}}\\in \\Gamma \\textup{T} \\mathcal{M}$ arise as integrals of the Li$\\text{\\'{e}}$nard-Wiechert stress 3-forms over a suitable three dimensional domain of spacetime. This instantaneous change in 4-momentum is identified as the negative of the self force but with an additional singular term which can be discarded by mass renormalization.\n In Dirac's calculation the domain is the side\n$\\Sigma^{\\textup{D}}_T$ of a narrow tube, of spatial radius\n${R}_{\\textup{D}0}$, enclosing a section of the worldline $C$. See\nFIG. 
\\ref{fig_Tubes}.\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{0.8cm}\n\\begin{picture}(20, 13.5)\n\\put(0, 0){\\includegraphics[ width=10\\unitlength]{dirac_tubecol_intro.pdf}}\n\\put(9, 0){\\includegraphics[ width=10\\unitlength]{Bhabha_tubecol_intro.pdf}}\n\\put(6.2, 9.9){$\\tau_2$}\n\\put(4.5, 3.9){$\\tau_1$}\n\\put(15.2, 9.9){$\\tau_2$}\n\\put(13.5, 3.9){$\\tau_1$}\n\\put(10, 1){$C(\\tau)$}\n\\put(1, 1){$C(\\tau)$}\n\\put(6.2, 7.7){${R}_{\\textup{D}0}$}\n\\put(15.3, 7.3){${R}_0$}\n\\end{picture}\n\\end{center}\n\\caption[The Dirac and Bhabha tubes]{The Dirac tube (left) and Bhabha tube (right)}\n\\label{fig_Tubes}\n\\end{figure}\n The displacement vector $Y$ defining\n the Dirac tube is spacelike, therefore the Li$\\text{\\'{e}}$nard-Wiechert potential\nis not naturally given in terms of the Dirac time $\\tau_D(x)$. However, in appendix \\ref{dirac_apend} we show that for small $\\displaystyle{R_D=g(Y, Y)}$, and hence small $\\tau_D-\\tau_r$ due to (\\ref{dirac_taur}), it can be expressed as the series\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{A}|_{x}=&-\\frac{V_D}{R_D} +\\big(A_D +\\frac{1}{2}g(n_D, A_D)V_D\\big)\\notag\\\\\n&+\\Big(V_D\\big(\\frac{1}{8}g(A_D, A_D)-\\frac{1}{8}g(n_D, A_D)^2-\\frac{1}{3}g(n_D, \\dot{A}_D)\\big) -\\frac{1}{2}\\dot{A}_D-\\frac{1}{2}g(n_D, A_D)A_D\\Big)R_D\\notag\\\\\n& +\\mathcal{O}(R_D^2),\\tag{\\ref{lw_dirac}}\n\\end{align}\nwhere the vector fields $V_D, A_D, \\dot{A}_D \\in \\Gamma \\textup{T} {(\\m\\backslashC)} $ are defined as\n\\begin{align}\nV_D|_\\point&=\\dot{\\c}^j(\\tau_D(x)) \\frac{\\partial}{\\partial y^j},\\quad A_D|_\\point=\\ddot{\\c}^j(\\tau_D(x))\\frac{\\partial}{\\partial y^j}\\quad \\text{and} \\quad \\dot{A}_D|_\\point=\\dddot{\\c}^j(\\tau_D(x)) \\frac{\\partial}{\\partial y^j}.\n\\tag{\\ref{defdirac_V_A}}\n\\end{align}\n\nWhen using a Dirac tube the integration of the stress 3-forms\ngives for the instantaneous EM 4-momentum 
\\cite{Dirac38,Teitelboim70,Galtsov02,Norton}\n\\begin{align}\n-\\pdot^{\\textup{D}}_{\\textup{EM}}=q\n\\kappa\\Big(\\frac{2}{3}\\big( \\dddot{\\c}-g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}\\big)\n-\\lim_{{R}_{\\textup{D}0} \\rightarrow 0}\\frac{1 }{2 {R}_{\\textup{D}0}}\\ddot{\\c}\\Big).\n\\label{f_self}\n\\end{align}\n\n\nThis is the Abraham-von Laue vector with an additional singular term which depends on\nthe shrinking of the Dirac tube onto the worldline.\n\n\n\nAn alternative approach, first used by Bhabha \\cite{Bhabha39} in 1939,\nis to integrate the Li$\\text{\\'{e}}$nard-Wiechert stress forms over the side\n$\\Sigma_{\\textup{T}}$ of the Bhabha tube with spatial radius ${R}_0$. The\nprincipal advantage of this approach is that the displacement vector\n$X$ is lightlike and as a result the Li$\\text{\\'{e}}$nard-Wiechert potential can be written explicitly,\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{A}|_\\point = \\frac{\\widetilde{V}}{g(V, X)}.\\tag{\\ref{LW_def}}\n\\end{align}\nIt follows that the corresponding stress 3-forms can also be written explicitly.\nHowever, previous articles which use a\nBhabha tube to evaluate the instantaneous EM 4-momentum give the following expression\n\\cite{Norton, Bhabha39,Poisson99,Tucker06,Parrott86}\n\\begin{align}\n-\\pdot_{\\textup{EM}}&=-q\\kappa\\Big(\\frac{2}{3}g(\\ddot{\\c},\n\\ddot{\\c})\\dot{\\c}+\\lim_{{R}_0 \\rightarrow 0}\\frac{1 }{2 {R}_0}\\ddot{\\c}\\Big)\n=\n-\\pdot^{\\textup{D}}_{\\textup{EM}}-\\frac{2}{3}q\\kappa\\dddot{\\c}.\n\\label{f_self_wrong}\n\\end{align}\nThis is the radiation reaction force with an additional singular term which depends on\nthe shrinking of the Bhabha tube onto the worldline. The Schott term $\\tfrac{2}{3}q\\kappa\\dddot{\\c}$ is missing from the approaches employing the Bhabha tube.\nIn 2006 Gal'tsov and Spirin \\cite{Galtsov02} drew attention to this discrepancy. 
They claim the Schott term should\narise directly from the electromagnetic stress-energy-momentum tensor and provide a derivation using Dirac geometry in order to show this. However, they propose that the missing term in \\eqref{f_self_wrong} is a consequence of the null geometry used to define the Bhabha tube. We show in this thesis that the term may be obtained using null geometry provided certain conditions are met.\n\n\\section{Regaining the Schott term}\n\n\n\n\\subsection*{Addition to non-EM momentum}\n\n The standard approach, which has been used in \\cite{Norton,Bhabha39,Poisson99,Tucker06}, is simply to add the term to the non-EM momentum of the particle. This method will give the correct form for the ALD equation; however, it is not physically justified since the self force is by nature an electromagnetic effect.\n\n\n\n We suppose a balance of momentum\n\\begin{align}\n\\pdot_{\\text{PART}} +\\pdot_{\\text{EM}}=f_{\\text{ext}}\n\\label{Ppart_PEM}\n\\end{align}\nwhere the total momentum has been separated into an electromagnetic contribution $P_{\\text{EM}}$ and a non-electromagnetic\ncontribution $P_{\\text{PART}}$, and $\\dot{P}=\\nabla_{\\dot{\\c}}P$. 
All the external forces acting on the particle are denoted by $f_{\\text{ext}}$.\nA suitable choice for the non-electromagnetic momentum\n$P_{\\text{PART}}$ has to be made.\nMost external forces $f_{\\text{ext}}$, including the Lorentz force,\n are orthogonal to $\\dot{\\c}$:\n\\begin{align}\ng(f_{\\text{ext}},\\dot{\\c})=0.\n\\label{f_ext_orthog}\n\\end{align}\nFor such an external force, if (\\ref{f_self}) is obtained then a natural choice for $P_{\\text{PART}}$ is\n\\begin{align}\nP_{\\text{PART}}=m_0 \\dot{\\c}.\n\\label{P_not_inc_Schott}\n\\end{align}\nThis is the correct term for the 4-momentum of a particle if its spin has been neglected.\nCombining (\\ref{f_self}), (\\ref{Ppart_PEM}) and\n(\\ref{P_not_inc_Schott}) gives\n\\begin{equation}\n\\begin{aligned}\nm_0 \\ddot{\\c} &= f_{\\text{ext}}-\\pdot^{\\textup{D}}_{\\textup{EM}}\\\\\n &= f_{\\text{ext}}-q\\kappa\\Big(\n\\tfrac{2}{3}g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}\n-\\tfrac{2}{3}\\dddot{\\c}\n+ \\lim_{{R}_{\\textup{D}0} \\rightarrow 0} \\frac{1}{2{R}_{\\textup{D}0}}\\ddot{\\c}\\Big).\n\\end{aligned}\n\\label{balance}\n\\end{equation}\nThus assuming the observed rest mass $m$ to be given by\n\\begin{align}\nm=m_0+\\lim_{{R}_{\\textup{D}0} \\rightarrow 0} \\frac{q \\kappa}{2{R}_{\\textup{D}0}}\n\\label{inf_terms}\n\\end{align}\n we satisfy the orthogonality condition (\\ref{g_Cd_Cdd}). By contrast, if (\\ref{f_self_wrong}) is obtained one cannot set\n\\begin{align}\nm_0\\ddot{\\c} = f_{\\text{ext}} -\\pdot_{\\textup{EM}}\\qquad \\textup{and} \\qquad m=m_0+\\lim_{{R}_0 \\rightarrow 0} \\frac{q\\kappa}{2{R}_0}\n\\end{align}\nand satisfy (\\ref{g_Cd_Cdd}). 
This has led some authors \\cite{Norton,Bhabha39,Poisson99,Tucker06} to add an ad hoc term\nto the non-electromagnetic contribution to the force,\n\\begin{align}\n\\dot{P}^{\\textup{B}}_{\\text{PART}}=m_0 \\ddot{\\c} + \\tfrac23q\\kappa \\dddot{\\c}.\n\\label{P_inc_Schott}\n\\end{align}\nThis ad hoc term will ensure the orthogonality condition is satisfied and hence compensate for the missing Schott term.\n\n\\subsection*{Regaining the term by careful analysis of limits}\n\n\nWe will show that the calculation of the self force using null geometry requires\nthree limits to be taken (see figure \\ref{limits}): the shrinking of the Bhabha tube $\\Sigma_{\\textup{T}}$ onto the\nworldline $C$, i.e. ${R}_0\\to 0$, and the bringing together of the lightlike caps $\\Sigma_1$ and $\\Sigma_2$\nonto the lightlike cone with vertex $C(\\tau_0)$, i.e. $\\tau_1\\to\\tau_0$, $\\tau_2\\to\\tau_0$, where $\\tau_0$ is the proper time at which we wish to evaluate the self force (see FIG.\\ref{fig_Tubes}).\nWe therefore have the freedom to choose the order of these limits. We choose to let the three limits take place simultaneously,\nsubject to the constraint that\n\\begin{align}\n\\lambda=\\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0\n \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0\\\\\\tau_2 \\rightarrow\n \\tau_0}}}$}\n\\Big( \\frac{\\tau_1+\\tau_2-2\\tau_0}{4{R}_0}\\Big)\n\\label{def_lambda}\n\\end{align}\nwhere $\\lambda\\in\\mathbb{R} $ is finite. 
This gives the self force as\n\\begin{align}\nf_{\\textup{self}}\n&=\n-q\\kappa\\Big(\n\\tfrac{2}{3}g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}\n+\\lambda\\dddot{\\c}\n+ \\lim_{{R}_0 \\rightarrow 0} \\frac{1}{2{R}_0}\\ddot{\\c}\n\\Big)\n\\label{Intr_Pdot_res_expan}\n\\end{align}\nwhich is in agreement with $f^{\\textup{D}}_{\\textup{self}}$ if\n$\\lambda=-\\tfrac23$, hence the Schott term arises by direct integration of the stress forms using null geometry.\n\n\\begin{figure}\n\\setlength{\\unitlength}{1cm}\n\\centerline{\n\\begin{picture}(14, 19)\n\\put(0, 0){\\includegraphics[ width=14\\unitlength]{limits.pdf}}\n\\put(4, 17){$\\tau_2$}\n\\put(3.9, 15.7){$\\tau_1$}\n\\put(10.3, 16.4){$\\tau_1=\\tau_2$}\n\\put(4, 10.5){$\\tau_1=\\tau_2$}\n\\put(3.8, 9.3){$\\tau_0$}\n\\put(9.8, 10.3){$\\tau_0=\\tau_1=\\tau_2$}\n\\put(3.4, 4.2){${R}_0$}\n\\put(9.8,3){${R}_0=0$}\n\\end{picture}\n}\n\\caption[Limits required in definition of self force]{Three independent limits are required. The converging of the caps $\\tau_1 \\rightarrow \\tau_2$, the movement of the apex of the squashed tube to the point where the self force will be evaluated $\\tau_2 \\rightarrow \\tau_0$, and the shrinking of the radius ${R}_0 \\rightarrow 0$.}\n\\label{limits}\n\\end{figure}\n\n\\chapter{Defining the self force for a point charge}\n\nIn this chapter we formally define the Dirac and Bhabha tubes. We also present the definition of the self force based on conservation of four momentum within a Bhabha tube. For this definition we take the limit as the tube approaches an arbitrary point on the worldline.\n\n\\section{The Dirac and Bhabha tubes}\n\n\\begin{definition}\n\\label{worldtube_def}\nConsider the region $N=\\widetilde{N}\\backslash C$ where $\\widetilde{N} \\subset \\mathcal{M}$ is a local neighborhood of the worldline. 
Suppose the two continuous maps\n\\begin{align}\n&\\tau': N \\rightarrow \\mathbb{R} , \\quad x \\mapsto \\tau'(x)\\\\\n&R' : N \\rightarrow \\mathbb{R} ^+, \\quad x \\mapsto R' (x),\n\\end{align}\n are well defined for all $x \\in N$. Here $\\mathbb{R} ^+$ denotes the positive real numbers and we shall call $R'(x)$ the displacement of $x$ from $C(\\tau'(x))$. Furthermore, for $\\lambda \\in \\mathbb{R} ^+$ let\n\\begin{align}\n&\\tau'\\Big(\\lambda\\Big(x-C\\big(\\tau'(x)\\big)\\Big)+C\\big(\\tau'(x)\\big)\\Big)=\\tau'(x),\\\\\\notag\n\\textup{and}\\quad & R'\\Big(\\lambda\\Big(x-C\\big(\\tau'(x)\\big)\\Big)+C\\big(\\tau'(x)\\big)\\Big)=\\lambda R'(x).\n\\end{align}\nThese relations ensure that for an arbitrary point $\\tau_0$ on the worldline with $C (\\tau_0) \\in \\widetilde{N}$\n\\begin{align}\n\\lim_{x \\rightarrow C (\\tau_0)} \\tau'(x) = \\tau_0 \\qquad \\textup{and} \\qquad \\lim_{x \\rightarrow C (\\tau_0)} R'(x)=0.\n\\end{align}\n\\end{definition}\nSince $N$ is open there exist values $\\tau_{\\textup{min}}, \\tau_{\\textup{max}} \\in \\mathbb{R} $ and $R_{\\textup{max}} \\in \\mathbb{R} ^+$ such that the 4-region\n\\begin{align}\n\\mathds{S}= \\Big\\{\\quad x\\quad \\Big|\\quad \\tau_{\\textup{min}}<\\tau'(x)<\\tau_{\\textup{max}},\\quad 0<R'(x)<R_{\\textup{max}}\\quad \\Big\\}\n\\end{align}\nis contained in $N$.\n\\begin{lemma}\n\\label{dtube_def}\nWhen using Dirac geometry $\\tau'=\\tau_D$ and $R'=R_D=g(Y, Y)$. The \\begin{bf}Dirac tube\\end{bf} $\\mathds{T}_D$ is given by $\\displaystyle{\\Sigma_1^D \\cup \\Sigma_2^D \\cup \\Sigma_{\\textup{T}}^D}$. The parameter ${R}_{\\textup{D}0}>0$ is a measure of the cross-sectional radius of the Dirac tube, see figure \\ref{D_Tube}.\n\\end{lemma}\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{1.2cm}\n\\begin{picture}(10, 11)\n\\put(0, -2){\\includegraphics[ width=10\\unitlength]{diractube2_crop.pdf}}\n\\put(4, 6){$\\Sigma_{\\textup{T}}^D$}\n\\put(5, 8){$\\Sigma_2^D$}\n\\put(3, 1.2){$\\Sigma_1^D$}\n\\put(6.2, 8.3){$\\tau_0+\\delta$}\n\\put(4.4, 2){$\\tau_0$}\n\\put(6.2, 5.7){${R}_{\\textup{D}0}$}\n\\end{picture}\n\\caption{The Dirac Tube}\n\\end{center}\n\\label{D_Tube}\n\\end{figure}\n\\newpage\n\\begin{lemma}\n\\label{btube_def}\nWhen using null geometry $\\tau'=\\tau_r$ and $R'=R=-g(X, V)$. 
The surfaces $\\Sigma_{i}$ are subregions of the forward null cones at $C(\\tau_i)$.\nThe \\begin{bf}Bhabha tube\\end{bf} $\\mathds{T}_B$ is given by $\\displaystyle{\\Sigma_1 \\cup \\Sigma_2 \\cup \\Sigma_{\\textup{T}}}$ where for $i=1, 2$\n\\begin{align}\n\\Sigma_i &= \\Big\\{C(\\tau_i)+X\\quad\\Big|\\quad g(X, X)=0,\\quad -g(X,\n\\dot{\\c})<{R}_0\\Big\\}\\,,\\notag\n\\\\\n\\Sigma_{\\textup{T}} &= \\Big\\{C(\\tau)+X\\quad\\Big|\\quad g(X,\nX)=0,\\quad-g(X, \\dot{\\c})={R}_0, \\quad\\tau_1\\leq\\tau\\leq\\tau_2\\Big\\}.\n\\label{Bhabha_Tube}\n\\end{align}\nThe parameter ${R}_0>0$ is a measure of the cross-sectional radius of the Bhabha tube, see figure \\ref{B_tube}.\n\\end{lemma}\n\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{1.2cm}\n\\begin{picture}(10, 12)\n\\put(0, -2){\\includegraphics[ width=10\\unitlength]{Bhabha_tubecol_intro.pdf}}\n\\put(4, 6){$\\Sigma_{\\textup{T}}$}\n\\put(5.4, 9.2){$\\Sigma_2$}\n\\put(3.8, 2.6){$\\Sigma_1$}\n\\put(6.2, 8.3){$\\tau_0+\\delta$}\n\\put(4.4, 2){$\\tau_0$}\n\\put(6.2, 5.3){${R}_0$}\n\\end{picture}\n\\end{center}\n\\caption{The Bhabha Tube}\n\\label{B_tube}\n\\end{figure}\n\n\\section{Conservation of 4-momentum}\n\n\\begin{lemma}\n\\label{cons_lem}\nConsider figure \\ref{N_tube}. The two Bhabha tubes $\\mathds{T}^{\\textup{in}}$ and $\\mathds{T}^{\\textup{out}}$ given by\n \\begin{align}\n \\mathds{T}^{\\textup{in}}=& \\Sigma^{\\textup{in}}_1 \\cup \\Sigma^{\\textup{in}}_2 \\cup \\Sigma_{\\textup{T}}^{\\textup{in}},\\notag\\\\\n \\textup{and} \\qquad \\mathds{T}^{\\textup{out}}=& \\Sigma^{\\textup{out}}_1 \\cup \\Sigma^{\\textup{out}}_2 \\cup \\Sigma_{\\textup{T}}^{\\textup{out}},\n \\end{align}\n have different radii $R^{\\textup{in}}$ and $R^{\\textup{out}}$. 
The surfaces $\\Sigma^{\\textup{diff}}_1$ and $\\Sigma^{\\textup{diff}}_2$ are the differences between the caps of the two tubes,\n \\begin{align}\n \\Sigma^{\\textup{diff}}_1=& \\Sigma^{\\textup{out}}_1\\backslash \\Sigma^{\\textup{in}}_1\\notag\\\\\n \\textup{and}\\qquad \\Sigma^{\\textup{diff}}_2=& \\Sigma^{\\textup{out}}_2\\backslash \\Sigma^{\\textup{in}}_2.\n \\end{align}\n Let the 4-region $\\mathds{S}$ enclosed by the two tubes be finite and source free, with boundary\n \\begin{align}\n \\partial \\mathds{S}= \\Sigma^{\\textup{diff}}_1-\\Sigma^{\\textup{diff}}_2+\\Sigma_{\\textup{T}}^{\\textup{out}}-\\Sigma_{\\textup{T}}^{\\textup{in}}.\n \\end{align}\n then the following relation is true\n \\begin{align}\n\\int_{\\Sigma^{\\textup{diff}}_1}\\mathrm{S}_{\\textup{K}} -\\int_{\\Sigma^{\\textup{diff}}_2}\\mathrm{S}_{\\textup{K}} =\\int_{\\Sigma_{\\textup{T}}^{\\textup{in}}}\\mathrm{S}_{\\textup{K}} -\\int_{\\Sigma_{\\textup{T}}^{\\textup{out}}}\\mathrm{S}_{\\textup{K}}\n\\label{mom_flux}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nUsing Stokes Theorem (\\ref{stokes_def}) and (\\ref{dtauk_0}) it follows that\n\\begin{align}\n0=&\\int_{N}d \\mathrm{S}_{\\textup{K}} =\\int_{\\partial N} \\mathrm{S}_{\\textup{K}} \\notag\\\\\n=&\\int_{\\Sigma^{\\textup{diff}}_1}\\mathrm{S}_{\\textup{K}} -\\int_{\\Sigma^{\\textup{diff}}_2}\\mathrm{S}_{\\textup{K}} +\\int_{\\Sigma_{\\textup{T}}^{\\textup{out}}}\\mathrm{S}_{\\textup{K}} -\\int_{\\Sigma_{\\textup{T}}^{\\textup{in}}}\\mathrm{S}_{\\textup{K}} ,\n\\label{stokes_N}\n\\end{align}\n\\end{proof}\n\n\\begin{figure}\n\\setlength{\\unitlength}{1.3cm}\n\\centerline{\n\\begin{picture}(10, 12)\n\\put(0, -2){\\includegraphics[ width=10\\unitlength]{Bhabha_inside2.pdf}}\n\\put(6.2, 8.3){$\\tau_2$}\n\\put(4.4, 2){$\\tau_1$}\n\\put(5.2, 6.2){$\\Sigma_{\\textup{T}}^{\\textup{in}}$}\n\\put(6.5, 9.5){$\\Sigma^{\\textup{diff}}_2$}\n\\put(6, 4.3){$\\Sigma^{\\textup{diff}}_1$}\n\\put(4.2, 
6.2){$\\Sigma_{\\textup{T}}^{\\textup{out}}$}\n\\put(6, 6.8){$\\mathds{S}$}\n\\put(4,1){$C$}\n\\end{picture}\n}\n\\caption{Stokes theorem applied to two worldtubes}\n\\label{N_tube}\n\\end{figure}\n\n\n\n\n\\section{The instantaneous force at an arbitrary point on the worldline}\n\n\\begin{definition}\n\\label{mom_flux_def}\nThe k-component of 4-momentum flux $\\mathrm{P}_{\\textup{K}} ^{(\\Sigma)}\\in\\Gamma\\Lambda^0 \\mathbf{M} $ through an arbitrary source-free 3-surface $\\Sigma\\subset \\mathcal{M}$ is defined by\n\\begin{align}\n\\mathrm{P}_{\\textup{K}} ^{(\\Sigma)} &=\\int_{\\Sigma} \\mathrm{S}_{\\textup{K}} ,\n\\label{def_Pk}\n\\end{align}\n\\end{definition}\n\n\nWe cannot use definition \\ref{mom_flux_def} to give the flux of momentum through the caps $\\Sigma_i$ of the Bhabha tube because they are not source free; there is a singularity at the point intersected by the worldline. Instead in the following we employ Stokes theorem in order to heuristically justify defining the difference in momentum between the two caps as an integral over the side $\\Sigma_{\\textup{T}}$.\n\n\\begin{definition}\n\\label{pk_def}\nConsider the Bhabha tube $\\mathds{T}= \\Sigma_1 \\cup\\Sigma_2\\cup\\Sigma_{\\textup{T}}$ with $R={R}_0$ where ${R}_0$ is constant.\nLet $\\tau_1=\\tau_r(\\Sigma_1)$ and $\\tau_2=\\tau_r(\\Sigma_2)$ with $\\tau_2>\\tau_1$. The instantaneous change in 4-momentum at arbitrary proper time $\\tau_0$ is defined by\n\\begin{align}\n\\pdot_{\\textup{K}} (\\tau_0)=&\\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0\\\\\\tau_2 \\rightarrow \\tau_0}}}$}\\bigg(\\frac{1}{\\tau_2-\\tau_1}\\int_{\\Sigma_{\\textup{T}}}\\mathrm{S}_{\\textup{K}} \\bigg).\n\\label{def_Pdot}\n\\end{align}\n\\end{definition}\n\nThis definition\nis justified heuristically as follows. 
Inspired by definition \\ref{mom_flux_def} we wish to\nwrite\n\\begin{align}\n\\Delta \\mathrm{P}_{\\textup{K}} =\\textup{P}_{\\textup{K}} ^{(\\Sigma_2)}-\\textup{P}_{\\textup{K}} ^{(\\Sigma_1)}= \\int_{\\Sigma_{\\textup{T}}}\\mathrm{S}_{\\textup{K}} .\n\\label{def_Pk_diff}\n\\end{align}\nHowever, the integrals $\\textup{P}_{\\textup{K}} ^{(\\Sigma_1)}$ and $\\textup{P}_{\\textup{K}} ^{(\\Sigma_2)}$ are\nboth infinite\\footnote{There are methods by which these infinities can be avoided; for example, Rowe \\cite{Rowe75} uses distribution theory in order to eradicate the singularity,\nand Norton \\cite{Norton} uses a method where the zero limit is not taken.}. Ignoring this, we assert\n\\begin{equation}\n\\begin{aligned}\n\\pdot_{\\textup{K}} (\\tau_0) = \\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0\\\\\\tau_2 \\rightarrow \\tau_0}}}$}\\bigg(\\frac{1}{\\tau_2-\\tau_1}\\Big(\\textup{P}_{\\textup{K}} ^{(\\Sigma_2)}-\\textup{P}_{\\textup{K}} ^{(\\Sigma_1)}\\Big)\\bigg).\n\\end{aligned}\n\\label{def_Pk_dot}\n\\end{equation}\nInserting (\\ref{def_Pk_diff}) into (\\ref{def_Pk_dot}) yields (\\ref{def_Pdot}).\n\\begin{definition}\n\\label{pdem_def}\nThe vector $\\pdot_{\\text{EM}}(\\tau_0) \\in T_{C(\\tau_0)}\\mathcal{M}$ is defined by\n\\begin{align}\n\\pdot_{\\text{EM}}(\\tau_0)=\\pdot_{\\textup{K}} (\\tau_0) g^{\\textup{K} l}\\frac{\\partial}{\\partial y^l}\n\\label{def_Pdot_dual}\n\\end{align}\nwhere $g^{\\textup{K} l}=g^{-1}(dy^\\textup{K}, dy^l)$ and $g^{-1}$ is the inverse metric on $\\mathcal{M}$.\nSince $\\tau_0$ is arbitrary there is an induced vector field $\\pdot_{\\text{EM}}$ on the curve $C$.\n\\end{definition}\n\n\\begin{lemma}\n\\label{stress_tensor_def}\nLet the Li$\\text{\\'{e}}$nard-Wiechert stress 3-forms $\\mathrm{S}_{\\textup{K}} \\in \\Gamma \\Lambda^{3}\\mathcal{M}$ be given by\n\\begin{align}\n\\mathrm{S}_{\\textup{K}}= \\mathrm{S}_{\\textup{K}}^{\\textup{R}}+\\mathrm{S}_{\\textup{K}}^{\\textup{C}}+ 
\\mathrm{S}_{\\textup{K}}^{\\textup{RC}}\n\\end{align}\nwhere\n\\begin{align}\n\\mathrm{S}_{\\textup{K}}^{\\textup{R}}=& \\frac{\\epsilon_0}{2 }\\big(i_{\\partial_{\\textup{K}} }\\flw_{\\textup{R}}\\wedge\\star\\flw_{\\textup{R}}-i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{R}}\\wedge\\flw_{\\textup{R}}\\big)\\notag\\\\\n\\mathrm{S}_{\\textup{K}}^{\\textup{C}}=&\\frac{\\epsilon_0}{2 }\\big(i_{\\partial_{\\textup{K}} }\\flw_{\\textup{C}}\\wedge\\star\\flw_{\\textup{C}}-i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{C}}\\wedge\\flw_{\\textup{C}}\\big)\\notag\\\\\n\\mathrm{S}_{\\textup{K}}^{\\textup{RC}}=&\\frac{\\epsilon_0}{2 }\\big(i_{\\partial_{\\textup{K}} }\\flw_{\\textup{C}}\\wedge\\star\\flw_{\\textup{R}}-i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{C}}\\wedge\\flw_{\\textup{R}}+i_{\\partial_{\\textup{K}} }\\flw_{\\textup{R}}\\wedge\\star\\flw_{\\textup{C}}-i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{R}}\\wedge\\flw_{\\textup{C}}\\big)\\label{srr}\n\\end{align}\nThen\n\\begin{align}\n\\mathrm{S}_{\\textup{K}}^{\\textup{R}}=&q^2\\frac{g(A, A)-g(n, A)^2}{16\\pi^2 \\epsilon_0 R^2}n_{\\textup{K}} \\star\\widetilde{n},\\notag\\\\\n\\mathrm{S}_{\\textup{K}}^{\\textup{C}}=&\\frac{q^2}{16\\pi^2 \\epsilon_0R^4}\\big(n_{\\textup{K}} \\star\\widetilde{V} + V_{\\textup{K}} \\star\\widetilde{n} - n_{\\textup{K}} \\star\\widetilde{n} + g_{ \\textup{K} a}\\star dx^a\\big),\\notag\\\\\n\\mathrm{S}_{\\textup{K}}^{\\textup{RC}}=&-\\frac{q^2}{8\\pi^2\\epsilon_0 R^3}\\big(A_{\\textup{K}} \\star\\widetilde{n} + n_{\\textup{K}} \\star\\widetilde{A} +g(A, n)(V_{\\textup{K}} \\star\\widetilde{n} +n_{\\textup{K}} \\star\\widetilde{V}-2n_{\\textup{K}} \\star\\widetilde{n})\\big).\n\\label{null_stress}\n \\end{align}\n Note that the factor of $c$ in (\\ref{stress_def}) is absent from (\\ref{srr}). 
This is due to our decision to assign the dimension of length to proper time.\n \\end{lemma}\n\\begin{proof}\nWe prove the result only for $\\mathrm{S}_{\\textup{K}}^{\\textup{R}}$; the other results follow similarly.\nFrom (\\ref{FR_def}) we have\n\\begin{align}\n\\frac{1}{\\kappa}i_{\\partial_{\\textup{K}}}\\flw_{\\textup{R}}=& \\frac{1}{g(X, V)^2}i_{\\partial_{\\textup{K}}}(\\widetilde{\\x} \\wedge \\widetilde{A})- \\frac{g(X, A)}{g(X, V)^3}i_{\\partial_{\\textup{K}}}(\\widetilde{\\x} \\wedge \\widetilde{V}) \\notag\\\\\n=&\\frac{1}{g(X, V)^2}(X_{\\textup{K}} \\widetilde{A} - A_{\\textup{K}} \\widetilde{\\x})- \\frac{g(X, A)}{g(X, V)^3} (X_{\\textup{K}} \\widetilde{V} - V_{\\textup{K}} \\widetilde{\\x})\\notag\n\\end{align}\nwhere $\\kappa$ is defined by (\\ref{def_kappa}). Thus\n\\begin{align}\n\\frac{1}{\\kappa^2}i_{\\partial_{\\textup{K}}}\\flw_{\\textup{R}}\\wedge \\star \\flw_{\\textup{R}}=& \\frac{1}{g(X, V)^4} \\Big( X_{\\textup{K}} \\widetilde{A} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{A}) -A_{\\textup{K}} \\widetilde{\\x} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{A})\\Big)\\notag\\\\\n&- \\frac{g(X, A)}{g(X, V)^5}\\Big( X_{\\textup{K}} \\widetilde{A} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{V}) -A_{\\textup{K}} \\widetilde{\\x} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{V})\\Big)\\notag\\\\\n&-\\frac{g(X, A)}{g(X, V)^5}\\Big( X_{\\textup{K}} \\widetilde{V} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{A}) -V_{\\textup{K}} \\widetilde{\\x} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{A})\\Big)\\notag\\\\\n&+\\frac{g(X, A)^2}{g(X, V)^6}\\Big( X_{\\textup{K}} \\widetilde{V} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{V}) -V_{\\textup{K}} \\widetilde{\\x} \\wedge \\star (\\widetilde{\\x} \\wedge \\widetilde{V})\\Big)\n\\end{align}\nNow using lemma \\ref{one_two} yields\n\\begin{align}\n\\frac{1}{\\kappa^2}i_{\\partial_{\\textup{K}}}\\flw_{\\textup{R}}\\wedge \\star 
\\flw_{\\textup{R}}=&\\frac{1}{g(X, V)^4} \\Big( X_{\\textup{K}}g(A, A)\\star \\widetilde{\\x} - X_{\\textup{K}}g(A, X)\\star \\widetilde{A}-A_{\\textup{K}}g(X, A)\\star \\widetilde{\\x}\\Big)\\notag\\\\\n&- \\frac{g(X, A)}{g(X, V)^5}\\Big( -X_{\\textup{K}}g(A, X)\\star \\widetilde{V} -A_{\\textup{K}}g(V, X)\\star \\widetilde{\\x} \\Big)\\notag\\\\\n&- \\frac{g(X, A)}{g(X, V)^5}\\Big( -X_{\\textup{K}}g(V, X)\\star \\widetilde{A} -V_{\\textup{K}}g(A, X)\\star \\widetilde{\\x} \\Big)\\notag\\\\\n&+\\frac{g(X, A)^2}{g(X, V)^6}\\Big(-X_{\\textup{K}}\\star \\widetilde{\\x} -X_{\\textup{K}}g(V, X)\\star \\widetilde{V} -V_{\\textup{K}}g(V, X)\\star \\widetilde{\\x} \\Big)\\notag\n\\end{align}\nExpanding and cancelling like terms yields\n\\begin{align}\n \\frac{1}{\\kappa^2}i_{\\partial_{\\textup{K}}}\\flw_{\\textup{R}}\\wedge \\star \\flw_{\\textup{R}}=&\\Bigg(g(A, A)-\\frac{g(X, A)^2}{g(X, V)^2}\\Bigg)\\frac{X_{\\textup{K}}\\star \\widetilde{\\x}}{g(X, V)^4}\n \\end{align}\n Now we need to calculate the second term in $\\mathrm{S}_{\\textup{K}}^{\\textup{R}}$ in (\\ref{srr}). 
From (\\ref{FR_def}) we have\n \\begin{align}\n\\frac{1}{\\kappa^2} i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{R}}\\wedge\\flw_{\\textup{R}}=& \\frac{1}{g(X, V)^4} \\Big( i_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{A})\\wedge \\widetilde{\\x} \\wedge \\widetilde{A}\\Big)\\notag\\\\\n&- \\frac{g(X, A)}{g(X, V)^5}\\Big( i_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{A})\\wedge \\widetilde{\\x} \\wedge \\widetilde{V}\\Big)\\notag\\\\\n&-\\frac{g(X, A)}{g(X, V)^5}\\Big( i_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{V})\\wedge \\widetilde{\\x} \\wedge \\widetilde{A}\\Big)\\notag\\\\\n&+\\frac{g(X, A)^2}{g(X, V)^6}\\Big( i_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{V})\\wedge \\widetilde{\\x} \\wedge \\widetilde{V}\\Big)\n\\label{frfrstar}\n\\end{align}\nNow using lemma \\ref{fiveoneforms} yields\n\\begin{small}\n\\begin{align}\ni_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{A})\\wedge \\widetilde{\\x} \\wedge \\widetilde{A}=& \\big(A_{\\textup{K}} g(X, A)-X_{\\textup{K}} g(A, A)\\big)\\star \\widetilde{\\x}+ X_{\\textup{K}} g(X, A)\\star \\widetilde{A} - g(X, A)^2 g_{\\textup{K} a}\\star d y^a\\notag\\\\\ni_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{A})\\wedge \\widetilde{\\x} \\wedge \\widetilde{V}=& g(A, X)V_{\\textup{K}}\\star \\widetilde{\\x}+ X_{\\textup{K}} g(X, V)\\star \\widetilde{A} - g(X, A)g(X, V) g_{\\textup{K} a}\\star d y^a\\notag\\\\\ni_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{V})\\wedge \\widetilde{\\x} \\wedge \\widetilde{A}=&g(A, X)X_{\\textup{K}}\\star \\widetilde{V}- X_{\\textup{K}} g(X, A)\\star \\widetilde{\\x} - g(X, A)g(X, V)g_{\\textup{K} a} \\star d y^a\\notag\\\\\ni_{\\partial_{\\textup{K}}} \\star(\\widetilde{\\x}\\wedge \\widetilde{V})\\wedge \\widetilde{\\x} \\wedge \\widetilde{V}=&\\big(V_{\\textup{K}} g(X, V)+X_{\\textup{K}} \\big)\\star \\widetilde{\\x}+ 
X_{\\textup{K}} g(X, V)\\star \\widetilde{V} - g(X, V)^2 g_{\\textup{K} a}\\star d y^a\n\\label{relslem}\n\\end{align}\n\\end{small}\n Substituting (\\ref{relslem}) into (\\ref{frfrstar}) and expanding yields\n\\begin{align}\n\\frac{1}{\\kappa^2}i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{R}}\\wedge\\flw_{\\textup{R}}=& \\Bigg(\\frac{g(X, A)^2}{g(X, V)^2}-g(A, A)\\Bigg)\\frac{X_{\\textup{K}}\\star \\widetilde{\\x}}{g(X, V)^4}\n\\end{align}\nThus\n\\begin{align}\n\\mathrm{S}_{\\textup{K}}^{\\textup{R}}=& \\kappa^2 \\frac{\\epsilon_0}{2 }\\big(i_{\\partial_{\\textup{K}} }\\flw_{\\textup{R}}\\wedge\\star\\flw_{\\textup{R}}-i_{\\partial_{\\textup{K}} }\\star\\flw_{\\textup{R}}\\wedge\\flw_{\\textup{R}}\\big)\\notag\\\\\n=& \\frac{q^2}{16 \\pi^2 \\epsilon_0}\\Bigg(g(A, A)-\\frac{g(X, A)^2}{g(X, V)^2}\\Bigg)\\frac{X_{\\textup{K}}\\star \\widetilde{\\x}}{g(X, V)^4}\n\\end{align}\n\n\\end{proof}\n\\chapter{The resulting expression}\n\nIn this chapter we give the result of carrying out the integration in definition \\ref{pk_def}. We use a computer to carry out the calculation and make use of the Newman-Unti coordinate system. The input code for the MAPLE mathematical software can be found in appendix \\ref{maple_ptwo}. Here we state the result.\n\n\\section{Arbitrary co-moving frame}\nSetting $\\tau=\\tau_0+\\delta$ we expand $\\mathrm{S}_{\\textup{K}}$ in powers of $\\delta$. 
We adapt the global Lorentz frame such that\n\\begin{equation}\n\\begin{aligned}\n&y^j(C(\\tau_0))=0\\quad\\text{for} \\quad j=0, 1, 2, 3\n\\\\\n\\text{and} \\quad \\quad&\\dot{\\c}(\\tau_0)= \\frac{\\partial}{\\partial y^0},\\quad\\ddot{\\c}(\\tau_0)= a\\frac{\\partial}{\\partial y^3},\\quad \\dddot{\\c}(\\tau_0)=b^j \\frac{\\partial}{\\partial y^j}\n\\end{aligned}\n\\label{def_x_i}\n\\end{equation}\nwhere $a,b^j \\in\\mathbb{R} $ are constants given by\n\\begin{align}\na=\\sqrt{g(\\ddot{\\c}(\\tau_0), \\ddot{\\c}(\\tau_0))}, \\quad\\quad\nb^j=dy^j(\\dddot{\\c}(\\tau_0))\n\\label{def_ab}\n\\end{align}\nand from (\\ref{g_Cd_Cdd})\n\\begin{align*}\nb^{0}=a^2\n\\end{align*}\nThus expanding $\\dot{\\c}$ and $\\ddot{\\c}$ we have\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\c}(\\delta+\\tau_0)\n&=\n\\Big(1+ \\frac{b^0}{2}\\delta^2\\Big)\\frac{\\partial}{\\partial\n y^0}+\\frac{b^1}{2}\\delta^2\\frac{\\partial}{\\partial y^1}+\n\\frac{b^2}{2}\\delta^2\\frac{\\partial}{\\partial y^2}+\\Big(a \\delta+\n\\frac{b^3}{2}\\delta^2\\Big)\n\\frac{\\partial}{\\partial y^3}+\\mathcal{O}(\\delta^3),\n\\\\\n\\ddot{\\c}(\\delta+\\tau_0)\n&=\nb^0 \\delta \\frac{\\partial}{\\partial y^0}+b^1 \\delta \\frac{\\partial}{\\partial y^1}+b^2 \\delta \\frac{\\partial}{\\partial y^2}+\\Big(a+b^3 \\delta\\Big) \\frac{\\partial}{\\partial y^3}+\\mathcal{O}(\\delta^2)\n\\end{aligned}\n\\label{cd_expansion}\n\\end{equation}\nFrom (\\ref{def_V_A}) and (\\ref{tau_taur_R}) we have\n\\begin{align}\nV|_{(\\delta+\\tau_0,R,\\theta,\\phi)}\n&=\n\\dot{\\c}(\\delta+\\tau_0)\n\\qquad\\text{and}\\qquad\nA|_{(\\delta+\\tau_0,R,\\theta,\\phi)}=\\ddot{\\c}(\\delta+\\tau_0)\n\\label{va_expand}\n\\end{align}\nIt is useful to express $V$ and $A$ in mixed coordinates, with the\nbasis vectors in terms of the global Lorentz coordinates, but the\ncoefficients expressed in terms of the Newman-Unti coordinates.\n\n\\subsection*{The result}\nHere we outline the steps taken in the MAPLE code.\nWe begin with the 
expression (\\ref{LW_coords}) for the Li$\\text{\\'{e}}$nard-Wiechert potential $\\mathrm{A}$ in Newman-Unti\ncoordinates. Taking the exterior derivative we obtain\nthe field $2-$form $\\mathrm{F}$ and its Hodge dual $\\displaystyle{\\star\\mathrm{F}}$.\nWe obtain expressions for the four translational Killing vectors\n$\\frac{\\partial}{\\partial y^k}$ in Newman-Unti coordinates and using\n(\\ref{stress_def}) we obtain expressions for the four electromagnetic\nstress $3-$forms $\\mathrm{S}_{\\textup{K}} $. Substituting the expansions (\\ref{cd_expansion})\ninto these expressions we obtain the integrands, and finally using (\\ref{SigmaT_coords})\n we integrate over $\\Sigma_{\\textup{T}}$. The result is\n\\begin{equation}\n\\begin{aligned}\n\\frac{1}{q\\kappa}\\int_{\\Sigma_{\\textup{T}}} S_{0}&= -\\frac{1}{4}b^0 \\frac{\\delta_2^2-\\delta_1^2}{{R}_0}-\\frac{2}{3}a^2(\\delta_2-\\delta_1)-\\frac{2}{3}a b^3(\\delta_2^2-\\delta_1^2)+\\mathcal{O}(\\delta_1^3)+\\mathcal{O}(\\delta_2^3),\n\\\\\n\\frac{1}{q\\kappa}\\int_{\\Sigma_{\\textup{T}}} S_{1}&= +\\frac{1}{4}b^1 \\frac{\\delta_2^2-\\delta_1^2}{{R}_0}+\\mathcal{O}(\\delta_1^3)+\\mathcal{O}(\\delta_2^3),\n\\\\\n\\frac{1}{q\\kappa}\\int_{\\Sigma_{\\textup{T}}} S_{2}&= +\\frac{1}{4}b^2 \\frac{\\delta_2^2-\\delta_1^2}{{R}_0}+\\mathcal{O}(\\delta_1^3)+\\mathcal{O}(\\delta_2^3),\n\\\\\n\\frac{1}{q\\kappa}\\int_{\\Sigma_{\\textup{T}}} S_{3}&=+\\frac{1}{4}b^3\n\\frac{\\delta_2^2-\\delta_1^2}{{R}_0}+\\frac{1}{2}a\\frac{\\delta_2-\\delta_1}{{R}_0}+\\frac{1}{3}a^3(\\delta_2^2-\\delta_1^2)\n+\\mathcal{O}(\\delta_1^3)+\\mathcal{O}(\\delta_2^3)\n\\end{aligned}\n\\label{int_Sk}\n\\end{equation}\nwhere\n\\begin{align*}\n\\delta_1=\\tau_1-\\tau_0, \\quad\\quad \\delta_2=\\tau_2-\\tau_0\n\\end{align*}\nand $\\kappa$ is given by (\\ref{def_kappa}).\n\nCombining (\\ref{int_Sk}) into a single expression and using\n(\\ref{def_Pdot}) and (\\ref{def_Pdot_dual}) we obtain the following\nexpression for $\\pdot(\\tau_0)\\in 
\\textup{T}_{C(\\tau_0)}\\mathcal{M}$:\n\\begin{align}\n\\frac{1}{q\\kappa}\\pdot(\\tau_0)=&\\tfrac{2}{3}a^2 \\tfrac{\\partial}{\\partial y^0} +\\lim_{{R}_0 \\rightarrow 0} \\frac{1}{2{R}_0}a \\tfrac{\\partial}{\\partial y^3}+ \\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0\\\\\\tau_2 \\rightarrow \\tau_0}}}$} \\Big( \\frac{\\tau_1+\\tau_2-2\\tau_0}{4 {R}_0}\\Big)b^j \\tfrac{\\partial}{\\partial y^j} +\\mathcal{O}\\big(\\delta_1^2\\big)+\\mathcal{O}\\big(\\delta_2^2\\big).\n\\label{res_raw}\n\\end{align}\nHence, from the definition of $\\lambda$ (\\ref{def_lambda}) and (\\ref{def_x_i}),\n\\begin{equation}\n\\begin{aligned}\n\\frac{1}{q\\kappa}\\pdot(\\tau_0)&=+\\tfrac{2}{3}g\\big(\\ddot{\\c}(\\tau_0),\n\\ddot{\\c}(\\tau_0)\\big)\\dot{\\c}(\\tau_0) + \\lim_{{R}_0 \\rightarrow 0}\n\\frac{1}{2{R}_0}\\ddot{\\c}(\\tau_0)+\\lambda\\dddot{\\c}(\\tau_0)\n+\\mathcal{O}\\big(\\delta_1^2\\big)+\\mathcal{O}\\big(\\delta_2^2\\big).\n\\end{aligned}\n\\label{Pdot_res_expan}\n\\end{equation}\nThe first term in (\\ref{Pdot_res_expan}) is the standard radiation\nreaction term and the second term is the singular term to be renormalized.\nThe third term is proportional to $\\dddot{\\c}(\\tau_0)$ and therefore may\nbe recognised as the Schott term, provided the coefficient is well\ndefined in the limit.\n\nIf $\\lambda$ is chosen to be finite it follows immediately that all\nhigher order terms in the series vanish. This is because ${R}_0^{-1}$ is\nthe most divergent power of ${R}_0$ appearing in the\nseries. Mathematically we are free to choose $\\lambda$ to diverge, in\nwhich case higher order terms could be made finite. 
However, this would require\nextra renormalization in order to accommodate the $\\lambda$ terms, and\nthe resulting equation of motion would not resemble the\nLorentz-Abraham-Dirac equation.\n\nChoosing $\\lambda$ to be finite yields for\n$\\pdot_{\\text{EM}}(\\tau)\\in \\textup{T}_{C(\\tau)}\\mathcal{M}$\n\\begin{align}\n\\frac{1}{q\\kappa}\\dot{\\mathrm{P}}_{\\text{EM}} &= \\tfrac23 g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c} +\\lambda \\dddot{\\c} +\\lim_{{R}_0 \\rightarrow 0}\\frac{1}{2 {R}_0}\\ddot{\\c}.\n\\end{align}\nThe value of $\\lambda$ may be fixed by satisfying the orthogonality\ncondition (\\ref{g_Cd_Cdd}),\n\\begin{align}\n0=\\frac{1}{q\\kappa}g(\\pdot_{\\text{EM}}, \\dot{\\c})&=-\\tfrac23g(\\ddot{\\c}, \\ddot{\\c}) +\\lambda g(\\dddot{\\c}, \\dot{\\c})=-(\\tfrac23+\\lambda )g(\\ddot{\\c}, \\ddot{\\c}).\n\\end{align}\nTherefore $\\lambda=-\\tfrac{2}{3}$ and the final covariant expression\nfor $f_{\\textup{self}}$ is given by\n\\begin{align}\nf_{\\textup{self}}=-\\pdot_{\\textup{EM}} &= \\tfrac23 q\\kappa \\big(\\dddot{\\c}-g(\\ddot{\\c}, \\ddot{\\c})\\dot{\\c}\\big)-\\lim_{{R}_0 \\rightarrow 0}\\frac{q\\kappa}{2 {R}_0}\\ddot{\\c},\n\\end{align}\nwhich is identical to (\\ref{f_self}).\n\\section{Conclusion}\n\nWe have shown that the complete self force may be obtained directly from the Li$\\text{\\'{e}}$nard-Wiechert stress 3-forms when using the null geometry with the Bhabha tube as the domain of integration. This eliminates the need to introduce the extra ad hoc term in \\eqref{P_inc_Schott}. It also shows that the reason for the missing term in previous calculations is the procedure followed in taking the limits, and not, as proposed by Gal'tsov and Spirin \\cite{Galtsov02}, the nature of the coordinates used.\n\n We have seen that a requirement for the term to appear is that the ratio of limits $\\lambda$, which describes the way in which the Bhabha tube is collapsed onto the worldline, is made finite. 
This is a natural choice because it demands that $\\delta_1$, $\\delta_2$ and ${R}_0$ be of the same order of magnitude. The specific value $\\lambda=-\\tfrac23$ is fixed by the orthogonality condition (\\ref{g_Cd_Cdd}); however, the physical justification for imposing this particular geometry on the Bhabha tube is currently unknown.\n\nConsider figure \\ref{limits}. It is easily seen that definition \\ref{pk_def} is equivalent to\n\\begin{align}\n\\pdot_{\\textup{K}} (\\tau_0)=&\\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_2\\\\\\tau_2 \\rightarrow \\tau_0}}}$}\\bigg(\\frac{1}{\\tau_1-\\tau_2}\\int_{\\Sigma_{\\textup{T}}}\\mathrm{S}_{\\textup{K}} \\bigg).\n\\label{def_Pdot_lim}\n\\end{align}\nIt turns out that the limit $\\tau_1\\rightarrow \\tau_2$ may be taken before the other two limits. This results in the following lemma.\n\\begin{lemma}\n\\label{lem_final}\n\\begin{align}\n\\pdot_{\\textup{K}} (\\tau_0)=&\\raisebox{0.25cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\ \\tau_1 \\rightarrow \\tau_0}}}$}\\int_{S^2(\\tau_1)}i_{\\frac{\\partial}{\\partial \\tau}}\\mathrm{S}_{\\textup{K}}\n\\label{sphere_def}\n\\end{align}\nwhere $S^2(\\tau_1)$ is the 2-sphere given in Newman-Unti coordinates by\n\\begin{align}\nS^2 &= \\Big\\{(\\tau , R, \\theta, \\phi)\\Big| \\tau=\\tau_1,\\quad R={R}_0,\\quad 0\\leq\\theta\\leq \\pi, \\quad 0\\leq\\phi\\leq 2\\pi\\Big\\}\n\\label{sphere_def_lem}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nIn Newman-Unti coordinates the side $\\Sigma_{\\textup{T}}$ of the Bhabha tube is given by\n\\begin{align}\n\\Sigma_{\\textup{T}} &= \\Big\\{(\\tau, R, \\theta, \\phi)\\Big|\\tau_1\\leq\\tau\\leq\\tau_2,\\quad R={R}_0,\\quad 0\\leq\\theta\\leq\\pi,\\quad 0\\leq \\phi\\leq 2\\pi\\Big\\}.\n\\label{SigmaT_coords}\n\\end{align}\nIt follows from (\\ref{def_Pdot_lim}) that\n\\begin{align}\n\\pdot_{\\textup{K}} (\\tau_0)=&\\raisebox{0.4cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 
\\rightarrow 0\\\\\\tau_2 \\rightarrow \\tau_1\\\\\\tau_1 \\rightarrow \\tau_0}}}$}\\bigg(\\frac{1}{\\tau_1-\\tau_2}\\int_{\\tau=\\tau_1}^{\\tau_2}\\int_{S^2(\\tau)} \\mathrm{S}_{\\textup{K}} (\\tau, {R}_0, \\theta, \\phi)\\bigg)\\notag\\\\\n=&\\raisebox{0.25cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0}}}$}\\Bigg(\\lim_{\\tau_2 \\rightarrow \\tau_1}\\Big(\\frac{1}{\\tau_1-\\tau_2}\\int_{\\tau=\\tau_1}^{\\tau_2}\\int_{S^2(\\tau)} \\mathrm{S}_{\\textup{K}} (\\tau, {R}_0, \\theta, \\phi)\\Big)\\Bigg).\n\\end{align}\nApplying theorem \\ref{lie_int} yields the result.\n\\end{proof}\n\nLemma \\ref{lem_final} shows that the important step in the calculation is taking the limits ${R}_0 \\rightarrow 0$ and $\\tau_1=\\tau_2 \\rightarrow \\tau_0$ simultaneously. If we use (\\ref{sphere_def}) instead of (\\ref{pk_def}) for our definition of the self force then we obtain the following in place of (\\ref{res_raw}),\n\n\\begin{align}\n\\frac{1}{q\\kappa}\\pdot(\\tau_0)=&\\tfrac{2}{3}a^2 \\tfrac{\\partial}{\\partial y^0} +\\lim_{{R}_0 \\rightarrow 0} \\frac{1}{2{R}_0}a \\tfrac{\\partial}{\\partial y^3}+ \\raisebox{0.25cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0}}}$} \\Big( \\frac{\\tau_1-\\tau_0}{2 {R}_0}\\Big)b^j \\tfrac{\\partial}{\\partial y^j} +\\mathcal{O}\\big(\\delta_1^2\\big)+\\mathcal{O}\\big(\\delta_2^2\\big),\n\\end{align}\nand the orthogonality condition (\\ref{g_Cd_Cdd}) yields the key result\n\\begin{align}\n\\raisebox{0.25cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\tau_1 \\rightarrow \\tau_0}}}$} \\Big( \\frac{\\tau_1-\\tau_0}{2 {R}_0}\\Big)= \\raisebox{0.25cm}{$\\displaystyle{\\lim_{\\substack{{R}_0 \\rightarrow 0\\\\\\delta_1 \\rightarrow 0}}}$}\\Big( \\frac{\\delta_1}{2 {R}_0}\\Big)=-\\frac{2}{3}.\n\\label{key_result}\n\\end{align}\nIn Dirac's calculation $\\tau_1 = \\tau_0$ and so the Schott term arises naturally without having to take this limit. 
However, when using the Dirac geometry the Li$\\text{\\'{e}}$nard-Wiechert potential and stress forms have to be expanded in a Taylor series about the retarded time $\\tau_r$ (see appendix \\ref{dirac_lw}). In this process the relation (\\ref{dirac_taur}) is used, which relates ${R}_{\\textup{D}0}$ and $\\delta_r=\\tau_D-\\tau_r$ in a manner similar to (\\ref{key_result}).\n\n\n\n\n\n\n\\newpage\n\n\\addcontentsline{toc}{chapter}{Part II - A new approach to the reduction of wakefields}\n\n\\vspace{3cm}\n\\begin{center}\n\\begin{minipage}{0.95\\textwidth}\n\\vspace{6cm}\n\\begin{center}{\\bf \\huge\n\nPART II\\\\\nA new approach to the reduction of wakefields\n\n} \\vspace{3cm}\n\n\\end{center}\n\\end{minipage}\\vspace{10cm}\n\\end{center}\n\n\n\n\\chapter{Introduction}\n\n\n\\section{Collimation and wakefields in a particle accelerator}\n\n\n It is common for accelerators to have bunches of order $10^8$ particles or more. For example, ALICE, at Daresbury Laboratory, uses bunches with charges of $20\\,$pC to $80\\,$pC, which represent approximately $1.25\\times 10^8$ to $5\\times 10^8$ electrons. As the bunch traverses the accelerator some of these particles will be perturbed from the ideal orbit or trajectory. This may be due to collective instabilities elsewhere in the beam or deflection due to residual gas that could not be removed from the vacuum chamber. In addition, particles from the wall of the beam pipe can be accelerated in a process known as \\emph{self injection}. All of these stray particles form a low-density region of charge around the beam, which is called the \\emph{beam halo}.\n\n The presence of a large beam halo is generally undesirable. In colliders the halo particles reduce the accuracy of measurements at the interaction region, and in medical accelerators they can cause severe consequences as a result of highly energetic particles missing the desired target. 
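The electron counts quoted above for the ALICE bunch charges follow directly from dividing the bunch charge by the elementary charge. A minimal sketch of this arithmetic (the charge values are those quoted in the text; the value of $e$ is the standard elementary charge):

```python
# Sanity check on the bunch-charge figures: electrons per bunch
# = bunch charge / elementary charge.
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def electrons_in_bunch(charge_pC):
    """Approximate number of electrons in a bunch of the given charge (in pC)."""
    return charge_pC * 1e-12 / E_CHARGE

print(f"20 pC -> {electrons_in_bunch(20):.3g} electrons")  # ~1.25e8
print(f"80 pC -> {electrons_in_bunch(80):.3g} electrons")  # ~5e8
```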
In order to remove the halo from a beam, specially designed apparatus known as collimating systems are used.\n\n In general, collimating systems incorporate regions where the cross-sectional area of the beam pipe is reduced. Collimators are specific sections of the beam pipe which undergo a narrowing in one or both of the transverse dimensions. There are many possible configurations depending on the design requirements of individual projects. In high energy accelerators the presence of collimators can also have an adverse effect on the beam due to so-called \\emph{wakefields}. Electromagnetic fields due to highly relativistic particle beams can interact with the walls of the collimator and induce \\emph{image charges} on the wall. The fields resulting from these image charges are known as wakefields and can affect the motion of trailing charges, often inducing instabilities and emittance growth. Generally, fields caused by large scale geometric discontinuities, for example in cavities and collimators, are known as \\emph{geometric wakefields}, and fields caused by resistivity in the wall are known as \\emph{resistive wakefields}.\n\n\\section{Present approaches to the reduction of wakefields}\n\nThere is much interest in methods for reducing the geometric wakefields produced by a charged bunch of particles passing through a\ncollimator. The customary approach is to reduce the taper angle of the collimator. Early work on the calculation of wakefields from smoothly tapered structures was pioneered by Yokoya \\cite{Yokoya90}, Warnock \\cite{Warnock93} and Stupakov \\cite{Stupakov96, Stupakov01}. More recent investigations by Stupakov, Bane and Zagorodnov \\cite{Stupakov07, Bane07, Bane10} and Podobedov and Krinsky \\cite{Podobedov06, Podobedov07} have also looked at the effect of altering the transverse cross section of the collimator. 
A detailed analysis of the numerical and analytic calculation of collimator wakefields, including an informative introduction to the topic, may be found in \\cite{Smith11}. All of the present methods for minimizing wakefields rely on altering the geometry of the collimator. In this thesis we propose a new method where the trajectory of the beam is altered.\n\n\\section{The relativistic Li$\\text{\\'{e}}$nard-Wiechert field}\n\nA relativistic particle undergoing nonlinear acceleration will generate a radiation field primarily in the instantaneous direction of motion (see figure \\ref{fig_SR}). This is known as \\begin{bf}synchrotron radiation\\end{bf}. For high $\\gamma$-factors the bulk of the field lies inside an angle $\\Delta\\phi\\sim 1\/\\gamma$ where $\\Delta\\phi$ is the angle from the direction of motion.\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{0.4cm}\n\\begin{picture}(30, 15)\n\\put(0, 0){\\includegraphics[ width=30\\unitlength]{synchrotron3.pdf}}\n\\put(15.3, 13.7){$\\Delta\\phi$}\n\\end{picture}\n\\end{center}\n\\caption{Synchrotron radiation}\n\\label{fig_SR}\n\\end{figure}\nBy contrast, a relativistic particle with constant velocity will generate no radiation field. This is easily seen from the form of $\\flw_{\\textup{R}}$ in equation (\\ref{FR_def}) since when the acceleration is zero this term vanishes. It is well known in accelerator physics that the Coulomb field $\\flw_{\\textup{C}}$ generated by a relativistic particle moving with constant velocity is flattened towards the plane orthogonal to the direction of motion, and is often called a \\begin{bf}pancake field\\end{bf} (see figure \\ref{fig_pcake}).\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{pancake2.pdf}\n\\end{center}\n\\caption{The pancake field }\n\\label{fig_pcake}\n\\end{figure}\n\nConsider figure \\ref{fig_Fields}. 
The magnitudes of the Coulombic $\\flw_{\\textup{C}}$ and radiative $\\flw_{\\textup{R}}$ Li$\\text{\\'{e}}$nard-Wiechert fields are plotted as heights above the sphere for high $\\gamma$. In both cases the field is distributed in a narrow spike protruding from the sphere, with the bulk of the field lying inside an angle $\\Delta\\phi\\sim 1\/\\gamma$ from the direction of motion.\n\n\\begin{figure}\n\\setlength{\\unitlength}{0.4cm}\n\\begin{center}\n\\begin{picture}(30, 10)\n\\put(0, 1.8){\\includegraphics[width=14\\unitlength,viewport=22 293 553 515]{Coulomb-field.pdf}}\n\\put(15, 1){\\includegraphics[width=18\\unitlength,viewport=25 306 540 500]{Radiation-field.pdf}}\n\\put(3, 0){Coulomb field}\n\\put(18, 0){Radiation field}\n\\end{picture}\n\\end{center}\n\\caption[relativistic Li$\\text{\\'{e}}$nard-Wiechert fields as heights above the sphere]{The magnitude of the Coulomb and radiative fields for a high\n $\\gamma$, given as height above the sphere. The bulk of the fields is\nin the direction of motion.}\n\\label{fig_Fields}\n\\end{figure}\n\nAt first glance the plot of the Coulomb field seems to contradict figure \\ref{fig_pcake} which shows the field flattened in the transverse plane. It is reasonable to ask how these two\nradically different behaviours can be consistent.\n\\begin{figure}\n\\centerline{\n\\setlength{\\unitlength}{0.09\\textwidth}\n\\begin{picture}(10,3)\n\\put(0,0){\\includegraphics[width=10\\unitlength]{Catchup.pdf}}\n\\put(8.6,1.5){$h$}\n\\put(5,0.7){$v{t_\\text{s}}$}\n\\put(5,2.1){$c{t_\\text{s}}$}\n\\put(8.6,2.5){R}\n\\put(8.6,.7){Q}\n\\put(1.0,.7){P}\n\\end{picture}\n}\n\\caption{Showing the communication between a particle and its pancake}\n\\label{fig_Catchup}\n\\end{figure}\nConsider a particle moving at velocity $v$ along the horizontal line $PQ$\nin figure \\ref{fig_Catchup}. Let $R$ be a point in the pancake a\ndistance $h$ from the particle, when the particle is at $Q$. 
The last\npoint at which the particle could communicate with the point $R$ is at\n$P$, a length $v{t_\\text{s}}$ from $Q$. Here ${t_\\text{s}}$ is the time\nit takes for light to travel from $P$ to $R$ and also the time for\nthe particle to travel from $P$ to $Q$. Then\n$\\norm{PR}=c{t_\\text{s}}$ and $\\norm{PQ}=v{t_\\text{s}}$. Thus\n$(c{t_\\text{s}})^2=h^2+(v{t_\\text{s}})^2$. Hence\n$h^2=c^2{t_\\text{s}}^2(1-v^2\/c^2)=c^2{t_\\text{s}}^2\/\\gamma^{2}$ so\n${t_\\text{s}}=\\gamma h\/c$ and $\\norm{PQ}=\\gamma h v\/c$. Thus a\nparticle needs to have travelled in a straight line for a length\n$\\norm{PQ}=\\gamma h v\/c$ in order for a pancake of radius $h$ to\ndevelop. The fields which originate at $P$ and arrive at $R$ make an\nangle of approximately $\\norm{RQ}\/\\norm{PR}=1\/\\gamma$ with the direction of motion. This is\nconsistent with figure \\ref{fig_Fields}.\n\n\n\n\n\\section{Proposal}\nWe investigate the possibility of reducing wakefields in accelerators by placing structures which give rise to geometric wakefields, such as collimators and cavities, directly after a bending dipole. We model a beam of charged particles as a one dimensional continuum of point charges undergoing the same motion in space, but at a different time.\n In our analysis we envisage a collimator as the source of wakefields and we calculate the field strength at the entrance of the collimator due to the collective Li$\\text{\\'{e}}$nard-Wiechert field of the particles in the beam. We do not consider any boundary conditions imposed by the beam pipe or the collimator itself. We propose that the new method of reduction of wakefields be used parasitically on existing bends so that there is no additional beam disruption due to coherent synchrotron radiation wakefields (CSR wakes) or loss of energy due to synchrotron radiation (SR). 
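The pancake-development estimate above can be checked numerically. A minimal sketch, using purely illustrative values $\gamma=1000$ and $h=1\,$cm (these numbers are assumptions, not taken from the text):

```python
import math

def pancake_development_length(gamma, h):
    """Straight-line distance |PQ| = gamma*h*v/c that a particle must
    travel for a pancake field of radius h to develop, using the
    relation t_s = gamma*h/c derived in the text."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
    return gamma * h * beta

# Illustrative values: gamma = 1000, pancake radius h = 1 cm.
gamma, h = 1000, 0.01
print(f"|PQ| ~ {pancake_development_length(gamma, h):.2f} m")
print(f"field angle ~ 1/gamma = {1.0 / gamma:.1e} rad")
```

For highly relativistic motion $\beta\approx1$, so the distance reduces to the $\gamma h$ used later in the proposal.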
In particular, an accelerator which requires the following:\n\\begin{itemize}\n\\item\nShort bunches (bunch length much shorter than the aperture of the\ncollimator).\n\\item\nThe bending of the bunches, via the use of dipoles.\n\\item\nCollimation.\n\\end{itemize}\ncan achieve the collimation for free, i.e. with no additional loss of\nenergy or disruption to the bunches from geometric or CSR wakes, by\nplacing the collimator just after the bend.\n\nWe have seen that in order for a pancake of radius $h$ to develop the particle needs to have travelled in a straight line for a distance $\\gamma h v\/c$. For highly relativistic motion this is approximately $\\gamma h$. Our proposed method of wakefield reduction relies on this result.\n The idea is to bend the beam slightly before it enters the collimator (see figure \\ref{setup}). Most of the Coulomb\nfield generated by the particle before the bend will continue in a straight line. By sufficiently\nenlarging the beam pipe in this direction the wakefield due to this part of the\nfield can be neglected. If the distance, $Z$, of the\nstraight line segment from the terminus of the bend to the centre\nof the collimator is sufficiently small, then the resulting pancake field will be\ntoo small to reach the sides of the structure.
Of course bending\nthe beam will generate additional radiation fields; however, by a judicious choice of\nbeam geometry these can be minimized.\n\\begin{figure}\n\\centerline{\n\\setlength{\\unitlength}{0.013\\textwidth}\n\\begin{picture}(100,50)\n\\put(0,0){\\includegraphics[width=100\\unitlength]{path.pdf}}\n\\put(2,15){\\makebox(0,0)[tl]{\\rotatebox{-8}{Path of beam $\\Big(\\!x(\\tau),y(\\tau),z(\\tau)\\!\\Big)$}}}\n\\put(65,35){\\makebox(0,0)[tl]{\\rotatebox{-24}{Collimator}}}\n\\put(40,40){\\makebox(0,0)[tl]{Field measurement}}\n\\put(40,37){\\makebox(0,0)[tl]{point ${\\boldsymbol X}$}}\n\\put(35,17){\\makebox(0,0)[tr]{$R$}}\n\\put(37,20){\\makebox(0,0)[tl]{$\\Theta$}}\n\\put(61,16){\\makebox(0,0)[t]{$Z$}}\n\\put(92.1,21){\\makebox(0,0)[tr]{$2h$}}\n\\put(63,7){\\makebox(0,0)[tl]{$z$}}\n\\put(85,2){\\makebox(0,0)[tl]{$x$}}\n\\put(55,44){\\makebox(0,0)[b]{$y$}}\n\\end{picture}\n}\n\\caption{Setup for beam trajectory and collimator}\n\\label{setup}\n\\end{figure}\n\n\nLet $h$ denote half the aperture of the\ncollimator and let $L$ represent the spatial length of the bunch. The following two scenarios will be considered:\n\\begin{itemize}\n\\item Long smooth bunches,\nwhere $L>h$ and\nany variation in the density of the bunch is over length\nscales longer than $h$,\n\\item Bunches where variation in density is over short length scales, less\nthan about $0.2h$. This includes the case of very short bunches where\n$L\\ll 0.2 h$.\n\\end{itemize}\n\nBoth scenarios are applicable to present day machines, where the bunch length depends upon the\nspecific objectives and engineering considerations of individual projects.\n\nIn chapter \\ref{chap_pointline} we show that the coherent electric (magnetic) fields due to a bunch modeled as a 1D continuum of point particles are given by the convolution of the electric (magnetic) field due to a single particle with the charge profile.
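This convolution structure is easy to demonstrate numerically. Below is a minimal sketch (not the MAPLE code used later); the Gaussian lab-frame profile and the narrow toy pulse standing in for the single-particle field are illustrative assumptions:

```python
import numpy as np

# Arrival-time grid at a fixed observation point (arbitrary units).
T = np.linspace(-5.0, 5.0, 2001)
dT = T[1] - T[0]

# Normalized Gaussian lab-frame charge profile rho_Lab(T).
sigma = 0.5
rho_lab = np.exp(-T**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Toy single-particle field E_0(X, T): a narrow pulse standing in for
# the sharply peaked Lienard-Wiechert field of one charge.
E0 = np.exp(-T**2 / (2 * 0.05**2))

# Coherent field of the bunch: the convolution of E_0 with the profile.
E_tot = np.convolve(rho_lab, E0, mode="same") * dT

# Smearing the narrow pulse over the bunch reduces the peak field.
print(E_tot.max() < E0.max())  # True
```

The same convolution applies to the magnetic field; only the single-particle kernel changes.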
In chapter \\ref{bendingbeams} we carry out a numerical investigation using the mathematical software MAPLE. Assuming a Gaussian charge profile, we minimize the electric field generated by the bunch by calculating the field due to different beam trajectories. Having optimized the trajectory, we calculate the electric field at a specific point, representing a point on the collimator wall, for a selection of different bunch lengths which are attainable at present day facilities.\nCalculating the secondary electromagnetic fields generated by the collimator is a\nboundary value problem, hence calculating the full wakefield kick due to the collimator and\na bent beam would require detailed knowledge of the geometry and material\nproperties of the beam pipe. This will not be undertaken in this thesis. However, we will show that the field incident on the boundary may be reduced by a factor of 7, and since the wakefields are, to a large extent, proportional to the fields at the boundary, the wakefields will automatically be reduced by approximately the same factor.\nWe will find that for short bunches, or bunches with large\namounts of micro-bunching, it is possible to make a significant reduction\nin wakefields. This is applicable to present day free electron lasers, which employ bunch compressors to produce\nvery short bunches; for example, at LCLS $L\/c \\approx 0.008$ps. Assuming a collimator of half aperture $h=0.5$mm, in this case $L=0.0048h$. It turns out that electromagnetic fields due to\nlong smooth bunches may not be reduced significantly.
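The LCLS figure quoted above follows from a one-line calculation (a quick sketch; the value of $c$ is the only input not stated in the text):

```python
c = 2.9979e8            # speed of light [m/s]
L = c * 0.008e-12       # spatial bunch length for L/c = 0.008 ps [m]
h = 0.5e-3              # collimator half aperture [m]
print(round(L / h, 4))  # 0.0048, as quoted
```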
In many present day colliders the bunches are designed to be long and\nsmooth; however, in the future short bunch colliders may be desirable (see Table \\ref{table1}).\n\n\n\\begin{table}\n\\begin{center}\n\\caption{Bunch lengths for some modern colliders and FELs}\n\\label{table1}\n\\begin{tabular}{|c| c |c|}\\hline\nCollider & Year of & Bunch length [ps]\\\\\n &Commissioning & \\\\\\hline\nSLC, SLAC & $1989$ & $3$\\\\\nILC & $\\geq 2015$ & $1$\\\\\nCLIC & $\\geq 2025$ & $0.15$\\\\\\hline\nFree Electron Laser & & Min. bunch length [ps]\\\\\\hline\nFLASH, DESY & $2005$ & $0.05$\\\\\nLCLS, SLAC & $2009$ & $0.008$\\\\\nXFEL, DESY & $2014$ & $0.08$\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\chapter{The field of a 1D continuum of point charges}\n\n\\label{chap_pointline}\n\nIn this chapter we consider the field generated by a 1D continuum of point charges on an arbitrary trajectory. The key result is that the electric field for the continuum is given by the convolution of the electric field for a single point charge with the charge profile. This result will be used in the next chapter, where we adopt the 1D continuum as our model for a bunch of particles in an accelerator.\n\\section{The Li\\'{e}nard-Wiechert field in $3$-vector notation}\n\n\\begin{definition}\n\\label{def_X_three}\nGiven a choice of time coordinate $t$ such that $\\frac{\\partial}{\\partial t}$ is Killing, we can write $\\mathcal{M}=\\mathbb{R} \\times \\underline{\\m}$, where $\\underline{\\m}$ is Euclidean three space. We denote the points $x\\in \\mathcal{M}\\backslash C$ and $C(\\tau)\\in \\mathcal{M}$ by\n\\begin{align}\nx=(c T, \\VX), \\qquad \\text{and} \\qquad C(\\tau)=(c t,{\\boldsymbol x})\n\\end{align}\nwhere $T, t\\in \\mathbb{R} $ and $\\VX, {\\boldsymbol x} \\in \\underline{\\m}$.
The null displacement vector $X$ is given by\n\\begin{align}\nX=(c T-c t, \\VX-{\\boldsymbol x}), \\qquad \\textup{where} \\qquad t=\\gamma\\tau_r(c T, \\VX)\n\\end{align}\nwhere the difference $\\VX-{\\boldsymbol x}$ is a $3$-vector at point $\\VX\\in \\underline{\\m}$. It follows from the definition of $\\tau_r$ that $T>t$.\n\\end{definition}\n\\begin{definition}\n\\label{def_rn}\nThe spatial displacement between the field point $\\VX$ and the emission point ${\\boldsymbol x}$ will be denoted by\n\\begin{align}\n\\boldsymbol r=||\\VX-{\\boldsymbol x}||,\\label{r_def}\n\\end{align}\nwhere $||.||$ is the Euclidean norm. We define the unit $3$-vector $\\Vn \\in \\textup{T}_{\\VX}\\underline{\\m}$ by\n\\begin{align}\n\\Vn=\\frac{\\VX-{\\boldsymbol x}}{||\\VX-{\\boldsymbol x}||}=\\frac{\\VX-{\\boldsymbol x}}{\\boldsymbol r}, \\qquad \\Vn \\centerdot \\Vn=1\n\\end{align}\nwhere the dot denotes the standard scalar product.\n\\end{definition}\n\\begin{lemma}\n\\label{rt_lem}\nIt follows from the null condition that\n\\begin{align}\n\\boldsymbol r=c T-c t\\label{rt}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\\begin{align}\n0=g(X, X)=&g\\big((c T-c t, \\VX-{\\boldsymbol x}), (c T-c t, \\VX-{\\boldsymbol x})\\big)\\notag\\\\\n=&-(c T-c t)^2 +||\\VX-{\\boldsymbol x}||^2\\notag\n\\end{align}\nThus\n\\begin{align}\n|c T-c t|=||\\VX-{\\boldsymbol x}||=\\boldsymbol r\n\\end{align}\nThe result (\\ref{rt}) follows on noting that $T>t$.\n\\end{proof}\n\nIt follows trivially from definitions \\ref{def_X_three} and \\ref{def_rn} and lemma \\ref{rt_lem} that the null 4-vector is given by\n\\begin{align}\nX=\\boldsymbol r(1, \\Vn)\\label{x_three}\n\\end{align}\n\n\\begin{definition}\n\\label{beta_def}\nThe normalized Newtonian velocity $\\Vbeta$ and acceleration ${\\boldsymbol a}$ are defined by\n\\begin{align}\n\\Vbeta=&\\frac{1}{c } \\frac{d{\\boldsymbol x}}{d t }=\\frac{1}{c\\gamma} \\frac{d{\\boldsymbol x}}{d\\tau}, \\qquad\\text{and}\\qquad {\\boldsymbol a}= \\frac{d\\Vbeta}{d
t }=\\frac{1}{\\gamma} \\frac{d\\Vbeta}{d\\tau}\n\\end{align}\n\\begin{lemma}\n\\label{lemfourvecs}\nThe $4$-vectors $\\dot{\\c}, \\ddot{\\c} \\in \\textup{T}_{C(\\tau)}\\mathcal{M}$ are given by\n\\begin{align}\n&\\dot{\\c}=c\\gamma(1, \\Vbeta)\\notag\\\\\n\\textup{and} \\qquad &\\ddot{\\c}=c\\gamma^4({\\boldsymbol a}\\centerdot\\Vbeta)(1, \\Vbeta)+c\\gamma^2(0, {\\boldsymbol a})\n\\label{v_three}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFirst note that\n\\begin{align*}\n \\frac{d\\gamma}{d t}= \\frac{d}{d t}(1-\\Vbeta\\centerdot\\Vbeta)^{-\\frac{1}{2}} =-\\frac{1}{2}(1-\\Vbeta\\centerdot\\Vbeta)^{-\\frac{3}{2}}\\frac{d}{dt}(-\\Vbeta\\centerdot\\Vbeta),\n\\end{align*}\nthus since $\\frac{d}{dt}(-\\Vbeta\\centerdot\\Vbeta)=-2{\\boldsymbol a}\\centerdot\\Vbeta$ it follows that\n\\begin{align}\n \\frac{d\\gamma}{d t}=\\gamma^3{\\boldsymbol a}\\centerdot\\Vbeta\n \\label{gam_t}\n\\end{align}\nFrom (\\ref{def_X_three}) we have $C=(c t, {\\boldsymbol x})$. Thus\n\\begin{align*}\n \\dot{\\c}=\\frac{dC}{d \\tau}=\\gamma \\frac{d C}{d t}=\\gamma (c, \\frac{d{\\boldsymbol x}}{d t})\\notag\n\\end{align*}\nAlso\n\\begin{align*}\n \\ddot{\\c}=\\frac{d\\dot{\\c}}{d\\tau}=\\gamma \\frac{d\\dot{\\c}}{d t}=\\gamma c \\frac{d \\gamma}{d t} (1, \\Vbeta)+\\gamma^2 c \\frac{d}{d t}(1, \\Vbeta).\n\\end{align*}\nSubstituting (\\ref{gam_t}) and (\\ref{beta_def}) yields the result (\\ref{v_three}).\n\\end{proof}\n\\end{definition}\n\\begin{lemma}The following relations are true\n\\begin{align}\n&g(X, \\dot{\\c})=\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1)\\notag\\\\\n\\textup{and} \\qquad &g(X, \\ddot{\\c})= \\boldsymbol r c \\gamma^4 (\\Vbeta\\centerdot\\Vn-1)({\\boldsymbol a}\\centerdot\\Vbeta)+\\boldsymbol r c \\gamma^2 ({\\boldsymbol a}\\centerdot\\Vn)\\label{gwv_three}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\ng(X, \\dot{\\c})=& g\\big( \\boldsymbol r (1, \\Vn), c \\gamma (1, \\Vbeta)\\big)= \\boldsymbol r c \\gamma
(\\Vbeta\\centerdot\\Vn-1)\n\\end{align*}\nAlso\n\\begin{align*}\ng(X, \\ddot{\\c})=&g\\big(\\boldsymbol r (1, \\Vn), \\gamma c \\frac{d \\gamma}{d t} (1, \\Vbeta)\\big)+ g\\big(\\boldsymbol r (1, \\Vn), \\gamma^2 c (0, {\\boldsymbol a}) \\big)\\\\\n=&\\boldsymbol r c \\gamma^4 \\big(({\\boldsymbol a} \\centerdot\\Vbeta)\\Vbeta\\centerdot\\Vn-({\\boldsymbol a}\\centerdot\\Vbeta)\\big)+\\boldsymbol r c \\gamma^2 ({\\boldsymbol a}\\centerdot\\Vn).\n\\end{align*}\n\\end{proof}\n\\begin{lemma}\n\\label{ecer_lem}\nIn chapter \\ref{maxlorentz} equations (\\ref{eb_def}) and (\\ref{bvecdef}) we define the electric and magnetic 1-forms $\\widetilde{\\e}, \\widetilde{\\mathcal{B}}\\in \\Gamma \\textup{T} \\mathcal{M}$ for a timelike observer curve $U$. Given a coordinate chart $(y^0, y^1, y^2, y^3)$ let $U=\\partial_{y^0}$ and let $\\mathrm{E}=\\mathrm{E}_{\\textup{C}}+\\mathrm{E}_{\\textup{R}}$ where\n\\begin{align}\n\\widetilde{\\elw}_{\\textup{C}}=i_{\\partial_{y^0}}\\flw_{\\textup{C}}\\qquad \\textup{and} \\qquad \\widetilde{\\elw}_{\\textup{R}}= i_{\\partial_{y^0}}\\flw_{\\textup{R}}.\n\\label{ebvec_def}\n\\end{align}\nIf $\\widetilde{\\elw}_{\\textup{C}}=\\mathrm{E}_{\\textup{C}a}dy^a$ and $\\widetilde{\\elw}_{\\textup{R}}=\\mathrm{E}_{\\textup{R}a}dy^a$ for $a=1, 2, 3$ then\n\\begin{align}\n\\mathrm{E}_{\\textup{C}a}= \\frac{q}{4\\pi \\epsilon_0}\\frac{(\\Vn-\\Vbeta)_a}{\\boldsymbol r^2 \\gamma^2 (1-\\Vn\\centerdot\\Vbeta)^3}\\qquad \\textup{and}\\qquad \\mathrm{E}_{\\textup{R}a}= \\frac{q}{4\\pi \\epsilon_0}\\frac{(\\Vn\\times(\\Vn-\\Vbeta)\\times{\\boldsymbol a})_a}{\\boldsymbol r c \\gamma (1-\\Vn\\centerdot\\Vbeta)^3}.\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nLet $\\flw_{\\textup{C}}=\\mathrm{F}_{\\textup{C} a b} dz^a \\wedge dz^b$, then from (\\ref{FC_def}) and (\\ref{gwv_three}) it follows that\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{F}_{\\textup{C} a b}=\\frac{-c^2(X_a \\dot{\\c}_b-X_b\\dot{\\c}_a)}{(rc\\gamma (\\Vn \\centerdot 
\\Vbeta-1))^3}.\n\\end{align}\nwhere $\\kappa=\\frac{q}{4\\pi \\epsilon_0}$. Thus\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{E}_{\\textup{C} a}=\\frac{1}{\\kappa}\\mathrm{F}_{\\textup{C} a 0}= \\frac{-c^2(X_a \\dot{\\c}_0-X_0\\dot{\\c}_a)}{(rc\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}.\n\\end{align}\nIt follows from (\\ref{x_three}) and (\\ref{v_three}) that\n\\begin{align*}\nX_0=-\\boldsymbol r, \\qquad X_a=\\boldsymbol r\\Vn_a, \\qquad \\dot{\\c}_0=-c \\gamma \\qquad \\textup{and}\\qquad \\dot{\\c}_a=c \\gamma\\Vbeta_a,\n\\end{align*}\nthus\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{E}_{\\textup{C} a}=&\\frac{c ^3\\gamma \\boldsymbol r\\Vn_a-\\boldsymbol r c ^3\\gamma\\Vbeta_a}{(\\boldsymbol r c \\gamma (\\Vn \\centerdot \\Vbeta-1))^3}=\\frac{(\\Vn-\\Vbeta)_a}{\\boldsymbol r^2\\gamma^2 (1-\\Vn \\centerdot \\Vbeta)^3}\n\\end{align}\nSimilarly let $\\flw_{\\textup{R}}=\\mathrm{F}_{\\textup{R} a b} dz^a \\wedge dz^b$. It follows from (\\ref{FR_def}) and (\\ref{gwv_three}) that\n\\begin{align*}\n\\frac{1}{\\kappa}\\mathrm{F}_{\\textup{R} a b}=&\\frac{\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1)(X_a \\ddot{\\c}_b-X_b\\ddot{\\c}_a)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}-\\frac{\\boldsymbol r c\\gamma \\Vn \\centerdot {\\boldsymbol a}(X_a \\dot{\\c}_b-X_b\\dot{\\c}_a)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}\\\\\n=&\\frac{(X_a \\ddot{\\c}_b-X_b\\ddot{\\c}_a)}{(\\boldsymbol r c \\gamma (\\Vn \\centerdot \\Vbeta-1))^2}-\\frac{\\boldsymbol r c \\gamma \\Vn \\centerdot {\\boldsymbol a}(X_a \\dot{\\c}_b-X_b\\dot{\\c}_a)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}\n\\end{align*}\nThus\n\\begin{align}\n\\frac{1}{\\kappa}\\mathrm{E}_{\\textup{R} a}=\\frac{1}{\\kappa}\\mathrm{F}_{\\textup{R} a 0}= \\frac{(X_a \\ddot{\\c}_0-X_0\\ddot{\\c}_a)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^2}-\\frac{\\boldsymbol r c\\gamma (\\Vn \\centerdot {\\boldsymbol a})(X_a \\dot{\\c}_0-X_0\\dot{\\c}_a)}{(\\boldsymbol r 
c\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}.\n\\label{ea_def}\n\\end{align}\nIt follows from (\\ref{v_three}) that\n\\begin{align}\n\\ddot{\\c}_0=-c\\gamma^4{\\boldsymbol a}\\centerdot\\Vbeta \\qquad \\textup{and} \\qquad \\ddot{\\c}_a=c\\gamma^2{\\boldsymbol a}_a +c\\gamma^4({\\boldsymbol a}\\centerdot\\Vbeta)\\Vbeta_a,\n\\end{align}\nthus the first term in (\\ref{ea_def}) yields\n\\begin{align}\n\\textup{first term}=&\\frac{-\\boldsymbol r c \\gamma^4({\\boldsymbol a}\\centerdot\\Vbeta)\\Vn_a+\\boldsymbol r\\big(c \\gamma^2 {\\boldsymbol a}_a+c \\gamma^4({\\boldsymbol a}\\centerdot\\Vbeta)\\Vbeta_a\\big)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^2}\\notag\\\\\n=&\\frac{{\\boldsymbol a}_a}{\\boldsymbol r c (\\Vn\\centerdot\\Vbeta-1)^2}+\\frac{\\gamma^2({\\boldsymbol a}\\centerdot\\Vbeta)(\\Vbeta_a-\\Vn_a)}{\\boldsymbol r c (\\Vn\\centerdot\\Vbeta-1)^2}\n\\label{first_tm}\n\\end{align}\nand the second term yields\n\\begin{align*}\n \\textup{second term}=& -\\frac{\\big((-\\boldsymbol r c\\gamma \\Vn_a+\\boldsymbol r c \\gamma \\Vbeta_a)(\\boldsymbol r c \\gamma^4({\\boldsymbol a}\\centerdot \\Vbeta)(\\Vbeta\\centerdot \\Vn-1)+\\boldsymbol r c \\gamma^2({\\boldsymbol a}\\centerdot \\Vn))\\big)}{(\\boldsymbol r c\\gamma (\\Vn \\centerdot \\Vbeta-1))^3}\\\\\n =&\\frac{\\gamma^2(\\Vn_a-\\Vbeta_a)({\\boldsymbol a}\\centerdot \\Vbeta)}{\\boldsymbol r c (\\Vn\\centerdot \\Vbeta-1)^2}+\\frac{(\\Vn_a-\\Vbeta_a)({\\boldsymbol a}\\centerdot\\Vn)}{\\boldsymbol r c (\\Vn\\centerdot \\Vbeta-1)^3}\n \\label{sec_tm}\n\\end{align*}\nThus adding the two terms yields\n\\begin{align*}\n \\frac{1}{\\kappa} \\mathrm{E}_{\\textup{R} a}=\\frac{(\\Vn\\centerdot\\Vbeta-1){\\boldsymbol a}_a}{\\boldsymbol r c (\\Vn\\centerdot\\Vbeta-1)^3}+\\frac{(\\Vn_a-\\Vbeta_a)({\\boldsymbol a}\\centerdot\\Vn)}{\\boldsymbol r c (\\Vn\\centerdot\\Vbeta-1)^3}\n\\end{align*}\nThe result follows from the rule for triple vector products.\n\\end{proof}\n\\begin{definition}\n\\label{mag_def}\nSimilarly
let $\\widetilde{\\blw}=\\widetilde{\\blw}_{\\textup{C}}+\\widetilde{\\blw}_{\\textup{R}}$ where\n \\begin{align}\n\\widetilde{\\blw}_{\\textup{C}}= \\frac{1}{c}\\widetilde{i_{\\partial_{z^0}}\\star\\flw_{\\textup{C}}}\\qquad \\textup{and} \\qquad \\widetilde{\\blw}_{\\textup{R}}= \\frac{1}{c}\\widetilde{i_{\\partial_{z^0}}\\star\\flw_{\\textup{R}}}.\n\\end{align}\nThen if $\\widetilde{\\blw}_{\\textup{C}}=\\mathrm{B}_{\\textup{C}a}dy^a$ and $\\widetilde{\\blw}_{\\textup{R}}=\\mathrm{B}_{\\textup{R}a}dy^a$ it can be shown that\n\\begin{align}\n\\mathrm{B}_{\\textup{C}a}=\\frac{1}{c}(\\Vn \\times {\\boldsymbol E} _{\\textup{C}})_a \\qquad \\textup{and} \\qquad \\mathrm{B}_{\\textup{R}a}=\\frac{1}{c}(\\Vn \\times {\\boldsymbol E} _{\\textup{R}})_a\n\\end{align}\n\\end{definition}\n\n\\section{The model of a beam}\n\\label{beam_model}\nWe model our bunch of particles as a one dimensional continuum in which each particle undergoes the\nsame motion in space but at a different time. This bunch is moving at\na constant speed with relativistic factor $\\gamma$. Let $\\nu$ label the points in the bunch,\nwhich will be called body points. The profile of the bunch is given by\n$\\rho(\\nu)$.\\footnote{ Note that $\\nu$ has the dimension of time.}\n\n\\begin{definition}\n\\label{model_beam}\n Let ${\\boldsymbol x}_\\nu(\\tau)$ represent the position of body point $\\nu$ at proper time $\\tau$, and for each body point $\\nu$ let\n\\begin{align}\nt_\\nu(\\tau)=(\\tau+\\nu)\/\\gamma.\n\\label{eqns_path_nu}\n\\end{align}\n\\end{definition}\n\n\n\\begin{definition}\nThe retarded time for the body\npoint $\\nu$ corresponding to the fields measured at ${\\boldsymbol X}$ at\nlaboratory time $T$ is denoted by $\\hat\\tau({\\boldsymbol X},T,\\nu)$.
Similarly the arrival time at ${\\boldsymbol X}$ of the field\ngenerated by body point $\\nu$ at proper time $\\tau$ is denoted by $\\hat T( \\nu,\\tau,{\\boldsymbol X})$.\\\\\nFor the $\\nu=0$ particle we define\n\\begin{align}\n\\hat\\tau_{0}({\\boldsymbol X}, T)=\\hat\\tau({\\boldsymbol X},T,0)\n\\quadtext{and}\n\\hat T_{0}(\\tau,{\\boldsymbol X})=\\hat T(0,\\tau,{\\boldsymbol X}).\n\\label{eqns_zero}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n\\label{T_tau_lem}\nIt follows that\n\\begin{align}\n\\hat\\tau_0({\\boldsymbol X},\\hat T(\\nu,\\tau,{\\boldsymbol X})-\\nu\/\\gamma)=\\tau.\n\\label{eqns_tau_T_Invers_sub}\n\\end{align}\nand\n\\begin{align}\n\\hat\\tau({\\boldsymbol X},T,\\nu)=\\hat\\tau_{0}({\\boldsymbol X}, T-\\nu\/\\gamma).\n\\label{eqns_tau_nu}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe retarded time condition is given by\n\\begin{align}\ncT-ct_\\nu\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big)=\n\\norm{{\\boldsymbol X}-{\\boldsymbol x}_\\nu\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big)},\n\\label{eqns_ret_time}\n\\end{align}\nand hence\n\\begin{align}\ncT-c\\hat\\tau({\\boldsymbol X},T,\\nu)\/\\gamma-c\\nu\/\\gamma =\n\\norm{{\\boldsymbol X}-{\\boldsymbol x}_\\nu\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big)}.\n\\label{eqns_ret_time_res}\n\\end{align}\nThus\n\\begin{align}\nc\\hat T(\\nu,\\tau,{\\boldsymbol X})\n&=\nct_\\nu(\\tau)+\\norm{{\\boldsymbol X}-{\\boldsymbol x}_\\nu(\\tau)}\\nonumber\\\\\n&=c(\\tau+\\nu)\/\\gamma+\\norm{{\\boldsymbol X}-{\\boldsymbol x}_\\nu(\\tau)}.\n\\label{eqns_def_That}\n\\end{align}\nFrom (\\ref{eqns_ret_time_res}) and (\\ref{eqns_def_That})\n\\begin{align}\ncT&=c\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)+\\nu\\big)\/\\gamma+\\norm{{\\boldsymbol X}-{\\boldsymbol x}_\\nu\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big)}\\notag\\\\\n&=c\\hat T(\\nu,\\hat\\tau({\\boldsymbol X},T,\\nu),{\\boldsymbol X}).\n\\label{eqns_tau_T_Invers_a}\n\\end{align}\nSince $\\hat T$ is increasing and the range of $\\hat\\tau$ 
is\nfrom $-\\infty$ to $+\\infty$ it follows that $\\hat T$ and $\\hat\\tau$ are inverse to each other, yielding (\\ref{eqns_tau_T_Invers_a})\nand\n\\begin{align}\n\\hat\\tau({\\boldsymbol X},\\hat T(\\nu,\\tau,{\\boldsymbol X}),\\nu)=\\tau.\n\\label{eqns_tau_T_Invers_b}\n\\end{align}\nNow $\\hat T(\\nu,\\tau,{\\boldsymbol X})$ and $\\hat\\tau({\\boldsymbol X},T,\\nu)$ may be written in terms of\n$\\hat T_{0}(\\tau,{\\boldsymbol X})$ and $\\hat\\tau_{0}({\\boldsymbol X},T)$. From (\\ref{eqns_def_That})\n\\begin{align}\n\\hat T(\\nu,\\tau,{\\boldsymbol X})=\\hat T_{0}(\\tau,{\\boldsymbol X})+\\nu\/\\gamma.\n\\label{eqn_THat_nu}\n\\end{align}\nFrom (\\ref{eqns_tau_T_Invers_a}), (\\ref{eqns_tau_T_Invers_b}) and\n(\\ref{eqns_zero}),\n\\begin{align}\n\\hat T_0(\\hat\\tau_0({\\boldsymbol X},T), {\\boldsymbol X})=T\n\\label{eqns_tau_T_Invers_zero_a}\n\\end{align}\nand\n\\begin{align}\n\\hat\\tau_0({\\boldsymbol X},\\hat T_0(\\tau,{\\boldsymbol X}))=\\tau.\n\\label{eqns_tau_T_Invers_zero_b}\n\\end{align}\nSubstituting (\\ref{eqn_THat_nu}) into (\\ref{eqns_tau_T_Invers_zero_b}) leads to\n\\begin{align}\n\\hat\\tau_0({\\boldsymbol X},\\hat T(\\nu,\\tau,{\\boldsymbol X})-\\nu\/\\gamma)=\\tau.\n\\end{align}\nSubstituting $\\tau=\\hat\\tau({\\boldsymbol X},T,\\nu)$ and using (\\ref{eqns_tau_T_Invers_a}) yields\n\\begin{align}\n\\hat\\tau({\\boldsymbol X},T,\\nu)=\\hat\\tau_{0}({\\boldsymbol X}, T-\\nu\/\\gamma).\n\\end{align}\n\\end{proof}\n\n\\section*{Statistics for independent identical distributions}\n\\begin{definition}\nWe assume the $\\nu$ for each particle\nhas the identical distribution $\\rho(\\nu)$, where\n\\begin{align}\n\\int \\rho(\\nu) d\\nu =1\n\\label{Stat_normal}\n\\end{align}\nThat is, the probability\nthat particle $k$ has displacement $\\nu_k$ is $\\rho(\\nu_k)d\\nu_k$.\n\\end{definition}\n\\begin{definition}\nGiven a function $H(\\nu_1,\\ldots,\\nu_N)$ of all the random variables\nwe define the expectation of $H$ as\n\\begin{align}\n\\Exx{H}=\n\\int
d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N) H(\\nu_1,\\ldots,\\nu_N)\n\\label{Stat_def}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n\\label{sum_lem}\nFor a function which is simply a sum of one particle functions $\\sum_{k=1}^N\nh(\\nu_k)$ we have\n\n\\begin{align}\n\\Exx{\\sum_{k=1}^N h(\\nu_k)}\n=\nN \\ExxOP{h}\n\\label{Stat_Expect_Sum}\n\\end{align}\nwhere\n\\begin{align}\n\\ExxOP{h}\n=\n\\int \\rho(\\nu) h(\\nu)\\,d\\nu\n\\label{Stat_Expect_onepart_def}\n\\end{align}\nis the one particle expectation.\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n\\Exx{\\sum_{k=1}^N h(\\nu_k)}\n&=\n\\int d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N) \\sum_{k=1}^N h(\\nu_k)\n\\\\&=\n\\sum_{k=1}^N \\int d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N) h(\\nu_k)\n\\\\&=\n\\sum_{k=1}^N\n\\int d\\nu_k \\rho(\\nu_k) h(\\nu_k)\n=\nN \\int d\\nu \\, \\rho(\\nu) h(\\nu)\n\\end{align*}\n\\end{proof}\n\\begin{lemma}\nThe expectation of the product of the two sums\n\\begin{align*}\nH(\\nu_1,\\ldots,\\nu_N)\n&=\n\\Big(\\sum_{k=1}^N h(\\nu_k)\\Big)\n\\Big(\\sum_{m=1}^N g(\\nu_m)\\Big)\n\\end{align*}\nis given by\n\\begin{align}\n\\Exx{H}=\nN\\ExxOP{hg}+ (N^2-N)\n\\ExxOP{h}\\ExxOP{g}\n\\label{Stat_Expect_prod}\n\\end{align}\nThis is important since it corresponds to components of the energy, momentum and stress of the electromagnetic field.
In particular, the energy of the electromagnetic field is determined in section \\ref{sec_energy}.\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n\\Exx{H}\n=&\n\\Exx{\n\\Big(\\sum_{k=1}^N h(\\nu_k)\\Big)\\Big(\\sum_{m=1}^N g(\\nu_m)\\Big)\n}\n=\n\\sum_{k=1}^N \\sum_{m=1}^N \\Exx{\nh(\\nu_k)\ng(\\nu_m)\n}\n\\\\=&\n\\sum_{k=1}^N \\sum_{m=1}^N\n\\int d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N)\nh(\\nu_k)\ng(\\nu_m)\n\\\\=&\n\\sum_{k=1}^N \\sum_{m=k}\n\\int d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N)\nh(\\nu_k)\ng(\\nu_m)\\\\\n&+\n\\sum_{k=1}^N \\sum_{m\\ne k}\n\\int d\\nu_1 \\rho(\\nu_1) \\cdots\n\\int d\\nu_N \\rho(\\nu_N)\nh(\\nu_k)\ng(\\nu_m)\n\\\\=&\n\\sum_{k=1}^N\n\\int d\\nu_k \\rho(\\nu_k)\nh(\\nu_k)\ng(\\nu_k)\n+\n\\sum_{k=1}^N \\sum_{m\\ne k}\n\\int d\\nu_k \\rho(\\nu_k)\n\\int d\\nu_m \\rho(\\nu_m)\nh(\\nu_k)\ng(\\nu_m)\n\\\\=&\nN\\int d\\nu \\rho(\\nu)\nh(\\nu)\ng(\\nu)\n+\n\\sum_{k=1}^N \\sum_{m\\ne k}\n\\Big(\\int d\\nu_k \\rho(\\nu_k) h(\\nu_k)\\Big)\n\\Big(\\int d\\nu_m \\rho(\\nu_m)g(\\nu_m)\\Big)\n\\\\=&\nN\\ExxOP{hg}+\n\\sum_{k=1}^N \\sum_{m\\ne k}\n\\ExxOP{h}\\ExxOP{g}\n=\nN\\ExxOP{hg}+ (N^2-N)\n\\ExxOP{h}\\ExxOP{g}\n\\end{align*}\nNote the structure of the expectation of $H$, in particular the appearance of $N$ and $N^2-N$.\n\\end{proof}\n\\begin{lemma}\n\\label{shift}\nThe one particle expectation of a shifted function is given by\n\\begin{align}\n\\ExxOP{g(T-\\gamma^{-1}\\nu)}\n&=\n\\int \\rho_{{\\textup{Lab}}}(T-T')g(T')d T'.\n\\label{Stat_1P_shift}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n\\ExxOP{g(T-\\gamma^{-1}\\nu)}\n&=\n\\int \\rho(\\nu) g(T-\\gamma^{-1}\\nu)\\,d\\nu\\\\\n&=\n\\int \\gamma \\rho\\big(\\gamma (T-T')\\big)g(T')d T'\\\\\n&=\n\\int \\rho_{{\\textup{Lab}}}(T-T')g(T')d T'\n\\end{align*}\nwhere $T'=T-\\gamma^{-1}\\nu$, and\n\\begin{align}\n\\rho_{{\\textup{Lab}}}(T)=\\gamma\\rho(\\gamma T)\n\\label{Stat_rho_Lab}\n\\end{align}\nis the charge density as measured in the laboratory 
frame.\n\\end{proof}\n\n\n\\section{Expectation of electric and magnetic fields}\n\\begin{definition}\n\\label{defEXT}\nFor a particle of charge $q$ undergoing arbitrary motion ${\\boldsymbol x}(\\tau)$, where\n$\\tau$ is the particle's proper time, the Li\\'{e}nard-Wiechert fields\nat point ${\\boldsymbol X}$ and time $T$ are given \\cite{Jackson99} by\n\\begin{align}\n{\\boldsymbol E}({\\boldsymbol X},T)=\\mathrm{E}\\Big({\\boldsymbol X}-{\\boldsymbol x}(\\tau_R),{\\boldsymbol \\beta}(\\tau_R),{\\boldsymbol a}(\\tau_R)\\Big)\n\\label{EB_TX}\n\\end{align}\nand\n\\begin{align}\n{\\boldsymbol B}({\\boldsymbol X}, T)=\\mathrm{B}\\Big({\\boldsymbol X}-{\\boldsymbol x}(\\tau_R),{\\boldsymbol \\beta}(\\tau_R),{\\boldsymbol a}(\\tau_R)\\Big).\n\\label{BB_TX}\n\\end{align}\nwhere $\\mathrm{E}$ and $\\mathrm{B}$ are defined in lemma \\ref{ecer_lem} and definition \\ref{mag_def} respectively.\n\nFor the body point $\\nu$ the Li\\'{e}nard-Wiechert electric and magnetic\nfields at point ${\\boldsymbol X}$ and time $T$ are given by substituting\n$\\tau_R=\\hat\\tau({\\boldsymbol X},T,\\nu)$ into (\\ref{EB_TX}),\n\\begin{align*}\n&{\\boldsymbol E}({\\boldsymbol X},T,\\nu)=\n\\mathrm{E}\\Big({\\boldsymbol X}-{\\boldsymbol x}\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big),\n{\\boldsymbol \\beta}\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big),\n{\\boldsymbol a}\\big(\\hat\\tau({\\boldsymbol X},T,\\nu)\\big)\\Big)\n\\end{align*}\nand likewise for ${\\boldsymbol B}({\\boldsymbol X},T,\\nu)$.\n\\end{definition}\nLet ${\\boldsymbol{E_0}}({\\boldsymbol X},T)$ be the electric field at point ${\\boldsymbol X}$ and time $T$ due to the body point $\\nu=0$ given by\n\\begin{align*}\n{\\boldsymbol{E_0}}({\\boldsymbol X},T)=\n\\mathrm{E}\\Big({\\boldsymbol X}-{\\boldsymbol x}\\big(\\hat\\tau_0({\\boldsymbol X},T)\\big),\n{\\boldsymbol \\beta}\\big(\\hat\\tau_0({\\boldsymbol X}, T)\\big),\n{\\boldsymbol a}\\big(\\hat\\tau_0({\\boldsymbol 
X},T)\\big)\\Big)\n\\end{align*}\nUsing\n(\\ref{eqns_tau_nu}) it follows that\n\\begin{align}\n{\\boldsymbol E}({\\boldsymbol X},T,\\nu)\n&=\n{\\boldsymbol{E_0}}({\\boldsymbol X},T-\\nu\/\\gamma)\n\\label{E_shift}\n\\end{align}\nand\n\\begin{align}\n{\\boldsymbol B}({\\boldsymbol X},T,\\nu)\n&=\n{\\boldsymbol{B_0}}({\\boldsymbol X},T-\\nu\/\\gamma).\n\\label{B_shift}\n\\end{align}\n\n\\begin{lemma}\nThe total electric and magnetic fields at time $T$ at the point ${\\boldsymbol X}$\nare given by\n\\begin{align}\n{\\boldsymbol E}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)\n=\n\\sum_{k=1}^N {\\boldsymbol E}({\\boldsymbol X}, T,\\nu_k)\n\\label{Eng_E_n1N}\n\\end{align}\nand\n\\begin{align}\n{\\boldsymbol B}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)\n=\n\\sum_{k=1}^N {\\boldsymbol B}({\\boldsymbol X}, T,\\nu_k).\n\\label{Eng_B_n1N}\n\\end{align}\nIt follows that\n\\begin{align}\n\\Exx{{\\boldsymbol E}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)}\n=\nN \\int \\rho(\\nu) {\\boldsymbol{E_0}}({\\boldsymbol X},T-\\nu\/\\gamma)\\,d\\nu\n\\label{Eng_E_n1N_exp}\n\\end{align}\nand\n\\begin{align}\n\\Exx{{\\boldsymbol B}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)}\n=\nN \\int \\rho(\\nu) {\\boldsymbol{B_0}}({\\boldsymbol X},T-\\nu\/\\gamma)\\,d\\nu.\n\\label{Eng_B_n1N_exp}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe result follows from lemma \\ref{sum_lem} and equations (\\ref{E_shift}) and (\\ref{B_shift}).\n\\end{proof}\nLet the total electric field at the point ${\\boldsymbol X}$ at time $T$ be given by\n\\begin{align}\n{\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X},T)=\\frac{1}{N}\\Exx{{\\boldsymbol E}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)}\n\\end{align}\nWe notice that in (\\ref{E_shift}) and (\\ref{B_shift}) the dependence on $\\nu$ is simply a shift, of the form $g(T-\\gamma^{-1}\\nu)$.
Thus by lemma \\ref{shift} it follows\n\\begin{align*}\n{\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X},T)\n&=\n\\int\\rho(\\nu){\\boldsymbol E}({\\boldsymbol X},T,\\nu)d\\nu\\\\\n&=\n\\int\\rho(\\nu){\\boldsymbol{E_0}}({\\boldsymbol X},T-\\nu\/\\gamma)d\\nu\n\\\\&=\n\\int\\gamma\\rho\\big(\\gamma(T-T')\\big){\\boldsymbol{E_0}}({\\boldsymbol X}, T')d T'\\\\\n&=\n\\int {\\rho_{\\textup{Lab}}}(T-T'){\\boldsymbol{E_0}}({\\boldsymbol X},T')d T',\n\\end{align*}\nwhere $T'=T-\\nu\/\\gamma$, and $q{\\rho_{\\textup{Lab}}}(T)=q\\gamma\\rho(\\gamma T)$ is the\ncharge density as measured in the laboratory frame. Thus the key result\nis that the total electric field is given by the\nconvolution\n\\begin{align}\n{\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X}, T)\n&=\n\\int {\\rho_{\\textup{Lab}}}(T-T'){\\boldsymbol{E_0}}({\\boldsymbol X}, T')d T'\n\\label{E_Tot}.\n\\end{align}\nThe above can be repeated for the total magnetic field\n${\\boldsymbol B}_{\\text{Tot}}({\\boldsymbol X}, T )$.\nClearly ${\\boldsymbol{E_0}}({\\boldsymbol X}, T')$ will depend on the energy of the beam $\\gamma$\nand the path of the beam ${\\boldsymbol x}(\\tau)$.\n\n\\section{Expectation of field energy and coherence}\n\\label{sec_energy}\n\\begin{definition}\nThe energy of the electromagnetic field at time $T$ at the\npoint ${\\boldsymbol X}$ for the $N$ particles is defined as the expectation\n\\begin{align}\n\\phi({\\boldsymbol X},T)\n=\n\\Exx{\n\\norm{{\\boldsymbol E}_{\\textup{Tot}}({\\boldsymbol X}, T,\\nu_1,\\ldots,\\nu_N)}^2+\n\\norm{{\\boldsymbol B}_{\\textup{Tot}}({\\boldsymbol X},T,\\nu_1,\\ldots,\\nu_N)}^2}\n\\label{Eng_Eng_def}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n\\begin{align}\n\\phi({\\boldsymbol X},T)=N\\phi_{\\textup{inc}}({\\boldsymbol X},T)+(N^2-N)\\phi_{\\textup{coh}}({\\boldsymbol X},T)\n\\label{Eng_coh_inc}\n\\end{align}\nwhere the incoherent field is given by\n\\begin{align}\n\\phi_{\\textup{inc}}({\\boldsymbol X},T) =\n\\ExxOP{\\norm{{\\boldsymbol 
E}({\\boldsymbol X},T,\\nu)}^2+\\norm{{\\boldsymbol B}({\\boldsymbol X},T,\\nu)}^2}\n\\label{Eng_phi_inc}\n\\end{align}\nand the coherent field is given by\n\\begin{align}\n\\phi_{\\textup{coh}}({\\boldsymbol X},T) = \\norm{{\\boldsymbol E}_{\\textup{cts}}({\\boldsymbol X},T)}^2 + \\norm{{\\boldsymbol B}_{\\textup{cts}}({\\boldsymbol X},T)}^2\n\\label{Eng_phi_coh}\n\\end{align}\nwhere the one particle continuous electromagnetic fields are given by\n\\begin{align}\n{\\boldsymbol E}_{\\textup{cts}}({\\boldsymbol X},T)=\\ExxOP{{\\boldsymbol E}({\\boldsymbol X},T,\\nu)}\n\\qquadtext{and}\n{\\boldsymbol B}_{\\textup{cts}}({\\boldsymbol X},T)=\\ExxOP{{\\boldsymbol B}({\\boldsymbol X},T,\\nu)}\n\\label{Eng_E_B_cts}\n\\end{align}\nThat is, ${\\boldsymbol E}_{\\textup{cts}}({\\boldsymbol X},T)$ and ${\\boldsymbol B}_{\\textup{cts}}({\\boldsymbol X},T)$ correspond to the\nelectric and magnetic fields due to a continuous distribution of\ncharge with profile $\\rho(\\nu)$.\n\n\n\\end{lemma}\n\\begin{proof}\nExpanding (\\ref{Eng_Eng_def}) we see that this is simply a sum of products\n\\begin{align*}\n\\phi({\\boldsymbol X},T)\n=&\n\\sum_{i=1}^3\n\\Exx{\nE_{i,\\textup{Tot}}({\\boldsymbol X},T,\\nu_1,\\ldots,\\nu_N)\\\nE_{i,\\textup{Tot}}({\\boldsymbol X},T,\\nu_1,\\ldots,\\nu_N)}\\\\\n&+\n\\sum_{i=1}^3\n\\Exx{\nB_{i,\\textup{Tot}}({\\boldsymbol X},T,\\nu_1,\\ldots,\\nu_N)\\\nB_{i,\\textup{Tot}}({\\boldsymbol X},T,\\nu_1,\\ldots,\\nu_N)}\n\\end{align*}\nwhere $E_{i,\\textup{Tot}}$ is the $i$'th component of ${\\boldsymbol E}_{\\textup{Tot}}$.\nFrom (\\ref{Stat_Expect_prod}) we have\n\\begin{align*}\n\\phi({\\boldsymbol X},T)\n&=\nN\\sum_{i=1}^3 \\ExxOP{E_i({\\boldsymbol X},T,\\nu)^2} + (N^2-N)\\sum_{i=1}^3 \\ExxOP{E_i({\\boldsymbol X},T,\\nu)}^2\n\\\\&\\qquad+\nN\\sum_{i=1}^3 \\ExxOP{B_i({\\boldsymbol X},T,\\nu)^2} + (N^2-N)\\sum_{i=1}^3 \\ExxOP{B_i({\\boldsymbol X},T,\\nu)}^2\n\\end{align*}\nwhere $E_i({\\boldsymbol X},T,\\nu)$ is the $i$'th component of ${\\boldsymbol
E}({\\boldsymbol X},T,\\nu)$.\n\\end{proof}\n\n\n\nWe've already seen from (\\ref{E_shift}) and (\\ref{B_shift}) that the electric and\nmagnetic fields are simply shifted functions so we can use\n(\\ref{Stat_1P_shift}) to give the coherent and incoherent fields in\nterms of convolutions\n\\begin{align}\n\\phi_{\\textup{inc}}({\\boldsymbol X},T) =\n\\int \\rho_{{\\textup{Lab}}}(T-T') \\Big(\\norm{{\\boldsymbol{E_0}}({\\boldsymbol X},T')}^2+\\norm{{\\boldsymbol{B_0}}({\\boldsymbol X},T')}^2\\Big) d T'\n\\label{Eng_phi_inc_conv}\n\\end{align}\nand\n\\begin{align}\n{\\boldsymbol E}_{\\textup{cts}}({\\boldsymbol X},T)=&\\int \\rho_{{\\textup{Lab}}}(T-T'){\\boldsymbol{E_0}}({\\boldsymbol X},T') d T'\\notag\\\\\n \\textup{and}\\quad {\\boldsymbol B}_{\\textup{cts}}({\\boldsymbol X},T)=&\\int \\rho_{{\\textup{Lab}}}(T-T'){\\boldsymbol{B_0}}({\\boldsymbol X},T') d T'\n\\label{Eng_E_B_cts_conv}\n\\end{align}\n\n\n\\chapter[Numerical results]{Numerical results}\n\\label{bendingbeams}\n\nIn this chapter we present the results of a numerical investigation carried out with the mathematical software MAPLE. The relevant code can be found in appendix \\ref{mapletwo}. In the following we give a brief outline of the calculations involved and state the main results.\n\n\\section{The field at a fixed point ${\\boldsymbol X}$ for a single particle }\n\\sectionmark{The field of a single particle}\n\n\nConsider figure \\ref{setup}. Half the aperture of the collimator is given by distance $h$. We have seen (figure \\ref{fig_Catchup}) that for high $\\gamma$, a pancake of radius $h$ can develop only if the particle has been travelling in a straight line over a displacement of at least $\\gamma h v\/c \\approx \\gamma h$. Therefore for our proposal to be effective the distance $Z$ in figure \\ref{setup} should be less than $\\gamma h$. 
Preliminary results show that the optimum value for $Z$ is $Z\lesssim 10h$, with the field varying little for smaller values; thus in the following analysis we fix the field measurement point ${\boldsymbol X}=(0, h, 10h)$ and consider the magnitude of the electric field at ${\boldsymbol X}$ due to a single particle approaching and passing through the collimator. In all calculations we use $q=-1.60217 \times 10^{-19}$C. \n\n We consider the path constructed from a straight line followed by an arc\nof a circle of radius $R$ followed by another straight line. Observe\nthat this path is unrealistic since it would require large magnets to\nremove the \emph{magnetic leakage}. Magnetic leakage is defined as the passage of magnetic flux outside the path along which it can do useful work. In general, in a bending dipole the magnetic leakage causes the path of the charge to be slightly smoothed out at the ends of the dipole, so that in a real bending magnet the path of the charge would not be precisely the arc of a circle. We assume that the smoothing of the path\ncorresponding to real dipoles would not significantly change the nature\nof the result. \n\n Let $\Theta$ denote the angle of the arc. The\ncoordinate system is chosen so that the direction of the second\nstraight line is along the $z$ axis and the arc is in the $x-z$ plane,\nfinishing at the origin. 
We refer to this trajectory as the \emph{pre-bent} trajectory in contradistinction to that of a particle approaching from $(x, y, z)=(0, 0, -\infty)$ on a straight line towards the origin, which we refer to as the \emph{straight} trajectory.\n\nThe pre-bent trajectory is given by ${\boldsymbol x}(\tau)=(x(\tau), y(\tau), z(\tau))$ where\n\begin{align}\nx(\tau)=&\begin{cases}\nR(\cos\Theta-1)+(\Theta R+\gamma v\tau)\sin\Theta \quad \textup{for} \quad -\infty<\tau<-{ R\Theta}\/{\gamma v}\\\nR\Big(\cos({\gamma v\tau}\/{R})-1\Big)\quad \textup{for} \quad -{R\Theta}\/{\gamma v}<\tau<0\\\n0\quad \textup{for} \quad 0<\tau<\infty,\n\end{cases}\notag\\[0.3cm]\ny(\tau)=&\begin{cases}\n 0 \quad \textup{for} \quad -\infty<\tau<\infty,\n\end{cases}\label{prebent_path}\\[0.3cm]\n\textup{and}\quad z(\tau)=&\begin{cases}\n -R\sin\Theta+(\Theta R+\gamma v\tau)\cos\Theta\quad \textup{for} \quad -\infty<\tau<-{ R\Theta}\/{\gamma v}\\\n R\sin({\gamma v\tau}\/{ R}) \quad \textup{for} \quad -{ R\Theta}\/{\gamma v}<\tau<0\\\n \gamma v\tau \quad \textup{for} \quad 0<\tau<\infty.\n\end{cases}\notag\n\end{align}\nThe straight trajectory is given by\n\begin{align}\n(x(\tau), y(\tau), z(\tau))=(0, 0, \gamma v \tau) \quad \textup{for} \quad -\infty<\tau<\infty.\n\end{align}\n\n\n\n\subsubsection*{Calculating the field at ${\boldsymbol X}$ due to a specific path}\nWe use a coordinate system $\{\tau, \mathsf{r}, \hat{\theta}, \hat{\phi}\}$ adapted from the Newman-Unti coordinates $\{\tau, R, \theta, \phi\}$. The coordinate transformation is given by (\ref{coords_new}). Comparison with (\ref{nu_def}) yields $\mathsf{r}=\frac{R}{\alpha}$ where $R=-g(X, V)$ is the Newman-Unti radial parameter.\n We require the electric and magnetic fields due to a particle on a given trajectory. 
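The piecewise trajectory (\ref{prebent_path}) can be checked for continuity at the two junctions $\tau=-R\Theta\/\gamma v$ and $\tau=0$. A Python sketch (not the MAPLE code; the parameter values $R=0.5$m, $\Theta=0.13$rad, $\gamma=1000$ are those adopted later, and $v\approx c$):

```python
import math

def prebent(tau, R=0.5, Theta=0.13, gamma=1000.0, v=299792458.0):
    """Pre-bent trajectory (x, y, z): straight line, then an arc of radius R
    through angle Theta in the x-z plane, then a straight line along z."""
    tau_b = -R * Theta / (gamma * v)     # proper time at which the arc starts
    if tau < tau_b:                      # incoming straight section
        s = Theta * R + gamma * v * tau
        return (R * (math.cos(Theta) - 1.0) + s * math.sin(Theta),
                0.0,
                -R * math.sin(Theta) + s * math.cos(Theta))
    if tau < 0.0:                        # circular arc
        ang = gamma * v * tau / R
        return (R * (math.cos(ang) - 1.0), 0.0, R * math.sin(ang))
    return (0.0, 0.0, gamma * v * tau)   # outgoing straight section
```

At $\tau=0$ all three components vanish, confirming that the arc finishes at the origin as stated.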
For fixed field point $({\\boldsymbol X}, T)=(X_0, Y_0, Z_0, T_0)$ there exist parameters $\\hat{\\mathsf{r}}$, $\\hat{\\theta}$, and $ \\hat{\\phi}$ which satisfy\n\\begin{align}\n&c T_0=C^0 (\\tau) + \\hat{\\mathsf{r}}\\notag\\\\\n&X_0=C^1(\\tau) + \\hat{\\mathsf{r}}\\sin(\\hat{\\theta})\\cos(\\hat{\\phi})\\notag\\\\\n&Y_0=C^2 (\\tau) + \\hat{\\mathsf{r}} \\sin(\\hat{\\theta})\\sin(\\hat{\\phi})\\notag\\\\\n&Z_0=C^3(\\tau) +\\hat{\\mathsf{r}} \\cos(\\hat{\\theta}).\n\\label{T0_def}\n\\end{align}\nRearranging yields the relations\n\\begin{align}\n\\hat{\\mathsf{r}}=& \\sqrt{(X_0-C^1)^2+(Y_0-C^2)^2+(Z_0-C^3)^2}\\notag\\\\\nc T_0(\\tau)=&\\hat{\\mathsf{r}} + C^0\\notag\\\\\n\\cos(\\hat{\\theta})=&\\frac{Z_0-C^3}{\\hat{\\mathsf{r}}}\\notag\\\\\n\\sin(\\hat{\\theta})=&\\frac {\\sqrt{(X_0-C^1)^2+(Y_0-C^2)^2}}{\\hat{\\mathsf{r}}}\\notag\\\\\n\\cos(\\hat{\\phi})=&\\frac{X_0-C^1}{\\sqrt{(X_0-C^1)^2+(Y_0-C^2)^2}}\\notag\\\\\n\\sin(\\hat{\\phi})=&\\frac{Y_0-C^2}{\\sqrt{(X_0-C^1)^2+(Y_0-C^2)^2}}\n\\label{path_ref}\n\\end{align}\nThese relations can be substituted into the expressions (\\ref{e_newcoords}) for the radiative $\\mathrm{E}_{\\textup{R}}(\\tau, \\mathsf{r}, \\theta, \\phi)$ and Coulombic $\\mathrm{E}_{\\textup{C}}(\\tau, \\mathsf{r}, \\theta, \\phi)$ electric fields (or magnetic fields). This gives the electric field (magnetic field) as a function of the components $C^0, C^1, C^2, C^3$ and the coordinates $T_0, X_0, Y_0, Z_0$.\n\nWhen considering the electric field due to a particle on a specific trajectory we need only substitute the correct components for $C$. For example in order to calculate the electric field for the pre-bent path we consider the three sections of the path independently. 
For each of the three intervals in (\ref{prebent_path}) we input the trajectory by defining components\n\begin{align}\nC^0(\tau)=\gamma \tau\notag\\\nC^1(\tau)=x(\tau)\notag\\\nC^2(\tau)=y(\tau)\notag\\\nC^3(\tau)=z(\tau)\n\label{worldline_comps}\n\end{align}\nwhere the corresponding values for $x(\tau), y(\tau)$ and $z(\tau)$ are defined in (\ref{prebent_path}). See appendix \ref{mapletwo} lines {\footnotesize \color{red} \tt 131-147}.\n\nIn the Maple code we have written a procedure which will take\n a selection of variable input parameters and output any field as a function of $\tau$. See (\ref{get_fields}). The variable input parameters are the components $C^0, C^1, C^2, C^3$ and a list of numerical values for the parameters $X_0, Y_0, Z_0$ and $R, \Theta$, $\gamma$ as well as an initial value for $\tau$.\n\n\subsubsection*{Lab time}\n\nThe ranges of $\tau$ for the three sections of the trajectory are obtained by substituting the numerical input values for $R, \Theta$ and $\gamma$ into the intervals in (\ref{prebent_path}). Thus for a given set of input variables we are able to plot any desired field component for a particular section of the path against $\tau$ for the range of $\tau$ appropriate to that section. In order to plot the field component against $\tau$ for the whole path we simply display the three plots corresponding to the three sections of the trajectory on the same graph.\n\nThe lab time $T_0(\tau)$ is a different function of $\tau$ for each of the three sections. This follows from (\ref{T0_def}). For a particular section we may obtain $T_0(\tau)$ by substituting our variable input parameters into (\ref{T0_def}) and thus we may plot any desired field against $T_0$ for that section of path by plotting the field and the time $T_0$ as parametric equations in $\tau$. 
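The relation $cT_0=C^0(\tau)+\hat{\mathsf{r}}$ of (\ref{T0_def}) can also be inverted numerically, giving the retarded $\tau$ for a chosen lab time. A Python bisection sketch for the straight worldline (written with an explicit factor of $c$ in the time component, $C(\tau)=(c\gamma\tau,0,0,\gamma v\tau)$; this is an illustration, not the MAPLE implementation):

```python
import math

c = 299792458.0  # speed of light [m/s]

def retarded_tau(X0, Y0, Z0, T0, gamma=1000.0):
    """Solve c*T0 = C^0(tau) + r_hat, cf. (T0_def), for the straight
    worldline C(tau) = (c*gamma*tau, 0, 0, gamma*v*tau)."""
    v = c * math.sqrt(1.0 - 1.0 / gamma**2)
    def f(tau):
        r_hat = math.sqrt(X0**2 + Y0**2 + (Z0 - gamma * v * tau)**2)
        return c * gamma * tau + r_hat - c * T0
    hi = T0 / gamma            # f(hi) = r_hat >= 0 always
    lo = hi - 1.0
    while f(lo) > 0.0:         # widen the bracket downwards if necessary
        lo -= 1.0
    for _ in range(200):       # bisection; f is increasing in tau
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same bisection applies to each section of the pre-bent path with the corresponding components $C^a(\tau)$.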
To plot the field over the whole range of $T_0$ we simply display all three plots on the same graph as before.\n\n\n\n\\subsubsection*{Optimizing the values of $R$ and $\\Theta$}\n\n We have control over the variable parameters $R$ and $\\Theta$. In order to establish the optimum set of parameters to minimize the field at ${\\boldsymbol X}$ we calculate the peak energy of the electric field $||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$ for a range of $T$, performing a parameter sweep for a selection of values for $R$ and $\\Theta$. We set $\\gamma=1000$, which represents an energy level easily obtained in modern accelerators. The procedure used in MAPLE is given in \\ref{minimize_app} and the results are displayed in figure \\ref{minimize}. We have displayed three different views of the same graph. The numerical values for the electric field are absent because we are interested only in the relative values for the different trajectories. The values for $\\Theta$ and $R$ used in these plots are a selection of the values tested, however they are sufficient to show the trend. \n \n Recall that $R$ determines the curvature of the bend and $\\Theta$ determines the length of the bend. Looking at the first graph we see that in the far corner of the graph, where $\\Theta$ and $R$ are at a minimum, the field is at a maximum. As we approach the near region of the graph the magnitude decreases very rapidly with increasing $\\Theta$. We interpret this as follows. For a short bend the radial distance from the point ${\\boldsymbol X}$ to the continuation of the straight section will be small, and thus the pancake which developed on the straight section will be strongly encountered at point ${\\boldsymbol X}$. For a longer bend this contribution will be greatly reduced due to both the increased radial distance of ${\\boldsymbol X}$ from the continuation of the straight section and the increased longitudinal distance of ${\\boldsymbol X}$ from the terminus of the straight section. 
The latter distance is important because once the straight section ends and the bend begins the pancake is no longer travelling with the particle and the field strength within the pancake is decreasing. In consequence we interpret the ridge in the graph where the steep section ends as the cut off where the point X no longer encounters a significant field due to the pancake.\n \n In reality we cannot adopt the smallest $R$ and largest $\\Theta$ because they are impractical in the design considerations of real machines. We chose to restrict the trajectory to the values $\\Theta=0.13$rad and $R=0.5$m because the bend is sufficient to reduce the field at $X$ while also maintaining a minimal length and intensity in order to suppress radiation and CSR wakes. In addition a bend of this size would be practical from an engineering perspective.\n \n\n\\begin{table}\n\\begin{center}\n\\begin{singlespacing}\n\\begin{tabular}{|c|c| c|}\\hline\n& $R$ & $\\Theta $\\\\\\hline\nmin & $500$ & $1\/95$\\\\\n&$1000$&$1\/90$\\\\\n&$1500$&$1\/85$\\\\\n&$2000$&$1\/80$\\\\\n&$2500$&$1\/75$\\\\\n&$3000$&$1\/70$\\\\\n&$3500$&$1\/65$\\\\\n&$4000$&$1\/60$\\\\\n&$4500$&$1\/55$\\\\\n&$5000$&$1\/50$\\\\\n&$5500$&$1\/45$\\\\\n&$6000$&$1\/40$\\\\\n&$6500$&$1\/35$\\\\\n&$7000$&$1\/30$\\\\\n&$7500$&$1\/25$\\\\\n&$8000$&$1\/20$\\\\\n&$8500$&$1\/15$\\\\\n&$9000$&$1\/10$\\\\\n&$9500$&$1\/5$\\\\\nmax&$10000$&$1$\\\\\\hline\n\\end{tabular}\n\\end{singlespacing}\n\\end{center}\n\\caption{Input values for $R$ and $\\Theta$.}\n\\label{minimize_table}\n\\end{table}\n\n\\begin{figure}\n\\setlength{\\unitlength}{0.9cm}\n\\begin{center}\n\\begin{picture}(15, 23)\n\\put(2, 10){\\includegraphics[width=12\\unitlength]{minimize_new1.pdf}}\n\\put(-1.5,0){\\includegraphics[width=10\\unitlength]{minimize_new2.pdf}}\n\\put(7.5, 0){\\includegraphics[width=10\\unitlength]{minimize_new3.pdf}}\n\n\\put(4, 12){$\\Theta$}\n\\put(10, 11){$R$}\n\n\\put(3, 1){$\\Theta$}\n\\put(12, 1){$R$}\n\n\\put(0, 1){\\small{max}}\n\\put(6, 
1){\\small{min}}\n\\put(9, 1){\\small{max}}\n\\put(15, 1){\\small{min}}\n\\put(2, 13){\\small{min}}\n\\put(5.4, 10){\\small{max}}\n\\put(7.3, 10){\\small{max}}\n\\put(12.5, 11.5){\\small{min}}\n\n\\put(-0.6, 4){\\rotatebox{90}{{ $ ||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$}}}\n\\put(8.6, 4){\\rotatebox{90}{{ $ ||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$}}}\n\\put(2.2,16){\\rotatebox{90}{{ $ ||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$}}}\n\\end{picture}\n\\end{center}\n\\caption{Field strength for different values of $R$ and $\\Theta$. We see clearly that the minimum field energy occurs when the $R$ is at its minimum value and $\\Theta$ is at its maximum value.}\n\\label{minimize}\n\\end{figure}\n\n\n\n\n\n\n\\subsection*{The field due to a particle on the optimised pre-bent trajectory compared with that of a particle on the straight trajectory}\nConsider the two cases given in figure \\ref{fig_fields} in which\n$\\gamma=1000$, $x=0$, $y=h$ and $z=10h$. In the straight\nline case the peak field is $\\approx75$Vm$^{-1}$ and the majority of the field arrives within an interval of\n$0.015$ps. In fact it is easy to show that for a straight\nline path the peak field increases with $\\gamma$ and the width decreases\nwith $\\gamma$ leading to the classic pancake. By contrast for the pre-bent case\n the peak field is significantly reduced to only $\\approx7.7$Vm$^{-1}$, however\n the interval over which the field arrives is now $0.35$ps for the right hand peak,\nand $0.1$ps for the left hand peak. The reason for these\ntwo peaks is that the left hand peak is the coulomb field due to\nthe first straight line segment, whereas the second peak is due to the\nradiation from the circular part of the beam path. The discontinuity is a result of\n the discontinuity in acceleration for this trajectory. 
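The quoted peaks give a rough measure of the single-particle benefit of pre-bending; a trivial sketch using the values above:

```python
# Peak single-particle fields at X for gamma = 1000, values from the plots:
peak_straight = 75.0   # [V/m], straight trajectory, arriving within ~0.015 ps
peak_prebent = 7.7     # [V/m], pre-bent trajectory, spread over ~0.35 ps
reduction = peak_straight / peak_prebent
print(round(reduction, 1))
```

The ratio is close to the factor-of-ten limit discussed below for short bunches.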
Repeating the\ncalculation with higher $\\gamma$-factors does not significantly change the\nheight or shape of the second peak.\n\nFigure \\ref{fig_fields_comp} shows the cartesian components of the electric field for the particle on the pre-bent trajectory. We see that the field is largely in the $x$ and $y$ directions, with the peak field in the $y$ direction. By contrast for a particle on the straight trajectory the $y$ component dominates with the $z$ component negligible and the $x$ component zero. This means that for a straight trajectory the field is primarily directed in the transverse direction as expected. The nonzero $x$ component in the pre-bent case is due to the incident angle of the initial straight section as well as the radiation caused by the bend.\n\\setlength{\\unitlength}{0.9cm}\n\\begin{figure}\n\\begin{center}\n\\begin{picture}(10.5,10.7)\n\\put(0.2,0.5){\n\\includegraphics[height=10.7\\unitlength]{straight_ps.pdf}\n}\n\\put(0,4){\\rotatebox{90}{{ $||{\\boldsymbol{E_0}}({\\boldsymbol X}, \\hat T_0)||$ $[\\textup{Vm}^{-1}]$}}}\n\\put(5,0.4){{$\\hat T_0$ [ps]} }\n\\put(4, -0.6){Straight path}\n\\end{picture}\\\\[1.5cm]\n\\begin{picture}(10.5,10.7)\n\\put(0.5,0.5){\n\\includegraphics[height=10.7\\unitlength]{prebent_ps.pdf}\n}\n\\put(0,4){\\rotatebox{90}{{$||{\\boldsymbol{E_0}}({\\boldsymbol X}, \\hat T_0)||$ $[\\textup{Vm}^{-1}]$}}}\n\\put(5,0.4){{$\\hat T_0$ [ps]}}\n\\put(4, -0.6){Pre-bent path}\n\\end{picture}\n\\end{center}\n\\caption[Electric field $||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$ for straight and pre-bent trajectories]{The electric field strength $||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$ at ${\\boldsymbol X}=(0, h, 10 h)$, with $h=0.5$mm, due to a body point following a\n straight path along the $z$-axis and a body point following the \\emph{pre-bent} path given in\n (\\ref{prebent_path}) with $\\Theta=0.13$rad and $R=0.5$m. 
}\n \\label{fig_fields}\n\\end{figure}\n\n\\setlength{\\unitlength}{0.7cm}\n\\begin{figure}\n\\begin{center}\n\\begin{picture}(10.5,11.2)\n\\put(-0.5,0.5){\n\\includegraphics[height=10.7\\unitlength]{componentZ.pdf}\n}\n\\put(4, 11.2){$z$-component}\n\\put(-1,4){\\rotatebox{90}{{ $({\\boldsymbol{E_0}})_z({\\boldsymbol X}, \\hat T_0)$ $[\\textup{Vm}^{-1}]$}}}\n\\put(5,0){{$\\hat T_0$ [ps]} }\n\\end{picture}\\\\[1.2cm]\n\\begin{picture}(21,10.7)\n\\put(-0.5,0.5){\n\\includegraphics[height=10.7\\unitlength]{componentY.pdf}\n}\n\\put(4, 11.2){$y$-component}\n\\put(-1,4){\\rotatebox{90}{{$({\\boldsymbol{E_0}})_y({\\boldsymbol X}, \\hat T_0)$ $[\\textup{Vm}^{-1}]$}}}\n\\put(5,0){{$\\hat T_0$ [ps]}}\n\\put(11,0.5){\n\\includegraphics[height=10.7\\unitlength]{componentX.pdf}\n}\n\\put(15.7, 11.2){$x$-component}\n\\put(10.5,4){\\rotatebox{90}{{$({\\boldsymbol{E_0}})_x({\\boldsymbol X}, \\hat T_0)$ $[\\textup{Vm}^{-1}]$}}}\n\\put(15.5,0){{$\\hat T_0$ [ps]}}\n\\end{picture}\n\\end{center}\n\\caption[Electric field components $({\\boldsymbol{E_0}})_x, ({\\boldsymbol{E_0}})_y$ and $({\\boldsymbol{E_0}})_z$ for pre-bent trajectories]{Electric field components $({\\boldsymbol{E_0}})_x, ({\\boldsymbol{E_0}})_y$ and $({\\boldsymbol{E_0}})_z$ at point ${\\boldsymbol X}=(0, h, 10 h)$, with $h=0.5$mm, due to a body point following a\n \\emph{pre-bent} path with $\\Theta=0.13$rad and $R=0.5$m. }\n \\label{fig_fields_comp}\n\\end{figure}\n\n\\section{The coherent field at ${\\boldsymbol X}$ due to a bunch}\n\nConsider a bunch modelled as a one dimensional continuum of point\nparticles with a low density halo. The fields generated in the halo will\nnot be considered. The one dimensional continuum is a good model for beams with transverse\ndimension significantly smaller than the bunch length. The\nassumption is made that the majority of the bunch charge is contained in the\none-dimensional core and only the halo is removed by collimation. 
Within\nthis model, each particle in the core undergoes the same motion in space but at a\ndifferent time and is moving at a constant speed with\nrelativistic factor $\gamma$.\n\nUsing (\ref{E_Tot}) we calculate the field due to a Gaussian particle distribution $\rho_{\text{Lab}}$ for the two cases\nin figure \ref{fig_fields}. The code for the convolution can be found in \ref{convolution_app}. We input the time $T_0=t$ at which the convolution should be centered,\n the bunch length (FWHM of the Gaussian) as upper and lower values of $T_0$, and the number\n of points $N$ over which the samples should be taken. The procedure can be summed up in the\n following steps, which are followed in a do loop for $j=1..N$.\n \begin{itemize}\n \item Define ${\rho_{\textup{Lab}}}:= (t, a, b) -> 1\/(a \sqrt(2\pi))\exp\big((-(t-b)^2)\/(2 a^2)\big)$\n\n\n \item Solve $T_0(\tau_j)=t_j$ for $\tau_j$, where $t_j=t-a-\big((b-a)\/N\big)(j+1\/2)$ and $a$ and $b$ are the upper and lower bounds on the range of $T_0$ respectively.\\\n\n \item Substitute $\tau=\tau_j$ into the electric field $\mathrm{E}(T_0(\tau))$ to give the field strength at time $T_0=t_j$.\\\n\n \item Multiply $\mathrm{E}(t_j)$ by ${\rho_{\textup{Lab}}}(t_j)$, where ${\rho_{\textup{Lab}}}$ is a specific Gaussian defined by inputting FWHM.\n\n \item Sum the result over $j$, $\textup{sum}=\displaystyle{\sum_{j=1}^{N} \mathrm{E}(t_j) {\rho_{\textup{Lab}}}(t_j)}$\n\n \item Normalize by dividing $\textup{sum}$ by $\displaystyle{\sum_{j=1}^{N} {\rho_{\textup{Lab}}}(t_j)}$\n\n \end{itemize}\n\n Table \ref{table3} displays the results for a selection of bunch lengths which are attainable in some present-day machines.\n\begin{table}\n\begin{center}\n\caption{Peak field strength for different-sized bunches with $h=0.5$mm.}\n\begin{tabular}{| @{\ \ }l @{\ \ }| @{\ \ } l@{\ \ }| @{\ \ }l@{\ \ }|@{\ \ }l@{\ \ }|}\hline\n\multicolumn{2}{|c|@{\ \ }}{Bunch Length} 
&\multicolumn{2}{c|}{Peak $||{\boldsymbol{E}}_{\textup{Tot}}({\boldsymbol X},T)||$ [Vm$^{-1}$]}\\\hline\n \multicolumn{1}{|c|@{\ \ }}{$L$ [$h$]} & \multicolumn{1}{c|@{\ \ }}{$L\/c$ [ps]} & \multicolumn{1}{c|@{\ \ }}{straight}& \multicolumn{1}{c|}{pre-bent} \\\hline\n $1.80 \times 10^0 $ & $3.00\times 10^0 $ & $1.97 \times 10^{-1}$ & $1.97 \times 10^{-1}$\\\n $6.00 \times 10^{-1}$ & $1.00 \times 10^0 $ & $5.91 \times 10^{-1}$ & $5.89 \times 10^{-1}$\\\n $9.00 \times 10^{-2}$ & $1.50 \times 10^{-1}$ & $3.93 \times 10^0 $ & $3.48 \times 10^0 $\\\n $4.80 \times 10^{-2}$ & $8.00 \times 10^{-2} $ & $7.33 \times 10^0 $ & $5.27 \times 10^0 $\\\n $3.00\times 10^{-2} $ & $5.00 \times 10^{-2} $ & $1.16 \times 10^1 $ & $6.36\times 10^0 $\\\n $4.80 \times 10^{-3}$ & $8.00\times 10^{-3} $ & $5.12 \times 10^1 $ & $7.53 \times 10^0 $\\\hline\n\end{tabular}\n\end{center}\n\label{table3}\n\end{table}\n\n\subsection{Long bunches}\n\nIf the bunch is long and smooth, i.e. 
longer than the collimator\naperture, so that there is no significant change in ${\\rho_{\\textup{Lab}}}$ over the\nwidth of ${\\boldsymbol{E_0}}({\\boldsymbol X},T')$, then ${\\boldsymbol{E_0}}({\\boldsymbol X}, T')$ may be crudely regarded as a\n$\\delta$-function and ${\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X},T)$ is given by\n\\begin{align}\n{\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X},T)\n&\\approx\n{\\rho_{\\textup{Lab}}}(T)\n\\int {\\boldsymbol{E_0}}({\\boldsymbol X},T')d T'.\n\\label{E_Tot_smooth}\n\\end{align}\nIntegration of ${\\boldsymbol{E_0}}({\\boldsymbol X},T')$ for the straight and pre-bent trajectories\n reveals that\n\n\\begin{align}\n||{\\boldsymbol{E}}_{\\textup{Tot}}({\\boldsymbol X}, T)||\\approx\\frac{q}{2\\pi\\epsilon_0 c}\\frac{{\\rho_{\\textup{Lab}}}(T)}{||{\\boldsymbol X}||}\n\\label{E_Tot_smooth_res}\n\\end{align}\nThis value of ${\\boldsymbol{E}}_{\\textup{Tot}}$ is independent of $R$ and\n$\\Theta$ for all paths where $R$ is large compared to $L$. To see why\nthis is the case consider our one dimensional beam of particles as a\ncontinuous flow of charge, similar to a line charge in a wire but\nwithout the background ions. The fields due to this flow may be\ncalculated using the Biot-Savart law. Since $h\\ll R$ the field is\ndominated by the nearby current and hence no variation of $R$,\n$\\Theta$ or $Z$ will alter the fields.\nWe find that calculations using the Biot-Savart Law agree very closely with equation (\\ref{E_Tot_smooth_res}), thus providing verification of our code.\n\n\n\\subsection{Short bunches}\n\nIf the beam has bunches of length $L\\lesssim 0.05 h$ then it follows from\n(\\ref{E_Tot}) and figure \\ref{fig_fields} that a considerable reduction in fields is possible. If $\\rho_{\\text{Lab}}$ has full width\nat half maximum $L\/c=0.008$ps with corresponding bunch length $L=0.0048 h$, then the peak value for the total electric field in the straight line case is given by $\\approx 51.2 \\textup{Vm}^{-1}$. 
By contrast, in the pre-bent case the peak value\nfor the total electric field is $\approx 7.5\textup{Vm}^{-1}$,\ngiving an approximate factor of 7 reduction in field. This is approaching\nthe maximal factor of 10 improvement one can achieve with $\gamma=1000$, which occurs when\nthe bunch length is small enough that the convolution gives the peak values for the fields in figure \ref{fig_fields}. With\nhigher energies and shorter bunch lengths the radiation peak remains unchanged, whereas the electric field for the\nstraight path grows linearly with $\gamma$. Thus even greater\nimprovements can be made.\n\n\n\section{Conclusion}\n\n\nIn our analysis we have chosen a specific point ${\boldsymbol X}=(0, 0.5$mm$, 5$mm$)$ and minimized the field at this point. We have shown that the magnitude of the electric field due to a single particle can be considerably reduced by altering the path of the beam; however, the duration for which the field is non-zero is increased. We have used this result to show that the coherent field for a short bunch can be reduced significantly by bending the beam, with reductions of up to $85\%$ feasible for some present-day FELs and future colliders. No reduction in the coherent field can be made for long smooth bunches. We assume the coherent field will dominate the incoherent field because of the $N$ versus $N^2$ behaviour given in (\ref{Eng_coh_inc}); however, the incoherent fields are always present.\n\n If the field point ${\boldsymbol X}$ is instead displaced in the positive $x$ direction, then a\nsignificant increase in field strength is observed. This increase results from\nboth the Coulomb field from the straight section of the path before\nthe arc and the radiation from the circular part of the path.\n\n Consider figure \ref{fig_contours}. The beam has been pre-bent from the left before passing through the origin of the graph, hence while on the bend the direction of motion is in the positive $x$ direction. 
The magnitude of the field is shown as a contour plot. We see the magnitude increasing as we pass from the origin along the x-axis and then decreasing again after a very dense region. This pattern is what we expect to find from the synchrotron radiation emitted from the bend. The darkest parts of the graph are where the majority of the synchrotron radiation passed through the x-y plane. There are four discrete spots; two very dark spots at approximately $x=1.4$mm and two slightly less intense spots to the left of these at $x=1.0$mm. The two dark spots are approximately ten times the magnitude of the other two. All the remaining field shown in the plot is negligible in comparison to these four peaks. It will be necessary to alter the shape of the collimator to avoid the high field regions interacting with the material in the\ncollimator. This need not affect the efficacy of the collimator to\nremove the halo, for example see figure \\ref{fig_Modified_coll}.\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{0.035\\textwidth}\n\\begin{picture}(20,20)\n\\put(0,0){\\includegraphics[height=20\\unitlength]{Max-Fields.pdf}}\n\\put(0,10){\\makebox(0,0)[r]{-2}}\n\\put(20,10){\\makebox(0,0)[l]{2}}\n\\put(10,0){\\makebox(0,0)[t]{-2}}\n\\put(10,20){\\makebox(0,0)[b]{2}}\n\\put(9.,5){\\makebox(0,0)[l]{y}}\n\\put(5,9.3){\\makebox(0,0)[t]{x}}\n\\end{picture}\n\\end{center}\n\\caption[Electric field magnitude in $(x, y, Z)$ plane]{Contour plot for the maximum fields $||{\\boldsymbol{E_0}}({\\boldsymbol X}, T)||$ in the $(x,y)$ plane transverse to beam at $z=Z$. The plot\n represents a 4mm$\\times$4mm region around the beam. The beam\n has been bent from the left before passing though the origin of the\n graph. Twenty timesteps were taken and the lack of\n smoothness is due to numerical errors. 
Most of the field is between 0$\\textup{Vm}^{-1}$ and 100$\\textup{Vm}^{-1}$ in magnitude however the two black spots represent regions where the field is approximately 1000 times greater.}\n\\label{fig_contours}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{0.04\\textwidth}\n\\begin{picture}(10,10)\n\\put(0,0){\\rotatebox{90}{\\includegraphics[width=10\\unitlength]{AltCollimator.pdf}}}\n\\put(-0.5,7){\\rotatebox{45}{beam Pipe}}\n\\put(2.2,3){\\rotatebox{-45}{halo}}\n\\put(4,4.8){\\rotatebox{0}{beam}}\n\\put(1,6.2){\\rotatebox{0}{collimator}}\n\\put(6.7,5){{High}}\n\\put(6.7,4.3){{Fields}}\n\\end{picture}\n\\end{center}\n\\vspace{-2em}\n\\caption{Modified collimator in the plane transverse to the path of the beam.}\n\\label{fig_Modified_coll}\n\\end{figure}\n\nOne criticism of our work is the fact that we have used the Li$\\text{\\'{e}}$nard-Wiechert field, which is strictly accurate only for a particle in free space. In the accelerator community the following formula is often employed to describe the field of a bunch traversing a circular beam pipe\n\\begin{align}\nE_r=\\frac{2\\lambda}{r},\n\\end{align}\nwhere $\\lambda$ is the longitudinal charge distribution and $r$ is the radial distance from the axis. This formula assumes $\\sigma_z \\geq r\/\\gamma$, where $\\sigma_z$ is the rms bunch length. For typical machines this is the normal regime, for example with $r=1cm$, $\\gamma=1000$ then $\\sigma_z \\geq 10\\mu m$. In our investigation we have shown that the most substantial reductions in wakefield are expected for very small bunch lengths, therefore even without the difficulty introduced by the bend this formula would be inappropriate. 
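The validity bound quoted above is easy to evaluate; a one-line sketch with the example values:

```python
# The free-space pancake formula above assumes sigma_z >= r / gamma.
r = 1e-2          # radial distance from the beam axis [m]
gamma = 1000      # relativistic factor
sigma_z_min = r / gamma
print(sigma_z_min)   # threshold rms bunch length [m]
```

This reproduces the $10\,\mu$m threshold stated in the text; the short bunches of interest here sit well below it.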
The correct form for the field in a curved beam pipe is a boundary value problem depending on the intricate geometry of the beampipe, see for example \\cite{GotoTucker}.\\\\[3pt]\n\n\n\n\n\nIn this investigation we have adopted the \\emph{rigid beam approximation} so that the charge profile\nremains constant throughout the whole trajectory. This approximation is generally adopted in calculating\nthe field generated by a relativistic beam traveling in a straight line, however for a pre-bent trajectory\nthere will be CSR wakes and energy loss due to radiation which in general will disrupt the charge profile.\nIn order to determine whether there will be an advantage in bending the beam before collimation it will be necessary to calculate the net effect of bending plus reduced collimation wakes compared with collimation wakes on a straight beam trajectory. It is probable that the adverse effects of implementing an additional bend will be too severe for there to be any advantage in this new approach. However all accelerators, even linacs, already have to bend the beam using dipoles in certain places. Therefore it seems natural to\nplace a collimator directly after a bending magnet in order to prevent additional adverse effects. The optimum design of the beam path, beam\ntube and collimator shape, for particular machines will require a\ncombination of analytic, numerical and experimental research.\nClearly long tapers will reduce the advantage gained by bending the beam\nsince it will give time for the pancake to form. 
However it may be\nadvantageous to use a short taper.\n\n \n\n\n\n\n\n\n\n\n\\begin{appendices}\n\n \\chapter{Dimensional Analysis}\n\\label{app_dimensions}\n The SI base units are given by\n \\begin{align}\n L&=\\textup{metre},\\quad m\\notag\\\\\n T&=\\textup{second}, \\quad s\\notag\\\\\n M&=\\textup{kilogram}, \\quad kg\\notag\\\\\n A&=\\textup{Ampere}, \\quad A\n \\end{align}\n It is more convenient to use the dimension of charge $Q$ instead of $A$, with derived unit the Coulomb $C=sA$. We use square brackets to denote the dimensions of the enclosed object. The constants $\\epsilon_0, \\mu_0$ where $c^{-2}=\\epsilon_0 \\mu_0$ have dimensions\n \\begin{align}\n [\\epsilon_0]=&\\frac{Q^2 T^2}{M L^3}\\notag\\\\\n [\\mu_0]=&\\frac{ML}{Q^2}\n \\end{align}\n The electric current 3-form has the dimension of charge $[\\underset{{\\footnotesize (3)}}{\\mathcal{J}}]=Q$. Here the \\begin{footnotesize}$(3)$\\end{footnotesize} denotes the degree of the differential form. Dimensions of the electric and magnetic fields are given by\n\\begin{align}\n[\\underset{{\\footnotesize (0)}}{\\mathcal{E}^i}]=&\\frac{M L}{Q T^2}, \\qquad [\\mathcal{E}]=\\frac{M}{Q T^2} \\qquad \\textup{and} \\qquad\n[\\underset{{\\footnotesize (1)}}{\\widetilde{\\e}}]=\\frac{M L^2}{Q T^2}\\notag\\\\\n[\\underset{{\\footnotesize (0)}}{\\mathcal{B}^i}]=&\\frac{M }{Q T},\\qquad [\\mathcal{B}]=\\frac{M}{L Q T^2} \\qquad \\textup{and} \\qquad\n[\\underset{{\\footnotesize (1)}}{\\widetilde{\\mathcal{B}}}]=\\frac{M L }{Q T}\n\\end{align}\nand dimensions of the electromagnetic 1-from potential $\\mathcal{A}$ and 2-forms $\\mathcal{F}$ and $\\star\\mathcal{F}$ are given by\n\\begin{align}\n[\\underset{{\\footnotesize (1)}}{\\mathcal{A}}]=[\\underset{{\\footnotesize (2)}}{\\mathcal{F}}]=[\\underset{{\\footnotesize (2)}}{\\star\\mathcal{F}}]=\\frac{M L^3}{Q T^2}\n\\end{align}\nAlso\n\\begin{align}\n[\\underset{{\\footnotesize (3)}}{\\mathcal{S}_{\\textup{K}}}]=& [\\underset{{\\footnotesize 
(0)}}{\\textup{P}_{\\textup{\\textup{K}}}}]=\\frac{ML}{T}\\qquad\\textup{and}\\qquad [\\underset{{\\footnotesize (0)}}{\\dot{\\textup{P}}_{\\textup{\\textup{K}}}}]= [\\textup{force}]= \\frac{ML}{T^2}.\n\\end{align}\n Base quantities have dimensions\n \\begin{align}\n[x^a]=[dx^a]=L, \\qquad\\textup{and} \\qquad\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=\\frac{1}{L}\n\\end{align}\nsuch that $[g]= L^2$ and $[\\g^{-1}]= \\tfrac{1}{L^2}$.\n\n\n\\section{Dimensions in chapter \\ref{maxlorentz} and Part II}\n\nWe choose proper time $\\tau$ to have the dimension of time $T$ such that\n\\begin{align}\n[C^a(\\tau)]&=L, \\quad\\quad [\\dot{\\c}^a(\\tau)]=\\frac{L}{T}, \\quad\\quad [\\ddot{\\c}^a(\\tau)]=\\frac{L}{T^2}\n\\end{align}\nIt follows that\n\\begin{align}\n[X]=&[x^a-C^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=1 , \\qquad [\\widetilde{X}]=[g(-,X)]=L^2,\\notag\\\\\n[V]=&[\\dot{\\c}^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=\\frac{1}{T}, \\qquad [\\widetilde{V}]=[g(-, V)]=\\frac{L^2}{T},\\notag\\\\\n[A]=&[\\ddot{\\c}^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=\\frac{1}{T^2}, \\qquad[\\widetilde{A}]=[g(-, A)]=\\frac{L^2}{T^2}\n\\end{align}\nand\n\\begin{align}\n[g(V, V)]=&\\frac{L^2}{T^2}, \\quad\\quad [g(X, V)]=\\frac{L^2}{T}, \\quad\\quad [g(X, A)]=\\frac{L^2}{T^2}\n\\end{align}\n\n\n\n\\section{Dimensions in Part I}\n\n\nWe choose proper time $\\tau$ to have the dimension of length $L$ such that\n\\begin{align}\n[C^a(\\tau)]&=L, \\quad\\quad [\\dot{\\c}^a(\\tau)]=1, \\quad\\quad [\\ddot{\\c}^a(\\tau)]=\\frac{1}{L}\n\\end{align}\nIt follows that\n\\begin{align}\n[X]=&[x^a-C^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=1 , \\qquad [\\widetilde{X}]=[g(-,X)]=L^2,\\notag\\\\\n[V]=&[\\dot{\\c}^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=\\frac{1}{L}, \\qquad [\\widetilde{V}]=[g(-, V)]=L,\\notag\\\\\n[A]=&[\\ddot{\\c}^a(\\tau)]\\Big[\\frac{\\partial}{\\partial x^a}\\Big]=\\frac{1}{L^2}, \\qquad[\\widetilde{A}]=[g(-, 
A)]=1\n\\end{align}\nand\n\\begin{align}\n[g(V, V)]=&1, \\quad\\quad [g(X, V)]=L, \\quad\\quad [g(X, A)]=1\n\\end{align}\n\n\\chapter{Differential Geometry}\n\\label{diffgeom}\n\nIn this appendix we present a brief introduction to the geometric constructs and notation encountered in the thesis. This introduction is by no means complete, and for a deeper understanding of the subject we refer the reader to the vast collection of introductory books, of which \\cite{BennTucker, Frankel03} are good examples.\n\n\\section{Tensor Fields and differential forms}\n\\subsection*{Vector fields}\n\\begin{definition}\nLet $\\mathbf{M}$ be an arbitrary differential manifold of dimension $m$, and $x$ an arbitrary point in $\\mathbf{M}$. The \\begin{bf} space of smooth real-valued $C^{\\infty}$ functions \\end{bf} over $\\mathbf{M}$ is denoted $\\mathcal{F}(\\mathbf{M})$,\n\\begin{align}\nf \\in\\mathcal{F}(\\mathbf{M})\\quad \\text{implies} \\quad f : \\mathbf{M} &\\rightarrow \\mathbb{R} , \\qquad x \\mapsto f(x)\n\\end{align}\n\\end{definition}\n\\begin{definition} A (contravariant) \\begin{bf}vector at a point \\end{bf} $V|_x$ is a map from $\\mathcal{F}(\\mathbf{M})$ to $\\mathbb{R} $,\n\\begin{align}\nV|_{x} : \\mathcal{F}(\\mathbf{M})\\rightarrow \\mathbb{R} , \\qquad f \\mapsto V|_{x}(f),\n\\end{align}\nwhich satisfies\n \\begin{align}\n V|_{x} (f + g) &= V|_{x}(f) + V|_{x}(g)\\notag,\\\\\n V|_{x}(\\lambda f)&= \\lambda V|_{x}(f)\\notag,\\\\\n\\text{and}\\qquad V|_{x} (f g) &= V|_{x}(f)g(x) + f(x) V|_{x}(g),\n\\label{vp_def}\n \\end{align}\n where $f, g \\in \\mathcal{F}(\\mathbf{M})$ and $\\lambda$ is an arbitrary scalar.\n \\end{definition}\n\n\\begin{definition}\nFor every $x\\in \\mathbf{M}$ the \\begin{bf}tangent space\\end{bf} to $\\mathbf{M}$ at the point $x$, written $\\textup{T}_x \\mathbf{M}$, is the $m$ dimensional vector space whose elements are the vectors at $x$, i.e. $V|_{x} \\in \\textup{T}_x \\mathbf{M}$. 
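The defining properties (\ref{vp_def}) are exactly those of a directional derivative. As an informal numerical aside (not part of the formal development), the following minimal Python sketch checks linearity and the Leibniz rule to finite-difference accuracy; the point, the direction and the test functions are arbitrary illustrative choices.

```python
# Sanity check of the vector-at-a-point axioms: realise V|_x as the
# directional derivative of smooth functions on R^2 along a direction v,
# approximated by a central finite difference. All numbers are examples.
import math

def directional_derivative(f, x, v, h=1e-6):
    """Central-difference approximation of V|_x(f) for V = v at the point x."""
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2 * h)

f = lambda p: math.sin(p[0]) * p[1]   # f(x, y) = sin(x) y
g = lambda p: p[0] ** 2 + p[1]        # g(x, y) = x^2 + y
x = [0.7, 1.3]
v = [2.0, -1.0]

Vf = directional_derivative(f, x, v)
Vg = directional_derivative(g, x, v)

# Additivity: V(f + g) = V(f) + V(g)
Vfg_sum = directional_derivative(lambda p: f(p) + g(p), x, v)
assert abs(Vfg_sum - (Vf + Vg)) < 1e-6

# Homogeneity: V(lambda f) = lambda V(f)
Vlf = directional_derivative(lambda p: 3.5 * f(p), x, v)
assert abs(Vlf - 3.5 * Vf) < 1e-6

# Leibniz rule: V(f g) = V(f) g(x) + f(x) V(g)
Vprod = directional_derivative(lambda p: f(p) * g(p), x, v)
assert abs(Vprod - (Vf * g(x) + f(x) * Vg)) < 1e-6
```

Since central differences are only $O(h^2)$ accurate, the identities hold here up to a small numerical tolerance rather than exactly.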
The \\begin{bf} tangent bundle\\end{bf} is the $2m$ dimensional manifold given by the set theoretic union of the tangent spaces $\\textup{T}_x \\mathbf{M}$ for all $x\\in \\mathbf{M}$,\n\\begin{align}\n\\textup{T} \\mathbf{M}=\\bigcup_{x\\in \\mathbf{M}} \\textup{T}_x \\mathbf{M}.\n\\end{align}\nWe call $\\mathbf{M}$ the base space.\n\\end{definition}\n\\begin{definition}\nGiven the projection map\n\\begin{align}\n\\pi:\\textup{T} \\mathbf{M}\\rightarrow \\mathbf{M},\\qquad V|_{x}\\mapsto x,\n\\end{align}\na \\begin{bf} section of the tangent bundle\\end{bf} is a continuous map\n\\begin{align}\n&V: \\mathbf{M} \\rightarrow \\textup{T} \\mathbf{M},\\qquad x \\mapsto V|_{x}\\notag\\\\\n\\text{such that}\\qquad\n&\\pi(V|_{x})=x \\qquad \\text{for all}\\qquad x\\in \\mathbf{M}.\n\\label{vecfield_def}\n\\end{align}\nThe map $V$ identifies a vector at a point for each point in the base space\\footnote{We have used the notation $V|_{x}$ to denote both a vector at a point and a vector field evaluated at a point.}, therefore when acting on a smooth function it is the map\n\\begin{align}\nV: \\mathcal{F}(\\mathbf{M}) \\rightarrow \\mathcal{F}(\\mathbf{M}) , \\qquad f\\mapsto V(f)\n\\end{align}\n with the properties (\\ref{vp_def}). A smooth section of the tangent bundle is a \\begin{bf} vector field \\end{bf}. The space of vector fields over $\\mathbf{M}$ is written $\\Gamma \\textup{T} \\mathbf{M}$.\n \\end{definition}\n \\begin{definition}\nGiven a local coordinate basis $y^a$ on $\\mathbf{M}$ there exists an induced local basis $\\tfrac{\\partial}{\\partial y^a}$ on $\\textup{T} \\mathbf{M}$. In terms of this basis a vector field $V\\in\\Gamma\\textup{T}\\mathbf{M}$ is given by\n\\begin{align}\nV=V^a\\frac{\\partial}{\\partial y^a}\n\\end{align}\nwhere the Einstein summation convention is used for $a=1,\\ldots,m$ and $V^a$ are smooth functions on $\\mathbf{M}$. 
In terms of a different local coordinate basis $z^a$\n\\begin{align}\nV=V^a \\frac{\\partial}{\\partial y^a}=V^a \\frac{\\partial z^b}{\\partial y^a}\\frac{\\partial}{\\partial z^b},\n\\end{align}\nwhere $V^a$ are the components of $V$ in the $y^a$ coordinate basis and $V^a \\partial z^b \/ \\partial y^a$ are the components of $V$ in the $z^b$ coordinate basis.\n\\end{definition}\n\\subsection*{Differential $1$-forms}\n\\begin{definition}\nThe space dual to the tangent space $\\textup{T}_x \\mathbf{M}$ is called the cotangent space at $x$ and is denoted by $\\textup{T}_x^\\ast \\mathbf{M}$.\nElements $\\zeta|_{x}\\in \\textup{T}_x^\\ast \\mathbf{M} $ are covariant vectors or covectors at $x$ and satisfy\n\\begin{align}\n\\zeta|_{x}: \\textup{T}_x \\mathbf{M} \\rightarrow \\mathbb{R} , \\qquad V|_{x}\\mapsto \\zeta|_{x}(V|_{x}),\n\\end{align}\nwith the properties\n \\begin{align}\n\\zeta|_{x}(V|_{x} + W|_{x}) &= \\zeta|_{x}(V|_{x}) +\\zeta|_{x}(W|_{x})\\notag,\\\\\n\\zeta|_{x} (\\lambda V|_{x} )&= \\lambda \\zeta|_{x} (V|_{x}) ,\n\\label{flin_def}\n \\end{align}\nwhere $\\lambda$ is an arbitrary scalar.\n \\end{definition}\n \\begin{definition}\nThe \\begin{bf} cotangent bundle\\end{bf} is the $2 m$ dimensional manifold $\\textup{T}^\\ast \\mathbf{M}$ defined as the set theoretic union of the cotangent spaces $\\textup{T}_x^\\ast \\mathbf{M}$ for all $x\\in \\mathbf{M}$,\n\\begin{align}\n\\textup{T}^\\ast \\mathbf{M}=\\bigcup_{x\\in \\mathbf{M}} \\textup{T}_x^\\ast \\mathbf{M}.\n\\end{align}\n\\end{definition}\n\\begin{definition}\nGiven the projection map\n\\begin{align}\n\\pi_c:\\textup{T}^{\\ast}\\mathbf{M}\\rightarrow \\mathbf{M},\\qquad \\zeta|_{x} \\mapsto x,\n\\end{align}\na \\begin{bf} section of the cotangent bundle\\end{bf} is a continuous map\n\\begin{align}\n&\\zeta: \\mathbf{M} \\rightarrow \\textup{T}^\\ast \\mathbf{M},\\qquad x \\mapsto \\zeta|_{x}\\notag\\\\\n\\text{such that}\\qquad\n&\\pi_c(\\zeta|_{x})=x \\qquad \\text{for all}\\qquad x\\in \\mathbf{M}.\n\\end{align}\nThe map $\\zeta$ identifies a covector at a 
point for each point in the base space. When acting on a vector field it is the map \\begin{align}\n\\zeta: \\Gamma \\textup{T} \\mathbf{M} \\rightarrow \\mathcal{F}(\\mathbf{M}), \\qquad V\\mapsto \\zeta(V)\n\\end{align}\n with the properties\n \\begin{align}\n\\zeta(V + W) &= \\zeta(V) +\\zeta(W)\\notag,\\\\\n\\zeta (f V )&= f \\zeta (V) .\n\\label{flin_def_field}\n \\end{align}\n A smooth section of the cotangent bundle is called a \\begin{bf} covector field \\end{bf} or \\begin{bf} (differential) $1$-form \\end{bf}. The space of $1$-forms is written $\\Gamma \\textup{T}^\\ast \\mathbf{M}$.\n\\end{definition}\n\\begin{lemma}\n The duality of $\\textup{T}_x \\mathbf{M}$ and $\\textup{T}_x^\\ast \\mathbf{M}$ demands\n \\begin{align}\n \\zeta|_{x} (V|_{x})=V|_{x}(\\zeta|_{x})\n \\label{eq_def}\n \\end{align}\n where $V|_{x}(\\zeta|_{x})$ satisfies the reversal of (\\ref{flin_def}) with respect to vectors and covectors.\n\\end{lemma}\n \\begin{definition}\nGiven a local coordinate basis $y^a$ on $\\mathbf{M}$ there exists an induced local basis $dy^a$ on $\\textup{T}^\\ast \\mathbf{M}$.\nIn terms of this basis a differential 1-form $\\zeta\\in\\Gamma\\textup{T}^\\ast\\mathbf{M}$ is given by\n\\begin{align}\n\\zeta=\\zeta_a dy^a\n\\end{align}\nwhere $\\zeta_a$ are smooth functions on $\\mathbf{M}$. In terms of a different local coordinate basis $z^a$\n\\begin{align}\n\\zeta=\\zeta_a dy^a=\\zeta_a \\frac{\\partial y^a}{\\partial z^b} d z^b,\n\\label{cov_trans}\n\\end{align}\nwhere $\\zeta_a$ are the components of $\\zeta$ in the $y^a$ coordinate basis and $\\zeta_a \\frac{\\partial y^a}{\\partial z^b} $ are the components of $\\zeta$ in the $z^b$ coordinate basis. \n\\end{definition}\n\n\n\\subsection*{Tensor fields}\n\n\n\\begin{definition}\nThe degree of an arbitrary tensor will be represented as an ordered list\n$s$ of $0$ or more entries. Each entry is either the symbol $\\mathds{F}$ (for 1-form) or $\\mathds{V}$\n(for vector) e.g. 
$s = [\\mathds{V}, \\mathds{F}, \\mathds{F}, \\mathds{V}]$. The space of tensors of degree $s$ over $\\mathbf{M}$ is\ndenoted $\\bigotimes^s \\mathbf{M}$ with sections $\\Gamma \\bigotimes^s \\mathbf{M}$. Let the tangent bundle and the cotangent bundle be denoted\n\\begin{align}\n\\textup{T} \\mathbf{M}= \\textstyle{\\bigotimes}^{[\\mathds{V}]} \\mathbf{M} \\qquad \\textup{and} \\qquad \\textup{T}^{\\ast} \\mathbf{M}= \\textstyle{\\bigotimes}^{[\\mathds{F}]} \\mathbf{M},\n\\end{align}\nthen arbitrary degree tensors are constructed using the tensor product,\n\\begin{align}\n\\otimes : \\textstyle{\\bigotimes}^s \\mathbf{M} \\times \\textstyle{\\bigotimes}^t \\mathbf{M} \\rightarrow \\textstyle{\\bigotimes}^{[s, t]} \\mathbf{M}, \\qquad (\\mathbf{T}, \\mathbf{S}) \\mapsto \\mathbf{T} \\otimes \\mathbf{S}\n\\end{align}\nwhere $s$ and $t$ are ordered lists and $[s, t]$ is simply the concatenation of the two lists. For example, given a vector field $V\\in \\Gamma \\textup{T} \\mathbf{M}$ and $1$-forms $\\alpha, \\beta \\in \\Gamma \\textup{T}^{\\ast} \\mathbf{M}$ we may define a degree $[\\mathds{V}, \\mathds{F}, \\mathds{F}]$ tensor field by\n\\begin{align}\nV \\otimes \\alpha \\otimes \\beta \\in \\Gamma \\textstyle{\\bigotimes}^{[\\mathds{V}, \\mathds{F}, \\mathds{F}]} \\mathbf{M}.\n\\end{align}\nThe tensor product satisfies\n\\begin{align}\n\\mathbf{T} \\otimes (\\mathbf{S} \\otimes \\mathbf{R}) =& (\\mathbf{T} \\otimes \\mathbf{S}) \\otimes \\mathbf{R}\\notag\\\\\n(\\mathbf{T}_1 + \\mathbf{T}_2) \\otimes \\mathbf{S} =& \\mathbf{T}_1 \\otimes \\mathbf{S} + \\mathbf{T}_2 \\otimes \\mathbf{S}\n\\end{align}\nand\n\\begin{align}\nf (\\mathbf{T} \\otimes \\mathbf{S})=& (f \\mathbf{T}) \\otimes \\mathbf{S}= \\mathbf{T} \\otimes (f\\mathbf{S})\n\\end{align}\nfor $\\mathbf{T}, \\mathbf{T}_1, \\mathbf{T}_2 \\in \\textstyle{\\bigotimes}^s \\mathbf{M}$, $\\mathbf{S}\\in \\textstyle{\\bigotimes}^t \\mathbf{M}$, $\\mathbf{R} \\in \\textstyle{\\bigotimes}^u \\mathbf{M}$ and $f \\in 
\\mathcal{F}(\\mathbf{M})$.\nThe dual space of $\\textstyle{\\bigotimes}^{s}\\mathbf{M}$ is denoted $\\textstyle{\\bigotimes}^{\\bar{s}}\\mathbf{M}$, where $\\bar{s}$ is the list obtained by interchanging the symbols $\\mathds{F}$ and $\\mathds{V}$ in $s$. The total contraction of elements in $\\textstyle{\\bigotimes}^{\\bar{s}}\\mathbf{M}$ with elements in $\\textstyle{\\bigotimes}^{s}\\mathbf{M}$ is written\n\\begin{align}\n\\textstyle{\\bigotimes}^{\\bar{s}}\\mathbf{M} \\times \\textstyle{\\bigotimes}^{s}\\mathbf{M} \\rightarrow \\mathcal{F}(\\mathbf{M}), \\qquad (\\mathbf{T}, \\mathbf{R}) \\mapsto \\mathbf{T}:\\mathbf{R}\n\\end{align}\nwhere $\\mathbf{T}\\in \\textstyle{\\bigotimes}^{\\bar{s}}\\mathbf{M} $ and $ \\mathbf{R} \\in \\textstyle{\\bigotimes}^{s}\\mathbf{M} $. It is defined inductively via\n\\begin{align}\nV:\\alpha=\\alpha:V=\\alpha(V) \\qquad \\textup{where} \\qquad \\alpha \\in \\textstyle{\\bigotimes}^{[\\mathds{F}]} \\mathbf{M}\\qquad \\textup{and}\\qquad V \\in \\textstyle{\\bigotimes}^{[\\mathds{V}]} \\mathbf{M},\n\\end{align}\nand extended to arbitrary tensors by\n\\begin{align}\n(\\mathbf{S} \\otimes \\mathbf{T}):(\\mathbf{R} \\otimes \\mathbf{U})=(\\mathbf{S} : \\mathbf{R})(\\mathbf{T}:\\mathbf{U}).\n\\end{align}\n\\end{definition}\n\n\n\\subsection*{The metric}\n\\begin{definition}\nOf special importance is the symmetric, non-degenerate degree $[ \\mathds{F}, \\mathds{F}]$ tensor field $g\\in \\Gamma\\bigotimes^{[ \\mathds{F}, \\mathds{F}]} \\mathbf{M}$ with the properties\n\\begin{align}\n&g(V, W)=g(W, V),\\notag\\\\\n\\text{and}\\quad &g(V, W)=0\\quad \\text{for all}\\quad V \\quad\\Rightarrow\\quad W=0.\n\\end{align}\nThis tensor is called the metric. It provides an isomorphism between the covariant and contravariant vector fields.\nThe metric dual $\\widetilde{\\vvec}$ of a vector field $V$ is the differential $1$-form given by\n\\begin{align}\n\\widetilde{\\vvec}=g(V, -),\n\\label{dual_def}\n\\end{align}\nwhere $(-)$ denotes an empty argument. 
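As an informal component-level illustration of the metric dual (\ref{dual_def}): in a coordinate basis, lowering an index sends $V^a$ to $\widetilde{V}_b = g_{ab}V^a$, and the pairing $\widetilde{V}(W)$ reproduces $g(V,W)$. The Python sketch below uses an arbitrarily chosen Minkowski-type matrix and example component values; none of these numbers are fixed by the text.

```python
# Metric dual ("index lowering") in components: Vtilde_b = sum_a g_ab V^a.
# The metric diag(-1, 1, 1, 1) and the vectors V, W are example choices.
g = [[-1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
V = [2.0, 1.0, 0.0, 3.0]
W = [1.0, 4.0, -2.0, 0.5]

def lower(g, V):
    """Components of the metric dual g(V, -)."""
    return [sum(g[a][b] * V[a] for a in range(len(V))) for b in range(len(V))]

def pairing(omega, W):
    """Natural pairing of a 1-form (components) with a vector (components)."""
    return sum(o * w for o, w in zip(omega, W))

Vt = lower(g, V)
# g(V, W) computed directly agrees with the pairing Vtilde(W)
gVW = sum(g[a][b] * V[a] * W[b] for a in range(4) for b in range(4))
assert abs(pairing(Vt, W) - gVW) < 1e-12
```

The same arrays also illustrate the inverse metric relation: raising the lowered components with the matrix inverse of `g` returns the original $V^a$.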
There exists a symmetric, non-degenerate degree $[ \\mathds{V}, \\mathds{V}]$ tensor field $\\g^{-1} \\in \\Gamma\\bigotimes^{[ \\mathds{V}, \\mathds{V}]} \\mathbf{M}$ which satisfies\n\\begin{align}\n&\\g^{-1}(\\widetilde{\\vvec}, \\widetilde{\\wvec})=g(V, W)\\quad\\text{and}\\quad V=\\g^{-1}(\\widetilde{\\vvec}, -).\n\\label{gdual_def}\n\\end{align}\nGiven the local coordinate basis $y^a$ on $\\mathbf{M}$ the metric $g$ is given by\n\\begin{align}\ng=g_{ab}d y^a \\otimes d y^b\n\\end{align}\nwhere the functions $g_{ab}$ are determined by\n\\begin{align}\ng_{ab}=g\\Big(\\frac{\\partial}{\\partial y^a},\\frac{\\partial}{\\partial y^b} \\Big).\n\\end{align}\n\\end{definition}\n\\subsection*{Differential $p$-forms}\n\\begin{definition}\nAn important subspace of $ \\bigotimes \\mathbf{M}$ is the space of totally antisymmetric degree $[\\mathds{F}_1,...,\\mathds{F}_p]$ tensors, denoted $\\Lambda^p \\mathbf{M}$. The space of smooth sections of $\\Lambda^p \\mathbf{M}$ is denoted $\\Gamma \\Lambda^p \\mathbf{M}$ and elements $\\Psi\\in \\Gamma \\Lambda^p \\mathbf{M}$ are called (differential) $p$-forms. For example for $\\nu, \\omega \\in \\Gamma \\bigotimes^{[\\mathds{F}]} \\mathbf{M}$ the totally antisymmetric part of the degree $[\\mathds{F}, \\mathds{F}]$ tensor field $\\nu \\otimes \\omega$ is the difference\n\\begin{align}\n\\frac{1}{2}(\\nu\\otimes\\omega-\\omega \\otimes \\nu)=\\Psi\n\\label{form_def}\n\\end{align}\nsince reversing the positions of $\\nu$ and $\\omega$ yields\n\\begin{align}\n\\frac{1}{2}(\\omega\\otimes\\nu-\\nu \\otimes \\omega)=-\\Psi,\n\\end{align}\ntherefore $\\Psi \\in\\Gamma \\Lambda^2 \\mathbf{M}$ is a differential $2$-form. The space of differential $0$-forms $\\Gamma \\Lambda^0 \\mathbf{M}$ is defined as the space of smooth functions over $\\mathbf{M}$, i.e. 
$\\mathcal{F} (\\mathbf{M}) = \\Gamma \\Lambda^0 \\mathbf{M}$, and the space of differential $1$-forms $\\Gamma \\textup{T}^\\ast \\mathbf{M}=\\Gamma \\bigotimes^{[\\mathds{F}]} \\mathbf{M}$ is now also written as $\\Gamma \\Lambda^1 \\mathbf{M}$.\n\nHigher degree forms are obtained using the \\begin{bf} exterior or wedge product\\end{bf}. The wedge product of $\\alpha\\in \\Gamma \\Lambda^p \\mathbf{M}$ and $ \\beta \\in \\Gamma \\Lambda^q \\mathbf{M}$ is the map\n\\begin{align}\n\\wedge : \\Gamma \\Lambda^p \\mathbf{M} \\times \\Gamma \\Lambda^q \\mathbf{M} \\quad \\rightarrow \\quad \\Gamma \\Lambda^{p+q} \\mathbf{M}, \\qquad (\\alpha, \\beta) \\quad \\mapsto \\quad \\alpha\\wedge\\beta,\n\\end{align}\nwhere the $(p+q)$-form $\\alpha\\wedge \\beta$ is the totally antisymmetric part of the tensor field $\\alpha\\otimes \\beta$.\nThe wedge product satisfies\n\\begin{align}\n\\alpha \\wedge (\\beta \\wedge \\gamma) =& (\\alpha \\wedge \\beta) \\wedge \\gamma ,\\notag\\\\\n(\\alpha_1 + \\alpha_2) \\wedge \\beta =& \\alpha_1 \\wedge \\beta + \\alpha_2 \\wedge \\beta,\n\\end{align}\nand\n\\begin{align}\nf (\\alpha \\wedge \\beta)=& (f \\alpha) \\wedge \\beta= \\alpha \\wedge (f\\beta),\n\\end{align}\nand\n\\begin{align}\n\\alpha \\wedge \\beta=(-1)^{p q} \\beta\\wedge \\alpha\n\\end{align}\nfor $\\alpha,\\alpha_1, \\alpha_2 \\in \\Gamma\\Lambda^{p}\\mathbf{M}$, $\\beta \\in \\Gamma\\Lambda^{q}\\mathbf{M}$, $\\gamma\\in \\Gamma\\Lambda^{r}\\mathbf{M}$ and $f \\in \\mathcal{F}(\\mathbf{M})$. It follows that any arbitrary $p$-form can be reduced to a linear superposition of wedge products of $p$ differential $1$-forms. 
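As an informal numerical check of the graded commutation rule $\alpha\wedge\beta=(-1)^{pq}\beta\wedge\alpha$ for $p=q=1$, the sketch below realises the wedge of two $1$-forms as a bilinear map using the $\tfrac{1}{2}$-weighted antisymmetrisation convention of (\ref{form_def}); all component values are arbitrary example choices.

```python
# Wedge product of two 1-forms as a bilinear map on ordered pairs (V, W),
# with the 1/2-weighted antisymmetrisation convention:
#   (alpha ^ beta)(V, W) = (1/2)(alpha(V) beta(W) - beta(V) alpha(W)).
def one_form(components):
    return lambda V: sum(c * v for c, v in zip(components, V))

def wedge(alpha, beta):
    """2-form (alpha wedge beta) acting on an ordered pair of vectors."""
    return lambda V, W: 0.5 * (alpha(V) * beta(W) - beta(V) * alpha(W))

alpha = one_form([1.0, 2.0, 0.0])   # example components in some basis
beta  = one_form([0.0, 1.0, -1.0])
V = [1.0, 0.0, 2.0]
W = [3.0, 1.0, 1.0]

ab = wedge(alpha, beta)
ba = wedge(beta, alpha)
# Graded antisymmetry for 1-forms: alpha ^ beta = -(beta ^ alpha)
assert abs(ab(V, W) + ba(V, W)) < 1e-12
# Total antisymmetry in the arguments: Psi(V, W) = -Psi(W, V)
assert abs(ab(V, W) + ab(W, V)) < 1e-12
```

Dropping the factor of $\tfrac{1}{2}$ gives the other common convention; only the overall normalisation of $\alpha\wedge\beta$ changes, not the antisymmetry properties checked here.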
Given a local coordinate basis $y^a$ on $\\mathbf{M}$,\n\\begin{align}\n\\alpha\\in\\Gamma\\Lambda^p \\mathbf{M}, \\qquad \\alpha=\\alpha_{a_1 a_2 ..a_p}dy^{a_1}\\wedge dy^{a_2}\\wedge..\\wedge dy^{a_p}.\n\\label{form_comp}\n\\end{align}\n\\end{definition}\n\n\n\\section{Differential operators}\n\\begin{definition}For the following definitions it is useful to define the map\n\\begin{align}\n&\\eta: \\Lambda^p \\mathbf{M} \\rightarrow \\Lambda^p \\mathbf{M}, \\qquad \\alpha \\mapsto \\eta(\\alpha),\\notag\\\\\n\\text{where} \\quad &\\eta(\\alpha) = (-1)^{p}\\alpha \\quad \\text{for all}\\quad \\alpha\\in \\Lambda^p \\mathbf{M}.\n\\end{align}\n\\end{definition}\n\\subsection*{Exterior derivative}\n\\begin{definition}\nThe \\begin{bf}exterior derivative\\end{bf} of a differential form is the map\n\\begin{align}\nd: \\Gamma\\Lambda^p \\mathbf{M} \\rightarrow \\Gamma \\Lambda^{p+1}\\mathbf{M}, \\qquad \\alpha \\mapsto d\\alpha,\n\\end{align}\nwhere\n\\begin{align}\ndf (V)&=V(f),\\notag\\\\\nd(\\alpha \\wedge \\beta)&=d\\alpha\\wedge\\beta+\\eta (\\alpha) \\wedge d\\beta,\\notag\\\\\nd(d\\alpha)&=0\n\\end{align}\nfor all $f \\in\\Gamma\\Lambda^0 \\mathbf{M}$, $V \\in \\Gamma \\textup{T} \\mathbf{M}$, $\\alpha \\in \\Gamma \\Lambda^p \\mathbf{M}$ and $\\beta \\in \\Gamma \\Lambda^q \\mathbf{M}$.\n\\end{definition}\n\\begin{lemma}\nFor a $1$-form $\\nu \\in \\Gamma\\Lambda^1 \\mathbf{M}$ and vector fields $V, W \\in \\Gamma \\textup{T} \\mathbf{M}$ the following relation holds\n\\begin{align}\n2 d\\nu (V, W) =& V(\\nu( W))-W (\\nu( V))-\\nu([V, W]),\n\\label{twoform}\n\\end{align}\nwhere $[ , ]$ denotes the \\emph{Lie bracket}. The factor of $2$ arises from the $\\tfrac{1}{2}$-weighted antisymmetrisation in (\\ref{form_def}).\n\\end{lemma}\n\\begin{proofs}\nIt is sufficient to prove this for the $1$-form $\\nu=f dg$, where $f, g \\in \\mathcal{F}(\\mathbf{M})$. 
In this case $d\\nu=df\\wedge dg$ and thus\n\\begin{align}\nd\\nu (V, W)=& df \\wedge dg (V, W),\\notag\\\\\n=&\\frac{1}{2}\\big(df ( V)dg(W) - dg ( V)df ( W)\\big),\\notag\\\\\n=&\\frac{1}{2}\\big(V(f)W(g)-V(g)W(f)\\big).\n\\end{align}\nHere $d\\nu (V, W)$ is the action of the $2$-form $d\\nu$ on the ordered pair of vector fields $(V, W)$.\nAlso\n\\begin{align}\nV(\\nu (W))-W(\\nu(V))&-\\nu( [V, W])\\notag\\\\\n=& V(fdg(W))-W(fdg(V))-fdg([V, W])\\notag\\\\\n=&V(f ( W(g)))-W(f V(g))-f[V, W](g)\\notag\\\\\n=&V(f)W(g)+fV(W(g))-W(f)V(g) \\notag\\\\\n&-fW(V(g))-f[V, W](g)\\notag\\\\\n=&V(f)W(g)-W(f)V(g)\\notag\\\\\n=&2 d\\nu (V, W)\n\\end{align}\n\n\\end{proofs}\n\\subsection*{Interior contraction}\n\\begin{definition} The \\begin{bf}interior contraction\\end{bf} of a $p$-form with respect to a vector field is defined by\n\\begin{align}\ni: \\Gamma \\textup{T} \\mathbf{M} \\times \\Gamma\\Lambda^p \\mathbf{M} \\rightarrow \\Gamma \\Lambda^{p-1}\\mathbf{M}, \\qquad (V, \\alpha) \\mapsto i_V \\alpha\n\\end{align}\nwhere\n\\begin{align}\ni_V \\nu&=\\nu(V)\\notag\\\\\ni_V (\\alpha\\wedge\\beta)&=i_V \\alpha \\wedge\\beta +\\eta(\\alpha)\\wedge i_V \\beta\\notag\\\\\ni_V i_V \\alpha&=0\n\\label{internal_defs}\n\\end{align}\nfor all $V \\in \\Gamma \\textup{T} \\mathbf{M}$, $\\nu \\in\\Gamma\\Lambda^1 \\mathbf{M}$, $\\alpha\\in \\Gamma \\Lambda^p \\mathbf{M}$ and $\\beta \\in \\Gamma \\Lambda^q \\mathbf{M}$.\n\\end{definition}\n\n\n\\subsection*{Hodge Dual}\n\\begin{definition}\n\\label{def_starone}\nIntroduce a $g$-orthonormal frame $e^a$ such that\n\\begin{align}\ng=g_{ab}dy^a\\otimes dy^b=\\eta_{ab}e^a\\otimes e^b\n\\end{align}\nwhere $\\eta_{ab}=\\pm 1$ for $a=b$ and $\\eta_{ab}=0$ for $a\\neq b$. 
Then\n\\begin{align}\n\\star 1\\in \\Lambda^m \\mathbf{M}, \\qquad \\star1 = e^0\\wedge e^1\\wedge..\\wedge e^{m-1},\n\\end{align}\nis the \\begin{bf} volume form \\end{bf} on $\\mathbf{M}$.\n\\end{definition}\n\\begin{definition}\n\\label{hodge_def}\nThe \\begin{bf} Hodge dual \\end{bf} is the map\n\\begin{align}\n\\star : \\Lambda^p \\mathbf{M} \\rightarrow \\Lambda^{m-p}\\mathbf{M}, \\qquad \\alpha \\mapsto \\star \\alpha\n\\end{align}\nwhere\n\\begin{align}\n\\star (1) &= \\star 1\\notag\\\\\n\\star \\nu &= i_{\\widetilde{\\nu}}\\star 1,\\notag\\\\\n\\star(\\alpha\\wedge\\nu)&=i_{\\widetilde{\\nu}}\\star\\alpha,\\notag\\\\\n\\star (f \\alpha)&=f\\star \\alpha,\n\\end{align}\nfor all $f \\in\\Gamma\\Lambda^0 \\mathbf{M}$, $\\nu \\in\\Gamma\\Lambda^1 \\mathbf{M}$ and $\\alpha\\in \\Gamma \\Lambda^p \\mathbf{M}$.\nProperty (\\ref{form_comp}) together with these rules defines the action of the Hodge dual on any arbitrary $p$-form.\n\\end{definition}\n\\begin{lemma}\n\\label{one_one}\nGiven two vector fields $V, W \\in \\Gamma \\textup{T} \\mathbf{M}$ the following relation is true\n\\begin{align}\n\\widetilde{\\vvec} \\wedge \\star \\widetilde{\\wvec} = g(V, W)\\star 1\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align}\n\\widetilde{\\vvec} \\wedge \\star \\widetilde{\\wvec}=& \\widetilde{\\vvec} \\wedge i_{W} \\star 1\\notag\\\\\n=& V_a W^b dz^a \\wedge i_{\\frac{\\partial}{\\partial z^b}} \\star 1\\label{starshow}\n\\end{align}\nUsing the alternating Leibniz rule (\\ref{internal_defs}) yields\n\\begin{align}\n0=i_{\\partial z^b}(dz^a\\wedge \\star 1)&=i_{\\partial z^b}dz^a\\wedge \\star 1-d z^a \\wedge i_{\\partial z^b}\\star 1\\notag\\\\\n\\textup{and thus}\\qquad dz^a \\wedge i_{\\partial z^b}\\star 1 &= \\delta^a_b \\star 1.\n\\label{star_two}\n\\end{align}\nSubstituting (\\ref{star_two}) into (\\ref{starshow}) yields\n\\begin{align}\n\\widetilde{\\vvec} \\wedge \\star \\widetilde{\\wvec}=& V_b W^b \\star 1\\notag\\\\\n=&g(V, W) \\star 
1\n\\end{align}\n\\end{proofs}\n\n\\begin{lemma}\n\\label{hodge_lemma_def}\nGiven $\\nu \\in \\Gamma \\Lambda^1 \\mathbf{M}$ and $\\alpha \\in \\Gamma \\Lambda^p \\mathbf{M}$ the following is true\n\\begin{align}\n\\nu \\wedge \\star \\alpha =& -\\star(i_{\\widetilde{\\nu}}\\alpha^{\\eta})\n\\end{align}\nwhere $\\alpha^\\eta=\\eta(\\alpha)$.\n\\end{lemma}\n\\begin{proofs}\nBy lemma \\ref{one_one} it is clearly true for $\\textup{deg}(\\alpha)=1$. Assume true for $\\textup{deg}(\\alpha)=p$, then for $\\omega \\in \\Gamma \\Lambda^1 \\mathbf{M}$\n\\begin{align}\n\\star i_{\\widetilde{\\nu}}(\\alpha\\wedge \\omega) =&\\star (i_{\\widetilde{\\nu}}\\alpha\\wedge \\omega) + \\star (\\alpha^{\\eta}\\wedge i_{\\widetilde{\\nu}} \\omega)\\notag\\\\\n=&\\star (i_{\\widetilde{\\nu}}\\alpha\\wedge \\omega) + g(\\widetilde{\\omega}, \\widetilde{\\nu})\\star \\alpha^{\\eta} \\notag\\\\\n=&\\star (i_{\\widetilde{\\nu}}\\alpha\\wedge \\omega) + i_{\\widetilde{\\omega}}(\\nu \\wedge \\star \\alpha^\\eta) + \\nu \\wedge i_{\\widetilde{\\omega}}\\star \\alpha^\\eta\n\\end{align}\nEvaluating the first two terms yields\n\\begin{align}\n\\star (i_{\\widetilde{\\nu}}\\alpha\\wedge \\omega) + i_{\\widetilde{\\omega}}(\\nu \\wedge \\star \\alpha^\\eta)=& \\star (\\omega \\wedge i_{\\widetilde{\\nu}}\\alpha^\\eta)- i_{\\widetilde{\\omega}}(\\star(i_{\\widetilde{\\nu}}\\alpha))\\notag\\\\\n=&\\star (\\omega \\wedge i_{\\widetilde{\\nu}}\\alpha^\\eta)- \\star(i_{\\widetilde{\\nu}}\\alpha\\wedge \\omega)\\notag\\\\\n=&\\star (\\omega \\wedge i_{\\widetilde{\\nu}}\\alpha^\\eta)- \\star(\\omega \\wedge i_{\\widetilde{\\nu}}\\alpha^\\eta)\\notag\\\\\n=&0\n\\end{align}\nThus\n\\begin{align}\n-\\star i_{\\widetilde{\\nu}}(\\alpha\\wedge \\omega) =&-\\nu \\wedge i_{\\widetilde{\\omega}}\\star \\alpha^\\eta= \\nu \\wedge \\star( \\alpha\\wedge \\omega)^\\eta\n\\end{align}\n\\end{proofs}\n\\begin{lemma}\n\\label{one_two}\nGiven $1$-forms $\\nu, \\omega, \\alpha \\in \\Gamma 
\\Lambda^1 \\mathbf{M}$ the following is true\n\\begin{align}\n\\nu \\wedge \\star (\\omega\\wedge \\alpha) =& g(\\widetilde{\\nu}, \\widetilde{\\alpha})\\star \\omega - g(\\widetilde{\\nu}, \\widetilde{\\omega})\\star \\alpha\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\nBy lemma \\ref{hodge_lemma_def} we have\n\\begin{align}\n\\nu \\wedge \\star (\\omega \\wedge \\alpha) =& -\\star\\big(i_{\\widetilde{\\nu}} (\\omega\\wedge \\alpha)\\big)\\notag\\\\\n=& -\\star(i_{\\widetilde{\\nu}}\\omega \\wedge \\alpha - \\omega \\wedge i_{\\widetilde{\\nu}}\\alpha)\\notag\\\\\n=& \\star(g(\\widetilde{\\nu}, \\widetilde{\\alpha})\\omega)- \\star(g(\\widetilde{\\nu}, \\widetilde{\\omega})\\alpha)\\notag\\\\\n=& g(\\widetilde{\\nu}, \\widetilde{\\alpha})\\star \\omega - g(\\widetilde{\\nu}, \\widetilde{\\omega})\\star \\alpha\\notag\n\\end{align}\n\\end{proofs}\n\\begin{lemma}\n\\label{fiveoneforms}\nGiven $1$-forms $\\alpha, \\beta, \\gamma, \\nu, \\omega \\in \\Gamma\\Lambda^1 \\mathbf{M}$ the following is true\n\\begin{align}\ni_{\\widetilde{\\omega}}\\star(\\alpha \\wedge \\beta)\\wedge \\gamma \\wedge \\nu=& \\big(\\beta (\\widetilde{\\gamma})\\omega (\\widetilde{\\nu})-\\omega(\\widetilde{\\gamma})\\beta(\\widetilde{\\nu})\\big)\\star \\alpha\\notag\\\\\n&+ \\big(\\omega (\\widetilde{\\gamma})\\alpha (\\widetilde{\\nu})-\\alpha(\\widetilde{\\gamma})\\omega(\\widetilde{\\nu})\\big)\\star \\beta\\notag\\\\\n&+\\big(\\alpha(\\widetilde{\\gamma})\\beta (\\widetilde{\\nu})-\\beta(\\widetilde{\\gamma})\\alpha(\\widetilde{\\nu})\\big)\\star \\omega\n\\end{align}\n\\begin{proofs}\n\\begin{align}\ni_{\\widetilde{\\omega}}\\star(\\alpha \\wedge \\beta)\\wedge \\gamma \\wedge \\nu =& \\star(\\alpha\\wedge \\beta \\wedge \\omega) \\wedge \\gamma \\wedge \\nu\\notag\\\\\n=& -\\gamma \\wedge \\star(\\alpha\\wedge \\beta \\wedge \\omega)\\wedge \\nu\n\\label{firstone}\n\\end{align}\nUsing (\\ref{hodge_lemma_def}) yields\n\\begin{align}\n \\gamma \\wedge \\star(\\alpha\\wedge \\beta \\wedge 
\\omega)=&-\\star\\big(i_{\\widetilde{\\gamma}}(-\\alpha\\wedge \\beta \\wedge \\omega)\\big)\\notag\\\\\n =&\\star \\big(i_{\\widetilde{\\gamma}}(\\alpha\\wedge \\beta \\wedge \\omega)\\big)\\notag\\\\\n =& \\star \\big(\\alpha(\\widetilde{\\gamma})\\beta \\wedge \\omega - \\beta(\\widetilde{\\gamma})\\alpha \\wedge \\omega + \\omega (\\widetilde{\\gamma}) \\alpha \\wedge \\beta\\big)\\label{secondone}\n\\end{align}\nThus substituting (\\ref{secondone}) into (\\ref{firstone}) yields\n\\begin{align}\ni_{\\widetilde{\\omega}}\\star(\\alpha \\wedge \\beta)\\wedge \\gamma \\wedge \\nu=-\\nu \\wedge \\star \\big(\\alpha(\\widetilde{\\gamma})\\beta \\wedge \\omega - \\beta(\\widetilde{\\gamma})\\alpha \\wedge \\omega + \\omega (\\widetilde{\\gamma}) \\alpha \\wedge \\beta\\big)\n\\end{align}\nNow using (\\ref{hodge_lemma_def}) again yields\n\\begin{align}\ni_{\\widetilde{\\omega}}\\star(\\alpha \\wedge \\beta)\\wedge \\gamma \\wedge \\nu=& \\star\\Big(i_{\\widetilde{\\nu}}\\big(\\alpha(\\widetilde{\\gamma})\\beta \\wedge \\omega \\big)- i_{\\widetilde{\\nu}}\\big(\\beta(\\widetilde{\\gamma})\\alpha \\wedge \\omega \\big)+i_{\\widetilde{\\nu}}\\big(\\omega(\\widetilde{\\gamma})\\alpha \\wedge \\beta \\big)\\Big)\\notag\\\\\n=& \\big(\\beta (\\widetilde{\\gamma})\\omega (\\widetilde{\\nu})-\\omega(\\widetilde{\\gamma})\\beta(\\widetilde{\\nu})\\big)\\star \\alpha\\notag\\\\\n&+ \\big(\\omega (\\widetilde{\\gamma})\\alpha (\\widetilde{\\nu})-\\alpha(\\widetilde{\\gamma})\\omega(\\widetilde{\\nu})\\big)\\star \\beta\\notag\\\\\n&+\\big(\\alpha(\\widetilde{\\gamma})\\beta (\\widetilde{\\nu})-\\beta(\\widetilde{\\gamma})\\alpha(\\widetilde{\\nu})\\big)\\star \\omega\\notag\n\\end{align}\n \\end{proofs}\n\\end{lemma}\n\\begin{lemma}\n\\label{hodgeab}\nFor two forms $\\alpha, \\beta\\in \\Gamma \\Lambda^p \\mathbf{M}$ of the same degree the following is 
true\n\\begin{align}\n\\alpha\\wedge\\star\\beta=\\beta\\wedge\\star\\alpha.\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\nBy (\\ref{one_one}) it is clearly true for $\\textup{deg}(\\alpha)=\\textup{deg}(\\beta)=1$. Assume true for $\\textup{deg}(\\alpha)=\\textup{deg}(\\beta)=p$, then for $\\alpha\\in \\Gamma \\Lambda^{p+1} \\mathbf{M}$, $\\beta \\in \\Gamma \\Lambda^p \\mathbf{M}$ and $\\nu \\in \\Gamma \\Lambda^1 \\mathbf{M}$ we have\n\\begin{align}\n\\alpha\\wedge\\star(\\beta\\wedge \\nu)=& \\alpha \\wedge i_{\\widetilde{\\nu}}\\star \\beta\\notag\\\\\n=&i_{\\widetilde{\\nu}}(\\alpha^\\eta \\wedge \\star \\beta)- i_{\\widetilde{\\nu}}\\alpha^\\eta \\wedge \\star \\beta\\notag\\\\\n=&-\\beta\\wedge \\star i_{\\widetilde{\\nu}}\\alpha^\\eta \\notag\\\\\n=& \\beta \\wedge \\nu \\wedge \\star \\alpha\n\\end{align}\n\\end{proofs}\n\n\\subsection*{Lie Derivative}\n\\begin{definition}\nThe Lie derivative is the map\n\\begin{align}\n\\mathcal{L}: \\Gamma \\textup{T} \\mathbf{M} \\times \\Gamma \\textstyle{\\bigotimes}^{[s]} \\mathbf{M} \\rightarrow \\Gamma \\textstyle{\\bigotimes}^{[s]} \\mathbf{M}, \\qquad (V, \\mathbf{T}) \\mapsto \\mathcal{L}_V \\mathbf{T},\n\\end{align}\nwhich is additive in both arguments,\n\\begin{align}\n\\mathcal{L}_V (\\mathbf{T}+\\mathbf{S})= \\mathcal{L}_V \\mathbf{T} + \\mathcal{L}_{V} \\mathbf{S},\\notag\\\\\n\\mathcal{L}_{V+W} \\mathbf{T}= \\mathcal{L}_V \\mathbf{T} + \\mathcal{L}_{W} \\mathbf{T},\n\\end{align}\nhas the properties\n\\begin{align}\n& \\mathcal{L}_V f = V(f),\\notag\\\\\n& \\mathcal{L}_V W = [V, W],\n\\end{align}\nand obeys the Leibniz rule for tensor products, wedge products and contractions\n\\begin{align}\n& \\mathcal{L}_V (\\mathbf{T} \\otimes \\mathbf{S})=\\mathcal{L}_V\\mathbf{T}\\otimes\\mathbf{S}+\\mathbf{T}\\otimes\\mathcal{L}_V\\mathbf{S},\\\\\n& \\mathcal{L}_V (\\alpha \\wedge \\beta)=\\mathcal{L}_V\\alpha\\wedge\\beta+\\alpha\\wedge\\mathcal{L}_V\\beta,\\\\\n& 
\\mathcal{L}_V(\\alpha(W))=\\mathcal{L}_V\\alpha(W)+\\alpha(\\mathcal{L}_VW).\\label{lie_contract}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\nCartan's formula\n\\begin{align}\n\\mathcal{L}_V=di_V+i_V d\n\\label{cartan}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\nTrivial for $0$-forms. First prove for a $1$-form $\\nu \\in \\Gamma\\Lambda^1\\mathbf{M}$. From (\\ref{lie_contract}) and (\\ref{twoform})\n\\begin{align}\n\\mathcal{L}_V\\nu(W)=& \\mathcal{L}_V(\\nu ( W)) - \\nu (\\mathcal{L}_V W)\\notag\\\\\n=&V(\\nu(W))-\\nu ([V, W])\\notag\\\\\n=&2d\\nu (V, W)+W(\\nu( V))\\notag\\\\\n=&i_V d\\nu ( W) + d(\\nu (V))(W)\\notag\\\\\n=&i_V d\\nu ( W) + d i_V \\nu(W)\\notag\\\\\n=&(i_V d\\nu+di_V \\nu)(W)\n\\end{align}\nNow assume true for a $p$-form $\\alpha \\in \\Gamma \\Lambda^p \\mathbf{M}$, and show true for the $(p+1)$-form $\\alpha \\wedge \\nu$.\n\\begin{align}\n(i_V d+ d i_V) (\\alpha \\wedge \\nu)=& i_V(d\\alpha\\wedge \\nu+(-1)^p\\alpha \\wedge d \\nu) + d(i_V \\alpha \\wedge \\nu + (-1)^p \\alpha \\wedge i_V \\nu)\\notag\\\\\n=& i_V (d\\alpha \\wedge \\nu) +(-1)^p i_V (\\alpha \\wedge d\\nu) +d(i_V\\alpha \\wedge \\nu) +(-1)^p d(\\alpha \\wedge i_V \\nu)\\notag\\\\\n=&i_V d \\alpha\\wedge\\nu +((-1)^{p+1}+(-1)^p)d\\alpha\\wedge i_V \\nu +di_V \\alpha \\wedge \\nu \\notag\\\\\n&+((-1)^{p-1}+(-1)^p)i_V \\alpha \\wedge d\\nu + (-1)^{2p} (\\alpha \\wedge i_V d \\nu + \\alpha \\wedge d i_V \\nu)\\notag\\\\\n=&(i_V d+ d i_V) \\alpha \\wedge \\nu + \\alpha \\wedge (i_V d+ d i_V)\\nu\\notag\\\\\n=&\\mathcal{L}_V \\alpha \\wedge \\nu + \\alpha \\wedge \\mathcal{L}_V \\nu\\notag\\\\\n=& \\mathcal{L}_V (\\alpha \\wedge \\nu)\\notag\n\\end{align}\nThus by induction it is true for all $p$-forms.\n\\end{proofs}\n\n\n\\begin{lemma}\nThe Lie derivative commutes with the exterior derivative\n\\begin{align}\n\\mathcal{L}_V d = d \\mathcal{L}_V\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\nThis follows trivially from (\\ref{cartan}).\n\\end{proofs}\n\n\n\n\\subsection*{Levi-Civita 
Connection}\n\\label{connection}\nAn affine connection is a map\n\\begin{align}\n\\nabla: \\Gamma \\textup{T} \\mathbf{M} \\times \\Gamma \\textstyle{\\bigotimes}^{[s]} \\mathbf{M}\\quad \\rightarrow \\quad \\Gamma \\textstyle{\\bigotimes}^{[s]} \\mathbf{M}, \\qquad (V,\\mathbf{T}) \\mapsto \\nabla_V \\mathbf{T}\n\\end{align}\nwhich is additive linear in both arguments,\n\\begin{align}\n\\nabla_V (\\mathbf{T}+\\mathbf{S})= \\nabla_V \\mathbf{T} + \\nabla_{V} \\mathbf{S},\\notag\\\\\n\\nabla_{V+W} \\mathbf{T}= \\nabla_V \\mathbf{T} + \\nabla_{W} \\mathbf{T},\n\\end{align}\nand satisfies\n\\begin{align}\n& \\nabla_V f = V(f),\\notag\\\\\n& \\nabla_{fV} \\mathbf{T} = f \\nabla_V \\mathbf{T},\\notag\\\\\n& \\nabla_V (\\mathbf{T} \\otimes \\mathbf{S})= \\nabla_V\\mathbf{T} \\otimes \\mathbf{S}+ \\mathbf{T} \\otimes \\nabla_V\\mathbf{S},\\notag\\\\\n& \\nabla_V (\\alpha \\wedge \\beta)=\\nabla_V\\alpha\\wedge\\beta+\\alpha\\wedge\\nabla_V\\beta.\n\\end{align}\nThe Levi-Civita connection on $\\mathbf{M}$ is the unique torsion free metric compatible affine connection.\n\\section{Pushforwards, pullbacks and curves}\n\\section*{Pushforward map}\n\\begin{definition}\nGiven differentiable manifolds $\\mathbf{M}$ and $\\mathbf{N}$ and the smooth map $\\phi : \\mathbf{M} \\longrightarrow \\mathbf{N} , \\quad x \\longmapsto \\phi (x)$, the \\begin{bf}pushforward\\end{bf} of a vector at a point $ V|_\\point \\in \\textup{T}_x \\mathbf{M}$ with respect to $\\phi$ is the map\n\\begin{align}\n\\phi_{\\ast} : \\textup{T}_x\\mathbf{M} \\longrightarrow \\textup{T}_{\\phi (x)}\\mathbf{N}, \\qquad V|_\\point \\longmapsto \\phi_{\\ast} V|_\\point,\\label{pushstart}\n\\end{align}\nwhere\n\\begin{align}\n\\phi_{\\ast }V|_\\point(f) &= V|_\\point(f \\circ \\phi),\\notag\\\\\n\\phi_{\\ast}(V|_\\point + W |_\\point) &= \\phi_{\\ast}V|_\\point + \\phi_{\\ast} W|_\\point\\nonumber,\\\\\n\\text{and}\\quad \\phi_{\\ast}(V|_\\point W|_\\point) &= 
\\phi_{\\ast}V|_\\point\\phi_{\\ast}W|_\\point\n\\label{pushforward}\n\\end{align}\n for $V|_\\point, W|_\\point \\in \\textup{T}_x \\mathbf{M}$ and $f\\in \\Gamma \\Lambda^{0} \\mathbf{N}$.\n The pushforward of a vector at a point is naturally extended using (\\ref{vecfield_def}) to obtain the pushforward of a vector field\n \\begin{align}\n\\phi_{\\ast} : \\Gamma \\textup{T} \\mathbf{M} \\longrightarrow \\Gamma \\textup{T} \\mathbf{N}, \\qquad V \\longmapsto \\phi_{\\ast} V.\n\\end{align}\n\\end{definition}\nLet $\\mathbf{M}$, $\\mathbf{N}$ and $\\mathbf{O}$ be differentiable manifolds, let $\\psi : \\mathbf{N} \\rightarrow \\mathbf{O}$ be a second smooth map, and let $\\phi_{\\ast} : \\textup{T}_x \\mathbf{M} \\longrightarrow \\textup{T}_{\\phi (x)}\\mathbf{N}$ and $\\psi_{\\ast} : \\textup{T}_{\\phi (x)} \\mathbf{N} \\longrightarrow \\textup{T}_{\\psi(\\phi (x))}\\mathbf{O}$ be the corresponding pushforwards.\n \\begin{lemma}The composition of the pushforwards is the pushforward of the composition\n \\begin{align}\n \\psi_{\\ast} \\circ \\phi_{\\ast} = (\\psi \\circ \\phi)_{\\ast}\n \\end{align}\n \\end{lemma}\n \\begin{proofs}\n \\begin{align*}\n (\\psi \\circ \\phi)_{\\ast}V(f) &= V(f \\circ \\psi \\circ \\phi)\\\\\n &=\\phi_{\\ast}V(f \\circ \\psi)\\\\\n &=\\psi_{\\ast} \\phi_{\\ast}V(f)\n \\end{align*}\n\\end{proofs}\n\n\\begin{lemma}\n Let $(x^{1}, x^{2},\\ldots,x^{m})$ be coordinates on $\\mathbb{R} ^{M}$ and $(y^{1}, y^{2},\\ldots,y^{n})$ coordinates on $\\mathbb{R} ^{N}$. 
Given a smooth map $\\phi$ where\n\\begin{align}\n&\\phi : \\mathbb{R} ^{M} \\longrightarrow \\mathbb{R} ^{N},\\notag\\\\\n&x= (x^{1}, \\ldots ,x^{m})\\quad \\longmapsto\\quad \\phi(x),\\notag\n\\end{align}\nthen\n\\begin{align}\n\\phi_{\\ast}\\frac{\\partial}{\\partial x^{a}}\\Big|_x = \\frac{\\partial \\phi^{b}}{\\partial x^{a}}\\Big|_x \\frac{\\partial}{\\partial y^{b}}\\Big|_{\\phi(x)}, \\qquad \\text{where}\\qquad \\phi^{b} = y^{b} \\circ \\phi.\\label{pushend}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\n\\phi_{\\ast}\\frac{\\partial}{\\partial x^{a}}\\Big|_x (f) &= \\frac{\\partial}{\\partial x^{a}}\\Big|_x (f \\circ \\phi)\\\\\n&= \\frac{\\partial}{\\partial x^{a}}\\Big|_x f(\\phi^{1}(x), \\ldots, \\phi^{n}(x))\n\\end{align*}\nwhere $\\phi^{b}(x) = y^{b}(\\phi(x))$ by definition. Evaluating using the chain rule yields\n\\begin{align*}\n\\frac{\\partial}{\\partial x^{a}}f(\\phi^{1}, \\ldots, \\phi^{n}) &= \\frac{\\partial \\phi^{b}}{\\partial x^{a}} \\frac{\\partial f}{\\partial y^{b}}\\\\\n&= \\frac{\\partial \\phi^{b}}{\\partial x^{a}} \\frac{\\partial}{\\partial y^{b}}(f)\n\\end{align*}\n\\end{proofs}\n\\section*{Pullback map}\n\\begin{definition}\nFor the smooth maps $\\phi : \\mathbf{M} \\rightarrow \\mathbf{N}$ and $f : \\mathbf{N} \\rightarrow \\mathbb{R} $ the \\begin{bf}pullback\\end{bf} of $f$ with respect to $\\phi$ is given by the composition:\n\\begin{align}\n\\phi^{\\ast}f = f \\circ \\phi\n\\end{align}\n\\end{definition}\n\\begin{lemma}\n For the smooth maps $\\psi : \\mathbf{O} \\rightarrow \\mathbf{M}$, $\\phi : \\mathbf{M} \\rightarrow \\mathbf{N}$, and $f : \\mathbf{N} \\rightarrow \\mathbb{R} $ the \\begin{bf}pullback\\end{bf} of the composition is the composition of the pullbacks\n\\begin{align}\n(\\phi \\circ \\psi)^{\\ast} = \\psi^{\\ast} \\circ \\phi^{\\ast}\n\\end{align}\n\\end{lemma}\n\n\\begin{proofs}\n\\begin{align*}\n \\psi^{\\ast} (\\phi^{\\ast} f) &= (f 
\\circ \\phi) \\circ \\psi\\\\\n&= f \\circ \\phi \\circ \\psi\\\\\n&= (\\phi \\circ \\psi )^{\\ast} f\n\\end{align*}\n\\end{proofs}\n\\begin{definition}\n The pullback of a differential $p$-form with respect to the smooth map $\\phi : \\mathbf{M} \\longrightarrow \\mathbf{N} ,\\quad x \\longmapsto \\phi (x)$ is given by:\n\\begin{align}\n\\phi^{\\ast} : \\Gamma \\Lambda^{p} \\mathbf{N} \\longrightarrow \\Gamma \\Lambda^{p} \\mathbf{M} , \\qquad \\alpha \\longmapsto \\phi^{\\ast}\\alpha\n\\end{align}\nwhere\n\\begin{align}\n\\phi^{\\ast}(\\alpha + \\beta) &= \\phi^{\\ast}\\alpha + \\phi^{\\ast}\\beta\\notag\\\\\n\\phi^{\\ast}(\\alpha \\wedge \\beta) &= \\phi^{\\ast} \\alpha \\wedge \\phi^{\\ast} \\beta\n\\end{align}\nfor all $\\alpha \\in \\Gamma \\Lambda^p \\mathbf{N}$ and $\\beta \\in \\Gamma \\Lambda^q \\mathbf{N}$.\n\\end{definition}\n\\begin{definition}\nThe pullback of the 1-form $df$ acting on a vector field is $df$ acting on the pushforward of the vector field\n\\begin{align}\n\\phi^{\\ast}df(V) = df (\\phi_{\\ast} V),\n\\end{align}\nfor all $f\\in \\Gamma \\Lambda^0 \\mathbf{N}$ and $V \\in \\Gamma \\textup{T} \\mathbf{M}$.\n\\end{definition}\n\\begin{lemma}\n The pullback commutes with the exterior derivative\n\\begin{align}\nd\\phi^{\\ast}\\alpha = \\phi^{\\ast}d \\alpha\n \\end{align}\nfor all $\\alpha\\in\\Gamma\\Lambda^p\\mathbf{N}$.\n\\end{lemma}\n\\begin{proofs}\nFirst show for $f \\in \\Gamma \\Lambda^{0} \\mathbf{N}$\n\\begin{align*}\n\\phi^{\\ast}df(V)&=df(\\phi_{\\ast}V)\\\\\n&=\\phi_{\\ast}V(f)\\\\\n&=V(f\\circ \\phi)\\\\\n&=V(\\phi^{\\ast} f)\\\\\n&=d\\phi^{\\ast}f(V)\n\\end{align*}\nNow show for $\\omega =g\\, df \\in \\Gamma \\Lambda^{1}\\mathbf{N}$ where $g,f \\in \\Gamma \\Lambda^{0}\\mathbf{N}$\n\\begin{align*}\n\\phi^{\\ast}d(g df) &=\\phi^{\\ast}(dg\\wedge df)\\\\\n&=(\\phi^{\\ast}dg)\\wedge (\\phi^{\\ast}df)\\\\\n&=d\\phi^{\\ast}g\\wedge \\phi^{\\ast}df\\\\\n&=d(\\phi^{\\ast}g \\phi^{\\ast}df)\\\\\n&=d\\phi^{\\ast}(g 
df)\n\\end{align*}\nThe proof for a general 1-form $\\nu =\\nu_{i}d x^{i}$ follows by linearity. \\\\\nNow, assuming true for $\\alpha \\in \\Gamma \\Lambda^{p}\\mathbf{N}$\n\\begin{align*}\n\\phi^{\\ast}d(\\nu \\wedge \\alpha) &= \\phi^{\\ast}[d\\nu \\wedge \\alpha - \\nu \\wedge d\\alpha]\\\\\n&=\\phi^{\\ast}(d\\nu) \\wedge \\phi^{\\ast}\\alpha - \\phi^{\\ast}\\nu \\wedge \\phi^{\\ast}d \\alpha\\\\\n&=d\\phi^{\\ast}\\nu \\wedge \\phi^{\\ast}\\alpha - \\phi^{\\ast}\\nu \\wedge d\\phi^{\\ast} \\alpha\\\\\n&=d[\\phi^{\\ast}\\nu \\wedge \\phi^{\\ast} \\alpha]\\\\\n&=d\\phi^{\\ast}(\\nu \\wedge \\alpha)\n\\end{align*}\nHence by induction the relation holds for all $p$-forms.\n\\end{proofs}\n\\begin{lemma}\n The internal contraction of a pullback with respect to a vector field $V$ is equal to the pullback of the internal contraction with respect to the pushforward of $V$\n\\begin{align}\n i_{V} \\phi^{\\ast} \\alpha = \\phi^{\\ast} (i_{\\phi_{\\ast} V} \\alpha)\n \\end{align}\n \\end{lemma}\n\\begin{proofs} (by induction)\\\\\nTrivial for 0-forms. First prove for the 1-form $\\omega = g\\, df \\in \\Gamma \\Lambda^{1} \\mathbf{N}$\n\\begin{align*}\ni_{V}(\\phi^{\\ast}g d f) &=i_{V}(\\phi^{\\ast}g\\phi^{\\ast}df)\\\\\n&= \\phi^{\\ast}g i_{V} \\phi^{\\ast}df\\\\\n&= \\phi^{\\ast}g\\, \\phi^{\\ast}df(V)\n\\end{align*}\nNoticing that $\\phi^{\\ast}df(V) = df(\\phi_{\\ast}V)$, and remembering that the pullback of a scalar does not change its value, i.e.\n\\begin{align*}\ndf(\\phi_{\\ast}V) &= \\phi^{\\ast}(df(\\phi_{\\ast} V))\\\\\n&= \\phi^{\\ast}(i_{\\phi_{\\ast} V}df)\n\\end{align*}\nwe can therefore write\n\\begin{align*}\ni_{V}(\\phi^{\\ast}g df) &= \\phi^{\\ast}g \\phi^{\\ast}(i_{\\phi_{\\ast} V}df)\\\\\n&= \\phi^{\\ast}(g i_{\\phi_{\\ast} V} df)\\\\\n&=\\phi^{\\ast}[i_{\\phi_{\\ast} V}(g df)]\\\\\n&= \\phi^{\\ast}(i_{\\phi_{\\ast} V}\\omega)\n\\end{align*}\nThe proof for a general 1-form $\\nu =\\nu_{i}d x^{i}$ follows by linearity. 
\\\\\nNow, assuming the relation holds for $\\alpha \\in \\Gamma \\Lambda^{p}\\mathbf{N}$, we have\n\\begin{align*}\ni_{V}\\phi^{\\ast}(\\nu \\wedge \\alpha)& =i_{V}(\\phi^{\\ast}\\nu \\wedge \\phi^{\\ast} \\alpha)\\\\\n&= i_{V}\\phi^{\\ast} \\nu \\wedge \\phi^{\\ast} \\alpha - \\phi^{\\ast} \\nu \\wedge i_{V}\\phi^{\\ast} \\alpha\\\\\n&=\\phi^{\\ast}(i_{\\phi_{\\ast} V} \\nu) \\wedge \\phi^{\\ast}\\alpha - \\phi^{\\ast}\\nu \\wedge \\phi^{\\ast}(i_{\\phi_{\\ast}V}\\alpha)\\\\\n&=\\phi^{\\ast}(i_{\\phi_{\\ast} V}(\\nu \\wedge \\alpha))\n\\end{align*}\nThus we have proved by induction that the relation must hold for all $(p+1)$-forms, and therefore for any form $\\beta\\in \\Gamma \\Lambda^{q} \\mathbf{N}$.\n\\end{proofs}\n\\section*{Curves}\n\\begin{definition}\nA smooth parameterized curve $C(s)$ on a manifold $\\mathbf{M}$ is a smooth map from an open interval $I\\subset \\mathbb{R} $ to $\\mathbf{M}$,\n\\begin{align}\nC: I \\rightarrow \\mathbf{M}, \\qquad s \\mapsto C(s).\n\\end{align}\nIf $y^a$ are local coordinates on $\\mathbf{M}$ then we use the notation\n\\begin{align}\ny^a \\circ C(s)= C^a(s),\n\\end{align}\nthus if $C(s_0)=x$ is any point on the image of $C$ then\n\\begin{align}\ny^a(x) =C^a(s_0).\n\\end{align}\nThe tangent vector to $C$ at $x$ is\n\\begin{align}\n\\dot{\\c}|_\\point\\in T_x \\mathbf{M}, \\qquad \\dot{\\c}|_\\point = C_\\ast\\Big(\\frac{\\partial}{\\partial s}\\Big) \\Big|_{s_0}\n\\end{align}\nFor any $f\\in \\mathcal{F}(\\mathbf{M})$\n\\begin{align}\nC_\\ast\\Big(\\frac{\\partial}{\\partial s}\\Big) \\Big|_{s_0}(f)= \\frac{\\partial}{\\partial s}(C^\\ast f)\\Big|_{s_0}= \\frac{\\partial}{\\partial s}(f \\circ C)\\Big|_{s_0}= \\frac{\\partial f}{\\partial y^a}\\frac{\\partial C^a}{\\partial s}\\Big|_{s_0},\n\\end{align}\nhence\n\\begin{align}\n\\dot{\\c}|_\\point=\\dot{\\c}^a \\frac{\\partial}{\\partial y^a}\\Big|_x= \\frac{\\partial C^a}{\\partial s}\\Big|_{s_0}\\frac{\\partial}{\\partial 
y^a}.\n\\end{align}\n\\end{definition}\nThere is an induced vector field $\\dot{\\c} \\in \\Gamma \\textup{T} \\mathbf{M}$ where $\\dot{\\c}|_\\point$ is the tangent vector at $x$ for all $x$ on the image of $C$.\n\n\\section{Integration of $p$-forms}\n\\begin{definition}Let $\\sigma$ be a smooth embedding of the $n$-dimensional submanifold $\\Sigma\\subset \\mathbf{M}$ into the $m$-dimensional differentiable manifold $\\mathbf{M}$,\n\\begin{align}\n\\sigma:\\Sigma \\hookrightarrow \\mathbf{M}.\n\\end{align}\nIf $y^a$ are local coordinates on $\\mathbf{M}$ at $\\sigma(x)$ then $\\sigma^\\ast$ acting on the local basis of $1$-forms $dy^a$ is given by\n\\begin{align}\n\\sigma^\\ast dy^a=d(y^a \\circ \\sigma)\n\\end{align}\nand for any $f\\in \\mathcal{F}(\\mathbf{M})$\n\\begin{align}\n\\sigma^\\ast (f dy^a)=(f\\circ \\sigma) d(y^a \\circ \\sigma).\n\\end{align}\nIf we define a local coordinate system for $\\Sigma$ at $x\\in\\Sigma$ by\n\\begin{align}\nz^a=y^a\\circ \\sigma\n\\end{align}\nthen\n\\begin{align}\n\\sigma^\\ast (f dy^{a_1}\\wedge dy^{a_2}\\wedge \\cdots \\wedge dy^{a_n})=(f\\circ\\sigma)dz^{a_1}\\wedge dz^{a_2} \\wedge \\cdots \\wedge dz^{a_n}.\n\\end{align}\n\n \\end{definition}\n\\begin{definition}\nIf the $n$-form $\\alpha\\in \\Gamma \\Lambda^{n} \\mathbf{M}$ has compact support then so does the $n$-form $\\sigma^\\ast \\alpha \\in \\Gamma \\Lambda^{n} \\Sigma$, and\n\\begin{align}\n\\int_{\\sigma(\\Sigma)} \\alpha = \\int_\\Sigma \\sigma^\\ast \\alpha.\n\\end{align}\n\\end{definition}\n\\begin{theorem}\nIf $\\Sigma$ is an oriented differentiable manifold of dimension $n$, with boundary $\\partial \\Sigma$ of dimension $(n-1)$ then\n\\begin{align}\n&\\int_\\Sigma d \\alpha =\\int_{\\partial \\Sigma} \\alpha,\n\\label{stokes_def}\n\\end{align}\nfor all $\\alpha\\in \\Gamma \\Lambda^{n-1}\\Sigma$ with compact support. 
This theorem is often called the generalized \\emph{Stokes'} theorem.\n\\end{theorem}\n\n\\begin{theorem}\n\\label{lie_int}\nLet $t$ be a choice of coordinate on a manifold $\\mathbf{M}$ such that $\\frac{\\partial}{\\partial t}$ is Killing and let $t$ foliate $\\mathbf{M}$ into surfaces $\\Sigma_t$ . Then for $\\alpha \\in \\Gamma \\Lambda^p \\mathbf{M}$\n\\begin{align}\n\\frac{d}{d t} \\int_{\\Sigma_t} \\alpha=\\int_{\\Sigma_t} \\mathcal{L}_{\\frac{\\partial}{\\partial t}} \\alpha,\n\\end{align}\nand thus\n\\begin{align}\n\\int_{\\mathbf{M}(t_1, t_0)} \\alpha = \\int_{t=t_0}^{t_1} dt \\int_{\\Sigma_t} i_{\\partial_t}\\alpha\n\\end{align}\nwhere $\\mathbf{M}(t_1, t_0)$ is a submanifold of $\\mathbf{M}$ with range of $t$ between $t_0$ and $t_1$.\n\\end{theorem}\n\n\n\\chapter{Distributional $p$-forms}\n\\label{dist_apend}\n\\section{Definitions}\nThe space of $C^{\\infty}$ functions with compact support is called the space of test functions. We extend this notion to the space of test $p$-forms.\n\\begin{definition}\n\\label{def_test_form}\n Let $\\mathbf{M}$ be a differential manifold of dimension $m$. 
The space of test $p$-forms on $\\mathbf{M}$ is denoted $\\Gamma_{0}\\Lambda^{p} \\mathbf{M}$,\n\\begin{align}\n\\Gamma_{0}\\Lambda^{p}\\mathbf{M}=\\{\\varphi \\in \\Gamma \\Lambda^{p}\\mathbf{M}\\;|\\;\\varphi\\ \\text{has compact support}\\}.\n\\end{align}\n\\end{definition}\n\\begin{definition}\n\\label{dist_form_def}\nThe space of $p$-form distributions $\\Gamma_{D} \\Lambda^{p}\\mathbf{M}$ is the vector space dual to the space of test $(m-p)$-forms $\\Gamma_{0}\\Lambda^{m-p}\\mathbf{M}$,\n\\begin{align}\n\\Gamma_{D}\\Lambda^p\\mathbf{M} \\times\\Gamma_{0}\\Lambda^{m-p}\\mathbf{M} \\rightarrow \\mathbb{R} , \\qquad (\\Psi, \\varphi) \\mapsto \\Psi[\\varphi] \\in \\mathbb{R} ,\n\\end{align}\nwhich satisfies\n\\begin{align}\n\\Psi[\\lambda \\varphi + \\psi]=\\lambda\\Psi[\\varphi]+\\Psi[\\psi],\n\\end{align}\nfor $\\lambda \\in \\mathbb{R} $, $\\varphi, \\psi \\in \\Gamma_{0}\\Lambda^{m-p}\\mathbf{M}$ and $\\Psi \\in \\Gamma_{D} \\Lambda^{p}\\mathbf{M}$.\n\\end{definition}\n\\begin{definition}\nThe subspace of $\\Gamma_{D} \\Lambda^{p}\\mathbf{M}$ comprising the distributions associated with \\textbf{piecewise continuous} $p$-forms is the space of \\textbf{regular distributions}. The action of a regular $p$-form distribution $\\psi^D$ on a test $(m-p)$-form $\\varphi$ is given by the integral\n\\begin{align}\n\\psi^D[\\varphi]=\\int_{\\mathbf{M}} \\varphi \\wedge \\psi\n\\end{align}\nfor any $\\varphi \\in \\Gamma_{0}\\Lambda^{m-p}\\mathbf{M}$ and where $\\psi\\in\\Gamma\\Lambda^p\\mathbf{M}$ is piecewise continuous. 
We say that $\\psi^D$ is the $p$-form distribution associated with the $p$-form $\\psi$.\\\\[3pt]\n\\end{definition}\n\n\\begin{definition}\n\\label{diff_forms}\n The \\textbf{exterior derivative} of a $p$-form distribution is defined as the map\n\\begin{align}\nd: \\Gamma_{D}\\Lambda^{p}\\mathbf{M}\\rightarrow\\Gamma_{D}\\Lambda^{p+1}\\mathbf{M}, \\quad \\Psi \\mapsto d\\Psi\n\\end{align}\nwhich satisfies\n\\begin{align}\n d\\Psi[\\varphi] = -\\Psi[d\\varphi^{\\eta}]\n\\end{align}\nfor any $\\varphi \\in \\Gamma_{0} \\Lambda^{m-(p+1)}\\mathbf{M}$.\n\\end{definition}\n\n\\begin{lemma}\nIf $\\mathbf{M}$ has no boundary then for any regular distribution $\\psi^D \\in\\Gamma_{D}\\Lambda^{p}\\mathbf{M}$\n\\begin{align}\nd \\psi^D[\\varphi]&=(d\\psi)^D[\\varphi]\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\nd \\psi^D[\\varphi] &= -\\int_{\\mathbf{M}}d\\varphi^{\\eta}\\wedge \\psi\\\\\n&=\\int_{\\mathbf{M}}\\varphi\\wedge d\\psi- \\int_{\\mathbf{M}}d(\\varphi^{\\eta}\\wedge\\psi)\\\\\n&=\\int_{\\mathbf{M}}\\varphi\\wedge d\\psi - \\int_{\\partial \\mathbf{M}}(\\varphi^{\\eta}\\wedge\\psi)\\\\\n&=\\int_{\\mathbf{M}}\\varphi \\wedge d \\psi\\\\\n&= (d\\psi)^D[\\varphi]\n\\end{align*}\n\\end{proofs}\n\\section{Criteria for regular distributions in N-U coordinates}\n\n\\begin{theorem}\n\\label{dist_oneform}\nLet the 1-form $\\alpha\\in \\Gamma \\Lambda^1 (\\mathcal{M}\\backslash C)$ be represented in Newman-Unti coordinates by\n\\begin{align}\n\\alpha=\\alpha_idz^i, \\qquad \\text{where} \\quad z^0=\\tau, \\quad z^1=R, \\quad z^2=\\theta, \\quad z^3=\\phi,\n\\end{align}\nand where the functions $\\alpha_i=\\alpha_i(\\tau, R, \\theta, \\phi)$ are polynomials in $R$ and $R^{-1}$ which are singular on the worldline. 
Let the most divergent terms in the polynomial functions $\\alpha_i$ be denoted\n \\begin{align}\n \\hat{\\alpha}_i=\\frac{\\alpha_i'(\\tau, \\theta, \\phi)}{R^{\\beta_i}}.\\label{alpha_beta}\n \\end{align}\n where $\\displaystyle{\\alpha_i'(\\tau, \\theta, \\phi)}$ are bounded and $\\beta_i$ are positive constants.\nThe distribution $\\alpha^D\\in \\Gamma_D \\Lambda^1 \\mathcal{M}$, where\n\\begin{align}\n\\alpha^D[\\varphi]=\\int_\\mathcal{M} \\varphi \\wedge \\alpha \\qquad \\text{is finite for all} \\quad \\varphi \\in \\Gamma_0 \\Lambda^3 \\mathcal{M},\n\\end{align}\nis well defined providing the four constants $\\beta_i$ satisfy\n\\begin{align}\n \\beta_0< 3, \\quad \\beta_1<2, \\quad \\beta_2<2, \\quad \\beta_3< 3 .\\label{asymp_cond}\n\\end{align}\n\\end{theorem}\n\\begin{proofs}\nAn arbitrary test 3-form $\\varphi\\in \\Gamma_0 \\Lambda^3 \\mathcal{M}$ is given in Minkowski coordinates by\n$\\displaystyle{\\varphi=\\varphi_{i j k} dy^i \\wedge dy^j \\wedge dy^k}$. Applying a coordinate transformation such that $\\varphi=\\hat{\\varphi}_{i j k} dz^i \\wedge dz^j \\wedge dz^k$ where $\\{z^i\\}$ are NU coordinates yields the following form for the coefficients $\\hat{\\varphi}_{i j k}$,\n\\begin{align}\n\\hat{\\varphi}_{1 2 3}=& R^2 \\textup{Y}^2_{1 2 3},\\notag\\\\\n\\hat{\\varphi}_{0 1 2}=& R \\textup{Y}^1_{0 1 2},\\notag\\\\\n\\hat{\\varphi}_{0 1 3}=&R \\textup{Y}^1_{0 1 3},\\notag\\\\\n\\textup{and}\\quad\\hat{\\varphi}_{0 2 3}=&R^2 \\textup{Y}^2_{0 2 3}+R^3 \\textup{Y}^3_{0 2 3}.\\label{phi_three}\n\\end{align}\nHere the functions $\\textup{Y}^l_{i j k}$ depend on the test functions $\\varphi_{i j k}$, sines and cosines of $\\theta$ and $\\phi$, and the functions $\\dot{\\c}_i$ and $\\ddot{\\c}_i$. 
The key result is that they are bounded functions of $\\tau$, $\\theta$ and $\\phi$.\n\n We are interested in the boundedness of $\\alpha^D[\\varphi]$, therefore it is sufficient to show that $\\displaystyle{\\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_i dz^i}$ is bounded for all $\\varphi \\in \\Gamma_0 \\Lambda^3\\mathcal{M}$.\nIn component form we have\n\\begin{align}\n\\Bigg| \\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_i dz^i \\Bigg| =&\\Bigg|\\int_\\mathcal{M} (-\\hat{\\varphi}_{1 2 3} \\hat{\\alpha}_0 +\\hat{\\varphi}_{0 2 3} \\hat{\\alpha}_1 -\\hat{\\varphi}_{0 1 3}\\hat{\\alpha}_2 + \\hat{\\varphi}_{0 1 2 }\\hat{\\alpha}_3) d z^{0123}\\Bigg|,\\notag\n \\end{align}\n where $\\displaystyle{d z^{0123}=d z^0 \\wedge d z^1 \\wedge d z^2 \\wedge d z^3}$.\n\n Substituting the relations (\\ref{phi_three}) yields\n \\begin{align}\n \\Bigg| \\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_i d z^i \\Bigg|\\notag\n \\leq&\\Bigg|\\int_\\mathcal{M} R^2 \\textup{Y}^2_{1 2 3}\\hat{\\alpha}_0d z^{0123}\\Bigg| +\\Bigg|\\int_\\mathcal{M} (R^2\\textup{Y}^2_{0 2 3}+R^3 \\textup{Y}^3_{0 2 3})\\hat{\\alpha}_1 d z^{0123}\\Bigg|\\notag\\\\\n &+\\Bigg|\\int_\\mathcal{M} R \\textup{Y}^1_{0 1 3}\\hat{\\alpha}_2 d z^{0123}\\Bigg|+\\Bigg|\\int_\\mathcal{M} R \\textup{Y}^1_{0 1 2} \\hat{\\alpha}_3 d z^{0123}\\Bigg| \\label{r_deps}\n \\end{align}\nSubstituting (\\ref{alpha_beta}) and separating with respect to $R$-dependence yields\n\\begin{align}\n \\Bigg| \\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_i dz^i \\Bigg|\\leq& \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_0'\\textup{Y}^2_{1 2 3} d z^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R^2}{R^{\\beta_0}} d z^1\\notag\\\\\n &+ \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_1'(\\textup{Y}^2_{0 2 3}+\\textup{Y}^3_{0 2 3}) dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon 
\\frac{R^2}{R^{\\beta_1}} d z^1\\notag\\\\\n &+ \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_2'\\textup{Y}^1_{0 1 3} dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_2}} d z^1\\notag\\\\\n &+ \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_3'\\textup{Y}^1_{0 1 2} dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_3}} d z^1\\label{ints_R}\n\\end{align}\nWe now consider the integrals with respect to $z^1=R$. The standard integral\n\\begin{align}\n\\int_0^\\epsilon \\frac{R^\\gamma}{R^{\\beta}} d R= \\Bigg[\\frac{R^{1+\\gamma-\\beta}}{1+\\gamma-\\beta}\\Bigg]^\\epsilon_0,\\label{standard_res}\n\\end{align}\nwhere $\\epsilon\\in \\mathbb{R} ^{+}$, is finite provided $\\beta < 1+\\gamma$. Comparison with the powers in (\\ref{ints_R}) yields the conditions (\\ref{asymp_cond}).\n\\end{proofs}\n\\begin{theorem}\n\\label{dist_twoform}\nLet the 2-form $\\alpha\\in \\Gamma \\Lambda^2 (\\mathcal{M}\\backslash C)$ be represented in Newman-Unti coordinates by\n\\begin{align}\n\\alpha=\\alpha_{ij}dz^i\\wedge dz^j, \\qquad \\text{where} \\quad z^0=\\tau, \\quad z^1=R, \\quad z^2=\\theta, \\quad z^3=\\phi,\n\\end{align}\nand where the functions $\\alpha_{ij}=\\alpha_{ij}(\\tau, R, \\theta, \\phi)$ are polynomials in $R$ and $R^{-1}$ which are singular on the worldline.\nLet the most divergent terms in the functions $\\alpha_{ij}$ be denoted\n \\begin{align}\n \\hat{\\alpha}_{ij}=\\frac{\\alpha_{ij}'(\\tau, \\theta, \\phi)}{R^{\\beta_{ij}}}.\\label{alpha_beta2}\n \\end{align}\n where $\\displaystyle{\\alpha_{ij}'(\\tau, \\theta, \\phi)}$ are bounded 
and $\\beta_{ij}$ are positive constants.\n The distribution $\\alpha^D\\in \\Gamma_D \\Lambda^2 \\mathcal{M}$, where\n\\begin{align}\n\\alpha^D[\\varphi]=\\int_\\mathcal{M} \\varphi \\wedge \\alpha \\qquad \\text{is finite for all} \\quad \\varphi \\in \\Gamma_0 \\Lambda^2 \\mathcal{M},\n\\end{align}\nis well defined provided the six constants $\\beta_{ij}$ satisfy\n\\begin{align}\n\\beta_{01}< 1, \\quad \\beta_{12}< 2 , \\quad \\beta_{13}<2, \\notag\\\\ \\beta_{02}<2 , \\quad \\beta_{03} <2 , \\quad \\beta_{23}<3 .\\label{asymp_cond2}\n\\end{align}\n\n\\end{theorem}\n\\begin{proofs}\nAn arbitrary test 2-form $\\varphi\\in \\Gamma_0 \\Lambda^2 \\mathcal{M}$ is given by\n\\begin{align}\n\\varphi=\\varphi_{i j} dy^i \\wedge dy^j.\n\\end{align}\nApplying a coordinate transformation such that $\\varphi=\\hat{\\varphi}_{i j} dz^i \\wedge dz^j$ where $\\{z^i\\}$ are NU coordinates yields the following form for the coefficients $\\hat{\\varphi}_{i j}$,\n\\begin{align}\n\\hat{\\varphi}_{1 2}=& R \\textup{Y}^1_{1 2},\\notag\\\\\n\\hat{\\varphi}_{1 3}=& R \\textup{Y}^1_{1 3},\\notag\\\\\n\\hat{\\varphi}_{0 2}=&R \\textup{Y}^1_{0 2}+R^2 \\textup{Y}^2_{0 2},\\notag\\\\\n\\hat{\\varphi}_{0 3}=&R \\textup{Y}^1_{0 3}+R^2 \\textup{Y}^2_{0 3},\\notag\\\\\n\\hat{\\varphi}_{0 1}=& \\textup{Y}^0_{0 1},\\notag\\\\\n\\textup{and}\\quad\\hat{\\varphi}_{2 3}=&R^2 \\textup{Y}^2_{2 3}.\\label{phi_two}\n\\end{align}\nHere as before the functions $\\textup{Y}^l_{i j}$ are bounded functions of $\\tau$, $\\theta$ and $\\phi$.\n We are interested in the boundedness of $\\alpha^D[\\varphi]$, therefore it is sufficient to show that $\\displaystyle{\\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_{ij} dz^i\\wedge dz^j }$ is bounded for all $\\varphi \\in \\Gamma_0 \\Lambda^2\\mathcal{M}$.\nHence\n\\begin{align}\n \\Bigg|\\int_\\mathcal{M}\\varphi \\wedge \\hat{\\alpha}_{ij} dz^i\\wedge dz^j\\Bigg|\\notag\\\\ =&\\Bigg|\\int_\\mathcal{M} (-\\hat{\\varphi}_{1 3}\\hat{\\alpha}_{02} +\\hat{\\varphi}_{1 2 
}\\hat{\\alpha}_{03} +\\hat{\\varphi}_{0 3}\\hat{\\alpha}_{12} -\\hat{\\varphi}_{0 2 }\\hat{\\alpha}_{13} +\\hat{\\varphi}_{ 2 3 }\\hat{\\alpha}_{01} +\\hat{\\varphi}_{0 1 }\\hat{\\alpha}_{23} ) dz^{0123}\\Bigg|\\notag\n \\end{align}\n Substituting the relations (\\ref{phi_two}) yields\n\\begin{align}\n \\Bigg|\\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_{ij} dz^i\\wedge dz^j \\Bigg|\\leq&\\Bigg|\\int_\\mathcal{M} R \\textup{Y}^1_{1 3}\\hat{\\alpha}_{02}dz^{0123}\\Bigg|+\\Bigg|\\int_\\mathcal{M} R \\textup{Y}^1_{1 2}\\hat{\\alpha}_{03}dz^{0123}\\Bigg|\\notag\\\\\n &+\\Bigg|\\int_\\mathcal{M} (R \\textup{Y}^1_{0 3}+R^2 \\textup{Y}^2_{0 3})\\hat{\\alpha}_{12}dz^{0123}\\Bigg|\\notag\\\\\n &+\\Bigg|\\int_\\mathcal{M} (R \\textup{Y}^1_{0 2}+R^2 \\textup{Y}^2_{0 2})\\hat{\\alpha}_{13}dz^{0123}\\Bigg|\\notag\\\\\n &+\\Bigg|\\int_\\mathcal{M} R^2 \\textup{Y}^2_{2 3} \\hat{\\alpha}_{01}dz^{0123}\\Bigg|+\\Bigg|\\int_\\mathcal{M}\\textup{Y}^0_{0 1} \\hat{\\alpha}_{23} dz^{0123}\\Bigg|\n \\end{align}\nSubstituting (\\ref{alpha_beta2}) and separating with respect to $R$-dependence yields\n\\begin{align}\n \\Bigg|\\int_\\mathcal{M} \\varphi \\wedge \\hat{\\alpha}_{ij} dz^i\\wedge dz^j \\Bigg|\\leq& \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{02}'\\textup{Y}^1_{1 3} dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_{02}}} d z^1\\notag\\\\\n &+ \\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{03}'\\textup{Y}^1_{1 2} dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_{03}}} d z^1\\notag\\\\\n &+\\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{12}'(\\textup{Y}^1_{0 3}+\\textup{Y}^2_{0 3}) dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_{12}}} d z^1\\notag\\\\\n 
&+\\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{13}'(\\textup{Y}^1_{0 2}+\\textup{Y}^2_{0 2}) dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R}{R^{\\beta_{13}}} d z^1\\notag\\\\\n &+\\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{01}'\\textup{Y}^2_{2 3}dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{R^2}{R^{\\beta_{01}}} d z^1\\notag\\\\\n &+\\Bigg|\\max\\Bigg(\\int_{\\tau=-\\infty}^{\\tau=\\infty}\\int_{\\theta=0}^{\\theta=\\pi}\\int_{\\phi=0}^{\\phi=2\\pi}\\alpha_{23}'\\textup{Y}^0_{0 1}dz^{023}\\Bigg)\\Bigg|\\int_0^\\epsilon \\frac{1}{R^{\\beta_{23}}} d z^1\n \\end{align}\nOnce again comparing the integrals with respect to $z^1=R$ with the standard result (\\ref{standard_res}) yields the relations (\\ref{asymp_cond2}).\n\\end{proofs} \n\\chapter{Dirac Geometry}\n\\label{dirac_apend}\n\n\\section{Definitions}\n\\label{dirac_coords}\n\\begin{definition}\n\\label{dirac_map}\n\nConsider the region $N=\\widetilde{N}\\backslash C$ where $\\widetilde{N} \\subset \\mathcal{M}$ is a local neighborhood of the worldline.\nFor every field point $x\\in N$ there is a unique time $\\tau_D(x)$ at which the worldline crosses the plane of simultaneity through $x$ defined by an observer comoving with the charge.\n\\begin{align}\nC &: \\mathbb{R} \\rightarrow \\mathcal{M}, \\quad \\tau \\mapsto C(\\tau)\\\\\n\\tau_D &: N \\rightarrow \\mathbb{R} ,\\quad x \\mapsto \\tau_D(x)\n\\label{tau_def}\n\\end{align}\n\\end{definition}\n\\begin{definition}\n\\label{dirac_y}\n Dirac geometry uses a spacelike displacement vector $Y=x-C(\\tau_D(x))$, which satisfies\n\\begin{align}\ng(Y, \\dot{\\c}(\\tau_D))=0, \\quad\\quad\\quad R_D^2=g(Y, Y),\n\\label{Dirac_geom_def}\n\\end{align}\nto associate a spacetime point with a point on the worldline. 
We observe that $R_D>0$ is the magnitude of $Y$.\n\\end{definition}\n\n\n\\begin{definition}\n\\label{def_vad}\nWe use the notation $C_D=C(\\tau_D(x))$. The vector fields $V_D, A_D, \\dot{A}_D \\in \\Gamma \\textup{T} N $ are defined as\n\\begin{align}\nV_D|_\\point&=\\dot{\\c}^j(\\tau_D(x)) \\frac{\\partial}{\\partial y^j},\\quad A_D|_\\point=\\ddot{\\c}^j(\\tau_D(x))\\frac{\\partial}{\\partial y^j}\\quad \\text{and} \\quad\\dot{A}_D|_\\point=\\dddot{\\c}^j(\\tau_D(x)) \\frac{\\partial}{\\partial y^j}.\n\\label{defdirac_V_A}\n\\end{align}\n\\end{definition}\n\\begin{lemma}\nThe exterior derivative of the Dirac time $\\tau_D$ is given by\n\\begin{align}\nd\\tau_D=-\\frac{\\widetilde{V}_D}{g(Y, A_D)+1}.\n\\label{tauD_def}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\\\\\nIt follows from definition \\ref{dirac_y} that\n\\begin{align}\n0=&d g(Y, V_D),\\notag\\\\\n=& d g(x, V_D)-dg(C_D, V_D),\\notag\\\\\n=&\\widetilde{V}_D+\\big(g(x, A_D)+1-g(A_D, C_D)\\big)d\\tau_D,\\notag\\\\\n=&\\widetilde{V}_D+(g(Y, A_D)+1)d\\tau_D.\n\\end{align}\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\nd R_D= \\frac{\\widetilde{\\y}}{R_D}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\\\\\nLet\n\\begin{align}\n\\mathbf{x}\\in \\Gamma \\textup{T} \\mathcal{M},\\qquad \\mathbf{x}|_\\point=x^a \\frac{\\partial}{\\partial y^a} \\qquad \\text{and} \\qquad \\mathbf{C}_D \\in \\Gamma \\textup{T} \\mathcal{M},\\qquad \\mathbf{C}_D|_\\point=C_D^a\\frac{\\partial}{\\partial y^a}.\n\\end{align}\nIt follows from definition \\ref{dirac_y} that\n\\begin{align}\ndR_D=&d \\sqrt{g(Y, Y)}\\notag\\\\\n=&\\frac{1}{2 \\sqrt{g(Y, Y)}} d g(Y, Y)\\\\[3pt]\nd g(Y, Y)=& d g(x-C_D, x-C_D)\\notag\\\\\n=&dg(\\mathbf{x}, \\mathbf{x})+dg(\\mathbf{C}_D, \\mathbf{C}_D)-2dg(\\mathbf{x}, \\mathbf{C}_D)\\\\[3pt]\ndg(\\mathbf{x}, \\mathbf{x})&= d(g_{a b}x^a x^b),\\nonumber\\\\\n&= g_{a b} (d x^a) x^b + g_{a b} x^a (d x^b),\\nonumber\\\\\n&= x_a dx^a + x_a dx^a,\n\\end{align}\nNow $dx^a=dy^a$ since $x=(y^0, y^1, y^2, y^3)$, 
therefore\n\\begin{align}\ndg(\\mathbf{x}, \\mathbf{x})&= 2x_a dy^a,\\nonumber\\\\\n&= 2 \\widetilde{\\mathbf{x}}.\n\\end{align}\nAlso\n\\begin{align}\ndg(\\mathbf{C}_D, \\mathbf{C}_D) &= d(g_{a b} C_D^a C_D^b),\\nonumber\\\\\n&=g_{a b}(d C_D^a) C_D^b + g_{a b} C_D^a (d C_D^b),\\nonumber\\\\\n&=2 C_{D a} V_D^a d\\tau_D,\\nonumber\\\\\n&=2 g( \\mathbf{C}_D, V_D) d\\tau_D,\n\\end{align}\nand\n\\begin{align}\ndg(\\mathbf{C}_D, \\mathbf{x}) &= d(g_{a b}x^a C_D^b),\\nonumber\\\\\n&=g_{a b} (dx^a)C_D^b + g_{a b} x^a d(C_D^b),\\nonumber\\\\\n&=C_{D a} dx^a + x_a d(C_D^a),\n\\end{align}\nwhere $\\displaystyle{d(C_D^a)=\\frac{\\partial C_D^a}{\\partial \\tau}d\\tau_D=V_D^a d\\tau_D}$, therefore\n\\begin{align}\ndg(\\mathbf{C}_D, \\mathbf{x}) &= \\widetilde{\\mathbf{C}_D} + x_a V_D^a d \\tau_D,\\nonumber\\\\\n& = \\widetilde{\\mathbf{C}_D} + g(\\mathbf{x}, V_D) d\\tau_D.\n\\end{align}\nThus\n\\begin{align}\ndR_D=&\\frac{1}{2R_D} (dg(\\mathbf{x}, \\mathbf{x})+dg(\\mathbf{C}_D, \\mathbf{C}_D)-2dg(\\mathbf{x}, \\mathbf{C}_D))\\notag\\\\\n=&\\frac{1}{R_D}\\big(\\widetilde{\\y}- g(Y, V_D) d\\tau_D\\big).\n\\end{align}\nThe definition (\\ref{Dirac_geom_def}) gives $g(Y, V_D)=0$ and hence\n\\begin{align}\ndR_D=&\\frac{\\widetilde{\\y}}{R_D}\n\\end{align}\n\\end{proofs}\n\\begin{lemma}\nThe exterior derivatives of $g(Y, A_D)$, $g(Y, \\dot{A}_D)$ and $g(A_D, A_D)$ are given by\n\\begin{align}\ndg(Y, A_D)=& \\widetilde{A}_D-\\frac{\\widetilde{V}_D g(Y, \\dot{A}_D)}{g(Y, A_D)+1}\\notag\\\\[2pt]\ndg(Y, \\dot{A}_D)=& \\widetilde{\\dot{A}}_D-\\frac{\\widetilde{V}_D (g(Y, \\ddot{A}_D)+g(A_D, A_D))}{g(Y, A_D)+1}\\notag\\\\[2pt]\ndg(A_D,A_D)=& \\frac{-2g(A_D, \\dot{A}_D)\\widetilde{V}_D}{g(Y, A_D)+1}\n\\label{dirac_relations}\n\\end{align}\n\\end{lemma}\n\\begin{definition}\n\\label{def_normd}\nWe define the normalized vector field\n\\begin{align}\nn_D=\\frac{Y}{R_D},\\qquad \\textup{where} \\qquad g(n_D, n_D)=1 \\qquad \\textup{and} \\qquad g(n_D, V_D)=0.\n\\end{align}\n\\label{nd_def}\n\\end{definition}\n\n\n\n\\section{The Li\\'{e}nard-Wiechert potential expressed in Dirac 
Geometry}\n\\label{dirac_lw}\nDirac geometry is not a natural choice for describing electromagnetic phenomena, because all retarded (and advanced) quantities are available only as Taylor expansions around the Dirac time $\\tau_D$. The retarded stress form must be calculated as such an expansion. Below we give the advanced and retarded Li$\\text{\\'{e}}$nard-Wiechert potentials.\n\\begin{lemma}\n\\label{deltas_def}\nThe difference $\\delta_r=\\tau_D-\\tau_r$ is given in terms of $R_D$ by\n\\begin{align}\n\\delta_r=&R_D-\\frac{1}{2}g(n_D, A_D)R_D^2 +\\big(\\frac{3}{8}g(n_D, A_D)^2 +\\frac{1}{6}g(n_D, \\dot{A}_D)-\\frac{1}{24}g(A_D, A_D)\\big)R_D^3+\\mathcal{O}(R_D^4),\n\\label{dirac_taur}\n\\end{align}\nand the difference $\\delta_a=\\tau_a-\\tau_D$ is given by\n\\begin{align}\n\\delta_a=&R_D-\\frac{1}{2}g(n_D, A_D)R_D^2 +\\big(\\frac{3}{8}g(n_D, A_D)^2 -\\frac{1}{6}g(n_D, \\dot{A}_D)-\\frac{1}{24}g(A_D, A_D)\\big)R_D^3+\\mathcal{O}(R_D^4).\n\\label{dirac_taua}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align}\nC(\\tau_r)=C_D-V_D\\delta_r+A_D\\frac{\\delta_r^2}{2}-\\dot{A}_D\\frac{\\delta_r^3}{6}+\\ddot{A}_D\\frac{\\delta_r^4}{24}+\\mathcal{O}(\\delta_r^5)\n\\end{align}\nand thus the null vector $X$ is given by\n\\begin{align}\nX=x-C(\\tau_r) &= x-C_D+V_D\\delta_r-A_D\\frac{\\delta_r^2}{2}+\\dot{A}_D\\frac{\\delta_r^3}{6}-\\ddot{A}_D\\frac{\\delta_r^4}{24}+\\mathcal{O}(\\delta_r^5),\\notag\\\\\n&=Y+V_D\\delta_r-A_D\\frac{\\delta_r^2}{2}+\\dot{A}_D\\frac{\\delta_r^3}{6}-\\ddot{A}_D\\frac{\\delta_r^4}{24}+\\mathcal{O}(\\delta_r^5).\n\\label{X_taud}\n\\end{align}\nSubstituting (\\ref{X_taud}) into the lightcone condition (\\ref{null_cond}) gives\n\\begin{align}\ng(X, X)=& g(Y, Y)+2g(Y, V_D)\\delta_r-(1+g(Y, A_D))\\delta_r^2+\\frac{1}{3}g(Y, \\dot{A}_D)\\delta_r^3\\notag\\\\\n&-\\frac{1}{12}(g(Y, \\ddot{A}_D)+g(A_D, A_D))\\delta_r^4+\\mathcal{O}(\\delta_r^5).\n\\end{align}\nDefinitions (\\ref{dirac_y}) and (\\ref{nd_def}) yield\n\\begin{align}\ng(X, 
X)=& R_D^2-(1+R_Dg(n_D, A_D))\\delta_r^2+\\frac{R_D}{3}g(n_D, \\dot{A}_D)\\delta_r^3\\notag\\\\\n&-\\frac{1}{12}(R_Dg(n_D, \\ddot{A}_D)+g(A_D, A_D))\\delta_r^4+\\mathcal{O}(\\delta_r^5).\n\\label{xx_d}\n\\end{align}\nWe may solve this equation to obtain $\\delta_r$ and $\\delta_a$ in terms of $R_D$.\n\n\n\nLet $\\delta_r=a_1 R_D$, then equating coefficients of order $R_D^2$ yields\n\\begin{align}\na_1^2=1.\n\\end{align}\nWe choose $\\delta_r > 0$; since $R_D>0$ it follows that $a_1=+1$. Now let $\\delta_r=R_D+a_2R_D^2$, then equating coefficients of order $R_D^3$ yields\n\\begin{align}\n0=&2a_2+g(n_D, A_D)\\notag\\\\\n\\Rightarrow \\quad& a_2=-\\frac{g(n_D, A_D)}{2}.\n\\end{align}\nLet $\\delta_r=R_D-\\frac{g(n_D, A_D)}{2}R_D^2+a_3R_D^3$, then equating coefficients of order $R_D^4$ yields\n\\begin{align}\na_3=\\frac{3}{8}g(n_D, A_D)^2 +\\frac{1}{6}g(n_D, \\dot{A}_D)-\\frac{1}{24}g(A_D, A_D).\n\\end{align}\nThus to third order $\\delta_r$ is given by (\\ref{dirac_taur}).\n\n\nA similar calculation may be performed to obtain an expression for $\\delta_a =\\tau_a-\\tau_D$ in terms of $R_D$. In this case all quantities on the left hand side are evaluated at the advanced time $\\tau_a$, so that instead of solving the retarded null condition $g(X, X)=0$ we must solve the advanced null condition $g(W, W)=0$. Since $\\tau_a-\\tau_D$ is positive, all terms with odd powers of $\\delta$ have the opposite sign to those in the retarded calculation. 
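For completeness, the order-$R_D^4$ bookkeeping behind $a_3$ can be spelled out as a short supplementary step (abbreviating $a=g(n_D, A_D)$, $b=g(n_D, \dot{A}_D)$ and $k=g(A_D, A_D)$):

```latex
% powers of delta_r = R_D - (a/2) R_D^2 + a_3 R_D^3 + O(R_D^4):
%   delta_r^2 = R_D^2 - a R_D^3 + (a^2/4 + 2 a_3) R_D^4 + O(R_D^5)
%   delta_r^3 = R_D^3 - (3a/2) R_D^4 + O(R_D^5)
% collecting the R_D^4 terms of g(X,X)=0 from (\ref{xx_d}); the +a^2 term
% arises from -a R_D \delta_r^2 acting on the -a R_D^3 piece of \delta_r^2:
\begin{align*}
0 = -\Big(\frac{a^2}{4} + 2a_3\Big) + a^2 + \frac{b}{3} - \frac{k}{12}
\quad\Longrightarrow\quad
a_3 = \frac{3}{8}a^2 + \frac{1}{6}b - \frac{1}{24}k.
\end{align*}
```

This reproduces the coefficient of $R_D^3$ quoted in (\ref{dirac_taur}).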
The resulting expression for $\\delta_a$ is given by (\\ref{dirac_taua}).\n\n\\end{proofs}\n\\begin{lemma}\nIn terms of the Dirac time $\\tau_D$ and the Dirac radius $R_D$ the retarded Li$\\text{\\'{e}}$nard-Wiechert potential is given by\n\\begin{align}\n\\mathrm{A}_r=&-\\frac{V_D}{R_D} +\\big(A_D +\\frac{1}{2}g(n_D, A_D)V_D\\big)\\notag\\\\\n&+\\Big(V_D\\big(\\frac{1}{8}g(A_D, A_D)-\\frac{1}{8}g(n_D, A_D)^2-\\frac{1}{3}g(n_D, \\dot{A}_D)\\big) -\\frac{1}{2}\\dot{A}_D-\\frac{1}{2}g(n_D, A_D)A_D\\Big)R_D\\notag\\\\\n& +\\mathcal{O}(R_D^2),\n\\label{lw_dirac}\n\\end{align}\nand the advanced Li$\\text{\\'{e}}$nard-Wiechert potential is given by\n\\begin{align}\n\\mathrm{A}_a=&\\frac{V_D}{R_D} +\\big(A_D -\\frac{1}{2}g(n_D, A_D)V_D\\big)\\notag\\\\\n&+\\Big(-V_D\\big(\\frac{1}{8}g(A_D, A_D)-\\frac{1}{8}g(n_D, A_D)^2+\\frac{1}{3}g(n_D, \\dot{A}_D)\\big) +\\frac{1}{2}\\dot{A}_D-\\frac{1}{2}g(n_D, A_D)A_D\\Big)R_D\\notag\\\\\n& +\\mathcal{O}(R_D^2).\n\\label{alw_dirac}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\nWe evaluate the retarded Li$\\text{\\'{e}}$nard-Wiechert potential as a series in $R_D$.\n\\begin{align}\nV=V_D-A_D\\delta_r+\\dot{A}_D\\frac{\\delta_r^2}{2}-\\ddot{A}_D\\frac{\\delta_r^3}{6}+\\mathcal{O}(\\delta_r^4)\n\\end{align}\nSubstituting (\\ref{dirac_taur}) yields\n\\begin{align}\nV=& V_D-A_DR_D +\\frac{1}{2}\\big(\\dot{A}_D+A_Dg(n_D, A_D)\\big)R_D^2\\notag\\\\\n&+\\Big(A_D\\big(\\frac{3}{8}g(n_D, A_D)^2 +\\frac{1}{6}g(n_D, \\dot{A}_D)-\\frac{1}{24}g(A_D, A_D)\\big)-\\frac{1}{6}\\ddot{A}_D-\\frac{1}{2}g(n_D, A_D)\\dot{A}_D\\Big)R_D^3\\notag\\\\\n&+\\mathcal{O}(R_D^4).\n\\label{v_dirac}\n\\end{align}\nAlso\n\\begin{align}\ng(X, V)=& -(g(Y, A_D)+1)\\delta_r+g(Y, \\dot{A}_D)\\frac{\\delta_r^2}{2}\\notag\\\\\n&+\\big(g(A_D, A_D)-g(Y, \\ddot{A}_D)\\big)\\frac{\\delta_r^3}{6} +\\mathcal{O}(\\delta_r^4).\n\\end{align}\nAgain substituting (\\ref{dirac_taur}) yields\n\\begin{align}\ng(X, V)=&-R_D -\\frac{1}{2}g(n_D, 
A_D)R_D^2\\notag\\\\\n&+\\Big(\\frac{1}{8}g(n_D, A_D)^2 +\\frac{1}{2}g(n_D, A_D) -\\frac{1}{6}g(n_D, \\dot{A}_D)+\\frac{1}{24}g(A_D, A_D)\\Big)R_D^3 +\\mathcal{O}(R_D^4).\n\\label{gxv_dirac}\n\\end{align}\nDividing (\\ref{v_dirac}) by (\\ref{gxv_dirac}) gives (\\ref{lw_dirac}). Evaluating the advanced potential\n \\begin{align}\n \\mathrm{A}_{\\textup{adv}}|_\\point=\\frac{\\dot{\\c}(\\tau_a)}{g(W, \\dot{\\c}(\\tau_a))}\n \\end{align}\n using the same procedure leads to (\\ref{alw_dirac}).\n\\end{proofs}\n\n\n\n The retarded and advanced Li$\\text{\\'{e}}$nard-Wiechert fields $\\flw_{\\textup{ret}}$ and $\\flw_{\\textup{adv}}$ are obtained by taking the exterior derivative of $\\mathrm{A}_{\\textup{ret}}$ and $\\mathrm{A}_{\\textup{adv}}$ respectively. In 1938 Dirac \\cite{Dirac38} showed that the difference between the retarded and advanced fields is finite on the worldline and given by\n \\begin{align}\n \\frac{1}{2} (\\flw_{\\textup{ret}}-\\flw_{\\textup{adv}})= \\frac{2}{3}(g(\\ddot{\\c}, \\ddot{\\c})\\widetilde{\\dot{\\c}}-\\widetilde{\\dddot{\\c}}).\n \\label{fminus}\n \\end{align}\n\n\n\nIt is easily seen that taking the sum in the decomposition\n\\begin{align}\n\\flw_{\\textup{ret}}=\\frac{1}{2}(\\flw_{\\textup{adv}}+\\flw_{\\textup{ret}})+\\frac{1}{2}(\\flw_{\\textup{ret}}-\\flw_{\\textup{adv}})\n\\end{align}\nis equivalent to expanding $\\flw_{\\textup{ret}}$ only. This point was emphasized by Infeld and Wallace \\cite{Infeld39}, and later by Havas \\cite{Havas48}.\n\n\n\n\n\\chapter[Adapted N-U coordinates ]{Adapted N-U coordinates $(\\tau, \\mathsf{r} , \\theta, \\phi)$}\n\\label{app_coords}\n\nFor the numerical investigation presented in chapter \\ref{bendingbeams} we use a coordinate system $(\\tau, \\mathsf{r} , \\theta, \\phi)$ adapted from the Newman-Unti coordinates. This change of coordinates was initially motivated by our interest in the ultra-relativistic Li$\\text{\\'{e}}$nard-Wiechert fields. 
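Before developing the lemmas, the adapted chart is easy to exercise numerically. The following minimal Python sketch (the uniformly accelerated worldline `C` is an assumed example, not taken from the thesis, and `g` is the Minkowski metric with $c=1$) maps adapted coordinates to the Lorentzian chart $y^a = C^a(\tau) + \mathsf{r}\, n^a(\theta,\phi)$ and checks the relation $g(X, V)=\mathsf{r}\,\alpha$ established later in this appendix:

```python
import math

def C(tau):
    # assumed worldline: uniform acceleration along y^1, proper-time
    # parametrized (g(Cdot, Cdot) = -1 in signature (-,+,+,+))
    return (math.sinh(tau), math.cosh(tau), 0.0, 0.0)

def Cdot(tau):
    return (math.cosh(tau), math.sinh(tau), 0.0, 0.0)

def n(theta, phi):
    # direction factors appearing in the chart y^a = C^a(tau) + r * n^a
    return (1.0, math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi), math.cos(theta))

def g(u, v):
    # Minkowski metric, signature (-,+,+,+), c = 1
    return -u[0]*v[0] + u[1]*v[1] + u[2]*v[2] + u[3]*v[3]

tau, r, theta, phi = 0.3, 2.0, 0.7, 1.1
nv = n(theta, phi)
y = tuple(Ci + r*ni for Ci, ni in zip(C(tau), nv))     # adapted -> Lorentzian
X = tuple(yi - Ci for yi, Ci in zip(y, C(tau)))        # X = x - C(tau) = r * n
V = Cdot(tau)
# alpha = g(partial_r, partial_tau) = -Cdot^0 + Cdot^i n^i
alpha = -V[0] + V[1]*nv[1] + V[2]*nv[2] + V[3]*nv[3]
err = abs(g(X, V) - r * alpha)
print(err)
```

The check is exact up to floating-point rounding; any proper-time-parametrized worldline and any point with $\mathsf{r}\neq 0$ gives the same agreement.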
The N-U coordinate system breaks down in the ultra-relativistic limit since $R=-g(X, V)=0$ when $V$ is null. In the new coordinate system the radial parameter is given by\n\\begin{align}\n\\mathsf{r}=-\\frac{R}{\\alpha} = -g(X, \\partial_{y^0})\n\\end{align}\nwhich remains non-zero in the ultra-relativistic limit. If $(y^0, y^1, y^2, y^3)$ is the global Lorentzian coordinate chart then the coordinate transformation is given by\n\\begin{align}\n&y^0=C^0 (\\tau) + \\mathsf{r} \\notag\\\\\n&y^1=C^1(\\tau) + \\mathsf{r} \\sin(\\theta)\\cos(\\phi)\\notag\\\\\n&y^2=C^2 (\\tau) + \\mathsf{r} \\sin(\\theta)\\sin(\\phi)\\notag\\\\\n&y^3=C^3(\\tau) +\\mathsf{r} \\cos(\\theta).\n\\label{coords_new}\n\\end{align}\n\\begin{lemma}\nIn terms of the new coordinates the vector fields $X, V \\in \\Gamma \\textup{T} {(\\m\\backslash C)}$ are given by\n\\begin{align}\n&X=\\mathsf{r} \\frac{\\partial}{\\partial \\mathsf{r} }\\\\\\notag\n&V=\\frac{\\partial}{\\partial \\tau}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\nX&= \\underline{x} - C(\\tau)\\\\\n&=\\mathsf{r} \\frac{\\partial}{\\partial y^0} + \\mathsf{r} \\sin(\\theta)\\cos(\\phi)\\frac{\\partial}{\\partial y^1}+\\mathsf{r} \\sin(\\theta) \\sin(\\phi)\\frac{\\partial}{\\partial y^2} +\\mathsf{r} \\cos\\theta \\frac{\\partial}{\\partial y^3}\\\\\n&=\\mathsf{r} \\frac{\\partial}{\\partial \\mathsf{r} }\\\\\n\\frac{\\partial}{\\partial \\tau}&= \\frac{\\partial y^0}{\\partial \\tau} \\frac{\\partial}{\\partial y^0} + \\frac{\\partial y^1}{\\partial \\tau}\\frac{\\partial}{\\partial y^1}+\\frac{\\partial y^2}{\\partial \\tau}\\frac{\\partial}{\\partial y^2} +\\frac{\\partial y^3}{\\partial \\tau }\\frac{\\partial}{\\partial y^3}\\\\\n&=\\dot{\\c}^0(\\tau)\\frac{\\partial}{\\partial y^0} +\\dot{\\c}^1(\\tau)\\frac{\\partial}{\\partial y^1} +\\dot{\\c}^2(\\tau)\\frac{\\partial}{\\partial y^2}+ \\dot{\\c}^3(\\tau)\\frac{\\partial}{\\partial 
y^a}\\\\\n&=\\dot{\\c}(\\tau)\\\\\n&=V\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\nThe Minkowski metric $g\\in \\bigotimes^{[ \\mathds{F}, \\mathds{F}]} \\mathbf{M}$ is given by\n\\begin{align}\ng=&-c^2 d\\tau \\otimes d\\tau +\\mathsf{r} ^2 d\\theta \\otimes d\\theta + \\mathsf{r} ^2 \\sin^2\\theta d\\phi \\otimes d\\phi\\nonumber\\\\\n&+ \\alpha[d\\tau \\otimes d\\mathsf{r} + d\\mathsf{r} \\otimes d\\tau]+\\mathsf{r} \\alpha_{\\theta} [d\\tau \\otimes d\\theta + d\\theta \\otimes d\\tau] +\\mathsf{r} \\alpha_{\\phi}[d\\tau \\otimes d\\phi \\nonumber\\\\\n&+ d\\phi \\otimes d\\tau]\\label{g_newcoords}\n\\end{align}\nand the inverse metric $\\g^{-1} \\in \\bigotimes^{[ \\mathds{V}, \\mathds{V}]} \\mathbf{M}$ is given by\n\\begin{align}\n \\g^{-1}=&\\frac{c^2 \\sin^2(\\theta)+\\alpha_{\\theta}^2 \\sin^2(\\theta)+\\alpha_{\\phi}^2}{\\sin^2(\\theta) \\alpha^2} \\Big(\\frac{\\partial}{\\partial \\mathsf{r} }\\otimes\\frac{\\partial}{\\partial \\mathsf{r} }\\Big)+\\frac{1}{\\mathsf{r} ^2}\\Big(\\frac{\\partial}{\\partial \\theta}\\otimes\\frac{\\partial}{\\partial \\theta}\\Big)+\\frac{1}{\\mathsf{r} ^2 \\sin^2\\theta}\\Big(\\frac{\\partial}{\\partial \\phi}\\otimes\\frac{\\partial}{\\partial \\phi}\\Big)\\nonumber\\\\\n&+\\frac{1}{\\alpha} \\Big(\\frac{\\partial}{\\partial \\tau} \\otimes \\frac{\\partial}{\\partial \\mathsf{r} } + \\frac{\\partial}{\\partial \\mathsf{r} } \\otimes \\frac{\\partial}{\\partial \\tau}\\Big)\n-\\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r}}\\Big( \\frac{\\partial}{\\partial \\mathsf{r}}\\otimes \\frac{\\partial}{\\partial \\theta}+ \\frac{\\partial}{\\partial \\theta}\\otimes \\frac{\\partial}{\\partial \\mathsf{r}}\\Big)\\nonumber \\\\\n&-\\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)}\\Big(\\frac{\\partial}{\\partial \\mathsf{r}}\\otimes\\frac{\\partial}{\\partial \\phi}+\\frac{\\partial}{\\partial \\phi}\\otimes\\frac{\\partial}{\\partial \\mathsf{r}}\\Big)\n\\end{align}\nWhere $\\alpha$ is defined by (\\ref{def_alpha}) 
and $\\alpha_{\\theta}$ and $\\alpha_{\\phi}$ are the derivatives of $\\alpha$ with respect to $\\theta$ and $\\phi$ respectively. Let $z^0=\\tau,\\quad z^1=\\mathsf{r},\\quad z^2=\\theta,\\quad z^3=\\phi$, then the matrices $G'=G'_{ab}=g(\\partial_{z^a},\\partial_{z^b})$ and $G'^{-1}=G'^{-1}_{ab}=\\g^{-1}(d z^a, d z^b)$ are given by\n\n\\[G'=g(\\partial_{z^a},\\partial_{z^b})= \\left( \\begin{array}{cccc}\\displaystyle\n-c^2 &\\displaystyle \\alpha &\\displaystyle \\mathsf{r}\\alpha_{\\theta} &\\displaystyle \\mathsf{r}\\alpha_{\\phi} \\\\\n\\displaystyle \\alpha & 0 & 0 & 0 \\\\\n\\displaystyle \\mathsf{r}\\alpha_{\\theta} & 0 &\\displaystyle \\mathsf{r}^2 &0\\\\\n\\displaystyle \\mathsf{r}\\alpha_{\\phi} & 0 & 0 &\\displaystyle\\mathsf{r}^2 \\sin^2\\theta \\end{array} \\right)\\]\n\n\\[G'^{-1}= \\left( \\begin{array}{cccc}\n0 &\\displaystyle \\frac{1}{\\alpha} & 0 & 0 \\\\\n\\displaystyle\\frac{1}{\\alpha} &\\displaystyle \\frac{ c^2\\sin^2(\\theta)+\\alpha_{\\theta}^2 \\sin^2(\\theta)+\\alpha_{\\phi}^2}{ \\sin^2(\\theta) \\alpha^2} &\\displaystyle -\\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r}} &\\displaystyle -\\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)} \\\\\n0 & \\displaystyle-\\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r}} &\\displaystyle \\frac{1}{\\mathsf{r}^2} &0\\\\\n0 &\\displaystyle -\\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)} & 0 &\\displaystyle \\frac{1}{\\mathsf{r}^2 \\sin^2\\theta} \\end{array} \\right)\\label{gii}\\]\n\n\\end{lemma}\n\\begin{proofs}\n\\begin{align}\ng=-d y^0 \\otimes d y^0 +d y^1\\otimes d y^1 +d y^2 \\otimes d y^2 + d y^3 \\otimes d y^3\n\\end{align}\n\\begin{flushleft}\n$d y^0=\\dot{\\c}^0(\\tau)d\\tau +d\\mathsf{r}$\\\\\n$d y^1=\\dot{\\c}^1(\\tau)d\\tau + \\sin(\\theta)\\cos(\\phi) d\\mathsf{r} + \\mathsf{r} \\cos(\\theta)\\cos(\\phi)d \\theta - \\mathsf{r} \\sin(\\theta)\\sin(\\phi)d\\phi$\\\\\n$d y^2=\\dot{\\c}^2(\\tau) d\\tau + \\sin(\\theta)\\sin(\\phi) d\\mathsf{r} + \\mathsf{r} 
\\cos(\\theta)\\sin(\\phi)d \\theta + \\mathsf{r} \\sin(\\theta)\\cos(\\phi)d\\phi$\\\\\n$d y^3=\\dot{\\c}^3(\\tau) d\\tau + \\cos(\\theta) d\\mathsf{r} - \\mathsf{r}\\sin(\\theta) d\\theta$\\\\[1cm]\n\\end{flushleft}\nThus\n\\begin{align*}\ng &= -c^2 d\\tau \\otimes d\\tau +\\mathsf{r}^2 d\\theta \\otimes d\\theta + \\mathsf{r}^2 \\sin^2\\theta d\\phi \\otimes d\\phi\\\\\n&+(-\\dot{\\c}^0 +\\dot{\\c}^1 \\sin\\theta \\cos\\phi + \\dot{\\c}^2 \\sin\\theta \\sin\\phi +\\dot{\\c}^3 \\cos\\theta)[d\\tau \\otimes d\\mathsf{r} + d\\mathsf{r} \\otimes d\\tau]\\\\\n&+ (\\dot{\\c}^1 \\mathsf{r} \\cos\\theta \\cos\\phi + \\dot{\\c}^2 \\mathsf{r} \\cos\\theta \\sin\\phi -\\dot{\\c}^3 \\mathsf{r} \\sin \\theta)[d\\tau \\otimes d\\theta + d\\theta \\otimes d\\tau]\\\\\n&+(\\dot{\\c}^2 \\mathsf{r} \\sin\\theta \\cos\\phi - \\dot{\\c}^1 \\mathsf{r} \\sin\\theta \\sin\\phi) [d\\tau \\otimes d\\phi + d\\phi \\otimes d\\tau]\\\\[10pt]\n&=-c^2 d\\tau \\otimes d\\tau +\\mathsf{r}^2 d\\theta \\otimes d\\theta + \\mathsf{r}^2 \\sin^2\\theta d\\phi \\otimes d\\phi\\\\\n&+ \\alpha[d\\tau \\otimes d\\mathsf{r} + d\\mathsf{r} \\otimes d\\tau]+\\mathsf{r}\\alpha_{\\theta} [d\\tau \\otimes d\\theta + d\\theta \\otimes d\\tau] +\\mathsf{r}\\alpha_{\\phi}[d\\tau \\otimes d\\phi + d\\phi \\otimes d\\tau]\n\\end{align*}\\\\[3pt]\n\nThe expression for $\\g^{-1}$ follows from the matrix (\\ref{gii}).\n\\end{proofs}\n\\begin{corrol}\n\\begin{align}\n\\widetilde{d\\tau}&=\\frac{1}{\\alpha}\\partial_{\\mathsf{r}}\\notag\\\\\n\\widetilde{d \\mathsf{r}}&=\\frac{c^2 \\sin^2(\\theta)+\\alpha_{\\theta}^2 \\sin^2(\\theta)+\\alpha_{\\phi}^2}{\\sin^2(\\theta) \\alpha^2}\\partial_\\mathsf{r} +\\frac{1}{\\alpha}\\partial_\\tau -\\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r}}\\partial_\\theta -\\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)}\\partial_\\phi\\notag\\\\\n\\widetilde{d\\theta} &=\\frac{1}{\\mathsf{r}^2}\\partial_\\theta 
-\\frac{\\alpha_{\\theta}}{\\alpha\\mathsf{r}}\\partial_\\mathsf{r}\\notag\\\\\n\\widetilde{d\\phi} &=\\frac{1}{\\mathsf{r}^2 \\sin^2\\theta}\\partial_\\phi -\\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)}\\partial_\\mathsf{r}\n\\end{align}\n\\end{corrol}\n\\begin{proofs}\nFollows from the definition of $\\g^{-1}$.\n\\end{proofs}\n\\begin{lemma}\nThe 1-forms $\\widetilde{X}, \\widetilde{V} \\in \\Gamma \\Lambda^1 \\mathbf{M}$ are given by\n\\begin{align}\n&\\widetilde{X}= \\mathsf{r} \\alpha d\\tau\\\\\n&\\widetilde{V} = -c^2 d\\tau + \\alpha d\\mathsf{r} + \\mathsf{r}\\alpha_{\\theta} d\\theta + \\mathsf{r}\\alpha_{\\phi} d\\phi\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\n\\widetilde{X} &= \\mathsf{r} g (\\frac{\\partial}{\\partial \\mathsf{r}}, -)\\\\\n&=\\mathsf{r} (-\\dot{\\c}^0 +\\dot{\\c}^1 \\sin\\theta \\cos\\phi + \\dot{\\c}^2 \\sin\\theta \\sin\\phi +\\dot{\\c}^3 \\cos\\theta) d\\tau\\\\\n&= \\mathsf{r} \\alpha d\\tau\\\\\n\\widetilde{V} &= g (\\frac{\\partial}{\\partial \\tau}, -)\\\\\n&= -c^2 d\\tau+ \\alpha d\\mathsf{r} + \\mathsf{r}\\alpha_{\\theta} d\\theta + \\mathsf{r}\\alpha_{\\phi} d\\phi\n\\end{align*}\n\\end{proofs}\n\n\\begin{lemma}\n\\begin{align}\n\\star 1 = -\\alpha \\mathsf{r}^2 \\sin \\theta d\\tau \\wedge d\\mathsf{r} \\wedge d\\theta \\wedge d\\phi\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\n\\star 1 &= \\sqrt{|\\det(g)|} d\\tau\\wedge d \\mathsf{r} \\wedge d\\theta \\wedge d\\phi\\\\\n&=\\sqrt{|-\\alpha^2 \\mathsf{r}^4 \\sin^2{\\theta}|}d\\tau\\wedge d \\mathsf{r} \\wedge d\\theta \\wedge d\\phi\\\\\n&=-\\alpha \\mathsf{r}^2 \\sin \\theta d\\tau \\wedge d\\mathsf{r} \\wedge d\\theta \\wedge d\\phi,\n\\end{align*}\nwhere we have used $\\alpha<0$.\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\n\\widetilde{A} = \\dot{\\alpha} d\\mathsf{r} + \\mathsf{r}\\dot{\\alpha_{\\theta}}d\\theta +\\mathsf{r}\\dot{\\alpha_{\\phi}} d\\phi\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\n\\widetilde{A} &= \\frac{d 
V_a}{d\\tau}d y^a\\\\\n&= -\\ddot{\\c}^0(\\tau) d y^0 + \\ddot{\\c}^1(\\tau) d y^1 + \\ddot{\\c}^2(\\tau)d y^2 +\\ddot{\\c}^3(\\tau)d y^3\\\\\n&=-\\ddot{\\c}^0(\\tau)\\Big[\\dot{\\c}^0(\\tau) d\\tau + d\\mathsf{r}\\Big] \\\\\n &\\quad+\\ddot{\\c}^1(\\tau)\\Big[\\dot{\\c}^1(\\tau) d\\tau + \\sin(\\theta)\\cos(\\phi) d\\mathsf{r} + \\mathsf{r} \\cos(\\theta)\\cos(\\phi)d \\theta - \\mathsf{r} \\sin(\\theta)\\sin(\\phi)d\\phi\\Big]\\\\\n &\\quad+\\ddot{\\c}^2(\\tau)\\Big[\\dot{\\c}^2(\\tau) d\\tau + \\sin(\\theta)\\sin(\\phi) d\\mathsf{r} + \\mathsf{r} \\cos(\\theta)\\sin(\\phi)d \\theta + \\mathsf{r} \\sin(\\theta)\\cos(\\phi)d\\phi\\Big]\\\\\n &\\quad+\\ddot{\\c}^3(\\tau)\\Big[\\dot{\\c}^3(\\tau) d\\tau + \\cos(\\theta) d\\mathsf{r} - \\mathsf{r} \\sin(\\theta) d\\theta\\Big] \\\\\n &=g(A, V) d\\tau + \\dot{\\alpha} d\\mathsf{r} + \\mathsf{r} \\dot{\\alpha_{\\theta}}d\\theta +\\mathsf{r} \\dot{\\alpha_{\\phi}} d\\phi\\\\\n &= \\dot{\\alpha} d\\mathsf{r} + \\mathsf{r} \\dot{\\alpha_{\\theta}}d\\theta +\\mathsf{r} \\dot{\\alpha_{\\phi}} d\\phi\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\nA&=\\frac{\\dot{\\alpha}}{\\alpha}\\partial_\\tau +\\Bigg[\\Big(\\frac{c^2 \\dot{\\alpha}}{\\alpha^2}\\Big) +\\Big( \\frac{\\dot{\\alpha} \\alpha_{\\theta}^2}{\\alpha^2} -\\frac{ \\alpha_{\\theta} \\dot{\\alpha_{\\theta}}}{ \\alpha} \\Big)+ \\frac{1}{\\sin^2(\\theta)}\\Big(\\frac{\\alpha_{\\phi}^2\\dot{\\alpha}}{\\alpha^2}-\\frac{\\alpha_{\\phi}\\dot{\\alpha_{\\phi}}}{\\alpha}\\Big)\\Bigg]\\partial_\\mathsf{r} \\nonumber\\\\\n&+\\frac{1}{\\mathsf{r} }\\Big(\\dot{\\alpha_{\\theta}}-\\frac{\\dot{\\alpha}\\alpha_{\\theta}}{ \\alpha}\\Big)\\partial_\\theta + \\frac{1}{\\mathsf{r} \\sin^2(\\theta)}\\Big(\\dot{\\alpha_{\\phi}}-\\frac{\\dot{\\alpha} \\alpha_{\\phi}}{\\alpha }\\Big)\\partial_\\phi\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\nA &= \\g^{-1}(\\widetilde{A}, -)\\\\\n&=g(A, V)\\g^{-1}(d\\tau, -) +\\dot{\\alpha} \\g^{-1} (d\\mathsf{r} , -) 
+\\mathsf{r} \\dot{\\alpha_{\\theta}} \\g^{-1}(d\\theta, -) + \\mathsf{r} \\dot{\\alpha_{\\phi}} \\g^{-1}(d\\phi, -)\\\\\n&=\\frac{g(A, V)}{\\alpha}\\partial_\\mathsf{r} +\\dot{\\alpha}\\Bigg(\\frac{c^2 \\sin^2(\\theta)+\\alpha_{\\theta}^2 \\sin^2(\\theta)+\\alpha_{\\phi}^2}{ \\sin^2(\\theta) \\alpha^2}\\partial_\\mathsf{r} +\\frac{1}{\\alpha}\\partial_\\tau -\\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r} }\\partial_\\theta - \\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)}\\partial_\\phi\\Bigg)\\\\\n&+\\mathsf{r} \\dot{\\alpha_{\\theta}}\\Big(\\frac{1}{\\mathsf{r} ^2}\\partial_\\theta - \\frac{\\alpha_{\\theta}}{\\alpha \\mathsf{r} }\\partial_\\mathsf{r} \\Big) +\\mathsf{r} \\dot{\\alpha_{\\phi}}\\Big(\\frac{1}{\\mathsf{r} ^2 \\sin^2(\\theta)}\\partial_\\phi - \\frac{\\alpha_{\\phi}}{\\alpha \\mathsf{r} \\sin^2(\\theta)}\\partial_\\mathsf{r} \\Big)\\\\\n&=\\frac{\\dot{\\alpha}}{\\alpha}\\partial_\\tau +\\Bigg[\\Big(\\frac{g(A, V)}{\\alpha} + \\frac{c^2 \\dot{\\alpha}}{\\alpha^2}\\Big) +\\Big( \\frac{\\dot{\\alpha} \\alpha_{\\theta}^2}{\\alpha^2} -\\frac{ \\alpha_{\\theta} \\dot{\\alpha_{\\theta}}}{ \\alpha} \\Big)+ \\frac{1}{\\sin^2(\\theta)}\\Big(\\frac{\\alpha_{\\phi}^2\\dot{\\alpha}}{\\alpha^2}-\\frac{\\alpha_{\\phi}\\dot{\\alpha_{\\phi}}}{\\alpha}\\Big)\\Bigg]\\partial_\\mathsf{r} \\\\\n&+\\frac{1}{\\mathsf{r} }\\Big(\\dot{\\alpha_{\\theta}}-\\frac{\\dot{\\alpha}\\alpha_{\\theta}}{ \\alpha}\\Big)\\partial_\\theta + \\frac{1}{\\mathsf{r} \\sin^2(\\theta)}\\Big(\\dot{\\alpha_{\\phi}}-\\frac{\\dot{\\alpha} \\alpha_{\\phi}}{\\alpha }\\Big)\\partial_\\phi\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\ng(X, V)&=\\mathsf{r} \\alpha\\\\\ng(X, A)&=\\mathsf{r} \\dot{\\alpha}\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align*}\ng(X, V) &= g(\\mathsf{r} \\frac{\\partial}{\\partial \\mathsf{r} }, \\frac{\\partial}{\\partial \\tau})\\\\\n&=\\mathsf{r} g(\\frac{\\partial}{\\partial \\mathsf{r} }, \\frac{\\partial}{\\partial 
\\tau})\\\\\n&=\\mathsf{r} (-\\dot{\\c}^0 +\\dot{\\c}^1 \\sin\\theta \\cos\\phi + \\dot{\\c}^2 \\sin\\theta \\sin\\phi +\\dot{\\c}^3 \\cos\\theta)\\\\\n&=\\mathsf{r} \\alpha\n\\end{align*}\nFor $g(X, A)$ the only relevant term in the metric is $\\alpha d\\mathsf{r} \\otimes d\\tau$, thus\n\\begin{align*}\ng(X, A) &= \\mathsf{r} \\frac{\\dot{\\alpha}}{\\alpha}g(\\partial_\\mathsf{r} , \\partial_\\tau)\\\\\n&= \\mathsf{r} \\dot{\\alpha}\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\n&\\mathrm{A}=-\\frac{q}{4 \\pi \\epsilon_0}\\Big(\\frac{c^2}{\\alpha \\mathsf{r} }d\\tau + \\frac{1}{\\mathsf{r} }d\\mathsf{r} + \\frac{\\alpha_{\\theta}}{\\alpha}d\\theta + \\frac{\\alpha_{\\phi}}{\\alpha}d\\phi\\Big)\\\\\n&\\mathrm{F}_{\\textup{R}}=\\frac{q}{4 \\pi \\epsilon_0}\\frac{(\\alpha\\dot{\\alpha_{\\theta}}-\\dot{\\alpha}\\alpha_{\\theta})d\\tau \\wedge d\\theta + (\\alpha\\dot{\\alpha_{\\phi}}-\\dot{\\alpha}\\alpha_{\\phi})d\\tau \\wedge d\\phi}{\\alpha^2}\\\\\n&\\mathrm{F}_{\\textup{C}}=-\\qe c^2\\Big( \\frac{1}{\\alpha \\mathsf{r} ^2}d\\tau \\wedge d\\mathsf{r} + \\frac{\\alpha_{\\theta}}{\\mathsf{r} \\alpha^2}d\\tau \\wedge d\\theta + \\frac{\\alpha_{\\phi}}{\\mathsf{r} \\alpha^2}d\\tau \\wedge d\\phi\\Big)\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n(53) follows directly from (20), (47) and (51).\n\\[\\mathrm{F}_{\\textup{R}} = \\frac{q}{4 \\pi \\epsilon_0} \\frac{g(X, V)\\tilde{X}\\wedge \\widetilde{A}-g(X, A)\\tilde{X} \\wedge \\widetilde{V}}{g(X, V)^3}\\]\nUsing the relations:\n\\begin{align}\n\\tilde{X} \\wedge \\widetilde{V}&=\\Big(\\mathsf{r} \\alpha d\\tau\\Big)\\wedge\\Big(-c^2d\\tau + \\alpha d\\mathsf{r} + \\mathsf{r} \\alpha_{\\theta} d\\theta + \\mathsf{r} \\alpha_{\\phi} d\\phi\\Big)\\nonumber\\\\\n&= \\mathsf{r} \\alpha^2 d\\tau \\wedge d\\mathsf{r} + \\mathsf{r} ^2 \\alpha \\alpha_{\\theta} d\\tau \\wedge d\\theta + \\mathsf{r} ^2 \\alpha \\alpha_{\\phi} d\\tau \\wedge d\\phi\\\\\n\\tilde{X} \\wedge \\widetilde{A}&=\\Big(\\mathsf{r} \\alpha 
d\\tau\\Big)\\wedge\\Big(g(A, V) d\\tau + \\dot{\\alpha} d\\mathsf{r} + \\mathsf{r} \\dot{\\alpha_{\\theta}}d\\theta +\\mathsf{r} \\dot{\\alpha_{\\phi}} d\\phi\\Big)\\nonumber\\\\\n&=\\mathsf{r} \\alpha \\dot{\\alpha} d\\tau \\wedge d\\mathsf{r} + \\mathsf{r} ^2 \\alpha \\dot{\\alpha_{\\theta}} d\\tau \\wedge d\\theta + \\mathsf{r} ^2 \\alpha \\dot{\\alpha_{\\phi}} d\\tau \\wedge d\\phi\n\\end{align}\nalong with (51) and (52) gives\n\\begin{align*}\n\\mathrm{F}_{\\textup{R}} &=\\frac{q}{4 \\pi \\epsilon_0}\\frac{\\alpha \\mathsf{r} (\\mathsf{r} \\alpha \\dot{\\alpha} d\\tau \\wedge d\\mathsf{r} + \\mathsf{r} ^2 \\alpha \\dot{\\alpha_{\\theta}} d\\tau \\wedge d\\theta + \\mathsf{r} ^2 \\alpha \\dot{\\alpha_{\\phi}} d\\tau \\wedge d\\phi)}{(\\alpha \\mathsf{r} )^3}\\\\\n&-\\frac{q}{4 \\pi \\epsilon_0}\\frac{\\mathsf{r} \\dot{\\alpha}(\\mathsf{r} \\alpha^2 d\\tau \\wedge d\\mathsf{r} + \\mathsf{r} ^2 \\alpha \\alpha_{\\theta} d\\tau \\wedge d\\theta + \\mathsf{r} ^2 \\alpha \\alpha_{\\phi} d\\tau \\wedge d\\phi)}{(\\alpha \\mathsf{r} )^3}\\\\\n&=\\frac{q}{4 \\pi \\epsilon_0}\\frac{1}{\\alpha^2}\\Big((\\alpha\\dot{\\alpha_{\\theta}}-\\dot{\\alpha}\\alpha_{\\theta})d\\tau \\wedge d\\theta + (\\alpha\\dot{\\alpha_{\\phi}}-\\dot{\\alpha}\\alpha_{\\phi})d\\tau \\wedge d\\phi\\Big)\n\\end{align*}\n\\begin{align*}\n\\mathrm{F}_{\\textup{C}} &=-\\frac{q}{4 \\pi \\epsilon_0}\\frac{c^2 \\tilde{X}\\wedge \\widetilde{V}}{g(X, V)^3} \\\\\n&=-\\frac{q}{4 \\pi \\epsilon_0}\\frac{c^2 (\\mathsf{r} \\alpha^2 d\\tau \\wedge d\\mathsf{r} + \\mathsf{r} ^2 \\alpha \\alpha_{\\theta} d\\tau \\wedge d\\theta + \\mathsf{r} ^2 \\alpha \\alpha_{\\phi} d\\tau \\wedge d\\phi)}{(\\alpha \\mathsf{r} )^3}\\\\\n&=-\\qe c^2\\Big( \\frac{1}{\\alpha \\mathsf{r} ^2}d\\tau \\wedge d\\mathsf{r} + \\frac{\\alpha_{\\theta}}{\\mathsf{r} \\alpha^2}d\\tau \\wedge d\\theta + \\frac{\\alpha_{\\phi}}{\\mathsf{r} \\alpha^2}d\\tau \\wedge d\\phi\\Big)\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\n\\begin{align}\n&\\star 
\\mathrm{F}_{\\textup{R}}=\\frac{q}{4 \\pi \\epsilon_0}\\Big(\\frac{\\alpha_{\\phi} \\dot{\\alpha}-\\alpha \\dot{\\alpha_{\\phi}}}{\\alpha^2 \\sin(\\theta)}d\\tau \\wedge d\\theta - \\frac{\\sin(\\theta)(\\alpha_{\\theta} \\dot{\\alpha} - \\alpha \\dot{\\alpha_{\\theta}})}{\\alpha^2}d\\tau \\wedge d\\phi\\Big)\\\\\n&\\star \\mathrm{F}_{\\textup{C}}=\\frac{q}{4 \\pi \\epsilon_0}\\frac{c^2 \\sin(\\theta)}{\\alpha^2}d\\theta \\wedge d\\phi\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align}\n\\star (\\tilde{X} \\wedge \\widetilde{V}) &=i_V i_{X} \\star 1\\nonumber\\\\\n&=\\mathsf{r} i_{\\partial_\\tau} i_{\\partial_\\mathsf{r} } (\\alpha \\mathsf{r} ^2 \\sin \\theta d\\tau \\wedge d\\mathsf{r} \\wedge d\\theta \\wedge d\\phi )\\nonumber\\\\\n&=-\\mathsf{r} ^3 \\alpha \\sin(\\theta)d\\theta \\wedge d\\phi\\\\\n\\star (\\tilde{X} \\wedge \\widetilde{A}) &=i_A i_{X} \\star 1\\nonumber\\\\\n&=\\mathsf{r} ^2 \\sin(\\theta) (\\alpha \\dot{\\alpha_{\\theta}} - \\alpha_{\\theta} \\dot{\\alpha})d\\tau \\wedge d\\phi - \\frac{\\mathsf{r} ^2 (\\alpha \\dot{\\alpha_{\\phi}} - \\alpha_{\\phi} \\dot{\\alpha})}{\\sin(\\theta)}d\\tau \\wedge d\\theta\\nonumber\\\\\n&-\\mathsf{r} ^3 \\dot{\\alpha} \\sin(\\theta) d\\theta \\wedge d\\phi\n\\end{align}\ntherefore\n\\begin{align*}\n\\star\\mathrm{F}_{\\textup{C}} &= \\frac{q}{4 \\pi \\epsilon_0} \\frac{-c^2\\star(\\tilde{X}\\wedge \\widetilde{V})}{g(X, V)^3}\\\\\n&=\\frac{q}{4 \\pi \\epsilon_0}\\frac{c^2 \\sin{\\theta}}{\\alpha^2}d\\theta\\wedge d\\phi \\\\\n\\star\\mathrm{F}_{\\textup{R}}&=\\frac{q}{4 \\pi \\epsilon_0} \\frac{g(X, V)\\star(\\tilde{X}\\wedge \\widetilde{A})-g(X, A)\\star(\\tilde{X} \\wedge \\widetilde{V})}{g(X, V)^3}\\\\\n&=\\frac{q}{4 \\pi \\epsilon_0}\\Big(\\frac{\\alpha_{\\phi} \\dot{\\alpha}-\\alpha \\dot{\\alpha_{\\phi}}}{\\alpha^2 \\sin(\\theta)}d\\tau \\wedge 
d\\phi\\Big)\n\\end{align*}\n\\end{proofs}\n\\begin{lemma}\nThe coulombic and radiative terms of the 1-forms $\\widetilde{\\elw}$ and $\\widetilde{\\blw}$ take the form\n\\begin{align}\n\\widetilde{\\elw}_{\\textup{C}}=&\\frac{q}{4 \\pi \\epsilon_0} c^2 \\Big(\\frac{\\alpha_{\\phi} }{\\mathsf{r} \\alpha^3}d\\phi+\\frac{1}{\\alpha^2 \\mathsf{r} ^2}d\\mathsf{r} +\\frac{\\alpha_{\\theta} }{\\mathsf{r} \\alpha^3}d\\theta+\\frac{\\dot{\\c}^0+\\alpha }{\\alpha^2 \\mathsf{r} ^2}d\\tau\\notag\\\\\n&+\\frac{\\alpha_{\\theta}^2 }{\\mathsf{r} ^2 \\alpha^3}d\\tau+\\frac{\\alpha_{\\phi}^2 }{\\mathsf{r} ^2 \\alpha^3 \\sin^2(\\theta)}d\\tau\\Big)\\notag\\\\\n\\widetilde{\\elw}_{\\textup{R}}=& \\frac{q}{4 \\pi \\epsilon_0}\\Bigg(\\frac{\\dot{\\alpha}\\alpha_{\\phi}-\\alpha \\dot{\\alpha_{\\phi}}}{\\alpha^3}d\\phi- \\frac{\\alpha \\dot{\\alpha_{\\theta}}-\\dot{\\alpha} \\alpha_{\\theta} }{\\alpha^3}d\\theta\\notag\\\\\n&- \\Big(\\frac{\\alpha_{\\theta} (\\alpha \\dot{\\alpha_{\\theta}}-\\dot{\\alpha} \\alpha_{\\theta})}{\\mathsf{r} \\alpha^3}-\\frac{\\alpha_{\\phi} (\\dot{\\alpha}\\alpha_{\\phi}-\\alpha\\dot{\\alpha_{\\phi}})}{\\mathsf{r} \\alpha^3 \\sin^2(\\theta)}\\Big) d\\tau\\Bigg)\n\\label{e_newcoords}\n\\end{align}\nand\n\\begin{align}\n\\widetilde{\\blw}_{\\textup{C}}&= \\frac{q}{4 \\pi \\epsilon_0} c \\Big(\\frac{\\alpha_{\\theta} \\sin(\\theta)}{\\mathsf{r} \\alpha^3}d\\phi+\\frac{\\dot{\\c}^1 \\sin(\\phi)-\\dot{\\c}^2 \\cos(\\phi)}{\\mathsf{r} \\alpha^3}d\\theta\\Big)\\notag\\\\\n&=\\frac{q}{4 \\pi \\epsilon_0} c \\sin(\\theta)\\Big(\\frac{\\alpha_{\\theta}}{\\mathsf{r} \\alpha^3}d\\phi-\\frac{\\alpha_{\\phi}}{\\mathsf{r} \\alpha^3 \\sin^2(\\theta)}d\\theta\\Big)\\notag\\\\\n\\widetilde{\\blw}_{\\textup{R}}=&\\frac{1}{c}\\frac{q}{4 \\pi \\epsilon_0}\\sin(\\theta)\\Bigg(\\frac{\\dot{\\alpha} \\alpha_{\\theta}-\\alpha \\dot{\\alpha_{\\theta}} }{\\alpha^3}d\\phi+\\frac{\\alpha \\dot{\\alpha_{\\phi}}-\\dot{\\alpha} \\alpha_{\\phi}}{\\alpha^3 
\\sin^2(\\theta)}d\\theta\\notag\\\\\n &+\\Big(\\frac{\\alpha_{\\theta} (\\alpha \\dot{\\alpha_{\\phi}}-\\dot{\\alpha} \\alpha_{\\phi}) }{\\mathsf{r} \\alpha^3\\sin^2(\\theta)}\n +\\frac{\\alpha_{\\phi}(\\dot{\\alpha} \\alpha_{\\theta}-\\alpha \\dot{\\alpha_{\\theta}})}{\\mathsf{r} \\alpha^3 \\sin^2(\\theta)}\\Big)d\\tau\\Bigg)\\notag\\\\\n &=\\frac{1}{c}\\frac{q}{4 \\pi \\epsilon_0}\\sin(\\theta)\\Bigg(\\frac{\\dot{\\alpha} \\alpha_{\\theta}-\\alpha \\dot{\\alpha_{\\theta}} }{\\alpha^3}d\\phi+\\frac{\\alpha \\dot{\\alpha_{\\phi}}-\\dot{\\alpha} \\alpha_{\\phi}}{\\alpha^3 \\sin^2(\\theta)}d\\theta\\notag\\\\\n &+\\alpha \\frac{\\alpha_{\\theta} \\dot{\\alpha_{\\phi}}- \\alpha_{\\phi} \\dot{\\alpha_{\\theta}}}{\\mathsf{r} \\alpha^3\\sin^2(\\theta)}\n d\\tau\\Bigg)\n\\end{align}\n\\end{lemma}\n\\begin{proofs}\n\\begin{align}\n\\frac{\\partial}{\\partial y^0}&=\\frac{\\partial \\tau}{\\partial y^0}\\frac{\\partial}{\\partial \\tau}+\\frac{\\partial \\mathsf{r} }{\\partial y^0}\\frac{\\partial}{\\partial \\mathsf{r} }+\\frac{\\partial\\theta}{\\partial y^0}\\frac{\\partial}{\\partial \\theta}+\\frac{\\partial \\phi}{\\partial y^0}\\frac{\\partial}{\\partial \\phi}\\notag\\\\\n&=\\frac{c}{\\alpha} \\Big(-\\frac{\\partial}{\\partial \\tau}+(\\dot{\\c}^0 +\\alpha)\\frac{\\partial}{\\partial \\mathsf{r}}+\\frac{\\alpha_{\\theta}}{\\mathsf{r}}\\frac{\\partial}{\\partial \\theta}+\\frac{\\alpha_{\\phi}}{\\mathsf{r}\\sin^2(\\theta)}\\frac{\\partial}{\\partial \\phi}\\Big)\n\\end{align}\nThe results follow from definitions (\\ref{eb_def}) and (\\ref{bvecdef}).\n\\end{proofs}\n\n\\chapter{MAPLE Input for Part I}\n\\label{maple_ptwo}\nIn this thesis we use the mathematical software MAPLE to implement the computations which support the results presented in parts I and II. In principle there are other programming tools which could have been used, such as MATHEMATICA and MATLAB, each of which has its own advantages and disadvantages. 
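Whichever tool is used, spot checks of this kind of computation are cheap. The following short Python sketch (illustrative only; the numerical values chosen for $c$, $\alpha$, $\alpha_\theta$, $\alpha_\phi$, $\mathsf{r}$ and $\theta$ are arbitrary assumptions) multiplies the matrices $G'$ and $G'^{-1}$ quoted in appendix \ref{app_coords} and confirms that their product is the identity:

```python
import math

# arbitrary illustrative values; any choice with alpha != 0, r != 0 and
# sin(theta) != 0 works, since the inverse is exact
c, al, al_th, al_ph, r, th = 1.0, -1.3, 0.4, -0.2, 2.5, 0.9
s2 = math.sin(th) ** 2

# G'_{ab} = g(partial_{z^a}, partial_{z^b}) with z = (tau, r, theta, phi)
G = [[-c * c,    al,  r * al_th, r * al_ph],
     [al,        0.0, 0.0,       0.0],
     [r * al_th, 0.0, r * r,     0.0],
     [r * al_ph, 0.0, 0.0,       r * r * s2]]

# the quoted inverse matrix
S = (c * c * s2 + al_th ** 2 * s2 + al_ph ** 2) / (s2 * al * al)
Ginv = [[0.0,      1.0 / al,              0.0,              0.0],
        [1.0 / al, S,                     -al_th / (al * r), -al_ph / (al * r * s2)],
        [0.0,      -al_th / (al * r),     1.0 / (r * r),    0.0],
        [0.0,      -al_ph / (al * r * s2), 0.0,             1.0 / (r * r * s2)]]

# multiply and measure the deviation from the 4x4 identity
P = [[sum(G[i][k] * Ginv[k][j] for k in range(4)) for j in range(4)]
     for i in range(4)]
dev = max(abs(P[i][j] - (1.0 if i == j else 0.0))
          for i in range(4) for j in range(4))
print(dev)
```

The deviation is zero up to floating-point rounding, which confirms the stated inverse symbolically verified by the MAPLE scripts below.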
In general, MATHEMATICA and MAPLE are more suited to symbolic computation, whereas MATLAB is more suited to numerical computation.\n\n In part I of the thesis we require heavy use of symbolic computation. In particular we utilize the tools of differential geometry to manipulate tensors and differential forms. These tools were readily available to us in the MANIFOLDS package \\cite{Manifolds} written by Robin Tucker and Charles Wang for use with MAPLE. There are similar packages available for use with other software, such as RICCI for use with MATHEMATICA and Tensor Toolbox for use with MATLAB; however, the availability of the MANIFOLDS package and its supporting documentation was an important factor in the decision to use MAPLE instead of other possible programming tools. In addition, the procedural language of MAPLE was appealing to the author based on his experience with the $\\textup{C}^{++}$ and FORTRAN programming languages.\n \n The calculations carried out for part II of the thesis are more numerical in nature; however, rather than adopting a programming tool better suited to numerical calculations, we decided it was more economical to build on the code already written in MAPLE.\n \n\nThe following script was written in MAPLE 15 and can be run with the packages \\emph{Plots} and \\emph{LinearAlgebra}, together with the additional package \\emph{Manifolds} \\cite{Manifolds}, which provides tools for differential geometry.\n\\begin{linenumbers*}\n{\\color{red}\n\\begin{singlespacing}\n\\begin{flushleft}\n\\texttt{\\# set up coordinate system}\\\\\n\\texttt{Manifoldsetup(M,[tau,R,theta,phi],[e,E,0],}\\\\\n\\texttt{map(x->simplify(x,symbolic),}\\\\\n\\texttt{[e[0]=d(tau),}\\\\\n\\texttt{e[1]=d(R),}\\\\\n\\texttt{e[2]=d(theta),}\\\\\n\\texttt{e[3]=d(phi)])):}\n\\end{flushleft}\n\\end{singlespacing}}\n\\begin{mapcode}\nConstants([epsilon, q, ep, b0, b1, b2, b3, a3, R0]);\\\\\nManfdomain(M, [a, ad, ath, aph, athd, aphd, adphph, adthth], [tau, theta, 
phi]):\\\\\nManfdomain(M,[C0,C1,C2,C3,Cd0,Cd1,Cd2,Cd3,Cdd0,Cdd1,Cdd2,Cdd3],[tau]) :\\\\[5pt]\n g := (-1+2\\textasteriskcentered R\\textasteriskcentered ad\/a)\\textasteriskcentered d(tau) \\&X d(tau)\\\\\n- (d(tau) \\&X d(R)+ d(R) \\&X d(tau))\\\\\n+ (R\\ind2\/a\\ind2)\\textasteriskcentered(d(theta) \\&X d(theta))\\\\\n+ (R\\ind2\/a\\ind2)\\textasteriskcentered sin(theta)\\ind2 \\textasteriskcentered d(phi) \\&X d(phi) :\\\\[5pt]\nMancovmetric(M,g):\\\\\nManvol(M) := -(R\\ind2\/a\\ind2)\\textasteriskcentered sin(theta)\\textasteriskcentered`\\&\\textasciicircum`(e[0], e[1], e[2], e[3]) :\\\\[3pt]\n Basis1 := \\{d(tau),d(R),d(theta),d(phi)\\} :\\\\\nBasis2 := \\{d(tau)\\&\\textasciicircum d(R), d(tau)\\&\\textasciicircum d(theta), d(tau)\\&\\textasciicircum d(phi),\\\\\n d(R)\\&\\textasciicircum d(theta), d(R)\\&\\textasciicircum d(phi), d(theta)\\&\\textasciicircum d(phi)\\} :\\\\\nBasis3 := \\{d(tau)\\&\\textasciicircum d(R)\\&\\textasciicircum d(theta), d(tau)\\&\\textasciicircum d(R)\\&\\textasciicircum d(phi),\\\\\n d(tau)\\&\\textasciicircum d(theta)\\&\\textasciicircum d(phi), d(R)\\&\\textasciicircum d(theta)\\&\\textasciicircum d(phi)\\} :\\\\\nBasis4 := \\{e(0) \\&\\textasciicircum e(1) \\&\\textasciicircum e(2), e(1) \\&\\textasciicircum e(2) \\&\\textasciicircum e(3),e(2) \\&\\textasciicircum e(3) \\&\\textasciicircum e(0),e(3) \\&\\textasciicircum e(0) \\&\\textasciicircum e(1)\\}: \\\\[3pt]\na\\_sublist:=\\{diff(a,tau)=ad,diff(a,theta)=ath,diff(a,phi)=aph,\\\\\n diff(ath,tau)=athd,diff(aph,tau)=aphd,\\\\\n diff(ath,phi)=athph,diff(aph,theta)=aphth, diff(ad, theta)=athd, diff(ad, phi)=aphd, diff(aph, phi)=aphph,\\\\\n diff(ath, theta)=athth, diff(adph, phi)=adphph, diff(adth, theta)=adthth\\}:\\\\[5pt]\nCd\\_sublist := \\{diff(C0,tau)=Cd0,diff(C1,tau)=Cd1,\\\\\n\t diff(C2,tau)=Cd2,diff(C3,tau)=Cd3\\} :\\\\\nCd\\_inv\\_sublist := \\{Cd0=diff(C0,tau),Cd1=diff(C1,tau),\\\\\n\t \t Cd2=diff(C2,tau),Cd3=diff(C3,tau)\\} :\\\\\nCdd\\_sublist := 
\\{diff(C0,tau,tau)=Cdd0,diff(C1,tau,tau)=Cdd1,\\\\\n\t diff(C2,tau,tau)=Cdd2,diff(C3,tau,tau)=Cdd3, diff(Cd0,tau)=Cdd0,diff(Cd1,tau)=Cdd1,\\\\\n diff(Cd2,tau)=Cdd2,diff(Cd3,tau)=Cdd3\\} :\\\\\nCddd\\_sublist := \\{ diff(Cdd0,tau)=Cddd0,diff(Cdd1,tau)=Cddd1,\\\\\ndiff(Cdd2,tau)=Cddd2,diff(Cdd3,tau)=Cddd3\\} :\\\\[5pt]\naa := -Cd0+Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered sin(theta)+Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered sin(theta)+Cd3\\textasteriskcentered cos(theta) :\\\\\naath := diff(aa,theta) :\\\\\naaph := diff(aa,phi) :\\\\\naad := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aa),tau)) :\\\\\naathd := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aath),tau)) :\\\\\naaphd := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aaph),tau)) :\\\\\naathth:=subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aath),theta)) :\\\\\naaphph:=subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aaph),phi)) :\\\\\naadphph:=subs(Cdd\\_sublist,diff(diff(subs(Cd\\_inv\\_sublist,aad),phi), phi)) :\\\\\naadthth:=subs(Cdd\\_sublist,diff(diff(subs(Cd\\_inv\\_sublist,aad),theta), theta)) :\\\\[5pt]\naa\\_sublist:=\\{a=aa, ath=aath, ad=aad, aph=aaph, athd=aathd, aphd=aaphd, athth=aathth,\\\\\n aphph=aaphph, adphph=aadphph, adthth=aadthth\\}:\\\\[5pt]\nx0 := C0-(R\/a):\\\\\nx1 := C1-(R\/a)\\textasteriskcentered sin(theta)\\textasteriskcentered cos(phi):\\\\\nx2 := C2-(R\/a)\\textasteriskcentered sin(theta)\\textasteriskcentered sin(phi):\\\\\nx3 := C3-(R\/a)\\textasteriskcentered cos(theta):\\\\[5pt]\n J := Matrix(4, 4):\\\\\nfor i from 0 to 3 do J[i+1, 1] := diff(x || i, tau):\\\\\nJ[i+1, 2] := diff(x || i, R):\\\\\nJ[i+1, 3] := diff(x || i, theta):\\\\\nJ[i+1, 4] := diff(x || i, phi) end do:\\\\\nsubs(a\\_sublist, J):\\\\[5pt]\nDetJ := simplify(Determinant(J)):\\\\\ndetJ := (R\\ind2\/a\\ind2)\\textasteriskcentered sin(theta) :\\\\[5pt]\n AdJ := simplify(eval(subs( a\\_sublist, Adjoint(J)))):\\\\[5pt]\ndf\\_tau\\_0 :=AdJ[1, 1]\/detJ :\\\\\ndf\\_tau\\_1 := 
AdJ[1, 2]\/detJ :\\\\\ndf\\_tau\\_2 := AdJ[1, 3]\/detJ :\\\\\ndf\\_tau\\_3 := AdJ[1, 4]\/detJ :\\\\\ndf\\_R\\_0 := AdJ[2, 1]\/detJ :\\\\\ndf\\_R\\_1 := AdJ[2, 2]\/detJ :\\\\\ndf\\_R\\_2 := AdJ[2, 3]\/detJ :\\\\\ndf\\_R\\_3 := AdJ[2, 4]\/detJ :\\\\\ndf\\_theta\\_0 := AdJ[3, 1]\/detJ :\\\\\ndf\\_theta\\_1 := AdJ[3, 2]\/detJ :\\\\\ndf\\_theta\\_2 := AdJ[3, 3]\/detJ :\\\\\ndf\\_theta\\_3 := AdJ[3, 4]\/detJ :\\\\\ndf\\_phi\\_0 := AdJ[4, 1]\/detJ :\\\\\ndf\\_phi\\_1 := AdJ[4, 2]\/detJ :\\\\\ndf\\_phi\\_2 := AdJ[4, 3]\/detJ :\\\\\ndf\\_phi\\_3 := AdJ[4, 4]\/detJ :\\\\[3pt]\nPD\\_0:=df\\_tau\\_0\\textasteriskcentered PD(tau)+df\\_R\\_0\\textasteriskcentered PD(R) +df\\_theta\\_0\\textasteriskcentered PD(theta) +df\\_phi\\_0\\textasteriskcentered PD(phi):\\\\\nPD\\_1:=df\\_tau\\_1\\textasteriskcentered PD(tau)+df\\_R\\_1\\textasteriskcentered PD(R) +df\\_theta\\_1\\textasteriskcentered PD(theta) +df\\_phi\\_1\\textasteriskcentered PD(phi):\\\\\nPD\\_2:=df\\_tau\\_2\\textasteriskcentered PD(tau)+df\\_R\\_2\\textasteriskcentered PD(R) +df\\_theta\\_2\\textasteriskcentered PD(theta) +df\\_phi\\_2\\textasteriskcentered PD(phi):\\\\\nPD\\_3:=df\\_tau\\_3\\textasteriskcentered PD(tau)+df\\_R\\_3\\textasteriskcentered PD(R) +df\\_theta\\_3\\textasteriskcentered PD(theta) +df\\_phi\\_3\\textasteriskcentered PD(phi):\\\\[5pt]\n VX := R\\textasteriskcentered PD(R) ;\\\\\n dualX := F2C(\\&~(VX)) ;\\\\\n VV := PD(tau)+VX\\textasteriskcentered (ad\/a) ;\\\\\n dualV := collect(F2C(\\&~(VV)), Basis1);\\\\\ndualA :=collect(expand(R\\textasteriskcentered (ad\\ind2\/a\\ind2)\\textasteriskcentered d(tau) + (-ad\/a)\\textasteriskcentered d(R) +R\\textasteriskcentered ((ad\\textasteriskcentered ath)\/a\\ind2-athd\/a)\\textasteriskcentered d(theta)+ R\\textasteriskcentered ((ad\\textasteriskcentered aph)\/a\\ind2-aphd\/a)\\textasteriskcentered d(phi)), Basis1);\\\\\nVA :=collect(expand(F2C( \\&~(dualA))), Basis6) ;\\\\[5pt]\nALW := collect(expand(F2C(dualV\/(-R))), Basis1) ;\\\\\nFLW := 
collect(expand(subs(a\\_sublist, d(ALW))), Basis2) ;\\\\\nstarFLW:=collect(F2C(\\&i (\\&star(FLW))), Basis2);\\\\[5pt]\n stress:=proc(kill);\\\\\n collect(subs(Cd\\_sublist,F2C(((ep\/2)\\textasteriskcentered ((PD\\_||kill \\&i FLW) \\&\\textasciicircum(\\&star FLW)-(PD\\_||kill \\&i(\\&star FLW))\\&\\textasciicircum FLW)))), Basis3);\\\\\nend proc:\\\\[5pt]\nstress\\_0:=stress(0):\\\\\nstress\\_1:=stress(1):\\\\\nstress\\_2:=stress(2):\\\\\nstress\\_3:=stress(3):\\\\[5pt]\n expansion\\_cdot\\_sublist:=\\{epsilon=1, Cd0=1+(b0\\textasteriskcentered tau\\ind2\/2)+O(tau\\ind3), Cd1=(b1\\textasteriskcentered tau\\ind2\/2)+O(tau\\ind3), Cd2=(b2\\textasteriskcentered tau\\ind2\/2)+O(tau\\ind3), \\\\ Cd3=a3\\textasteriskcentered tau+(b3\\textasteriskcentered tau\\ind2\/2)+O(tau\\ind3), Cdd0=b0\\textasteriskcentered tau+O(tau\\ind2), Cdd1=b1\\textasteriskcentered tau+O(tau\\ind2), Cdd2=b2\\textasteriskcentered tau+O(tau\\ind2), Cdd3=a3+b3\\textasteriskcentered tau+O(tau\\ind2)\\};\\\\[3pt]\n S\\_k\\_cdot:=proc(sublist, kill)\\\\\n local spl;\\\\\nspl:=stress\\_||kill;\\\\\nsubs(sublist, subs(aa\\_sublist, collect(expand(subs(Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered sin(theta)\\\\\n+Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered sin(theta)+Cd3\\textasteriskcentered cos(theta)=a+Cd0,\\\\\n-Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered sin(theta)-Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered sin(theta)\\\\\n-Cd3\\textasteriskcentered cos(theta)=-a-Cd0,-Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered cos(theta)\\\\\n-Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered cos(theta)+Cd3\\textasteriskcentered sin(theta)=-ath,Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered cos(theta)\\\\\n+Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered cos(theta)-Cd3\\textasteriskcentered sin(theta)=ath,\\\\\n-Cd1\\textasteriskcentered sin(phi)+Cd2\\textasteriskcentered cos(phi)=aph\/sin(theta),Cd1\\textasteriskcentered 
sin(phi)\\\\\n-Cd2\\textasteriskcentered cos(phi)=-aph\/sin(theta), Cd\\_sublist,spl)), Basis3)));\\\\\nend proc:\\\\[5pt]\n intgrd\\_0:= series(coeff(S\\_k\\_cdot(expansion\\_cdot\\_sublist, 0), `\\&\\textasciicircum`(d(tau), d(theta), d(phi))), tau=0):\\\\\nintgrd\\_1:= series(coeff(S\\_k\\_cdot(expansion\\_cdot\\_sublist, 1), `\\&\\textasciicircum`(d(tau), d(theta), d(phi))), tau=0):\\\\\nintgrd\\_2:= series(coeff(S\\_k\\_cdot(expansion\\_cdot\\_sublist, 2), `\\&\\textasciicircum`(d(tau), d(theta), d(phi))), tau=0):\\\\\nintgrd\\_3:= series(coeff(S\\_k\\_cdot(expansion\\_cdot\\_sublist, 3), `\\&\\textasciicircum`(d(tau), d(theta), d(phi))), tau=0):\\\\[5pt]\n get\\_integrands:= proc();\\\\\n print(t, intgrd\\_0);\\\\\n print(x, intgrd\\_1);\\\\\n print(y, intgrd\\_2);\\\\\n print(z, intgrd\\_3);\\\\\n end proc:\\\\[5pt]\n get\\_integrals:= proc();\n print(t, factor(simplify(int(int(int(intgrd\\_0, phi=0..2\\textasteriskcentered Pi), theta=0..Pi), tau))));\\\\\n print(x, factor(simplify(int(int(int(intgrd\\_1, phi=0..2\\textasteriskcentered Pi), theta=0..Pi), tau))));\\\\\n print(y, factor(simplify(int(int(int(intgrd\\_2, phi=0..2\\textasteriskcentered Pi), theta=0..Pi), tau))));\\\\\n print(z, factor(simplify(int(int(int(intgrd\\_3, phi=0..2\\textasteriskcentered Pi), theta=0..Pi), tau))));\\\\\n end proc:\\\\\n get\\_integrals();\n\\end{mapcode}\n\\newpage\n\\end{linenumbers*}\n\\section*{Comments}\n{\\footnotesize \\color{red} \\tt 1-7} Set up the Newman-Unti coordinate system $(\\tau, R, \\theta, \\phi)=(\\mathtt{tau, R, theta, phi})$\\\\\n{\\footnotesize \\color{red} \\tt 8-12} The global variables are defined. For $i=0..3$ we use notation $C^i=\\mathtt{Ci}$, $\\dot{\\c}^i=\\mathtt{Cdi}$, $\\ddot{\\c}^i=\\mathtt{Cddi}$. Also $\\alpha=\\mathtt{a}$, $\\dot{\\alpha}=\\mathtt{ad}$, $\\alpha_{\\theta}=\\mathtt{ath}$,$\\alpha_{\\phi}=\\mathtt{aph}$, $\\dot{\\alpha_{\\phi}}=\\mathtt{aphd}$ etc. 
The constants $a, b^i$ defining the comoving frame are given by $\\mathtt{a}$ and $\\mathtt{bi}$ respectively. Also $\\mathtt{q}$ and $\\mathtt{ep}$ are constants.\\\\\n{\\footnotesize \\color{red} \\tt 13-17} The metric (\\ref{g_nu_def}) is input. This associates the manifold $\\mathtt{M}$ with Minkowski space $\\mathcal{M}$. The function \\texttt{Mancovmetric(M, g)} identifies \\texttt{g} as the metric on \\texttt{M}. The \\emph{Manifolds} package will automatically give the inverse metric and the vector and covector bases on $\\textup{T}\\mathcal{M}$ and $\\textup{T}^*\\mathcal{M}$. Note that there is no factor of $c^2$ in the metric because \nwe use dimensions such that $g(\\dot{\\c}, \\dot{\\c})=-1$. \\\\\n{\\footnotesize \\color{red} \\tt 18} \\texttt{Manvol(M)} defines the volume $4$-form. Notice the negative orientation.\\\\\n{\\footnotesize \\color{red} \\tt 19-25} Define coordinate bases to simplify the output.\\\\\n{\\footnotesize \\color{red} \\tt 26-31} These lines define the relationships between $\\alpha$ and its derivatives.\\\\\n{\\footnotesize \\color{red} \\tt 32-41} These lines define the relationships between the components of $C$, $\\dot{\\c}$, $\\ddot{\\c}$ and $\\dddot{\\c}$.\\\\\n{\\footnotesize \\color{red} \\tt 42-58} Here we define the parameters \\texttt{aa, aad, aath, aaph, aaphd...} which assign the coordinate representations to the variables \\texttt{a, ad, ath, aph, aphd...}\\\\\n{\\footnotesize \\color{red} \\tt 59-62} The coordinate transformation from Newman-Unti coordinates (\\texttt{tau, R, theta, phi}) to Minkowski coordinates (\\texttt{x0, x1, x2, x3}).\\\\\n{\\footnotesize \\color{red} \\tt 63-70} We determine the Jacobian \\texttt{J} and its determinant. 
\\\\\n{\\footnotesize \\color{red} \\tt 71-87} We calculate the partial derivatives of the Newman-Unti coordinates with respect to the Minkowski coordinates.\\\\\n{\\footnotesize \\color{red} \\tt 88-95} These lines define the Minkowski basis vectors \\texttt{PD\\_0}=$\\frac{\\partial}{\\partial x0}$, \\texttt{PD\\_1}=$\\frac{\\partial}{\\partial x1}$, \\texttt{PD\\_2}=$\\frac{\\partial}{\\partial x2}$, \\texttt{PD\\_3}=$\\frac{\\partial}{\\partial x3}$ in terms of Newman-Unti coordinates.\\\\\n{\\footnotesize \\color{red} \\tt 96-103} Defines the vectors $X=\\mathtt{VX}$, $V=\\mathtt{VV}$, and $A=\\mathtt{VA}$ and their duals using (\\ref{X_NU}), (\\ref{V_NU}) and (\\ref{nu_dual_vecs}). \\\\\n {\\footnotesize \\color{red} \\tt 104-106} The Li\\'{e}nard-Wiechert potential $\\mathrm{A}=\\mathtt{ALW}$ is defined using (\\ref{LW_def}). The $2$-form field $\\mathrm{F}=\\mathtt{FLW}$ is calculated by taking the exterior derivative, which is included in the \\emph{Manifolds} package. The Hodge dual is also used to define $\\star \\mathrm{F}=\\mathtt{starFLW}$.\\\\\n {\\footnotesize \\color{red} \\tt 107-114} These lines define the four stress 3-forms $\\mathrm{S}_K=$\\texttt{stress\\_i} for $\\mathtt{i=0, 1, 2, 3}$. \\\\\n {\\footnotesize \\color{red} \\tt 115-119} Defines the expansion around the momentarily comoving frame.\\\\\n {\\footnotesize \\color{red} \\tt 120-132} A procedure for substituting the expansion into any of the stress 3-forms and simplifying the resulting expression.\\\\\n{\\footnotesize \\color{red} \\tt 133-145} These lines provide the procedure \\texttt{get\\_integrands} for obtaining the integrands.\\\\\n{\\footnotesize \\color{red} \\tt 147-156} These lines provide the procedure \\texttt{get\\_integrals} for carrying out the integration. \\\\\n{\\footnotesize \\color{red} \\tt 157} This calls the procedure \\texttt{get\\_integrals}. 
The result is stated in (\\ref{int_Sk}).\n\\chapter{MAPLE Input for Part II}\n\\label{mapletwo}\n For the numerical investigation in Part II we use MAPLE to perform many different calculations, integrals and plots for a wide range of input parameters. As a result I have many different files with variations on the code. In hindsight I would have liked to keep all the code in one file, beautifully annotated and ready to reproduce any calculation. However, coding in MAPLE is a skill I have learnt throughout my PhD, and the code I have written has not always been the simplest or the most elegant. In this section I present some of the most important code which has been used to obtain the results stated in chapter \\ref{bendingbeams}. Once again we use the packages \\emph{Plots}, \\emph{LinearAlgebra} and \\emph{Manifolds} \\cite{Manifolds}.\n\\section{Part 1 - Setup}\n\\resetlinenumber[1]\n\\begin{linenumbers}\n{\\color{red}\n\\begin{singlespacing}\n\\begin{flushleft}\n\\texttt{\\# set up coordinate system}\\\\\n\\texttt{Manifoldsetup(M,[tau,r,theta,phi],[e,E,0],}\\\\\n\\texttt{map(x->simplify(x,symbolic),}\\\\\n\\texttt{[e[0]=d(tau),}\\\\\n\\texttt{e[1]=d(r),}\\\\\n\\texttt{e[2]=d(theta),}\\\\\n\\texttt{e[3]=d(phi)])):}\n\\end{flushleft}\n\\end{singlespacing}}\n\\begin{mapcode}\nConstants(epsilon, Lp, Rp, thetap, v, X0, Y0, Z0, q\\_e, ep, mu, c):\\\\\nManfdomain(M,gAV) :\\\\\nManfdomain(M,[a,ath,aph],[tau,theta,phi]) :\\\\\nManfdomain(M,[ad,athd,aphd, athph, aphth],[tau,theta,phi]) :\\\\\nManfdomain(M,[C0,C1,C2,C3,Cd0,Cd1,Cd2,Cd3,Cdd0,Cdd1,Cdd2,Cdd3],[tau]) :\\\\\nManfdomain(M,[rhat, cthhat, sthhat, cphhat, sphhat, T0], [tau]):\\\\[5pt]\ng := -c\\textasciicircum 2\\textasteriskcentered d(tau) \\& X d(tau)\\\\\n+ a\\textasteriskcentered (d(tau) \\& X d(r)+ d(r) \\& X d(tau))\\\\\n+ r\\textasteriskcentered ath \\textasteriskcentered (d(tau) \\& X d(theta)+ d(theta) \\& X d(tau))\\\\\n+ r\\textasteriskcentered aph\\textasteriskcentered (d(tau) \\& X d(phi)+ 
d(phi) \\& X d(tau))\\\\\n+ r\\textasciicircum 2\\textasteriskcentered (d(theta) \\& X d(theta))\\\\\n+ r\\textasciicircum 2\\textasteriskcentered sin(theta)\\textasciicircum 2 \\textasteriskcentered d(phi) \\& X d(phi) :\\\\[5pt]\n\\texttt{Mancovmetric(M,g):}\\\\\n\\texttt{G:=Manconmetric(M):}\\\\[5pt]\nCd\\_sublist := \\{diff(C0,tau)=Cd0,diff(C1,tau)=Cd1,\\\\\ndiff(C2,tau)=Cd2,diff(C3,tau)=Cd3\\} :\\\\\nCd\\_inv\\_sublist := \\{Cd0=diff(C0,tau),Cd1=diff(C1,tau),\\\\\nCd2=diff(C2,tau),Cd3=diff(C3,tau)\\} :\\\\\nCdd\\_sublist := \\{diff(C0,tau,tau)=Cdd0,diff(C1,tau,tau)=Cdd1,\\\\\ndiff(C2,tau,tau)=Cdd2,diff(C3,tau,tau)=Cdd3\\} :\\\\[5pt]\naa := -Cd0+Cd1\\textasteriskcentered cos(phi)\\textasteriskcentered sin(theta)+Cd2\\textasteriskcentered sin(phi)\\textasteriskcentered sin(theta)+Cd3\\textasteriskcentered cos(theta) :\\\\\naath := diff(aa,theta) :\\\\\naaph := diff(aa,phi) :\\\\\naad := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aa),tau)) :\\\\\naathd := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aath),tau)) :\\\\\naaphd := subs(Cdd\\_sublist,diff(subs(Cd\\_inv\\_sublist,aaph),tau)) :\\\\[5pt]\nBasis1 := \\{d(tau),d(r),d(theta),d(phi)\\} :\\\\\nBasis2 := \\{d(tau)\\& \\textasciicircum d(r), d(tau)\\& \\textasciicircum d(theta), d(tau)\\& \\textasciicircum d(phi),\n d(r)\\& \\textasciicircum d(theta), d(r)\\& \\textasciicircum d(phi), d(theta)\\& \\textasciicircum d(phi)\\} :\\\\\nBasis3 := \\{d(tau)\\& \\textasciicircum d(r)\\& \\textasciicircum d(theta), d(tau)\\& \\textasciicircum d(r)\\& \\textasciicircum d(phi),\n d(tau)\\& \\textasciicircum d(theta)\\& \\textasciicircum d(phi), d(r)\\& \\textasciicircum d(theta)\\& \\textasciicircum d(phi)\\} :\\\\[5pt]\nVX := r\\textasteriskcentered PD(r) :\\\\\nVV := PD(tau) :\\\\\ndualX := \\& ~(VX) :\\\\\ndualV := \\& ~(VV):\\\\\ndualA := subs(gAV \\textasteriskcentered d(tau) + ad\\textasteriskcentered d(r) + r\\textasteriskcentered athd\\textasteriskcentered d(theta) + r\\textasteriskcentered 
aphd\\textasteriskcentered d(phi) ):\\\\\nVA := \\& ~(dualA) :\\\\[5pt]\n\nALW := q\\_e\\textasteriskcentered(dualV\/(r\\textasteriskcentered a)) :\\\\\nFLW := collect(subs(\\{diff(a,tau)=ad,diff(a,theta)=ath,diff(a,phi)=aph,\n diff(ath,tau)=athd,diff(aph,tau)=aphd,\n diff(ath,phi)=athph,diff(aph,theta)=athph\\},\n simplify(F2C(d(ALW)))),Basis2) :\\\\\nFLWc := subs(ad=0,athd=0,aphd=0,FLW) :\\\\\nFLWr := collect(simplify(FLW - FLWc),Basis2) :\\\\[5pt]\n\n x0 := (C0+r)\/c:\\\\\nx1 := C1+r\\textasteriskcentered sin(theta)\\textasteriskcentered cos(phi):\\\\\nx2 := C2+r\\textasteriskcentered sin(theta)\\textasteriskcentered sin(phi):\\\\\nx3 := C3+r\\textasteriskcentered cos(theta):\\\\[5pt]\n\nJ := Matrix(4, 4):\\\\\nJ[1, 1]:=Cd0\/c:\\\\\nfor i from 1 to 3 do J[i+1, 1] := Cd || i:\\\\\nJ[i+1, 2] := diff(x || i, r):\\\\\nJ[i+1, 3] := diff(x || i, theta):\\\\\nJ[i+1, 4] := diff(x || i, phi) end do:\\\\\nJ[1,2]:=diff(x0, r):J[1,3]:=diff(x0, theta):J[1,4]:=diff(x0, phi):\\\\\nJ:\\\\[5pt]\n\nDetJ := simplify(Determinant(J)):\\\\\ndetJ := -(1\/c)\\textasteriskcentered a\\textasteriskcentered r\\textasciicircum 2\\textasteriskcentered sin(theta) :\\\\\nManvol(M) :=-(1\/c) a\\textasteriskcentered r\\textasciicircum 2\\textasteriskcentered sin(theta)\\textasteriskcentered`\\&\\textasciicircum`(e[0], e[1], e[2], e[3]) :\\\\[5pt]\n\nAdJ := simplify(Adjoint(J)):\\\\[3pt]\ndf\\_tau\\_t := AdJ[1, 1]\/detJ :\\\\\ndf\\_tau\\_x := AdJ[1, 2]\/detJ :\\\\\ndf\\_tau\\_y := AdJ[1, 3]\/detJ :\\\\\ndf\\_tau\\_z := AdJ[1, 4]\/detJ :\\\\\n\\#df\\_r\\_t := AdJ[2, 1]\/detJ :\\\\\ndf\\_r\\_t := ((Cd0+a)\\textasteriskcentered c)\/a :\\\\\ndf\\_r\\_x := AdJ[2, 2]\/detJ :\\\\\ndf\\_r\\_y := AdJ[2, 3]\/detJ :\\\\\ndf\\_r\\_z := AdJ[2, 4]\/detJ :\\\\\n\\#df\\_theta\\_t := AdJ[3, 1]\/detJ :\\\\\ndf\\_theta\\_t := (ath\\textasteriskcentered c)\/(r\\textasteriskcentered a) :\\\\\ndf\\_theta\\_x := AdJ[3, 2]\/detJ :\\\\\ndf\\_theta\\_y := AdJ[3, 3]\/detJ :\\\\\ndf\\_theta\\_z := AdJ[3, 4]\/detJ 
:\\\\\ndf\\_phi\\_t := AdJ[4, 1]\/detJ :\\\\\ndf\\_phi\\_x := AdJ[4, 2]\/detJ :\\\\\ndf\\_phi\\_y := AdJ[4, 3]\/detJ :\\\\\ndf\\_phi\\_z := AdJ[4, 4]\/detJ :\\\\[5pt]\n\n PD\\_t:=df\\_tau\\_t\\textasteriskcentered PD(tau)+df\\_r\\_t\\textasteriskcentered PD(r) +df\\_theta\\_t\\textasteriskcentered PD(theta) +df\\_phi\\_t\\textasteriskcentered PD(phi):\\\\\nPD\\_x:=df\\_tau\\_x\\textasteriskcentered PD(tau)+df\\_r\\_x\\textasteriskcentered PD(r) +df\\_theta\\_x\\textasteriskcentered PD(theta) +df\\_phi\\_x\\textasteriskcentered PD(phi):\\\\\nPD\\_y:=df\\_tau\\_y\\textasteriskcentered PD(tau)+df\\_r\\_y\\textasteriskcentered PD(r) +df\\_theta\\_y\\textasteriskcentered PD(theta) +df\\_phi\\_y\\textasteriskcentered PD(phi):\\\\\nPD\\_z:=df\\_tau\\_z\\textasteriskcentered PD(tau)+df\\_r\\_z\\textasteriskcentered PD(r) +df\\_theta\\_z\\textasteriskcentered PD(theta) +df\\_phi\\_z\\textasteriskcentered PD(phi):\\\\[5pt]\n\nPDt\\_Fc:=PD\\_t \\&i FLWc:\\\\\nPDt\\_starFc:=collect(subs(aph=aaph,Cd\\_sublist,F2C(PD\\_t \\&i (\\&star(FLWc)))), Basis1,simplify):\\\\\nElec\\_c :=(1\/c)\\textasteriskcentered PD\\_t \\&i FLWc :\\\\\nMag\\_c :=(1\/(c\\textasteriskcentered c))\\textasteriskcentered collect(subs(aph=aaph,Cd\\_sublist,F2C(PD\\_t \\&i (\\&star(FLWc)))),\n Basis1,simplify) :\\\\\nElec\\_r := (1\/c)\\textasteriskcentered collect(PD\\_t \\&i FLWr,Basis1) :\\\\\nMag\\_r := (1\/(c\\textasteriskcentered c))\\textasteriskcentered collect(PD\\_t \\&i F2C(\\&star(FLWr)),Basis1) :\\\\[5pt]\n\n Elec\\_cx := simplify(PD\\_x \\&i Elec\\_c) :\\\\\nElec\\_cy := simplify(PD\\_y \\&i Elec\\_c) :\\\\\nElec\\_cz := simplify(PD\\_z \\&i Elec\\_c) :\\\\\nElec\\_rx := simplify(PD\\_x \\&i Elec\\_r) :\\\\\nElec\\_ry := simplify(PD\\_y \\&i Elec\\_r) :\\\\\nElec\\_rz := simplify(PD\\_z \\&i Elec\\_r) :\\\\\nMag\\_cx := simplify(PD\\_x \\&i Mag\\_c) :\\\\\nMag\\_cy := simplify(PD\\_y \\&i Mag\\_c) :\\\\\nMag\\_cz := simplify(PD\\_z \\&i Mag\\_c) :\\\\\nMag\\_rx := simplify(PD\\_x \\&i Mag\\_r) 
:\\\\\nMag\\_ry := simplify(PD\\_y \\&i Mag\\_r) :\\\\\nMag\\_rz := simplify(PD\\_z \\&i Mag\\_r) :\\\\[5pt]\n Energy\\_res :=(1\/2)\\textasteriskcentered (ep\\textasteriskcentered ((Elec\\_cx+Elec\\_rx)\\textasciicircum 2+(Elec\\_cy+Elec\\_ry)\\textasciicircum 2\\\\\n +(Elec\\_cz+Elec\\_rz)\\textasciicircum 2)+(1\/mu)\\textasteriskcentered ((Mag\\_cx+Mag\\_rx)\\textasciicircum 2+(Mag\\_cy+Mag\\_ry)\\textasciicircum 2\\\\\n +(Mag\\_cz+Mag\\_rz)\\textasciicircum 2)):\\\\[5pt]\n\nhat\\_sublist :=\n\\{T0=(sqrt((X0-C1)\\textasciicircum 2 + (Y0-C2)\\textasciicircum 2 + (Z0-C3)\\textasciicircum 2) +C0)\/c,\\\\\nrhat=sqrt((X0-C1)\\textasciicircum 2 + (Y0-C2)\\textasciicircum 2 + (Z0-C3)\\textasciicircum 2),\\\\\ncthhat=(Z0-C3)\/(sqrt((X0-C1)\\textasciicircum 2 + (Y0-C2)\\textasciicircum 2 + (Z0-C3)\\textasciicircum 2)),\\\\\nsthhat=sqrt((X0-C1)\\textasciicircum 2+(Y0-C2)\\textasciicircum 2)\/(sqrt((X0-C1)\\textasciicircum 2 + (Y0-C2)\\textasciicircum 2 + (Z0-C3)\\textasciicircum 2)),\\\\\ncphhat=(X0-C1)\/(sqrt((X0-C1)\\textasciicircum 2+(Y0-C2)\\textasciicircum 2)),\\\\\nsphhat=(Y0-C2)\/(sqrt((X0-C1)\\textasciicircum 2+(Y0-C2)\\textasciicircum 2))\\}:\\\\[5pt]\n\nprehat\\_subslist := \\{cos(theta)=cthhat,sin(theta)=sthhat,\\\\\ncos(phi)=cphhat,sin(phi)=sphhat,r=rhat\\} :\\\\[5pt]\n\nCurve3\\_def := \\{\\\\\nC0a=epsilon\\textasteriskcentered gamma\\textasteriskcentered tau,\\\\\nC1a=epsilon\\textasteriskcentered gamma\\textasteriskcentered v\\textasteriskcentered tau,\\\\\nC2a=0,\\\\\nC3a=0 \\} :\\\\[2pt]\nCurve2\\_def := \\{\\\\\nC0a=epsilon\\textasteriskcentered gamma\\textasteriskcentered tau,\\\\\nC1a=Lp-epsilon\\textasteriskcentered (Rp\\textasteriskcentered sin((Lp\/(epsilon\\textasteriskcentered Rp))-(gamma\\textasteriskcentered v\\textasteriskcentered tau)\/Rp)),\\\\\nC2a=epsilon\\textasteriskcentered Rp\\textasteriskcentered (1-cos((Lp\/(epsilon\\textasteriskcentered Rp))-(gamma\\textasteriskcentered v\\textasteriskcentered tau)\/Rp)),\\\\\nC3a=0\\} 
:\\\\[2pt]\nCurve1\\_def :=\\{\\\\\nC0a=epsilon\\textasteriskcentered gamma\\textasteriskcentered tau,\\\\\nC1a=epsilon\\textasteriskcentered (gamma\\textasteriskcentered v\\textasteriskcentered cos(thetap)\\textasteriskcentered tau+Lp -Rp\\textasteriskcentered sin(thetap)+cos(thetap)\\textasteriskcentered (thetap\\textasteriskcentered Rp-Lp\/epsilon)),\\\\\nC2a=epsilon\\textasteriskcentered (-gamma\\textasteriskcentered v\\textasteriskcentered sin(thetap)\\textasteriskcentered tau + Rp\\textasteriskcentered (1-cos(thetap))-sin(thetap)\\textasteriskcentered (thetap\\textasteriskcentered Rp-Lp\/epsilon)),\\\\\nC3a=0\\} :\\\\[5pt]\n\nCurve3\\_sublist := eval(subs(Diff=diff,eval(subs(Curve3\\_def,\\\\\n\\{\\\\\nC0=C0a,Cd0=Diff(C0a,tau),Cdd0=Diff(C0a,tau,tau),\\\\\nC1=C1a,Cd1=Diff(C1a,tau),Cdd1=Diff(C1a,tau,tau),\\\\\nC2=C2a,Cd2=Diff(C2a,tau),Cdd2=Diff(C2a,tau,tau),\\\\\nC3=C3a,Cd3=Diff(C3a,tau),Cdd3=Diff(C3a,tau,tau)\\}\\\\\n)))) :\\\\[2pt]\n\nCurve2\\_sublist := eval(subs(Diff=diff,eval(subs(Curve2\\_def,\\\\\n\\{\\\\\nC0=C0a,Cd0=Diff(C0a,tau),Cdd0=Diff(C0a,tau,tau),\\\\\nC1=C1a,Cd1=Diff(C1a,tau),Cdd1=Diff(C1a,tau,tau),\\\\\nC2=C2a,Cd2=Diff(C2a,tau),Cdd2=Diff(C2a,tau,tau),\\\\\nC3=C3a,Cd3=Diff(C3a,tau),Cdd3=Diff(C3a,tau,tau)\\}\\\\\n)))) :\\\\[2pt]\n\n\nCurve1\\_sublist := eval(subs(Diff=diff,eval(subs(Curve1\\_def,\\\\\n\\{\\\\\nC0=C0a,Cd0=Diff(C0a,tau),Cdd0=Diff(C0a,tau,tau),\\\\\nC1=C1a,Cd1=Diff(C1a,tau),Cdd1=Diff(C1a,tau,tau),\\\\\nC2=C2a,Cd2=Diff(C2a,tau),Cdd2=Diff(C2a,tau,tau),\\\\\nC3=C3a,Cd3=Diff(C3a,tau),Cdd3=Diff(C3a,tau,tau)\\}\\\\\n)))) :\\\\[5pt]\n\nget\\_range3 := proc(Values\\_sublist)\\\\\n local Taub ;\\\\\n Taub := subs(Values\\_sublist,X0\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)) ;\\\\\n 0..Taub ;\\\\\nend proc :\\\\\nget\\_range2 := proc(Values\\_sublist)\\\\\n local Taua ;\\\\\n Taua := subs(Values\\_sublist,-Rp\\textasteriskcentered thetap\/(gamma\\textasteriskcentered v)) ;\\\\\n Taua..0 ;\\\\\nend proc 
:\\\\\nget\\_range1 := proc(Values\\_sublist)\\\\\n local Taua ;\\\\\n Taua := subs(Values\\_sublist,-Rp\\textasteriskcentered thetap\/(gamma\\textasteriskcentered v)) ;\\\\\n subs(Values\\_sublist,StartTau)..Taua ;\\\\\nend proc :\\\\[5pt]\n\\end{mapcode}\n\\end{linenumbers}\n\\label{get_fields}\n\\begin{linenumbers}\n\\begin{mapcode}\n Get\\_Fields := proc(Cnum,Values\\_sublist)\\\\\n local Curve\\_sublist;\\\\\n Curve\\_sublist := Curve||Cnum||\\_sublist ;\\\\\n \\{\\\\\n Elec\\_cx\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_cx))))) ,\n Elec\\_cy\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_cy))))) ,\\\\\n Elec\\_cz\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_cz))))) ,\\\\\n Elec\\_rx\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_rx))))) ,\\\\\n Elec\\_ry\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_ry))))) ,\\\\\n Elec\\_rz\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Elec\\_rz))))) ,\\\\\n\n Mag\\_cx\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_cx))))) ,\\\\\n Mag\\_cy\\_res 
=\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_cy))))) ,\\\\\n Mag\\_cz\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_cz))))) ,\\\\\n Mag\\_rx\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_rx))))) ,\\\\\n Mag\\_ry\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_ry))))) ,\\\\\n Mag\\_rz\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n Mag\\_rz))))) ,\\\\\n\n T0\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n T0))))) ,\\\\\n\n C1\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n C1))))) ,\\\\\n\n aa\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n a))))) ,\n\n Energy\\_res =\\\\\n subs(Values\\_sublist,subs(Curve\\_sublist,\\\\\n subs(hat\\_sublist,subs(prehat\\_subslist,\\\\\n subs(a=aa, ad=aad, ath=aath, athd=aathd, aph=aaph, aphd=aaphd,\\\\\n sqrt((Elec\\_cx+Elec\\_rx)\\textasciicircum 2+(Elec\\_cy+Elec\\_ry)\\textasciicircum 2+(Elec\\_cz+Elec\\_rz)\\textasciicircum 2))))))\\\\\n \\} ;\\\\\nend 
proc:\n\\end{mapcode}\n\\end{linenumbers}\n\\section*{Comments}\n {\\footnotesize \\color{red} \\tt 1-98} This section of the code is almost identical to that of Part I, with a few notable exceptions:\n \\begin{itemize}\n \\item We use the coordinate system given by (\\ref{coords_new}) where $(\\tau, \\mathsf{r}, \\theta, \\phi)=(\\mathtt{tau, r, theta, phi})$.\n \\item The metric is now given by (\\ref{g_newcoords}).\n \\item We define $\\mathrm{F}_{\\textup{C}}=$\\texttt{FLWc} by setting all components of acceleration to zero in \\texttt{FLW}. We define $\\mathrm{F}_{\\textup{R}}=$\\texttt{FLWr} as the difference \\texttt{FLW}-\\texttt{FLWc}.\n\\end{itemize}\n {\\footnotesize \\color{red} \\tt 99-106} Calculate $\\mathrm{E}_{\\textup{C}}=$\\texttt{Elec\\_c}, $\\mathrm{E}_{\\textup{R}}=$\\texttt{Elec\\_r}, $\\mathrm{B}_{\\textup{C}}=$\\texttt{Mag\\_c} and $\\mathrm{B}_{\\textup{R}}=$\\texttt{Mag\\_r}. This is easily done using (\\ref{ebvec_def}).\\\\\n{\\footnotesize \\color{red} \\tt 107-118} Calculate the components of these vectors in the \\texttt{x1, x2} and \\texttt{x3} directions by taking the internal contractions with respect to \\texttt{PD\\_x}, \\texttt{PD\\_y} and \\texttt{PD\\_z} respectively.\\\\\n{\\footnotesize \\color{red} \\tt 119-121} Calculate the total energy of the electric field $||\\mathrm{E}(\\tau, \\mathsf{r}, \\theta, \\phi)||^2=$\\texttt{Energy\\_res}.\\\\\n{\\footnotesize \\color{red} \\tt 122-130} Input the substitutions given by (\\ref{path_ref}). \\\\\n{\\footnotesize \\color{red} \\tt 131-147} Define the three sections of the pre-bent path by inputting the components (\\ref{prebent_path}) according to (\\ref{worldline_comps}). The labels for the axes in the code are different to those given in figure \\ref{setup} due to the way I initially set up the trajectory. 
The axes $x, y, z$ in the figure correspond to \\texttt{y}, \\texttt{z}, \\texttt{x} in the code, and correspondingly the point ${\\boldsymbol X}=(X_0, Y_0, Z_0)$ is given by \\texttt{(Y0, Z0, X0)}. The coordinate system is aligned so that, instead of being located at the terminus of the bend as in figure \\ref{setup}, the origin is located at the end of the small straight-line section. As a result, the parameter \\texttt{Lp} is defined as the negative of the distance $Z$. We use the notation $R=$\\texttt{Rp} and $\\Theta=$\\texttt{thetap}.\\\\\n{\\footnotesize \\color{red} \\tt 148-168} Define the three corresponding lists of substitutions which will associate a field with a particular trajectory. \\\\\n{\\footnotesize \\color{red} \\tt 169-183} Calculate the ranges of $\\tau=$\\texttt{tau} for each of the three sections of the path. The values \\texttt{Taua} and \\texttt{Taub} are the tau values at the start and the end of the bend respectively. The value \\texttt{StartTau} is the initial value for \\texttt{tau}.\\\\\n{\\footnotesize \\color{red} \\tt 184-186} The procedure \\texttt{Get\\_Fields} uses the substitutions in the previous section to output the listed fields as functions of $\\tau=$\\texttt{tau} for a given section of path and a given set of input parameters. 
The inputs are the number \\texttt{Cnum}$=$1, 2 \\textup{or} 3 which tells Maple which of the three sections of the path {\\footnotesize \\color{red} \\tt 131-147} we are considering, and a list of numerical inputs of the following format\n\\begin{mapcode}\nValues\\_sublist0 :=\nsubs(gam=1000,\\{X0=0,Y0=0,Z0=1,epsilon=1,v=sqrt(1-1\/gam\\ind2),Lp=0, Rp=1000, thetap=0.1, gamma=gam, StartTau=-20\\});\n\\end{mapcode}\n{\\footnotesize \\color{red} \\tt 247-251} Notice that the lab time $T_0(\\tau)=$\\texttt{T0\\_res} and the total energy of the electric field $||\\mathcal{E}(\\tau, \\mathsf{r}, \\theta, \\phi)||^2=$\\texttt{Energy\\_res} are also obtained as functions of \\texttt{tau}.\n\n\n\\section{Part 3 - Minimize peak field}\n\\label{minimize_app}\n\\begin{linenumbers}\n\\begin{mapcode}\nget\\_list3 := proc(Values\\_sublist)\\\\\n local Taub ;\\\\\n Taub := subs(Values\\_sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)) ;\\\\\n \\$(round(Taub)..0) ;\\\\\nend proc :\\\\\nget\\_list2 := proc(Values\\_sublist)\\\\\n local Taua, Taub ;\\\\\n Taub := subs(Values\\_sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)) ;\\\\\n Taua := subs(Values\\_sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)-Rp\\textasteriskcentered thetap\/(gamma\\textasteriskcentered v)) ;\\\\\n \\$(round(Taua)..round(Taub) );\\\\\nend proc :\\\\\nget\\_list1 := proc(Values\\_sublist)\\\\\n local Taua, Taub ;\\\\\n Taub := subs(Values\\_sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)) ;\\\\\n Taua := subs(Values\\_sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)-Rp\\textasteriskcentered thetap\/(gamma\\textasteriskcentered v)) ;\\\\\n \\$(round(subs(Values\\_sublist,StartTau))..round(Taua) )\\\\\nend proc :\\\\[5pt]\n\n Max\\_ field := proc(Values\\_ sublist)\\\\\n local Field1,Field2,Field3,taurng1,taurng2,taurng3, FUNCT1, FUNCT2, FUNCT3,Taua, Taub,VAL1, VAL2, VAL3, i1, i2,i3, F1, 
FF1, F2, FF2;\\\\\nTaub := evalf(subs(Values\\_ sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v))) ;\\\\\nTaua := evalf(subs(Values\\_ sublist,Lp\/(epsilon\\textasteriskcentered gamma\\textasteriskcentered v)-Rp\\textasteriskcentered thetap\/(gamma\\textasteriskcentered v))) ;\\\\\n Field1:=evalf(subs(Get\\_ Fields(1,Values\\_ sublist),Energy\\_ res));\\\\\n Field2:=evalf(subs(Get\\_ Fields(2,Values\\_ sublist),Energy\\_ res));\\\\\n Field3:=evalf(subs(Get\\_ Fields(3,Values\\_ sublist),Energy\\_ res));\\\\\n taurng1 := get\\_ list1(Values\\_ sublist) ;\\\\\n taurng2 := get\\_ list2(Values\\_ sublist) ;\\\\\n taurng3 := get\\_ list3(Values\\_ sublist) ;\\\\\nVAL1:=(abs(subs(Values\\_ sublist, StartTau))-(abs(round(Taua)))):\\\\[3pt]\nfor i1 from 1 to VAL1 do:\\\\\nfor i2 from 1 to 100 do:\\\\\n FUNCT1:= max(subs(tau=taurng1[i1], Field1), subs(tau=taurng1[i1]-i2\/100, Field1));\\\\\nend do:\\\\\nend do:\\\\\nARR:=Array(1..19):\\\\\n\\#for i1 from 1 to (abs(round(Taua))) do:\\\\\nfor i3 from 1 to 19 do:\\\\\nARR[i3]:=(evalf(subs(tau=-i3\\textasteriskcentered(0.05), Field2)));\\\\\nFUNCT2:=max(ARR);\\\\\nend do:\\\\\n\\#end do:\\\\\nmax(FUNCT1, FUNCT2);\\\\\nend proc :\\\\\n\nValues\\_sublist1 :=\\\\\nsubs(gam=1000,{X0=0,Y0=0,Z0=1,epsilon=1,v=sqrt(1-1\/gam\\textasciicircum 2),\\\\\ngamma=gam,Lp=0,Rp=Rpp, thetap=thetapp, StartTau=-20});\\\\\n\nthetap\\_range:= 1\/95, 1\/90, 1\/85, 1\/80, 1\/75, 1\/70, 1\/65, 1\/60, 1\/55, 1\/50, 1\/45, 1\/40, 1\/35, 1\/30, 1\/25, 1\/20, 1\/15, 1\/10, 1\/5, 1;\\\\\n\nRp\\_range:=500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6500, 7000, 7500, 8000, 8500, 9000, 9500, 10000;\\\\\n\nB:=Matrix(20, 20);\\\\\n\n for i1 from 1 to 20 do:\\\\\n for i2 from 1 to 20 do:\\\\\n subs\\_LthetaR :=subs(thetapp=thetap\\_range[i1],Rpp=Rp\\_range[i2], Values\\_sublist1);\\\\\n B[i1, i2]:= Max\\_field(subs\\_LthetaR);\\\\\nend do;\\\\\n end 
do;\\\\\n\n\\end{mapcode}\n\\end{linenumbers}\n\n\\section*{Comments}\n {\\footnotesize \\color{red} \\tt 269-287} The procedures \\texttt{get\\_list||Cnum} will round the values of \\texttt{Taua} and \\texttt{Taub} to the nearest integer and output the range of \\texttt{tau} as a sequence of integers.\\\\\n {\\footnotesize \\color{red} \\tt 288-315} The procedure \\texttt{Max\\_ field} will compare the peak value of \\texttt{Energy\\_ res} for the initial straight line and the bend for a number of values of \\texttt{tau}. The field given by the second straight line is negligible. The peak field for the straight segment is given by the local variable \\texttt{FUNCT1} and the peak field for the bend is given by \\texttt{FUNCT2}.\\\\\n {\\footnotesize \\color{red} \\tt 316-330} These lines of code will create a $20\\times20$ matrix \\texttt{B} whose elements are the peak fields corresponding to the given values of \\texttt{thetap} and \\texttt{Rp}. These values correspond to those given in table \\ref{minimize_table} and the resulting matrix was used to plot figure \\ref{minimize} using the MAPLE function \\emph{matrixplot}.\n\\section{Part 4 - Plots}\n\\begin{linenumbers}\n\\begin{mapcode}\nPlot\\_Field\\_tau := proc(Values\\_sublist,Field)\\\\\n local Field1,Field2,Field3,taurng1,taurng2,taurng3;\\\\\n Field1:=subs(Get\\_Fields(1,Values\\_sublist),Field) ;\\\\\n Field2:=subs(Get\\_Fields(2,Values\\_sublist),Field) ;\\\\\n Field3:=subs(Get\\_Fields(3,Values\\_sublist),Field) ;\\\\\n taurng1 := get\\_range1(Values\\_sublist) ;\\\\\n taurng2 := get\\_range2(Values\\_sublist) ;\\\\\n taurng3 := get\\_range3(Values\\_sublist) ;\\\\\n display(\\\\\n plot(Field1,tau=taurng1,color=BLACK,\\_rest),\\\\\n plot(Field2,tau=taurng2,color=RED,\\_rest, numpoints=1000),\\\\\n plot(Field3,tau=taurng3,color=BLUE,\\_rest)\\\\\n ):\\\\\nend proc:\\\\[5pt]\n\n\nPlot\\_Field\\_T0 := proc(Values\\_sublist,Field)\\\\\n local Field1,Field2,Field3,taurng1,taurng2,taurng3;\\\\\n taurng1 := 
get\\_range1(Values\\_sublist) ;\\\\\n taurng2 := get\\_range2(Values\\_sublist) ;\\\\\n taurng3 := get\\_range3(Values\\_sublist) ;\\\\\n Field1:=subs(Get\\_Fields(1,Values\\_sublist),[T0\\_res,Field,tau=taurng1]) ;\\\\\n Field2:=subs(Get\\_Fields(2,Values\\_sublist),[T0\\_res,Field,tau=taurng2]) ;\\\\\n Field3:=subs(Get\\_Fields(3,Values\\_sublist),[T0\\_res,Field,tau=taurng3]) ;\\\\\n display(\\\\\n plot(Field1,color=BLACK,\\_rest),\\\\\n plot(Field2,color=RED,\\_rest),\\\\\n plot(Field3,color=BLUE,\\_rest)\\\\\n ):\\\\\nend proc :\\\\[5pt]\n\n\nValues\\_sublist1 :=\\\\\nsubs(gam=1000, \\{X0=0.005,Y0=0,Z0=0.0005,epsilon=1,v=sqrt(1-1\/gam\\textasciicircum 2),gamma=gam,Lp=0,thetap=0.13,Rp=0.5, StartTau=-100\\\\\n, c=3\\textasteriskcentered 10\\textasciicircum (8), q\\_e=-1.80951262\\textasteriskcentered 10\\textasciicircum (-8)\\});\\\\\n\nValues\\_sublist2 :=\\\\\nsubs(gam=1000, \\{X0=0.005,Y0=0, Z0=0.0005, epsilon=1,v=(sqrt(1-1\/gam\\textasciicircum 2)),gamma=gam,Lp=0,thetap=0,Rp=0.5, StartTau=-100, c=3\\textasteriskcentered 10\\textasciicircum (8),\\\\ q\\_e=(-1.80951262\\textasteriskcentered 10\\textasciicircum (-8))\\});\\\\[5pt]\n\n\nEEx:=subs(Get\\_Fields(1,Values\\_sublist2), Elec\\_cx\\_res+Elec\\_rx\\_res):\\\\\nEEy:=subs(Get\\_Fields(1,Values\\_sublist2), Elec\\_cy\\_res+Elec\\_ry\\_res):\\\\\nEEz:=subs(Get\\_Fields(1,Values\\_sublist2), 
Elec\\_cz\\_res+Elec\\_rz\\_res):\\\\\nTT:=subs(Get\\_Fields(1,Values\\_sublist2),(T0\\_res)):\\\\[5pt]\n\nEEx1:=subs(Get\\_Fields(1,Values\\_sublist1),Elec\\_cx\\_res+Elec\\_rx\\_res):\\\\\nEEy1:=subs(Get\\_Fields(1,Values\\_sublist1),Elec\\_cy\\_res+Elec\\_ry\\_res):\\\\\nEEz1:=subs(Get\\_Fields(1,Values\\_sublist1),Elec\\_cz\\_res+Elec\\_rz\\_res):\\\\\nTT1:=subs(Get\\_Fields(1,Values\\_sublist1),(T0\\_res)):\\\\[5pt]\n\nEEx2:=subs(Get\\_Fields(2,Values\\_sublist1),Elec\\_cx\\_res+Elec\\_rx\\_res):\\\\\nEEy2:=subs(Get\\_Fields(2,Values\\_sublist1),Elec\\_cy\\_res+Elec\\_ry\\_res):\\\\\nEEz2:=subs(Get\\_Fields(2,Values\\_sublist1),Elec\\_cz\\_res+Elec\\_rz\\_res):\\\\\nTT2:=subs(Get\\_Fields(2,Values\\_sublist1),(T0\\_res)):\\\\[5pt]\n\n\nEEx3:=subs(Get\\_Fields(3,Values\\_sublist1),Elec\\_cx\\_res+Elec\\_rx\\_res):\\\\\nEEy3:=subs(Get\\_Fields(3,Values\\_sublist1),Elec\\_cy\\_res+Elec\\_ry\\_res):\\\\\nEEz3:=subs(Get\\_Fields(3,Values\\_sublist1),Elec\\_cz\\_res+Elec\\_rz\\_res):\\\\\nTT3:=subs(Get\\_Fields(3,Values\\_sublist1),(T0\\_res)):\\\\[5pt]\n\npart1x:=plot([10\\textasciicircum (12)\\textasteriskcentered TT1, abs(EEx1), tau=get\\_range1(Values\\_sublist1)], color=black, numpoints=10000):\\\\\npart2x:=plot([10\\textasciicircum (12)\\textasteriskcentered TT2, abs(EEx2), tau=get\\_range2(Values\\_sublist1)], color=red,resolution=600, numpoints=50000):\\\\\npart3x:=plot([10\\textasciicircum (12)\\textasteriskcentered TT3, abs(EEx3), tau=get\\_range3(Values\\_sublist1)], color=blue, numpoints=10000):\\\\\n\n\npart1y:=plot([10\\textasciicircum (12)\\textasteriskcentered TT1, abs(EEy1), tau=get\\_range1(Values\\_sublist1)], color=black, numpoints=10000):\\\\\npart2y:=plot([10\\textasciicircum (12)\\textasteriskcentered TT2, abs(EEy2), tau=get\\_range2(Values\\_sublist1)], color=red,resolution=600, numpoints=50000):\\\\\npart3y:=plot([10\\textasciicircum (12)\\textasteriskcentered TT3, abs(EEy3), tau=get\\_range3(Values\\_sublist1)], color=blue, 
numpoints=10000):\\\\\n\n\n\n\npart1z:=plot([10\\textasciicircum (12)\\textasteriskcentered TT1, abs(EEz1), tau=get\\_range1(Values\\_sublist1)], color=black, numpoints=10000):\\\\\npart2z:=plot([10\\textasciicircum (12)\\textasteriskcentered TT2, abs(EEz2), tau=get\\_range2(Values\\_sublist1)], color=red,resolution=600, numpoints=50000):\\\\\npart3z:=plot([10\\textasciicircum (12)\\textasteriskcentered TT3, abs(EEz3), tau=get\\_range3(Values\\_sublist1)], color=blue, numpoints=10000):\\\\\n\n\n\n\nResize(display(part1x, part2x, part3x, axes=boxed, view=[15.8..16.8, 0..8], axesfont=[TIMES, ROMAN, 20], thickness=3));\\\\\nResize(display(part1y, part2y, part3y, axes=boxed, view=[15.8..16.8, 0..8], axesfont=[TIMES, ROMAN, 20], thickness=3));\\\\\nResize(display(part1z, part2z, part3z, axes=boxed, view=[15.8..16.8, 0..8], axesfont=[TIMES, ROMAN, 20], thickness=3));\\\\\n\n\n\\end{mapcode}\n\\end{linenumbers}\n\\section*{Comments}\n{\\footnotesize \\color{red} \\tt 331-334} This procedure will plot any field in the list \\texttt{Get\\_fields} (or combination thereof) against \\texttt{tau} for a given set of inputs. 
We can plot the field due to the straight trajectory by setting \\texttt{thetap}$=0$.\\\\\n{\\footnotesize \\color{red} \\tt 345-361} This procedure will plot any field in the list as a function of \\texttt{T0}.\\\\\n{\\footnotesize \\color{red} \\tt 362-417} This will make the plots given in figure \\ref{fig_fields_comp}.\n\n\\section{Part 5 - Convolution}\n\\label{convolution_app}\n\n\n\n \\begin{linenumbers}\n \\begin{mapcode}\nrho\\_ box:= (t,a, b) -> 1\/a\\textasteriskcentered(Heaviside(t+a\/2+b)-Heaviside(t-a\/2+b));\\\\\nplot(rho\\_ box(t,0.0005, 0),t=-0.01..0.01,title=\"box distribution\",colour=brown,axes=boxed);\\\\\nrho\\_ Gauss:= (t, a, b) -> 1\/(a\\textasteriskcentered sqrt(2\\textasteriskcentered Pi))\\textasteriskcentered exp((-(t-b)\\ind2)\/(2\\textasteriskcentered a\\ind2));\\\\\nplot(rho\\_ Gauss(t, 0.5, 0),t=-1..1,title=\"Gaussian distribution\",colour=brown,axes=boxed,numpoints=10000);\\\\[5pt]\n\nconv:=proc(PEAK, a, b, N,comp )\\\\\n local t, i, E\\_seq, tau\\_seq,rho\\_seq,E0\\_seq, conv,sum1 ;\\\\\n global convx1, convy1, convz1, convx2, convy2, convz2 ;\\\\\nif PEAK=1 then\\\\\nfor i from 0 to (N-1) do\\\\\nt:=16.6667;\\\\\n\\#solve(a+((b-a)\/N)\\textasteriskcentered (i+1\/2)=TT,tau);\\\\\n\\#print(\"-----\"\ntau\\_1\\_||i :=solve(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2)=10\\textasciicircum(12)\\textasteriskcentered TT,tau);\\\\\n\\#print(tau\\_1\\_||i) ;\\\\\nEEE\\_0\\_||i :=evalf(subs(tau=tau\\_1\\_||i, EE||comp));\\\\\nEEE\\_1\\_||i :=EEE\\_0\\_||i\\textasteriskcentered evalf(rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a, t));\\\\\nrho\\_1\\_||i:=rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a, t);\\\\\nend do:\\\\\nsum1:=add(EEE\\_1\\_||i, i=0..N-1);\\\\\nconv||comp||PEAK:=sum1\/add(evalf(rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a, t)), i=0..N-1);\\\\\nprint(conv||comp||PEAK);\\\\[3pt]\nelif PEAK=2 then\\\\\nfor i from 0 to (N-1) do\\\\\nt:=16.685;\\\\\ntau\\_1\\_||i 
:=fsolve(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2)=10\\textasciicircum(12)\\textasteriskcentered TT||PEAK,tau);\\\\\n\\#print(tau\\_1\\_||i) ;\\\\\nEEE\\_0\\_||i :=evalf(subs(tau=tau\\_1\\_||i, EE||comp||PEAK));\\\\\nEEE\\_1\\_||i :=EEE\\_0\\_||i\\textasteriskcentered evalf(rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a, t));\\\\\nrho\\_1\\_||i:=rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a,t);\\\\\nend do:\\\\\nsum1:=add(EEE\\_1\\_||i, i=0..N-1);\\\\\nconv||comp||PEAK:=sum1\/add(evalf(rho\\_Gauss(t-a-((b-a)\/N)\\textasteriskcentered (i+1\/2), b-a, t)), i=0..N-1);\\\\\nprint(conv||comp||PEAK);\\\\\nend if:\\\\\nend proc:\n \\end{mapcode}\n \\end{linenumbers}\n\\section*{Comments}\n{\\footnotesize \\color{red} \\tt 347-352} Defines the charge profile $\\rho(\\nu)$. We can use either a box profile or a Gaussian profile.\\\\\n{\\footnotesize \\color{red} \\tt 424-458} Procedure for calculating the convolution $(\\ref{E_Tot})$. The convolution has to be evaluated for the pre-bent path and for the straight path for a selection of different bunch lengths. We adopt a Gaussian form for ${\\rho_{\\textup{Lab}}}$ and define the bunch length as the full width at half maximum (FWHM). The results are given in table \\ref{table3}.\n\n\n\n\n\n\\end{appendices}\n\n\\end{mainmatter}\n\n\\begin{backmatter}\n\n\n\\fancyhfoffset[L,R]{\\marginparsep+\\marginparwidth}\n\\fancyhead[LO,RE]{}\n\\fancyhead[LE,RO]{\\slshape{Bibliography}}\n\\fancyfoot[C]{\\thepage}\n\\renewcommand{\\chaptermark}[1]{\\markboth{#1}{}}\n\\renewcommand{\\sectionmark}[1]{\\markright{#1}{}}\n\\addcontentsline{toc}{chapter}{Bibliography}\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent years, many studies on Apostol-type numbers and polynomials have been carried out by some researchers (see \\cite{ChoiFAR}-\\cite{KucukogluJNT2017}). 
Among others, in this paper, we mainly deal with the $\\lambda$-Apostol-Daehee numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$ and polynomials $\\mathfrak{D}_n\\left(x;\\lambda\\right)$, introduced and investigated by Simsek \\cite{SimsekASCM2016, SimsekASCM2017}, and defined respectively by the following generating functions:\n\\begin{equation}\n\tG_{\\mathfrak{D}}\\left(t; \\lambda\\right):=\\frac{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}{\\lambda\\left(1+\\lambda t\\right)-1}=\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(\\lambda\\right)\\frac{t^n}{n!},\n\t\\label{L-ADnum}\n\\end{equation}\nand\n\\begin{equation}\n\tG_{\\mathfrak{D}}\\left(t, x; \\lambda\\right):=\\frac{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}{\\lambda\\left(1+\\lambda t\\right)-1}\\left(1+\\lambda t\\right)^x=\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!},\n\t\\label{L-ADpoly}\n\\end{equation}\n(\\textit{cf}. \\cite{SimsekASCM2016, SimsekASCM2017}; and also see \\cite{SimsekYardimci2016}).\n\nThe first few values of the numbers $\\mathfrak{D}_{n}(\\lambda )$ are given as follows:%\n\\begin{eqnarray*}\n\t\\mathfrak{D}_{0}(\\lambda ) &=&\\frac{\\log \\lambda }{\\lambda -1},\\\\\n\t\\mathfrak{D}_{1}(\\lambda ) &=&-\\frac{\\lambda ^{2}\\log \\lambda }{\\left(\n\t\t\\lambda -1\\right) ^{2}}+\\frac{\\lambda }{\\lambda -1}, \\\\\n\t\\mathfrak{D}_{2}(\\lambda ) &=&\\frac{2\\lambda ^{4}\\log \\lambda }{\\left(\n\t\t\\lambda -1\\right) ^{3}}+\\frac{\\lambda ^{2}\\left( 1-3\\lambda \\right) }{\\left(\n\t\t\\lambda -1\\right) ^{2}}, \\\\\n\t\\mathfrak{D}_{3}(\\lambda ) &=&-\\frac{6\\lambda ^{6}\\log \\lambda }{\\left(\n\t\t\\lambda -1\\right) ^{4}}+\\frac{\\lambda ^{3}\\left( 11\\lambda ^{2}-7\\lambda\n\t\t+2\\right) }{\\left( \\lambda -1\\right) ^{3}},\n\\end{eqnarray*}\nand so on (\\textit{cf}. 
\\cite{KucukogluAADM2019, SimsekASCM2016, SimsekASCM2017, SimsekYardimci2016}).\n\nAnother family of Apostol-type numbers and polynomials is the family of the numbers $Y_{n}\\left( \\lambda \\right) $ (so-called Simsek numbers) and the polynomials $Y_{n}\\left( x;\\lambda \\right) $ (so-called Simsek polynomials) defined respectively by the following generating functions (\\textit{cf}. \\cite{simsekTJM}):\n\\begin{equation}\n\tF\\left( t,\\lambda \\right):=\\frac{2}{\\lambda \\left( 1+\\lambda t\\right) -1}%\n\t=\\sum\\limits_{n=0}^{\\infty }Y_{n}\\left( \\lambda \\right) \\frac{t^{n}}{n!},\n\t\\label{YNumGenFunc}\n\\end{equation}%\nand\n\\begin{equation}\n\tF\\left( t,x,\\lambda \\right) :=\\frac{2\\left( 1+\\lambda t\\right) ^{x}}{\\lambda\n\t\t\\left( 1+\\lambda t\\right) -1}=\\sum\\limits_{n=0}^{\\infty }Y_{n}\\left(\n\tx;\\lambda \\right) \\frac{t^{n}}{n!}.\n\t\\label{YPolyGenFunc}\n\\end{equation}\n\nFor $n\\in \\mathbb{N}_{0}:=\\{0,1,2,3,\\dots\\}$, the numbers $Y_{n}(\\lambda )$ are computed by the following explicit formula:\n\\begin{equation}\n\tY_{n}(\\lambda )=2(-1)^{n}\\frac{n!}{\\lambda -1}\\left( \\frac{\\lambda ^{2}}{%\n\t\t\\lambda -1}\\right) ^{n},\n\t\\label{YnExpl}\n\\end{equation}\nby which, one may easily compute the first few values of the numbers $Y_{n}\\left( \\lambda \\right) $ as below:\n\\begin{eqnarray*}\n\t&&Y_{0}(\\lambda )=\\frac{2}{\\lambda -1}, \\quad Y_{1}(\\lambda )=-\\frac{2\\lambda ^{2}%\n\t}{\\left( \\lambda -1\\right) ^{2}}, \\quad Y_{2}(\\lambda )=\\frac{4\\lambda ^{4}}{\\left(\n\t\t\\lambda -1\\right) ^{3}}, \\\\\n\t&&Y_{3}(\\lambda )=-\\frac{12\\lambda ^{6}}{\\left( \\lambda -1\\right) ^{4}}%\n\t,\\quad Y_{4}(\\lambda )=\\frac{48\\lambda ^{8}}{\\left( \\lambda -1\\right) ^{5}},\n\\end{eqnarray*}%\nand so on (\\textit{cf}. \\cite{simsekTJM}; and also see \\cite{KucukogluJNT2017}).\n\nLet $\\left( x\\right) _{n}=x\\left(x-1\\right)\\dots\\left(x-n+1\\right)$ with $\\left( x\\right) _{0}=1$. 
Then, the combination of (\\ref{YNumGenFunc}) with (\\ref{YPolyGenFunc}) yields the relation between the numbers $Y_{n}(\\lambda )$ and the polynomials $Y_{n}(x;\\lambda )$ given by \n\\begin{equation}\n\tY_{n}(x;\\lambda )=\\sum_{j=0}^{n}\\binom{n}{j} Y_{j}(\\lambda ) \\lambda\n\t^{n-j}(x)_{n-j},\n\t\\label{ttC}\n\\end{equation}\nby which, one may easily compute the first few values of the polynomials $Y_{n}\\left(x; \\lambda \\right) $ as below:\n\\begin{eqnarray*}\n\tY_{0}(x;\\lambda ) &=&\\frac{2}{\\lambda -1}, \\\\\n\tY_{1}(x;\\lambda ) &=&\\frac{2\\lambda }{\\lambda -1}x-\\frac{2\\lambda ^{2}}{%\n\t\t\\left( \\lambda -1\\right) ^{2}}, \\\\\n\tY_{2}(x;\\lambda ) &=& \\frac{2\\lambda ^{2}}{\\lambda -1}x^{2}-\\frac{6\\lambda\n\t\t^{3}-2\\lambda ^{2}}{\\left( \\lambda -1\\right) ^{2}}x+\\frac{4\\lambda ^{4}}{%\n\t\t\\left( \\lambda -1\\right) ^{3}}, \\\\\n\tY_{3}(x;\\lambda ) &=&\\frac{2\\lambda ^{3}}{\\lambda -1}x^{3}-\\frac{12\\lambda\n\t\t^{4}-6\\lambda ^{3}}{\\left( \\lambda -1\\right) ^{2}}x^{2}+\\frac{22\\lambda\n\t\t^{5}-14\\lambda ^{4}+4\\lambda ^{3}}{\\left( \\lambda -1\\right) ^{3}}x-\\frac{%\n\t\t12\\lambda ^{6}}{\\left( \\lambda -1\\right) ^{4}},\n\\end{eqnarray*}\nand so on (\\textit{cf}. 
\\cite{simsekTJM}; and also see \\cite{KucukogluJNT2017}).\n\nAnother family of Apostol-type numbers and polynomials is the family of the numbers $Y_{n}^{\\left( -k\\right) }\\left(\n\\lambda \\right)$ (so-called negative higher-order Simsek numbers) and the polynomials $Q_{n}\\left(x; \\lambda,\nk\\right)$ (so-called negative higher-order Simsek polynomials) defined respectively by the following generating functions: \n\\begin{equation}\n\t\\mathcal{G}_{Y}\\left(t,k;\\lambda\\right):=2^{-k}\\left(\\lambda \\left( 1+\\lambda\n\tt\\right) -1\\right) ^{k}=\\sum_{n=0}^{\\infty }Y_{n}^{\\left( -k\\right) }\\left(\n\t\\lambda \\right) \\frac{t^{n}}{n!} \\label{GenFHigOrdNegYpoly}\n\\end{equation}%\nand\n\\begin{equation}\n\t\\mathcal{G}_{Q}\\left(t,x,k;\\lambda\\right):=\\mathcal{G}_{Y}\\left(t,k;\\lambda\\right)\n\t\\left(1+\\lambda t\\right)^x=\\sum_{n=0}^{\\infty }Q_{n}\\left(x; \\lambda,\n\tk\\right) \\frac{t^{n}}{n!}, \\label{GenFHigOrdNegYpolyx}\n\\end{equation}\n(\\textit{cf}. \\cite{KucukogluAxiom2019}).\n\nFor $n\\in \\mathbb{N}_{0}$, the numbers $Y_{n}^{\\left( -k\\right) }\\left(\\lambda \\right)$ are computed by the following explicit formula:\n\\begin{equation}\n\tY_{n}^{\\left( -k\\right) }\\left( \\lambda \\right)=\\left\\{ \n\t\\begin{array}{cc}\n\t\t2^{-k} n!\\binom{k}{n}\\lambda^{2n}\\left(\\lambda-1\\right)^{k-n} & \\text{if}\n\t\t\\quad n\\leq k \\\\ \n\t\t0 & \\text{if} \\quad n>k\n\t\\end{array}\n\t\\right. 
\\label{HigOrdNegYpolyExplicit}\n\\end{equation}\nby which, one may easily compute the values of the numbers $Y_{n}^{\\left( -k\\right) }\\left(\\lambda \\right)$ as below:\n\\begin{eqnarray*}\n\tY_{0}^{\\left( -k\\right) }\\left( \\lambda\n\t\\right)&=&2^{-k}\\left(\\lambda-1\\right)^{k}, \\\\\n\tY_{1}^{\\left( -k\\right) }\\left( \\lambda \\right)&=&2^{-k} \\binom{k}{1}%\n\t\\lambda^{2}\\left(\\lambda-1\\right)^{k-1}, \\\\\n\tY_{2}^{\\left( -k\\right) }\\left( \\lambda \\right)&=&2^{-k} 2!\\binom{k}{2}%\n\t\\lambda^{4}\\left(\\lambda-1\\right)^{k-2}, \\\\\n\t&\\vdots& \\\\\n\tY_{j}^{\\left( -k\\right) }\\left( \\lambda \\right)&=&2^{-k} j!\\binom{k}{j}%\n\t\\lambda^{2j}\\left(\\lambda-1\\right)^{k-j} \\quad \\mathit{for} \\quad j\\leq k, \\\\\n\t&\\vdots& \\\\\n\tY_{k}^{\\left( -k\\right) }\\left( \\lambda \\right)&=&2^{-k} k!\\lambda^{2k},\n\\end{eqnarray*}\n(\\textit{cf}. \\cite{KucukogluAxiom2019}).\n\nThe combination of (\\ref{GenFHigOrdNegYpoly}) with (\\ref{GenFHigOrdNegYpolyx}) yields the relation between the numbers $Y_{n}^{\\left( -k\\right) }\\left(\\lambda \\right)$ and the polynomials $Q_{n}\\left(x; \\lambda, k\\right)$, given, for $k, n \\in \\mathbb{N}_0$, by\n\\begin{equation}\n\tQ_{n}\\left(x; \\lambda, k\\right)=\\sum_{j=0}^{n}\\binom{n}{j}Y_{j}^{\\left( -k\\right) }\\left( \\lambda\n\t\\right) %\n\t\\lambda^{n-j}\\left(x\\right)_{n-j}, \\label{Formula-HigOrdNegYpolyx}\n\\end{equation}\nby which, one may easily compute the first few values of the polynomials $Q_{n}\\left(x; \\lambda, k\\right)$ as below:\n\\begin{eqnarray*}\n\tQ_{0}\\left(x; \\lambda, k\\right)&=&2^{-k}\\left(\\lambda-1\\right)^k, \\notag \\\\\n\tQ_{1}\\left(x; \\lambda, k\\right)&=&2^{-k}\\left(\\lambda-1\\right)^k\\lambda\n\tx+2^{-k}k\\lambda^2\\left(\\lambda-1\\right)^{k-1}, \\notag \\\\\n\tQ_{2}\\left(x; 
\\lambda,\n\tk\\right)&=&2^{-k}\\left(\\lambda-1\\right)^k\\lambda^2x^2+\\left(-2^{-k}\\left(%\n\t\\lambda-1\\right)^k\\lambda^2\n\t+2^{-k+1}k\\lambda^3\\left(\\lambda-1\\right)^{k-1}\\right)x \\notag \\\\\n\t&&+2^{-k}k\\left(k-1\\right)\\lambda^4\\left(\\lambda-1\\right)^{k-1},\n\\end{eqnarray*}\nand so on (\\textit{cf}. \\cite{KucukogluAxiom2019}).\n\nLet $x\\in \\left[ 0,1\\right] $ and $k \\in \\mathbb{N}_0$. Then, the Bernstein basis functions, $B_{k}^{n}(x)$, are defined as below:\n\\begin{equation}\n\tB_{k}^{n}(x)=\\binom{n}{k} x^{k}(1-x)^{n-k}, \\qquad \\left(k=0,1,\\dots,n; \\ \\ n \\in \\mathbb{N}_0\\right)\n\t\\label{Berns-Basis}\n\\end{equation}\nand their generating function is given by \n\\begin{equation}\n\t\\frac{\\left(xt\\right)^{k}e^{\\left( 1-x\\right) t}}{k!}%\n\t=\\sum_{n=0}^{\\infty}B_{k}^{n}(x)\\frac{t^{n}}{n!}, \\label{GenFunc-Berns}\n\\end{equation}\nso that the Bernstein basis functions have relationships with a large number of concepts including the B\\'{e}zier curves, the binomial distribution, the Poisson distribution, the Catalan numbers, etc.; see, for details, \\cite{AcikgozSerkan, ErkusDuman, Duman, Lorentz,FPTASimsek2013,SimsekBVP2013, SimsekHJMS2014, SimsekMMAS2015, Simsek Acikgoz} and also cited references therein.\n\nIt is concluded with the help of (\\ref{HigOrdNegYpolyExplicit}) and (\\ref{Berns-Basis}) that there exists the following relation between the numbers $Y_{n}^{\\left( -k\\right) }\\left( \\lambda \\right) $ and the Bernstein basis functions: \n\\begin{equation}\n\tY_{n}^{\\left( -k\\right) }\\left( \\lambda \\right)=\\frac{\\left(-1\\right)^{k-n} n!}{2^{k}}\\lambda^n B_{n}^{k}(\\lambda)\n\t\\label{Y-Berns}\n\\end{equation}\nwhere $n, k \\in \\mathbb{N}_0$ and $\\lambda\\in \\left[ 0,1\\right] $ (\\textit{cf}. \\cite{KucukogluAxiom2019}).\n\nActually, the numbers $Y_{n}^{\\left( -k\\right) }\\left( \\lambda \\right)$ have other relations besides their relation to the Bernstein basis functions. 
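Before moving on, the relation (\ref{Y-Berns}) can be checked numerically in exact rational arithmetic. The following Python sketch is not part of the original paper; it only encodes the explicit formula (\ref{HigOrdNegYpolyExplicit}) and the definition (\ref{Berns-Basis}), and verifies that the two sides of (\ref{Y-Berns}) agree for a sample value of $\lambda\in[0,1]$.

```python
from fractions import Fraction
from math import comb, factorial

def Y_neg(n, k, lam):
    # Explicit formula for Y_n^{(-k)}(lambda):
    # 2^{-k} n! C(k,n) lambda^{2n} (lambda-1)^{k-n} for n <= k, else 0.
    if n > k:
        return Fraction(0)
    return Fraction(factorial(n) * comb(k, n), 2**k) * lam**(2 * n) * (lam - 1)**(k - n)

def bernstein(k, n, x):
    # Bernstein basis function B_k^n(x) = C(n,k) x^k (1-x)^{n-k}.
    return comb(n, k) * x**k * (1 - x)**(n - k)

# Check Y_n^{(-k)}(lam) = (-1)^{k-n} n!/2^k * lam^n * B_n^k(lam)
lam = Fraction(1, 3)  # sample lambda in [0,1]
for k in range(6):
    for n in range(k + 1):
        lhs = Y_neg(n, k, lam)
        rhs = Fraction((-1)**(k - n) * factorial(n), 2**k) * lam**n * bernstein(n, k, lam)
        assert lhs == rhs
print("relation verified for k = 0,...,5")
```

Because the check runs over `Fraction` values, it confirms the identity exactly (no floating-point tolerance is involved) for the sampled range of indices.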
Among others, the numbers $Y_{n}^{\\left( -k\\right) }\\left( \\lambda \\right)$ have relationships with the Poisson--Charlier polynomials, the Bell polynomials (i.e., exponential polynomials) and other kinds of combinatorial numbers. To see the relations mentioned above, the interested reader may glance at the recent paper \\textup{\\cite{KucukogluAxiom2019}}.\n\nThe Stirling numbers of the first kind, $S_{1}(n,k)$, are defined, for $n, k\\in \\mathbb{N}_0$, by the following recurrence relation:\n\\begin{equation*}\n\tS_{1}(n+1,k)=-nS_{1}(n,k)+S_{1}(n,k-1)\n\\end{equation*}%\nwith the side conditions $S_{1}(0,0)=1$, $S_{1}(0,k)=0$ if $k>0$, $S_{1}(n,0)=0$ if $n>0$, $%\nS_{1}(n,k)=0$ if $k>n$; and these numbers are also given by\n\\begin{equation}\n\t\\left( x\\right) _{n}=\\sum_{k=0}^{n}S_{1}\\left( n,k\\right) x^{k}\n\t\\label{DefinitionFirstStirling}\n\\end{equation}\nand \n\\begin{equation}\n\t\\frac{\\left( \\log (1+t)\\right) ^{k}}{k!}=\\sum_{n=k}^{\\infty }S_{1}(n,k)\\frac{%\n\t\tt^{n}}{n!}\n\t\\label{S1}\n\\end{equation}\n(\\textit{cf}. \\cite{BonaBOOK, CharalambidesBOOK, Comtet, Gradimir, Qi, SimsekFPTA}; and the references cited therein).\n\nThe Cauchy numbers (or the Bernoulli numbers of the second kind), $b_{n} \\left(0\\right)$, are defined by\n\\begin{equation}\n\tb_{n} \\left(0\\right) =\\int\\limits_{0}^{1}\\left( x\\right) _{n}dx \n\t\\label{Cauchy-Int-Formula}\n\\end{equation}\nand\n\\begin{equation}\n\tG_{C}\\left(t\\right):=\\frac{t}{\\log \\left(t+1\\right) }=\\sum_{n=0}^{%\n\t\t\\infty }b_{n} \\left(0\\right)\\frac{t^{n}}{n!} \\label{Cauchy-1}\n\\end{equation}%\n(\\textit{cf}. \\cite[p. 116]{Roman}, \\cite{KimEtAl2016}, \\cite{MerliniCauchy}, \\cite{Qi}). 
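The recurrence for $S_{1}(n,k)$ and the integral formula (\ref{Cauchy-Int-Formula}) combine naturally: expanding $\left(x\right)_n$ via (\ref{DefinitionFirstStirling}) and integrating term by term gives $b_n(0)$ exactly. The following Python sketch (an illustration, not part of the original paper) implements this and reproduces the classical values $1, \frac{1}{2}, -\frac{1}{6}, \frac{1}{4}, -\frac{19}{30}$.

```python
from fractions import Fraction

def stirling1(n, k):
    # Signed Stirling numbers of the first kind via the recurrence
    # S1(n+1, k) = -n*S1(n, k) + S1(n, k-1), with S1(0, 0) = 1.
    if n == 0 and k == 0:
        return 1
    if k < 1 or k > n:
        return 0
    return -(n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

def cauchy(n):
    # b_n(0) = int_0^1 (x)_n dx: expand (x)_n = sum_m S1(n, m) x^m
    # and integrate termwise, giving sum_m S1(n, m) / (m + 1).
    return sum(Fraction(stirling1(n, m), m + 1) for m in range(n + 1))

print([cauchy(n) for n in range(5)])
```

Note that `cauchy(n)` is precisely the finite sum on the right-hand side of the Cauchy--Stirling relation recalled in this section, evaluated in exact rational arithmetic.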
\n\nThe combination of (\\ref{DefinitionFirstStirling}) with (\\ref{Cauchy-Int-Formula}) yields the relation between the Cauchy numbers and the Stirling numbers of the first kind given by\n\\begin{equation}\n\tb_{n} \\left(0\\right)= \\sum\\limits_{m=0}^{n}\\frac{%\n\t\tS_{1}\\left(n,m\\right) }{m+1},\n\t\\label{Cauchy-Int-Formula2}\n\\end{equation}\n(\\textit{cf}. \\cite[p. 294]{Comtet}, \\cite[p. 1908]{MerliniCauchy}, \\cite[p. 114]{Roman}).\n\nThe Daehee numbers, $D_n$, are defined by the following generating function:\n\\begin{equation}\n\tF_D\\left(t\\right):=\\frac{\\log\\left(1+t\\right)}{t}=\\sum_{n=0}^{\\infty}D_n \\frac{t^n}{n!}\n\t\\label{DaeheeNumGF}\n\\end{equation}\nand these numbers are computed, for $n\\in \\mathbb{N}_{0}$, by the following explicit formula:\n\\begin{equation}\n\tD_n=\\frac{\\left(-1\\right)^n n!}{n+1}\n\t\\label{DaeheeNumExpl}\n\\end{equation}\n(\\textit{cf}. \\cite{El-Desouky, KimDaehee, SimsekASCM2016, SimsekASCM2017}).\n\nThe well-known Chu-Vandermonde identity is given by\n\\begin{equation}\n\t\\binom{x+y}{n}=\\frac{1}{n!}\n\t\\sum\\limits_{k=0}^{n}\\binom{n}{k}\\left(x\\right)_{k}\\left(y\\right)_{n-k}\n\t\\label{Chu-Vandermonde}\n\\end{equation}\n(\\textit{cf}. \\cite{Comtet, Jordan, ysimsekMTJPAM}).\n\nThe outline of this paper may briefly be given as follows: \n\nIn Section \\ref{Section-2}, we present various identities and computation formulas containing not only the $\\lambda$-Apostol-Daehee numbers and polynomials, but also Simsek numbers and polynomials, the Stirling numbers of the first kind, the Daehee numbers and the Chu-Vandermonde identity. 
Besides, we derive an infinite series representation for the $\\lambda$-Apostol-Daehee polynomials.\n\nIn Section \\ref{Section-3}, by using functional equations containing the generating functions for the Cauchy numbers and the integrals of the generating functions for the $\\lambda$-Apostol-Daehee numbers and polynomials, we also derive some identities and formulas for these numbers and polynomials.\n\nIn Section \\ref{Section-4}, we give a Mathematica implementation of a formula which computes the $\\lambda$-Apostol-Daehee polynomials in terms of the Simsek polynomials. By this implementation, some plots of the $\\lambda$-Apostol-Daehee polynomials are presented for some randomly selected special cases.\n\nIn Section \\ref{Section-5}, we conclude the paper with some comments and observations on our results.\n\n\\section{Identities containing Apostol-type numbers and polynomials}\n\\label{Section-2}\n\nIn this section, by using the techniques of generating functions and their functional equations, we derive some identities involving not only the Chu-Vandermonde identity, but also some special numbers and polynomials such as the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$, the polynomials $\\mathfrak{D}_n\\left(x;\\lambda\\right)$, the numbers $Y_n\\left(\\lambda\\right)$, the polynomials $Y_n\\left(x;\\lambda\\right)$, the numbers $S_1\\left(n,m\\right)$ and the numbers $D_n$. In addition, we get computation formulas for not only the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$, but also the polynomials $\\mathfrak{D}_n\\left(x;\\lambda\\right)$. 
Moreover, we derive an infinite series representation for the polynomials $\\mathfrak{D}_m\\left(x;\\lambda\\right)$ in terms of the polynomials $Q_{m}\\left(x; \\lambda, n\\right)$.\n\nBy (\\ref{L-ADpoly}), we have\n\\begin{equation*}\n\t\\left(1+\\lambda t\\right)^x=\\frac{\\lambda\\left(1+\\lambda t\\right)-1}{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\n\\end{equation*}\nand\n\\begin{equation*}\n\t\\left(1+\\lambda t\\right)^y=\\frac{\\lambda\\left(1+\\lambda t\\right)-1}{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(y;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{equation*}\nMultiplying these two equations together, we get\n\\begin{equation*}\n\t\\left(1+\\lambda t\\right)^{x+y}=\\left(\\frac{\\lambda\\left(1+\\lambda t\\right)-1}{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}\\right)^2 \\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(y;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{equation*}\nWith the application of the binomial theorem and the Cauchy product rule to the above equation, we get\n\\begin{equation*}\n\t\\sum_{n=0}^{\\infty}\\binom{x+y}{n}\\lambda^n t^n=\\frac{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)^2}{\\left(\\log\\lambda\\right)^2\\left(1+\\frac{\\log \\left(1+\\lambda t\\right)}{\\log\\lambda}\\right)^2} \\sum_{n=0}^{\\infty}\\sum_{j=0}^{n}\\binom{n}{j}\\mathfrak{D}_j\\left(x;\\lambda\\right)\\mathfrak{D}_{n-j}\\left(y;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{equation*}\nThus, we have\n\\begin{eqnarray*}\n\t\\sum_{n=0}^{\\infty}\\binom{x+y}{n}\\lambda^n t^n&=&\n\t\\frac{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)^2}{\\left(\\log\\lambda\\right)^2}\n\t\\sum_{n=0}^{\\infty}\\binom{-2}{n}\\frac{\\left(\\log\\left(1+\\lambda t\\right)\\right)^n}{\\left(\\log 
\\lambda\\right)^n}\\\\\n\t&&\\times\\sum_{n=0}^{\\infty}\\sum_{j=0}^{n}\\binom{n}{j}\\mathfrak{D}_j\\left(x;\\lambda\\right)\\mathfrak{D}_{n-j}\\left(y;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{eqnarray*}\nBy combining (\\ref{S1}) with the above equation, after some elementary calculations, we get\n\\begin{eqnarray*}\n\t\\sum_{n=0}^{\\infty}\\binom{x+y}{n}\\lambda^n t^n&=&\n\t\\frac{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)^2}{\\left(\\log\\lambda\\right)^2}\n\t\\sum_{m=0}^{\\infty}\\sum_{n=0}^{m}\\binom{-2}{n}n!\\frac{S_1\\left(m,n\\right)}{\\left(\\log \\lambda\\right)^n}\\frac{t^m}{m!}\\\\\n\t&&\\times\\sum_{n=0}^{\\infty}\\sum_{j=0}^{n}\\binom{n}{j}\\mathfrak{D}_j\\left(x;\\lambda\\right)\\mathfrak{D}_{n-j}\\left(y;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{eqnarray*}\nBy applying the Cauchy product rule to the above equation, we get\n\\begin{eqnarray*}\n\t\\sum_{m=0}^{\\infty}\\binom{x+y}{m}\\lambda^m t^m&=&\n\t\\frac{\\left(\\lambda^4 t^2+2\\lambda^2\\left(\\lambda-1\\right)t+\\left(\\lambda-1\\right)^2\\right)}{\\left(\\log\\lambda\\right)^2}\\\\\n\t&&\\times\n\t\\sum_{m=0}^{\\infty}\\sum_{k=0}^{m}\\binom{m}{k}\\sum_{n=0}^{k}\\binom{-2}{n}n!\\frac{S_1\\left(k,n\\right)}{\\left(\\log \\lambda\\right)^n}\\\\\n\t&&\\times\\sum_{j=0}^{m-k}\\binom{m-k}{j}\\mathfrak{D}_j\\left(x;\\lambda\\right)\\mathfrak{D}_{m-k-j}\\left(y;\\lambda\\right)\\frac{t^m}{m!}.\n\\end{eqnarray*}\nAfter some elementary calculations and by comparing the coefficients of $\\frac{t^m}{m!}$ on both sides of the above equation, we arrive at the following theorem:\n\\begin{theorem}\n\tLet $m \\in \\mathbb{N}\\setminus\\{0,1\\}$. 
Then we have\n\t\\begin{eqnarray}\n\t\t\\binom{x+y}{m}&=&\\frac{\\lambda^{4-m}}{\\left(\\log\\lambda\\right)^2}m\\left(m-1\\right)A\\left(m-2\\right) \\label{Th-1a}\\\\\n\t\t&&+\\frac{2\\lambda^{2-m}\\left(\\lambda-1\\right)}{\\left(\\log\\lambda\\right)^2}mA\\left(m-1\\right) \\notag\\\\\n\t\t&&+\\frac{\\lambda^{-m}\\left(\\lambda-1\\right)^2}{\\left(\\log\\lambda\\right)^2}A\\left(m\\right),\\notag\n\t\\end{eqnarray}\n\twhere\n\t\\begin{eqnarray*}\n\t\tA\\left(m\\right)=\\sum_{k=0}^{m}\\sum_{n=0}^{k}\\sum_{j=0}^{m-k}\\binom{m}{k}\\binom{-2}{n}\\binom{m-k}{j}\\frac{n!S_1\\left(k,n\\right)\\mathfrak{D}_j\\left(x;\\lambda\\right)\\mathfrak{D}_{m-k-j}\\left(y;\\lambda\\right)}{\\left(\\log \\lambda\\right)^n}.\n\t\\end{eqnarray*}\n\\end{theorem}\n\nCombining (\\ref{Th-1a}) with (\\ref{Chu-Vandermonde}) yields the following corollary:\n\\begin{corollary}\n\tLet $m \\in \\mathbb{N}\\setminus\\{0,1\\}$. Then we have\n\t\\begin{eqnarray}\n\t\t\\sum\\limits_{k=0}^{m}\\binom{m}{k}\\left(x\\right)_{k}\\left(y\\right)_{m-k} &=&\\frac{\\lambda^{4-m}}{\\left(\\log\\lambda\\right)^2}m\\left(m-1\\right)A\\left(m-2\\right) \\label{Th-1}\\\\\n\t\t&&+\\frac{2\\lambda^{2-m}\\left(\\lambda-1\\right)}{\\left(\\log\\lambda\\right)^2}mA\\left(m-1\\right) \\notag\\\\\n\t\t&&+\\frac{\\lambda^{-m}\\left(\\lambda-1\\right)^2}{\\left(\\log\\lambda\\right)^2}A\\left(m\\right).\\notag\n\t\\end{eqnarray}\n\\end{corollary}\n\nBy the combination of (\\ref{L-ADpoly}) with (\\ref{YPolyGenFunc}) and (\\ref{DaeheeNumGF}), we get the following functional equation:\n\\begin{equation}\n\tG_{\\mathfrak{D}}\\left(t, x; \\lambda\\right)=\\left(\\frac{\\log\\lambda}{2}+\\frac{\\lambda t F_D\\left(\\lambda t\\right)}{2}\\right)F\\left( t,x,\\lambda \\right).\n\\end{equation}\nwhich yields\n\\begin{equation*}\n\t\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}=\\left(\\frac{\\log\\lambda}{2}+\\frac{\\lambda t}{2}\\sum_{n=0}^{\\infty}\\lambda^n D_n 
\\frac{t^n}{n!}\\right)\\sum_{n=0}^{\\infty} {Y}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{equation*}\nBy applying the Cauchy product rule to the above equation, after some elementary calculations, we get\n\\begin{eqnarray*}\n\t\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}&=&\\frac{\\log\\lambda}{2}\\sum_{n=0}^{\\infty} {Y}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\\\\\n\t&&+\\frac{1}{2}\\sum_{n=0}^{\\infty}\\sum_{j=0}^{n-1}n\\binom{n-1}{j}\\lambda^{n-j} D_{n-j-1}{Y}_j\\left(x;\\lambda\\right)\\frac{t^n}{n!}.\n\\end{eqnarray*}\nComparing the coefficients of $\\frac{t^n}{n!}$ on both sides of the above equation yields the following theorem:\n\\begin{theorem}\n\tLet $n \\in \\mathbb{N}$. Then we have\n\t\\begin{equation}\n\t\t\\mathfrak{D}_n\\left(x;\\lambda\\right)=\\frac{\\log\\lambda}{2}{Y}_n\\left(x;\\lambda\\right)+\\frac{1}{2}\\sum\\limits_{j=0}^{n-1}n\\binom{n-1}{j}\\lambda^{n-j} D_{n-j-1}{Y}_j\\left(x;\\lambda\\right).\n\t\t\\label{Th-2}\n\t\\end{equation}\n\\end{theorem}\n\nUsing (\\ref{DaeheeNumExpl}) in (\\ref{Th-2}), we get a relation between $\\lambda$-Apostol-Daehee polynomials and Simsek polynomials, given by following corollary:\n\\begin{corollary}\n\tLet $n \\in \\mathbb{N}$. Then we have\n\t\\begin{equation}\n\t\t\\mathfrak{D}_n\\left(x;\\lambda\\right)=\\frac{\\log\\lambda}{2}{Y}_n\\left(x;\\lambda\\right)-\\frac{n!}{2}\\sum\\limits_{j=0}^{n-1}\\left(-1\\right)^{n-j}\\frac{\\lambda^{n-j} {Y}_j\\left(x;\\lambda\\right)}{j!\\left(n-j\\right)}.\n\t\t\\label{Cor1}\n\t\\end{equation}\n\\end{corollary}\n\nSubstituting $x=0$ into (\\ref{Cor1}), we also get a relation, between $\\lambda$-Apostol-Daehee numbers and Simsek numbers, given by following corollary:\n\\begin{corollary}\n\tLet $n \\in \\mathbb{N}$. 
Then we have\n\t\\begin{equation}\n\t\t\\mathfrak{D}_n\\left(\\lambda\\right)=\\frac{\\log\\lambda}{2}{Y}_n\\left(\\lambda\\right)+\\frac{n!}{2}\\sum\\limits_{j=0}^{n-1}\\left(-1\\right)^{n-j-1}\\frac{\\lambda^{n-j}{Y}_j\\left(\\lambda\\right)}{j!\\left(n-j\\right)}.\n\t\t\\label{Cor2}\n\t\\end{equation}\n\\end{corollary}\n\nCombining (\\ref{YnExpl}) with (\\ref{Cor2}), we get a computation formula for the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$, given by the following corollary:\n\\begin{corollary}\n\t\\begin{equation}\n\t\t\\mathfrak{D}_n\\left(\\lambda\\right)=\\frac{(-1)^{n}n!}{\\lambda-1}\\left( \\left( \\frac{\\lambda ^{2}}{%\n\t\t\t\\lambda -1}\\right) ^{n} \\log\\lambda-\\lambda^n\\sum\\limits_{j=0}^{n-1}\\frac{1}{n-j}\\left(\\frac{\\lambda}{\\lambda-1}\\right)^j\\right).\n\t\t\\label{Cor3}\n\t\\end{equation}\n\\end{corollary}\n\n\\begin{remark}\n\tThe computation formula \\textup{(\\ref{Cor3})}, obtained by reduction from the equation \\textup{(\\ref{Th-2})}, may also be obtained by applying the binomial theorem to the generating function for the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$. Meanwhile, for another form of this formula, the interested reader may consult the recent paper \\textup{\\cite[Theorem 8, p. 492]{KucukogluAADM2019}}, in which other methods and generating function families were used in order to achieve the aforementioned formula.\n\\end{remark}\n\nBy (\\ref{Cor3}), we obtain a finite sum whose value is computed via the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$, given by the following corollary:\n\\begin{corollary}\n\tLet $n \\in \\mathbb{N}$.
Then we have\n\t\\begin{equation}\n\t\t\\sum\\limits_{j=0}^{n-1}\\frac{1}{n-j}\\left(\\frac{\\lambda}{\\lambda-1}\\right)^j=(-1)^{n+1}\\frac{\\left(\\lambda-1\\right)\\mathfrak{D}_n\\left(\\lambda\\right)}{n!\\lambda^n}+\\left( \\frac{\\lambda}{%\n\t\t\t\\lambda -1}\\right)^{n} \\log\\lambda.\n\t\t\\label{Cor4}\n\t\\end{equation}\n\\end{corollary}\n\nBy using the Taylor series expansion of the function $\\log\\left(1+\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)\\right)$, by assuming that $| \\lambda\\left(1+\\lambda t\\right)-1 |<1$, in the equation (\\ref{L-ADpoly}), and then by making some simplifications, we get\n\\begin{equation*}\n\t\\sum_{m=0}^{\\infty}\\mathfrak{D}_m\\left(x;\\lambda\\right)\\frac{t^m}{m!}=\\left(1+\\lambda t\\right)^x\\sum_{n=0}^{\\infty}\\left(-1\\right)^n\\frac{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)^{n}}{n+1}.\n\\end{equation*}\nBy combining (\\ref{GenFHigOrdNegYpolyx}) with the above equation, we get\n\\begin{equation*}\n\t\\sum_{m=0}^{\\infty}\\mathfrak{D}_m\\left(x;\\lambda\\right)\\frac{t^m}{m!}=\\sum_{n=0}^{\\infty}\\left(-1\\right)^n \\frac{2^n}{n+1}\\sum_{m=0}^{\\infty }Q_{m}\\left(x; \\lambda,\n\tn\\right) \\frac{t^{m}}{m!}.\n\\end{equation*}\nwhich yields\n\\begin{equation*}\n\t\\sum_{m=0}^{\\infty}\\mathfrak{D}_m\\left(x;\\lambda\\right)\\frac{t^m}{m!}=\\sum_{n=0}^{\\infty }\\sum_{m=0}^{\\infty}\\left(-1\\right)^n \\frac{2^n Q_{m}\\left(x; \\lambda,\n\t\tn\\right)}{n+1} \\frac{t^{m}}{m!}.\n\\end{equation*}\nBy assuming that $| \\lambda-1 |<1$ and comparing the coefficients of $\\frac{t^{m}}{m!}$ on both sides of the above equation yields a relation, between the numbers $\\mathfrak{D}_m\\left(x;\\lambda\\right)$ and the polynomials $Q_{m}\\left(x; \\lambda,\nn\\right)$, given by the following theorem:\n\\begin{theorem}\n\tIf $| \\lambda-1 |<1$, then we have the following infinite series representation for the polynomials 
$\\mathfrak{D}_m\\left(x;\\lambda\\right)$:\n\t\\begin{equation}\n\t\t\\mathfrak{D}_m\\left(x;\\lambda\\right)=\\sum_{n=0}^{\\infty}\\left(-1\\right)^n \\frac{2^n Q_{m}\\left(x; \\lambda,\n\t\t\tn\\right)}{n+1}.\n\t\t\\label{Th-Series-DQ}\n\t\\end{equation}\n\\end{theorem}\n\n\\section{Further identities derived from integral formulas and Cauchy numbers}\n\\label{Section-3}\n\nIn this section, by using functional equations involving the generating functions for the Cauchy numbers and the integrals of the generating functions for the numbers $\\mathfrak{D}_n\\left(\\lambda\\right)$ and the polynomials $\\mathfrak{D}_n\\left(x; \\lambda\\right)$, we derive some identities and formulas.\n\nIntegrating both sides of the equation (\\ref{L-ADpoly}) with respect to the variable $x$ from $0$ to $1$, we get the following integral formula:\n\\begin{eqnarray}\n\t\\int\\limits_{0}^{1} G_{\\mathfrak{D}}\\left(t, x; \\lambda\\right)\\mathrm{d}x&=&\\int\\limits_{0}^{1} \\frac{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}{\\lambda\\left(1+\\lambda t\\right)-1}\\left(1+\\lambda t\\right)^x \\mathrm{d}x\\\\\n\t&=&\\frac{\\lambda t\\left(\\log\\lambda+\\log \\left(1+\\lambda t\\right)\\right)}{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)\\log\\left(1+\\lambda t\\right)},\n\\end{eqnarray}\nwhich, by (\\ref{Cauchy-1}) and (\\ref{L-ADnum}), yields the following functional equation:\n\\begin{eqnarray}\n\t\\int\\limits_{0}^{1} G_{\\mathfrak{D}}\\left(t, x; \\lambda\\right)\\mathrm{d}x=G_{\\mathfrak{D}}\\left(t; \\lambda\\right)G_{C}\\left(\\lambda t\\right).\n\\end{eqnarray}\nCombining the above equation with (\\ref{Cauchy-1}), (\\ref{L-ADnum}) and (\\ref{L-ADpoly}) yields\n\\begin{eqnarray}\n\t\\int\\limits_{0}^{1} \\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\\mathrm{d}x=\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(\\lambda\\right)\\frac{t^n}{n!}\\sum_{n=0}^{%\n\t\t\\infty }\\lambda^n b_{n} \\left(0\\right)\\frac{t^{n}}{n!}.\n\\end{eqnarray}\nBy
applying the Cauchy product rule to the right-hand side of the above equation, after some elementary calculations, we get\n\\begin{eqnarray}\n\t\\sum_{n=0}^{\\infty}\\int\\limits_{0}^{1}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\mathrm{d}x\\frac{t^n}{n!}=\\sum_{n=0}^{\\infty}\\sum_{m=0}^{n}\\binom{n}{m}\\lambda^m b_{m} \\left(0\\right)\\mathfrak{D}_{n-m}\\left(\\lambda\\right)\\frac{t^{n}}{n!}.\n\\end{eqnarray}\nComparing the coefficients of $\\frac{t^{n}}{n!}$ on both sides of the above equation yields the following theorem:\n\\begin{theorem}\n\t\\begin{eqnarray}\n\t\t\\int\\limits_{0}^{1}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\mathrm{d}x=\\sum_{m=0}^{n}\\binom{n}{m}\\lambda^m b_{m} \\left(0\\right)\\mathfrak{D}_{n-m}\\left(\\lambda\\right).\n\t\t\\label{Th-Cauchy}\n\t\\end{eqnarray}\n\\end{theorem}\n\n\\begin{remark}\n\tBy using the generating function for the $k$-th order $\\lambda$-Apostol-Daehee polynomials, Choi \\textup{\\cite[Theorem 5, p.1854]{ChoiFAR}} gave the following integral formula:\n\t\\begin{equation*}\n\t\t\\int\\limits_{\\alpha}^{\\alpha+1}\\mathfrak{D}^{\\left(k\\right)}_n\\left(x;\\lambda\\right)\\mathrm{d}x=\\sum_{m=0}^{n}m!\\binom{n}{m}\\lambda^{m}\\mathfrak{D}^{\\left(k\\right)}_{n-m}\\left(\\alpha;\\lambda\\right)p_m.\n\t\\end{equation*}\n\tIf we substitute $k=1$ and $\\alpha=0$ into the above formula, we get\n\t\\begin{equation*}\n\t\t\\int\\limits_{0}^{1}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\mathrm{d}x=\\sum_{m=0}^{n}m!\\binom{n}{m}\\lambda^{m}\\mathfrak{D}_{n-m}\\left(\\lambda\\right)p_m.\n\t\\end{equation*}\n\tWhen we compare the above formula with the equation \\textup{(\\ref{Th-Cauchy})}, it is shown that the numbers $m!p_m$ considered in the formula above actually correspond to the Cauchy numbers $b_{m} \\left(0\\right)$ which is obtained by the techniques of generating functions and their functional equations. 
Thus, we conclude that Choi \\textup{\\cite{ChoiFAR}} used the numbers $b_{m} \\left(0\\right)$ in the modified form\n\t\\begin{equation*}\n\t\tm!p_m=b_{m} \\left(0\\right)\n\t\\end{equation*}\n\tin order to obtain an integral formula for the higher-order $\\lambda$-Apostol-Daehee polynomials.\n\\end{remark}\n\nIntegrating both sides of the equation (\\ref{L-ADpoly}) with respect to the variable $x$ from $0$ to $z$, we get the following integral formula:\n\\begin{eqnarray}\n\t\\int\\limits_{0}^{z} G_{\\mathfrak{D}}\\left(t, x; \\lambda\\right)\\mathrm{d}x&=&\\int\\limits_{0}^{z} \\frac{\\log\\lambda+\\log \\left(1+\\lambda t\\right)}{\\lambda\\left(1+\\lambda t\\right)-1}\\left(1+\\lambda t\\right)^x \\mathrm{d}x\\\\\n\t&=&\\frac{\\left(\\left(1+\\lambda t\\right)^z-1\\right)\\left(\\log\\lambda+\\log \\left(1+\\lambda t\\right)\\right)}{\\left(\\lambda\\left(1+\\lambda t\\right)-1\\right)\\log\\left(1+\\lambda t\\right)},\n\\end{eqnarray}\nwhich, by (\\ref{L-ADnum}) and (\\ref{L-ADpoly}), yields the following functional equation:\n\\begin{eqnarray}\n\t\\int\\limits_{0}^{z} G_{\\mathfrak{D}}\\left(t, x; \\lambda\\right)\\mathrm{d}x=\\frac{\\left( G_{\\mathfrak{D}}\\left(t, z; \\lambda\\right)- G_{\\mathfrak{D}}\\left(t; \\lambda\\right)\\right)G_{C}\\left(\\lambda t\\right)}{\\lambda t}.\n\\end{eqnarray}\nCombining the above equation with (\\ref{Cauchy-1}), (\\ref{L-ADnum}) and (\\ref{L-ADpoly}) yields\n\\begin{eqnarray*}\n\t\\int\\limits_{0}^{z} \\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\\mathrm{d}x=\\frac{1}{\\lambda t}\\left( \\sum_{n=0}^{\\infty}\\left(\\mathfrak{D}_n\\left(z;\\lambda\\right)-\\mathfrak{D}_n\\left(\\lambda\\right)\\right)\\frac{t^n}{n!}\\right)\\sum_{n=0}^{\\infty }\\lambda^n b_{n} \\left(0\\right)\\frac{t^{n}}{n!}.\n\\end{eqnarray*}\nBy applying the Cauchy product rule to the right-hand side of the above equation, after some elementary calculations, we get\n\\begin{eqnarray*}\n\t\\int\\limits_{0}^{z}
\\sum_{n=0}^{\\infty}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\frac{t^n}{n!}\\mathrm{d}x&=&\\sum_{n=0}^{\\infty}\\frac{1}{n+1}\\sum_{m=0}^{n+1}\\binom{n+1}{m}\\lambda^{m-1} b_{m} \\left(0\\right)\\\\\n\t&&\\times\\left(\\mathfrak{D}_{n+1-m}\\left(z;\\lambda\\right)-\\mathfrak{D}_{n+1-m}\\left(\\lambda\\right)\\right)\\frac{t^{n}}{n!}.\n\\end{eqnarray*}\nComparing the coefficients of $\\frac{t^{n}}{n!}$ on both sides of the above equation yields the following theorem:\n\\begin{theorem}\n\t\\begin{eqnarray}\n\t\t&&\\int\\limits_{0}^{z}\\mathfrak{D}_n\\left(x;\\lambda\\right)\\mathrm{d}x\\\\\n\t\t&&=\\frac{1}{n+1}\\sum_{m=0}^{n+1}\\binom{n+1}{m}\\lambda^{m-1} b_{m} \\left(0\\right)\\left(\\mathfrak{D}_{n+1-m}\\left(z;\\lambda\\right)-\\mathfrak{D}_{n+1-m}\\left(\\lambda\\right)\\right).\\notag\n\t\\end{eqnarray}\n\\end{theorem}\n\n\\section{Implementation of computation formulas involving $\\lambda$-Apostol-Daehee polynomials}\n\\label{Section-4}\n\nIn this section, by implementing some of our results with the aid of the Wolfram programming language in Mathematica \\cite{WolframCloud}, we compute a few values of the $\\lambda$-Apostol-Daehee polynomials. In addition, we also give some illustrations involving two dimensional plots of the $\\lambda$-Apostol-Daehee polynomials.\n\nWe first give Mathematica implementation of the equation (\\ref{Th-2}) in Implementation \\ref{YlistDpoly} in which we utilized from the Implementation \\ref{YlistYpoly} and the Implementation \\ref{YlistNum} given by Simsek and Kucukoglu \\cite{SimsekKucukoglu2020Chapter} in order to compute the rational functions $Y_{n}\\left(\\lambda \\right)$ and the polynomials $Y_{n}\\left(x; \\lambda \\right)$.\n\n\\begin{lstlisting}[language=Mathematica, label=YlistDpoly, caption={Let the expression \\texttt{lparameter} denote the parameter $\\lambda$ of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$. 
Then, the following Mathematica code returns the values of these polynomials}.]\nDPoly[x_,lparameter_,n_]:=(Log[lparameter]\/2)*YPoly[x,lparameter,n]+(Factorial[n]\/2)*Sum[((-1)^(n-j-1))*((((lparameter)^(n-j))*YPoly[x,lparameter,j])\/(Factorial[j]*(n-j))), {j,0,n-1}]\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Mathematica, label=YlistYpoly, caption={Let the expression \\texttt{lparameter} denote the parameter $\\lambda$ of the polynomials $Y_{n}\\left(x; \\lambda \\right)$. Then, the following Mathematica code returns the values of these polynomials} (\\textit{cf}. \\cite{SimsekKucukoglu2020Chapter}).]\nYPoly[x_,lparameter_,n_]:=Sum[Binomial[n,j]*(lparameter^(n-j))*FactorialPower[x, n-j, 1]*YNum[lparameter,j], {j,0,n}]\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Mathematica, label=YlistNum, caption={Let the expression \\texttt{lparameter} denote the parameter $\\lambda$. Then, the following Mathematica code returns the values of the rational functions $Y_{n}\\left(\\lambda \\right)$} (\\textit{cf}. 
\\cite{SimsekKucukoglu2020Chapter}).]\nYNum[l_,n_]:=2*((-1)^n)* (Factorial[n]\/(l-1))*(((l^2)\/(l-1))^n)\n\\end{lstlisting}\n\nBy using the Implementation \\ref{YlistDpoly} in Mathematica, we compute first three values of the $\\lambda$-Apostol-Daehee polynomials as follows:\n\\begin{eqnarray*}\n\t&&\\mathfrak{D}_0\\left(x;\\lambda\\right)=\\frac{\\log \\lambda}{\\lambda -1}, \\\\\n\t&&\\mathfrak{D}_1\\left(x;\\lambda\\right)=\\frac{\\lambda}{\\lambda -1}+\\left(\\frac{\\lambda x}{\\lambda -1} -\\frac{\\lambda^2 }{\\left(\\lambda -1\\right)^2}\\right)\\log \\lambda,\n\t\\\\\n\t&&\\mathfrak{D}_2\\left(x;\\lambda\\right)=-\\frac{2\\lambda^3}{\\left(\\lambda-1\\right)^2}+\\frac{\\lambda^2 \\left(2x-1\\right)}{\\lambda-1}+\\left(\\frac{2\\lambda^4}{\\left(\\lambda -1\\right)^3}-\\frac{2\\lambda^3 x}{\\left(\\lambda -1\\right)^2}+\\frac{\\lambda^2 x\\left(x-1\\right)}{\\lambda-1}\\right)\n\t\\log \\lambda,\n\\end{eqnarray*}\nand so on.\n\nBy using the Implementation \\ref{YlistDpoly} and the \\texttt{Plot} command in Mathematica, we also give some two dimensional plots of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$ in Figure \\ref{PlotsDPoly1a}-Figure \\ref{PlotsDPoly3}.\n\nFigure \\ref{PlotsDPoly1a}-Figure \\ref{PlotsDPoly1c} shows the effects of the parameter $x$ on the graphs of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{D_Poly_x_1_n_0-3_lambda_1_5-3_5.png} \n\t\\caption{\\textmd{Plots of the polynomials $\\mathfrak{D}_n\\left(1;\\lambda \\right)$ for randomly selected special case when $\\lambda\\in\\left[\\frac{3}{2},\\frac{7}{2}\\right]$ with $n \\in \\{0,1,2,3\\}$.}}\n\t\\label{PlotsDPoly1a}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{D_Poly_x_2_n_0-3_lambda_1_5-3_5.png} \n\t\\caption{\\textmd{Plots of the polynomials $\\mathfrak{D}_n\\left(2;\\lambda \\right)$ for randomly selected special case when 
$\\lambda\\in\\left[\\frac{3}{2},\\frac{7}{2}\\right]$ with $n \\in \\{0,1,2,3\\}$.}}\n\t\\label{PlotsDPoly1b}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{D_Poly_x_3_n_0-3_lambda_1_5-3_5.png} \n\t\\caption{\\textmd{Plots of the polynomials $\\mathfrak{D}_n\\left(3;\\lambda \\right)$ for randomly selected case when $\\lambda\\in\\left[\\frac{3}{2},\\frac{7}{2}\\right]$ with $n \\in \\{0,1,2,3\\}$.}}\n\t\\label{PlotsDPoly1c}\n\\end{figure}\n\nFigure \\ref{PlotsDPoly3} shows the effects of the parameter $\\lambda$ on the graphs of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$.\n\n\\begin{figure}[H]\n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.65\\textwidth]{D_Poly_x_-2_2_n_0-3_lambda_3_2.png} \n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\\\ \n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.65\\textwidth]{D_Poly_x_-2_2_n_0-3_lambda_e.png}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\\\\n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.65\\textwidth]{D_Poly_x_-2_2_n_0-3_lambda_7_2.png} \n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{\\textmd{Plots of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$ for the randomly selected special cases when $x\\in\\left[-2,2\\right]$ and $n \\in \\{0,1,2,3\\}$ \\textbf{(a)} $\\lambda=\\frac{3}{2}$; \\textbf{(b)} $\\lambda=e$; \\textbf{(c)} $\\lambda=\\frac{7}{2}$; \\textbf{(d)} $\\lambda=e^2$.}}\n\t\\label{PlotsDPoly3}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\ContinuedFloat\n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.65\\textwidth]{D_Poly_x_-2_2_n_0-3_lambda_ee.png} \n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{\\textmd{Plots of the polynomials $\\mathfrak{D}_n\\left(x;\\lambda \\right)$ for the randomly selected special cases when $x\\in\\left[-2,2\\right]$ and $n \\in \\{0,1,2,3\\}$ \\textbf{(a)} 
$\\lambda=\\frac{3}{2}$; \\textbf{(b)} $\\lambda=e$; \\textbf{(c)} $\\lambda=\\frac{7}{2}$; \\textbf{(d)} $\\lambda=e^2$.}}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{Section-5}\n\nIn this paper, we present various identities and computation formulas containing not only the $\\lambda$-Apostol-Daehee numbers and polynomials, but also Simsek numbers and polynomials, the Stirling numbers of the first kind, the Daehee numbers, and the Chu-Vandermonde identity. Furthermore, by using functional equations containing the generating functions for the Cauchy numbers and the integrals of the generating functions for the $\\lambda$-Apostol-Daehee numbers and polynomials, we also derive some identities and formulas for these numbers and polynomials. In addition, we give a Mathematica implementation of a computation formula which computes the $\\lambda$-Apostol-Daehee polynomials in terms of the polynomials $Y_n\\left(x; \\lambda\\right)$. With the aid of this implementation, we also give some plots which help the reader analyze the behaviour of the $\\lambda$-Apostol-Daehee polynomials for some randomly selected special cases of their parameters. In conclusion, the results of this paper are potentially useful to researchers working not only in computational mathematics, discrete mathematics and combinatorics, but also in other related fields.\n\nFor future studies, it is planned to investigate connections of the $\\lambda$-Apostol-Daehee numbers with some special functions such as the Bernstein basis functions, which possess many applications not only in approximation theory, but also in the construction of the Bezier curves widely used in computer-aided geometric design (\\textit{cf}.
\\cite{AcikgozSerkan, ErkusDuman, Duman, Lorentz,FPTASimsek2013,SimsekBVP2013, SimsekHJMS2014, SimsekMMAS2015, Simsek Acikgoz} and also cited references therein).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHuman-centered robots \\tr{(e.g., self-driving cars, assistive robots, etc.)} operate in close proximity to humans. When designing planning algorithms for human-centered robots, it is critical for the robot to reason about the mutual influence between itself and human actors. Such a mutual dependency can be formulated as a general-sum game, in which a standard approach is to assume that each agent is a perfectly rational, expected utility maximizer, who simultaneously responds optimally to all the others (i.e., operates under equilibrium strategies) \\cite{myerson2013game, schwarting2019social}. However, experimental studies \\cite{goeree2001ten, crawford2007level, wright2014level} suggest that human behaviors often systemically deviate from equilibrium behaviors due to their latent cognitive states: \\textit{bounded intelligence} (cognitive limitation) and \\textit{irrationality} (tendency to make errors). Therefore, a robot must account for its human partner's cognitive states for seamless and safe interactions.\n\nRecent works exploited the leader-follower model \\cite{Sadighinformation,sadigh2016planning,fisac2019hierarchical,stefansson2019human} and the level-$k$ model \\cite{LiUnsignalized, tian2018adaptive, Sisi2019} to equip robots with the ability to reason about humans' non-equilibrium behaviors. 
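The closed-form computation formula for the numbers $\mathfrak{D}_n(\lambda)$ derived above can be cross-checked against the Taylor coefficients of the generating function $G_{\mathfrak{D}}(t;\lambda)=\frac{\log\lambda+\log(1+\lambda t)}{\lambda(1+\lambda t)-1}$: clearing the denominator gives the coefficient recurrence $(\lambda-1)g_n+\lambda^2 g_{n-1}=a_n$, with $a_0=\log\lambda$, $a_n=(-1)^{n+1}\lambda^n/n$ for $n\geq 1$, and $\mathfrak{D}_n(\lambda)=n!\,g_n$. A minimal Python sketch of this cross-check (an independent verification, not part of the paper's Mathematica code; the function names are ours):

```python
import math

def daehee_series(lam, N):
    """D_n(lam) = n! * g_n, where g_n are the Taylor coefficients of
    G(t) = (log(lam) + log(1 + lam*t)) / (lam*(1 + lam*t) - 1),
    computed from the recurrence (lam-1)*g_n + lam^2*g_{n-1} = a_n."""
    a = [math.log(lam)] + [(-1) ** (n + 1) * lam ** n / n for n in range(1, N)]
    g = [a[0] / (lam - 1)]
    for n in range(1, N):
        g.append((a[n] - lam ** 2 * g[n - 1]) / (lam - 1))
    return [math.factorial(n) * gn for n, gn in enumerate(g)]

def daehee_closed(n, lam):
    """Closed-form D_n(lam) from the corollary derived above."""
    s = sum((lam / (lam - 1)) ** j / (n - j) for j in range(n))
    return ((-1) ** n * math.factorial(n) / (lam - 1)
            * ((lam ** 2 / (lam - 1)) ** n * math.log(lam) - lam ** n * s))
```

For $\lambda=2$ both computations give $\mathfrak{D}_0(\lambda)=\log 2$ and $\mathfrak{D}_1(\lambda)=2-4\log 2$, consistent with setting $x=0$ in the values of $\mathfrak{D}_n(x;\lambda)$ computed from the Mathematica implementation.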
These planners either assign humans' latent states \\textit{a priori}, omitting humans' distinct cognitive characteristics, or passively adapt to the humans' latent states, sacrificing the benefits from actively learning the latent states (Sec.~\\ref{sec: related work}).\n\nIn this work, we propose an anytime game-theoretic planning framework that integrates iterative reasoning models, a partially observable Markov decision process (POMDP), and chance-constrained Monte-Carlo belief tree search. Drawing inspiration from behavioral game theory, we model humans' intelligence levels and degrees of rationality as their latent cognitive states, capturing their heterogeneous cognitive limitations and tendencies to make errors. Rather than passively adapting to humans' latent states when planning, our approach enables the robot to actively and safely learn the latent states to achieve its goal more effectively \\tr{without losing the ability of real-time execution}. Our key insight is: \\textit{Human-centered robots can exploit the mutual influence in interactions to design actions that reveal their human partners' cognitive limitations and degrees of rationality. By actively reasoning about these latent states, robots can achieve more effective planning.} \nOverall, we make the following contributions:\n\n\n\n\n\n\\noindent\n\\textbf{An active anytime game-theoretic planner.} We formalize the robot planning problem in a human-robot team as a POMDP with the human's cognitive states as latent states. The POMDP is approximately solved by an open-loop Monte-Carlo belief tree search algorithm in an anytime manner. 
Coupled with explicit realization of active information gathering on the latent states, chance-constrained branching, and tailored terminal value functions, our planner enables the robot to safely and adaptively balance between exploration (learning the human's latent states) and exploitation (maximizing utility).\n\n\n\\noindent\n\\textbf{Application of our framework to autonomous driving.} The proposed behavioral planner is connected with an off-the-shelf low-level motion control layer \\cite{borrelli2017predictive} to achieve feedback control for an autonomous car represented by a high-fidelity model. Simulations and user studies are conducted to show the effectiveness of our planner compared to baselines.\n\n\\vspace{-0.13cm}\n\\section{Related Work}\\label{sec: related work}\n\n\n\\noindent\\textbf{Game-theoretic planning for human-centered robots.} \\tr{Our work is related to \\cite{Sadighinformation, sadigh2016planning, fisac2019hierarchical, stefansson2019human, LiUnsignalized, tian2018adaptive, Sisi2019}. In \\cite{sadigh2016planning, fisac2019hierarchical, stefansson2019human}, the robot exploits the Stackelberg game \\cite{simaan1973additional} and models its human partner as a \\textit{follower} who accommodates the robot's planned actions. The follower model is homogeneous (the human is always the follower w.r.t. the robot), thus the robot may behave poorly when the human does not behave like a follower. The approach in \\cite{Sadighinformation} allows the robot to \\textit{actively} ``probe\" the human's latent states, but the underlying human model is still a pure follower model. In addition, the safety of the planner is not explicitly enforced in \\cite{Sadighinformation}. In \\cite{LiUnsignalized, tian2018adaptive, Sisi2019}, the robot exploits the level-$k$ model \\cite{costa2001cognition} to model the human as an agent who reasons under various intelligence levels. 
While the level-$k$ model is heterogeneous, it assumes that humans with lower-levels of intelligence best respond to higher-level humans, omitting humans' potential irrationality. Furthermore, the planners in \\cite{LiUnsignalized, tian2018adaptive, Sisi2019} \\textit{passively} adapt to the human's intelligence level, leading to less effective plans. \\textit{Our approach is different from \\cite{Sadighinformation, sadigh2016planning, fisac2019hierarchical, stefansson2019human} since our planner does not assume humans' cognitive characteristics in interactions a \\textit{priori} but reasons about them when planning. Our work is also distinguished from \\cite{LiUnsignalized, tian2018adaptive, Sisi2019} by enabling the robot to safely and actively learn the human's cognitive states to maximize utility more effectively without losing the ability of real-time execution.}}\n\n\n\n\n\n\n\n\n\\noindent\\textbf{Solution methods for POMDP.} A POMDP is a framework for planning under uncertainty. Various approaches have been proposed to approximate POMDPs, including point-based methods \\cite{pineau2003point, porta2006point, kurniawati2008sarsop}, open-loop strategies \\cite{yu2005open, weinstein2013open,perez2015open, phan2019memory}, and Monte-Carlo tree search \\cite{silver2010monte,lim2019sparse}. Partially observable Monte-Carlo planning (POMCP) \\cite{silver2010monte} performs well, though building a closed-loop search tree in large games is computationally expensive. Open-loop strategies condition action selection on previous action sequences. They use a much smaller search space by sacrificing the ability of active information gathering, and achieve competitive performance compared to closed-loop planning under computational constraints \\cite{weinstein2013open}. 
\\textit{Our approach combines the strengths from POMCP and open-loop strategies, achieving real-time active game-theoretic planning.}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\vspace{-0.2cm}\n\\section{Problem formulation}\\label{sec: problem formulation}\n\n\\noindent\n\\textbf{Human-robot team formalization.}\nWe formalize the human-robot team as a two-player dynamic game represented by the tuple $\\mathcal{G} = <\\mathcal{P}, \\tilde{\\mathcal{S}}, \\mathcal{A}, f, r_{\\mathcal{R}}, r_\\mathcal{H}>$, where $\\mathcal{P} = \\{\\mathcal{R},\\mathcal{H}\\}$ represents the two players with $\\mathcal{R}$ denoting the robot and $\\mathcal{H}$ denoting the human; $\\tilde{\\mathcal{S}} = \\tilde{\\mathcal{S}}_{\\mathcal{R}}\\times \\tilde{\\mathcal{S}}_{\\mathcal{H}}$ and $\\mathcal{A} = \\mathcal{A}_{\\mathcal{R}} \\times \\mathcal{A}_{\\mathcal{H}}$ are, respectively, the joint fully-observable state and action spaces of the two agents; the function $f$ governs the evolution of the joint fully-observable state and is defined by the following dynamic model: $\\tilde{s}_{t+1} = f(\\tilde{s}_t,a^{\\mathcal{R}}_t, a^{\\mathcal{H}}_t)$ ( $\\tilde{s}_t \\in \\tilde{\\mathcal{S}}$, $a^{\\mathcal{R}}_t\\in\\mathcal{A}_{\\mathcal{R}}$, $a^{\\mathcal{H}}_t\\in\\mathcal{A}_{\\mathcal{H}}$); $r_{(\\cdot)}: \\tilde{\\mathcal{S}} \\rightarrow \\mathbb{R}$ is the reward function of an agent\n\n\\noindent\n\\textbf{Robot planning as a POMDP.} We consider planning from the robot's perspective. The fully-observable state $\\tilde{s}$ of the human-robot team represents measurable variables (e.g., position, speed, etc.). In addition, the human also has latent states (e.g., preference, trust, cognitive limitation, etc.) that characterize his\/her cognition and reasoning; such latent states cannot be observed directly, and therefore must be inferred from interactions. 
We let $\\theta \\in \\Theta$ denote the human's latent states, and we consider an augmented state space $\\mathcal{S} = \\tilde{\\mathcal{S}} \\times \\Theta$. As the robot's knowledge about the augmented state $s \\in \\mathcal{S}$ is incomplete, it maintains a belief distribution over $\\mathcal{S}$ at each discrete time step $t$, namely, the robot maintains the belief state $b_t := [\\mathbb{P}(s_t = s_1),\\dots,\\mathbb{P}(s_t = s_{|\\mathcal{S}|})]^{\\intercal}$. We formulate the robot planning problem as a POMDP defined by the tuple $<\\mathcal{G}, \\mathcal{S}, \\mathcal{B}, \\Omega, \\mathcal{Z}, \\rho, r'_{\\mathcal{R}}, \\mathbb{O}_{\\text{safe}}>$, where $\\mathcal{G}$ denotes the dynamic game model defined above; $\\mathcal{S}$ is the augmented state space; $\\mathcal{B}$ represents the space of probability distributions over $\\mathcal{S}$ ($b_t\\in\\mathcal{B}$); $\\Omega$ is the finite observation space; \\small $\\mathcal{Z}: \\Omega \\times \\mathcal{S} \\rightarrow [0,1]$ \\normalsize is a probability function specifying the probability of receiving an observation in a state; the belief dynamics function $\\rho: \\mathcal{B} \\times \\mathcal{A}_{\\mathcal{R}} \\times \\Omega\\rightarrow \\mathcal{B}$ governs the belief state evolution and is defined as $b_{t+1} = \\rho(b_t,a^{\\mathcal{R}}_t,o_{t+1})$. Given an initial belief state $b_t$, the robot executes the action $a^{\\mathcal{R}}_t$, receives the observation $o_{t+1}$ at time step $t+1$, and updates its belief accordingly. $r'_{\\mathcal{R}}: \\mathcal{B} \\times \\mathcal{A}_{\\mathcal{R}} \\rightarrow \\mathbb{R}$ denotes the reward function of the robot in belief space (defined in Sec.~\\ref{sec: embed human model}); $\\mathbb{O}_{\\text{safe}}\\subseteq \\tilde{\\mathcal{S}}$ represents the set of safe states of the robot.\n\nWe let $\\pi_{\\mathcal{R}}: \\mathcal{B} \\rightarrow \\mathcal{A}_{\\mathcal{R}}$ denote a robot's deterministic policy. 
Given belief state $b_t$, the robot maximizes its value: \n\\vspace{-0.2cm}\n\\begin{align}\\label{equ: closed loop policy}\n\\pi^{*}_{\\mathcal{R}} = \\argmax_{\\pi} V_{\\mathcal{R}}^{\\pi}(b_t),\n\\end{align}\n\\normalsize\n\\vspace{-0.45cm}\n\n\\noindent\nwhere \\small $V_{\\mathcal{R}}^{\\pi}(b_t) = \\mathbb{E}_{\\mathcal{Z}} \\left[\\sum_{\\tau=0}^{\\infty} \\gamma^\\tau r'_\\mathcal{R}(b_{t+\\tau}, a_{t+\\tau}) \\big| a_{t+\\tau}=\\pi(b_{t+\\tau})\\right]$ \\normalsize is the value function representing the robot's expected return starting from $b_t$, subject to its policy and the belief dynamics function.\nNote that the robot needs to reason about both its own actions and its human partner's responses due to their mutual dependence. The POMDP formulation allows us to condition the robot's behaviors on the inferred latent states.\n\n\n\n\n\n\\vspace{-0.2cm}\n\\section{Human latent states modeling}\\label{sec: internal state modeling}\n\n\n\\subsection{Iterative Reasoning Model}\n\nWhen studying interactions in games, players are commonly assumed to adopt the Nash equilibrium solution: each player has unlimited computational resources and responds optimally to the others. \nIn real life, however, humans are known to act irrationally and to have cognitive limitations. Iterative reasoning models from behavioral game theory have been shown to better characterize humans' reasoning capabilities in simultaneous games \\cite{wright2014level}. Examples of iterative reasoning models include the level-$k$ model \\cite{costa2001cognition}, the cognitive hierarchy model \\cite{camerer2004cognitive}, and the quantal level-$k$ model \\cite{stahl1994experimental}. 
All these models aim to capture humans' cognitive limitations and share a common feature: they model humans as agents with heterogeneous \\textit{bounds} on their reasoning abilities, i.e., human agents can only perform a finite number of iterations of reasoning, and such an intelligence bound is referred to as the \\textit{intelligence level}. Among the various iterative reasoning models, the quantal level-$k$ model is the state-of-the-art \\cite{WRIGHT201716}.\n\n\n\\vspace{-0.2cm}\n\\subsection{Human Quantal Level-k Reasoning Model}\n\\label{sec: human qlk reasoning}\n\n\\noindent\n\\textbf{Quantal best response and rationality coefficient.} One of the key components of the quantal level-$k$ (ql-$k$) model is the quantal best response. The notion behind quantal best response is that human players are more likely to select actions with higher expected future rewards \\cite{mckelvey1995quantal}. Formally, we define the quantal best response function as follows: let $Q^{i}(\\tilde{s},a^i|a^{-i})$ denote agent $i$'s expected total reward ($i\\in\\mathcal{P}$) when executing $a^{i}$ in $\\tilde{s}$ against an action $a^{-i}$ from his\/her opponent $-i$.\nThen a quantal best response by agent $i$ to agent $-i$ is a mixed policy:\n\n\\vspace{-0.4cm}\n\\small\n\\begin{align}\\label{equ: quantal best response}\n\\hspace{+0.5cm}\\pi^{i}(\\tilde{s},a^i|a^{-i}) = \\frac{\\text{exp}\\big( \\lambda^i Q^{i}(\\tilde{s},a^i|a^{-i})\\big)}{\\sum_{a'\\in \\mathcal{A}_{i}} \\text{exp}\\big(\\lambda^i Q^{i}(\\tilde{s},a'|a^{-i})\\big)} \\enspace,\n\\end{align}\n\\vspace{-0.3cm}\n\\normalsize\n\n\\noindent where $\\lambda^i \\in (0,1]$ is the \\textit{rationality coefficient} that controls the degree to which agent $i$ conforms to optimal behaviors. 
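The quantal best response \\eqref{equ: quantal best response} is a softmax over expected returns, with $\\lambda^i$ acting as an inverse temperature. A minimal standalone sketch (NumPy; the Q-values and $\\lambda$ values below are hypothetical, not taken from the paper):

```python
import numpy as np

def quantal_best_response(q_values, lam):
    """Softmax over Q-values with rationality coefficient lam.

    q_values: Q^i(s, a | a^{-i}) for each action of agent i.
    lam: larger lam -> distribution concentrates on the best action.
    """
    logits = lam * np.asarray(q_values, dtype=float)
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# A more rational agent (larger lam) puts more mass on the best action.
p_low = quantal_best_response([1.0, 2.0, 0.5], lam=0.1)
p_high = quantal_best_response([1.0, 2.0, 0.5], lam=5.0)
```

Both distributions favor the second action, but the high-$\\lambda$ one does so far more sharply, which is exactly the behavior the rationality coefficient is meant to capture.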
In general, the larger the $\\lambda$ is, the more rational the human is.\n\n\n\n\\noindent\n\\textbf{Human quantal level-$k$ policies.} In the ql-$k$ model, the iterative reasoning process starts from ql-$0$ agents who are\nnon-strategic reasoners.\nThen, a ql-$k$ agent, $k\\in\\mathbb{N}^+$, assumes the other agents are ql-$(k-1)$ agents, predicts their ql-$(k-1)$ policies, and quantally best responds to the predicted ql-$(k-1)$ policies. On the basis of ql-$0$ policies, the ql-$k$ policies are defined for every $i\\in\\mathcal{P}$, for every $\\lambda\\in\\Lambda$, and for every $k = 1,\\dots,k_{\\text{max}}$ through a sequential and iterative process. Specifically, given an initial state $\\tilde{s}_t \\in \\tilde{\\mathcal{S}}$, a ql-$k$ agent $i$ maximizes the following objective: $\\max_{\\pi^{i,k,\\lambda^i}} V^{i,k}(\\tilde{s}_t)$, where \\small $V^{i,k}(\\tilde{s}_t)= \\mathbb{E}_{\\pi^{*,-i,k-1,\\lambda^{-i}}}\\Big[ \\sum_{\\tau=0}^{\\infty}\\gamma^\\tau r_{i}(\\tilde{s}_{t+\\tau})\\Big]$ \\normalsize is the ql-$k$ value function of agent $i$ and \\small $\\pi^{*,-i,k-1,\\lambda^{-i}}: \\tilde{\\mathcal{S}}\\times\\mathcal{A}_{-i} \\rightarrow [0,1]$ \\normalsize is the predicted ql-$(k-1)$ policy of agent $-i$. The optimal value function satisfies the following Bellman equation: \\footnotesize $ V^{*,i,k}(\\tilde{s}) = \\mathcal{B}\\ V^{*,i,k}(\\tilde{s})=\\max_{a^i\\in\\mathcal{A}_i} \\mathbb{E}_{\\pi^{*,-i,k-1,\\lambda^{-i}}}\\Big[r_{i}(\\tilde{s}') + \\gamma V^{*,i,k}(\\tilde{s}') \\big |\\tilde{s}' = f(\\tilde{s},a^i,a^{-i}), a^{-i} \\sim \\pi^{*,-i,k-1,\\lambda^{-i}}\\Big]$ \\normalsize, and can be determined via value iteration. 
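The ql-$k$ computation above is standard value iteration against the \\textit{fixed} predicted ql-$(k-1)$ policy. A tabular sketch (the states, dynamics, and rewards below are illustrative stand-ins for the quantities in the Bellman equation, not the paper's implementation):

```python
def qlk_value_iteration(states, actions_i, f, r_i, pi_j, gamma=0.9, sweeps=200):
    """Value iteration for agent i holding the opponent's predicted
    ql-(k-1) policy pi_j fixed:
    V(s) = max_ai E_{aj ~ pi_j}[ r_i(s') + gamma * V(s') ], s' = f(s, ai, aj).
    """
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            best = float("-inf")
            for ai in actions_i:
                val = 0.0
                for aj, p in pi_j[s].items():   # expectation over opponent
                    s2 = f(s, ai, aj)
                    val += p * (r_i(s2) + gamma * V[s2])
                best = max(best, val)
            V[s] = best
    return V

# Toy chain: advancing toward state 3, which pays reward 1 per step.
V = qlk_value_iteration(
    states=[0, 1, 2, 3],
    actions_i=[0, 1],
    f=lambda s, ai, aj: min(s + ai, 3),
    r_i=lambda s: 1.0 if s == 3 else 0.0,
    pi_j={s: {0: 1.0} for s in [0, 1, 2, 3]},   # passive opponent
)
```

With $\\gamma = 0.9$ the fixed point is $V(3) = 1/(1-\\gamma) = 10$, and values decay geometrically with the distance to the rewarding state.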
Then, we define the $Q$-value function as:\n\n\\vspace{-0.4cm}\n\\small\n\\begin{align}\\label{equ: quantal Q-value}\n Q^{*,i,k} (\\tilde{s},a^i)= \\mathbb{E}_{\\pi^{*,-i,k-1,\\lambda^{-i}}} \\Big[r_{i}(\\tilde{s}') + \\gamma V^{*,i,k}(\\tilde{s}') \\,\\big|\\, \\tilde{s}' = f(\\tilde{s},a^i,a^{-i})\\Big],\n\\end{align}\n\\normalsize\n\\vspace{-0.5cm}\n\n\\noindent and \\eqref{equ: quantal best response} is adopted to define agent $i$'s ql-$k$ policy. We note that when agent $i$ predicts its opponent's ql-$(k-1)$ policy, it assumes that its opponent's rationality coefficient is also $\\lambda^i$ (i.e., $\\lambda^{-i} = \\lambda^i$) and forms its policy based on $\\pi^{*,-i,k-1,\\lambda^i}$. We summarize the algorithm that computes the ql-$k$ policies of the agents in $\\mathcal{G}$ in Alg.~\\ref{alg: quantal level-k dp}.\n\n\\vspace{-0.4cm}\n\\input{sections\/quantal_level_k_dp}\n\\vspace{-0.4cm}\n\n\n\\noindent\n\\textbf{Summary.} We model the human's latent states as his\/her intelligence level and rationality coefficient, i.e., $\\theta = (k,\\lambda)$, and use Alg.~\\ref{alg: quantal level-k dp} to compute the policies\/value functions of the human and the robot as ql-$k$ agents, which are exploited to solve the POMDP in Sec.~\\ref{sec: decision-making algorithm}.\n\n\\vspace{-0.2cm}\n\\section{Anytime Active game-theoretic planning}\\label{sec: decision-making algorithm}\n\nIn this section, we embed the human ql-$k$ model in the POMDP through belief dynamics following the procedure in \\cite{Sisi2019,chen2020trust} and present our anytime game-theoretic planner.\n\n\\vspace{-0.1cm}\n\\subsection{Embed the Human Behavioral Model in Robot Planning}\\label{sec: embed human model}\n\n\\noindent\n\\textbf{Observation function.} We define an observation made by the robot as $o := \\tilde{s}$, i.e., the robot can measure the joint physical state $\\tilde{s}$. 
Then the observation function is defined as: $\\mathcal{Z}(o',s) = \\mathbb{I}(o' = \\tilde{s})$, where $\\tilde{s}$ is the joint physical state in $s$, and $\\mathbb{I}(\\cdot)$ is an indicator function, taking $1$ if the event $(\\cdot)$ is true and taking $0$ otherwise.\n\n\\noindent\n\\textbf{Prior belief state.} We define the probability of arriving at state $s'=(\\tilde{s}',\\theta')\\in\\mathcal{S}$ from state $s=(\\tilde{s},\\theta)\\in\\mathcal{S}$ after executing $a \\in\\mathcal{A}_{\\mathcal{R}}$ as:\n\n\\vspace{-.5cm}\n\\footnotesize\n\\begin{align}\\label{equ: belief transition}\n& \\mathcal{T}(s,a,s') := \\mathbb{P}(s_{t+1} = s' | s_{t} = s, a^{\\mathcal{R}}_t = a)\\\\\n& =\\hspace{-0.2cm} \\sum_{a' \\in \\mathcal{A}_{\\mathcal{H}}} \\hspace{-0.2cm}\\mathbb{I}\\big(\\tilde{s}' = f(\\tilde{s},a,a')\\big)\\mathbb{P}(\\theta_{t+1}=\\theta'|\\theta_{t}=\\theta,\\tilde{s}_t=\\tilde{s},\\bar{\\sigma})\\pi^{\\mathcal{H},k,\\lambda}(\\tilde{s},a'),\\nonumber\n\\end{align}\n\\normalsize\n\\vspace{-0.3cm}\n\n\\noindent\nwhere $f$ is the dynamics function described in Sec.~\\ref{sec: problem formulation}, $\\mathbb{P}(\\theta_{t+1}|\\theta_{t},\\tilde{s}_t,\\bar{\\sigma})$ represents an explicit probabilistic model that governs the dynamics of the latent states ($\\bar{\\sigma}$ denotes the model parameters), and $\\pi^{\\mathcal{H},k,\\lambda}$ denotes the human's ql-$k$ policy with rationality coefficient $\\lambda$ (recall $\\theta=(k,\\lambda)$). Then, we can define a prior belief state prediction function that predicts the future belief state without accounting for the possible observations: $\\tilde{b}_{t+1} = \\tilde{\\rho}(b_t, a)$, where each element in $\\tilde{b}_{t+1}$ is computed following $ \\tilde{b}_{t+1}(s') = \\sum_{s\\in\\mathcal{S}}\\mathcal{T}(s,a,s') b_t(s)$. 
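The prior prediction $\\tilde{\\rho}$ can be sketched in tabular form. For simplicity, the latent-state factor of \\eqref{equ: belief transition} is taken as the identity $\\mathbb{I}(\\theta'=\\theta)$ (one simple choice of $\\mathbb{P}(\\theta'|\\theta,\\tilde{s},\\bar{\\sigma})$); every name below is a hypothetical stand-in:

```python
import numpy as np

def predict_belief(b, a_r, states, f, human_policy):
    """Prior belief prediction: b~_{t+1}(s') = sum_s T(s, a, s') b_t(s).

    states: list of (phys, theta) pairs; human_policy(phys, theta)
    returns {a_h: prob}; the latent state theta is carried over unchanged.
    """
    idx = {s: i for i, s in enumerate(states)}
    b_next = np.zeros(len(states))
    for (phys, theta), w in zip(states, b):
        if w == 0.0:
            continue
        for a_h, p in human_policy(phys, theta).items():
            s2 = (f(phys, a_r, a_h), theta)   # identity latent transition
            b_next[idx[s2]] += w * p
    return b_next

# Two latent types: a cautious human stays, an aggressive one advances.
states = [(0, "c"), (1, "c"), (0, "a"), (1, "a")]
b_pred = predict_belief(
    b=[0.5, 0.0, 0.5, 0.0],
    a_r=0,
    states=states,
    f=lambda phys, a_r, a_h: min(phys + a_h, 1),
    human_policy=lambda phys, theta: {0: 1.0} if theta == "c" else {1: 1.0},
)
```

The predicted belief stays a proper distribution, and the two latent hypotheses now predict different physical states, which is what makes the next observation informative.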
Then, the robot's reward function in belief space can be defined as: $ r_{\\mathcal{R}}'(b_t,a_t)= \\sum_{\\tilde{s}'}r_{\\mathcal{R}}(\\tilde{s}')\\mathbb{P}(\\tilde{s}_{t+1}=\\tilde{s}'|\\tilde{\\rho}(b_t,a_t))$.\n\n\\noindent\n\\textbf{Prior probability of future observations.} With the observation function and the prior belief prediction function defined, given an initial belief state $b$, we can predict the probabilities of the robot's future observations after executing an action $a\\in\\mathcal{A}_{\\mathcal{R}}$ following:\n\n\\vspace{-0.5cm}\n\\footnotesize\n\\begin{align}\\label{equ: observation prediction}\n\\mathcal{O}(o,b,a):=&\\mathbb{P}(o_{t+1}=o | b_{t} = b, a^{\\mathcal{R}}_t = a) = \\hspace{-0.2cm}\\sum_{s'\\in\\mathcal{S}} \\mathcal{Z}(o, s')\\tilde{b}_{t+1}(s').\n\\end{align}\n\\normalsize\n\\vspace{-0.4cm}\n\n\\noindent\n\\textbf{Posterior belief update.} After executing an action and receiving an observation, the robot can update its posterior belief state through the belief dynamics equation $b_{t+1} = \\rho(b_t, a^{\\mathcal{R}}_t, o_{t+1})$. More specifically, each element in $b_{t+1}$ can be computed using the Bayesian inference equation \\cite{sarkka2013bayesian}:\n\n\\vspace{-0.5cm}\n\\small\n\\begin{align}\n b_{t+1}(s') \\propto {\\mathcal{Z}(o_{t+1}, s') \\tilde{b}_{t+1}(s')}, \\quad \\forall s' \\in \\mathcal{S}.\n \\label{equ: bayesian}\n\\end{align}\n\\normalsize\n\\vspace{-0.5cm}\n\n\\noindent\n\\textbf{Summary}. The human's behavioral model is embedded into the belief transition \\eqref{equ: belief transition}. 
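The posterior update \\eqref{equ: bayesian} is the correction step of a Bayes filter: multiply the predicted belief by the observation likelihood and renormalize. A minimal sketch (hypothetical tabular setting):

```python
import numpy as np

def posterior_update(b_pred, likelihood):
    """b_{t+1}(s') proportional to Z(o_{t+1}, s') * b~_{t+1}(s').

    b_pred: predicted (prior) belief; likelihood[i] = Z(o_{t+1}, s_i).
    """
    b = np.asarray(b_pred, dtype=float) * np.asarray(likelihood, dtype=float)
    total = b.sum()
    if total == 0.0:
        raise ValueError("observation impossible under predicted belief")
    return b / total

# With o = s~ observed exactly, the indicator likelihood zeroes out every
# augmented state whose physical component disagrees with the observation.
post = posterior_update([0.5, 0.0, 0.0, 0.5], [1.0, 1.0, 0.0, 0.0])
```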
With \\eqref{equ: bayesian}, the robot infers the human's latent states, and in turn uses the inference result to predict belief evolution via the belief prediction function $\\tilde{\\rho}$.\n\n\\vspace{-0.1cm}\n\\subsection{Robot Planning Algorithm}\n\\label{sec: planning algorithm}\n\\vspace{-0.1cm}\n\\noindent\n\\textbf{Closed-loop policy.}\nIn stochastic environments, the robot's actions can help ``actively learn\" the latent states for the benefits of the future. \\cite{bar1974dual} shows that only the closed-loop policy has the capability of \\textit{active learning} as it optimally balances between uncertainty reduction and maximizing utility. Solving \\eqref{equ: closed loop policy} via stochastic dynamic programming to obtain a closed-loop policy is computationally intractable \\cite{bellman1966dynamic}. To achieve real-time computation, we exploit open-loop strategies, but compensate for the loss of active learning.\n\n\n\\noindent\n\\textbf{Open-loop feedback strategy.}\nIn contrast to solving for the optimal policy in \\eqref{equ: closed loop policy}, we let the robot solve for an optimal action sequence at each $t$:\n\n\\vspace{-0.5cm}\n\\small\n\\begin{align}\n \\mathbf{a}^{\\mathcal{R}}_t & = \\argmax_{\\mathbf{a}} \\mathbb{E}_{\\mathcal{Z}} \\left[ \\hat{V}(b_{t+T}) + \\sum_{\\tau=0}^{T-1} r'_{\\mathcal{R}}(b_{t+\\tau},a_{t+\\tau})\\right],\n \\label{equ: open-loop optimization}\n\\end{align}\n\\normalsize\n\\vspace{-0.4cm}\n\n\n\\noindent\nwhere $T$ is the planning horizon, $\\mathbf{a} = \\{a_{t+0},\\dots,a_{t+T-1}\\}$ is a planned action sequence, and $\\hat{V}(b_{t+T})$ denotes the \\textit{terminal value} of the predicted belief state $b_{t+T}$. The robot plans in a feed-back manner by applying the first action in $\\mathbf{a}_t^{\\mathcal{R}}$ and re-planing at the next time step. 
As opposed to the closed-loop policy, \\eqref{equ: open-loop optimization} fixes the action plan in advance and omits the benefits that can be propagated back from future observations. Consequently, \\eqref{equ: open-loop optimization} only \\textit{passively} learns the latent states and yields conservative actions. Hence, explicit methods are needed to actively learn the latent states \\cite{WITTENMARK199567}.\n\n\\noindent\n\\textbf{Active learning of the latent states.} We exploit the Shannon entropy \\cite{shannon2001mathematical} to measure the estimation uncertainty of a belief state, and augment the robot's reward function with the expected information gain:\n\n\\vspace{-0.47cm}\n\\small\n\\begin{align}\n\\mathcal{I}(b,a) = H(b) - \\sum_{o}\\mathcal{O}(o,b,a)H\\big(\\rho(b,a,o)\\big),\n\\end{align}\n\\normalsize\n\\vspace{-0.40cm}\n\n\\noindent\nwhere \\small $H(b) = -\\sum_{s\\in\\mathcal{S}}b(s)\\log{b(s)}$\\normalsize, and $\\mathcal{O}$ is the observation prediction function defined in \\eqref{equ: observation prediction}. In general, the higher $\\mathcal{I}(b,a)$ is, the more information the robot expects to obtain about the human's latent states by executing $a$ in $b$. 
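The expected information gain is the prior entropy minus the expected posterior entropy over future observations. A minimal sketch (the observation model and beliefs are hypothetical):

```python
import numpy as np

def entropy(b):
    """Shannon entropy H(b) = -sum b log b, with 0 log 0 := 0."""
    b = np.asarray(b, dtype=float)
    nz = b[b > 0]
    return float(-np.sum(nz * np.log(nz)))

def info_gain(b, obs_probs, posteriors):
    """I(b, a) = H(b) - sum_o O(o | b, a) * H(rho(b, a, o)).

    obs_probs: {o: prob of o given (b, a)}; posteriors: {o: rho(b, a, o)}.
    """
    return entropy(b) - sum(obs_probs[o] * entropy(posteriors[o])
                            for o in obs_probs)

# A perfectly revealing action recovers the full prior entropy (ln 2 here);
# an uninformative one (posterior = prior) gains nothing.
gain_full = info_gain([0.5, 0.5], {"l": 0.5, "r": 0.5},
                      {"l": [1.0, 0.0], "r": [0.0, 1.0]})
gain_none = info_gain([0.5, 0.5], {"l": 0.5, "r": 0.5},
                      {"l": [0.5, 0.5], "r": [0.5, 0.5]})
```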
With the augmented reward function $\\tilde{r}_{\\mathcal{R}}(b,a)$, \\eqref{equ: open-loop optimization} becomes:\n\n\\small\n\\vspace{-0.5cm}\n\\begin{subequations}\\label{equ: open-loop optimization with safe probing}\n\\begin{align}\n &\\mathbf{a}^{\\mathcal{R}}_t = \\argmax_{\\mathbf{a}} \\mathbb{E}_{\\mathcal{Z}} \\left[ \\hat{V}(b_{t+T})+ \\sum_{\\tau=0}^{T-1} \\tilde{r}_{\\mathcal{R}}(b_{t+\\tau},a_{t+\\tau})\\right], \\label{eq: action}\n\\end{align}\n\\vspace{-0.55cm}\n\\begin{align}\n \\hspace{-2.9cm}\\tilde{r}_{\\mathcal{R}}(b,a) = r'_{\\mathcal{R}}(b,a) + \\eta\\mathcal{I}(b,a),\\label{eq: exploration reward}\n\\end{align}\n\\vspace{-0.85cm}\n\\begin{align}\n \\hspace{-0.5cm}\\text{s.t.} \\quad \\mathbb{P}(\\tilde{s}_{t+\\tau} \\in \\mathbb{O}_{\\text{safe}} | \\mathbf{a}, b_t) \\geq 1 - \\Delta_{\\tau}, \\sum_{\\tau = 1}^{T} \\Delta_{\\tau} \\leq \\Delta,\\label{equ: constraints}\n\\end{align}\n\\end{subequations}\n\\normalsize\n\\vspace{-0.45cm}\n\n\n\\noindent\nwhere $\\eta \\propto H(b)$ is an adaptive term that enables the robot to learn the latent states as needed. The constraint \\eqref{equ: constraints} requires that the probability that the predicted state $\\tilde{s}_{t+\\tau}$ is in the safe set $\\mathbb{O}_{\\text{safe}}$ is larger than $1-\\Delta_{\\tau}$ for all steps over the planning horizon, with $\\Delta_{\\tau}$ being a design parameter. The overall safety is bounded by $\\Delta$ via risk allocation \\cite{blackmore2009convex, ono2012joint} (evaluation of risk is shown in Alg.~\\ref{alg: tree search}). \\cite{Sisi2019} exploited continuous relaxation techniques to solve \\eqref{equ: open-loop optimization} with a time-joint chance constraint. 
However, with the information reward \\eqref{eq: exploration reward}, the approach in \\cite{Sisi2019} could become intractable.\n\n\n\\vspace{-0.2cm}\n\\input{sections\/treesearch_algo}\n\\vspace{-0.3cm}\n\n\n\n\\noindent\n\\textbf{Open-loop chance-constrained Monte-Carlo belief tree search.} POMCP \\cite{silver2010monte} combines Monte-Carlo simulation and game-tree search. Building upon POMCP, we propose an algorithm (Alg. \\ref{alg: tree search}) to solve \\eqref{equ: open-loop optimization with safe probing} in an anytime manner. Our algorithm differs from POMCP in the following ways: 1) a search node, $v = \\big< \\mathbf{a}, V(\\mathbf{a}), N(\\mathbf{a}) \\big>$, stores an action sequence $\\mathbf{a}$ and its associated statistics: $V(\\mathbf{a})$ is the mean return of all simulations that execute $\\mathbf{a}$, and $N(\\mathbf{a})$ counts the number of times that $\\mathbf{a}$ has been visited; 2) leaf node expansions must enforce the safety chance constraint (line 14); 3) terminal values are estimated using the pre-computed ql-$k$ values (line 11); 4) the active information gathering on latent states is explicitly realized via the augmented reward function $\\tilde{r}_{\\mathcal{R}}$.\n\n\n\\noindent\n\\textbf{Benefits of \\cref{alg: tree search}.} A search node only stores an action sequence since the open-loop optimization \\eqref{equ: open-loop optimization with safe probing} searches for an action sequence. In contrast to POMCP, in which a node stores a history of actions and observations, the search space in Alg.~\\ref{alg: tree search} is significantly reduced. Hence Alg.~\\ref{alg: tree search} can run in real-time under computational constraints. Quantal level-$k$ reasoning (Sec.~\\ref{sec: internal state modeling}) is exploited to estimate the terminal value when the maximum planning depth is reached (line 11). 
Specifically, the terminal belief state $b_{t+T}$ is used to determine the human's intelligence level distribution $\\mathbb{P}(k^{\\mathcal{H}}|b_{t+T})$; then, we assume the robot behaves as a ql-$(k^{\\mathcal{H}}+1)$ agent (recall that a ql-$(k+1)$ agent quantally best responds to a ql-$k$ agent) with rationality coefficient $\\lambda = 1$, and estimate the terminal value using the robot's pre-computed ql-$(k^{\\mathcal{H}}+1)$ value weighted by $\\mathbb{P}(k^{\\mathcal{H}}|b_{t+T})$. By actively learning the belief state, the planner can quickly reduce the estimation error on the terminal values and maximize its performance.\n\n\n\\noindent\n\\textbf{Infeasibility handling mechanism.} When the root node has no safe successors, we relax the chance constraint and find the action that minimizes the degree of constraint violation.\n\n\n\n\n\\section{Experiments}\n\n\\subsection{Implementation Details}\\label{sec: implementation details}\n\n\n\\noindent\n\\textbf{Test domain.} We use \\textit{autonomous driving} as the test domain. In particular, we consider a forced merging scenario \\cite{isele2019interactive}, where an autonomous car must merge into an adjacent (upper) lane that is occupied by a human-driven car (\\cref{fig: intro_fig}).\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{picture}(300.0, 40)\n\\put(8, 5){\\epsfig{file=media\/introl.png,width = 0.9 \\linewidth,angle=0, trim=0cm 0.0cm 0.0cm 0.0cm,clip}}\n\\small\n\\normalsize\n\\end{picture}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{\\hspace{-0.3cm} An autonomous car interacts with a human-driven car controlled by a user through a racing wheel and a pedal in a forced merging scenario.}\n\\label{fig: intro_fig}\n\\vspace{-0.2cm}\n\\end{figure}\n\n\\noindent\n\\textbf{Reward functions.} The reward function of a car is a linear combination of features \\small$\\phi: \\tilde{\\mathcal{S}}\\rightarrow \\mathbb{R}^{n_f}$,\\normalsize satisfying $r_{(\\cdot)}(\\tilde{s}) = \\omega^\\intercal \\phi(\\tilde{s})$. 
The features encode safety, comfort, progress, etc. \\cite{tian2018adaptive}. The weights $\\omega \\in \\mathbb{R}^{n_f}$ can be recovered via an inverse learning algorithm \\cite{ziebart2008maximum, sun2019interpretable, tian2020bounded}. Note that when the autonomous car runs Alg. \\ref{alg: tree search}, the safety feature in its reward function is deactivated since safety is handled by \\eqref{equ: constraints}.\n\n\\noindent\n\\textbf{Level-0 policy.} Alg.~\\ref{alg: quantal level-k dp} requires a ql-$0$ policy to initiate the iterative reasoning process. Similar to \\cite{LiUnsignalized, tian2018adaptive}, we let a ql-$0$ agent be a non-strategic agent who treats others as static obstacles when making decisions.\n\n\\noindent\n\\textbf{Latent state dynamics.} Recall from \\eqref{equ: belief transition} that an explicit probabilistic model is required to govern the dynamics of the latent states. In this work, we consider single-shot games, i.e., the human's latent states are assumed to be constant during the interaction. Hence, the latent states' transition model is reduced to $\\mathbb{P}(\\theta_{t+1}|\\theta_{t},\\tilde{s}_t,\\bar{\\sigma}) {=} \\mathbb{I}(\\theta_{t{+}1}{=}\\theta_t)$. 
In general repeated games, the transition model can be represented as a Markov chain, and its parameters $\\bar{\\sigma}$ can be embedded in the POMDP and learned simultaneously as in \\cite{tianbeating}.\n\n\n\\noindent\n\\textbf{High-level robot planning.} The dynamics of the human-robot team are represented as\n\n\\small\n\\vspace{-0.5cm}\n\\begin{align}\n \\hspace{-0.3cm}\\left[ \\begin{matrix}\\dot{x}_{\\mathcal{R}} & \\dot{y}_{\\mathcal{R}} & \\dot{x}_{\\mathcal{H}} & \\dot{v}_{\\mathcal{R}} & \\dot{v}_{\\mathcal{H}}\\end{matrix}\\right] = \\left[\\begin{matrix}v_{\\mathcal{R}} & w_{\\mathcal{R}} & v_{\\mathcal{H}} & a_{\\mathcal{R}} & a_{\\mathcal{H}}\\end{matrix} \\right],\\label{equ: unicycle model}\n\\end{align}\n\\vspace{-0.5cm}\n\\normalsize\n\n\\noindent where $x$ ($y$) is the longitudinal (lateral) position, $v$ ($w$) is the longitudinal (lateral) speed, and $a$ is the acceleration. The sampling period is $\\Delta t = 0.5$ [s]. We use a state grid of size $40\\times6\\times40\\times6\\times6$ to represent the discrete states of the human-robot system.\nThe safety set $\\mathbb{O}_{\\text{safe}}$ includes states in which the boundaries of the two agents do not overlap. We use $\\Delta {=} 0.05$ and $\\Delta_{\\tau} {=} \\frac{1}{160}$ as the chance constraint thresholds in \\eqref{equ: constraints}. We let the highest intelligence level of the human be $k_{\\text{max}}{=}2$ based on experimental results in \\cite{costa2006cognition,costa2009comparing}. The rationality coefficients take values from the set $\\Lambda{=}\\{0.5, 0.8, 1.0\\}$. The planning horizon in \\eqref{equ: open-loop optimization with safe probing} is $T {=} 8$.\n\n\n\\noindent\n\\textbf{Hierarchical planning and control.} \nBehavioral planning and control of the autonomous car are hierarchically connected. 
The planning layer (Alg.~\\ref{alg: tree search}) uses the low-fidelity model \\eqref{equ: unicycle model} to generate behavioral commands, and runs at $8$ Hz on a laptop with a 2.8 GHz CPU. In the low-level control layer (running at $8$ Hz), the vehicle dynamics are represented by a high-fidelity bicycle model \\cite{kong2015kinematic}, and we use a model predictive controller \\cite{borrelli2017predictive, cvxgen} to generate continuous controls that drive the system to the desired states generated by Alg.~\\ref{alg: tree search}. In Alg.~\\ref{alg: tree search}, when the actual system state deviates from the state grid, the nearest grid point is used.\n\n\n\n\n\\noindent\n\\textbf{Baselines.} We consider two baseline planners: 1) our planner without the feature of active information gathering, i.e., the autonomous car passively infers the human's latent states (BLP-1); 2) the strategic game-theoretic planner in \\cite{fisac2019hierarchical} that treats the human-driven car as a \\textit{follower} who accommodates the actions of the autonomous car (BLP-2). Both BLP-2 and our planner use a closed-loop feedback structure when building human behavioral models, but our planner reasons about the heterogeneity in the human's cognitive limitations and irrationality through active inference rather than treating the human as a follower.\n\n\n\n\\vspace{-0.1cm}\n\\subsection{The Human Behavioral Model}\n\\label{sec: quantal level-k illuetration}\n\n\\noindent\n\\textbf{Human intelligence level interpretation.} The ql-$k$ model is exploited to reason about human behaviors under bounded intelligence. Recall that the level-$0$ agent represents a non-strategic agent who treats others as static obstacles. Thus, a ql-$1$ agent can be interpreted as a cautious agent since it believes that its opponent is an aggressive non-strategic agent. 
On the contrary, a ql-$2$ agent behaves aggressively since it believes its opponent is a cautious ql-$1$ agent.\n\n\\cref{fig: ql-k illustration - optimal} shows the interactions between two cars modeled as ql-$k$ agents in the forced merging task. The heat-map displays the ql-$k$ value function ($V^{*,i,k}$ described in Sec.~\\ref{sec: human qlk reasoning}) of the lower-lane car, indicating the preferred states; colder color means higher value. It can be observed in (a-b) that the interactions are seamless between a ql-$1$ agent and a ql-$2$ agent, which is expected since the ql-$2$ agent's belief in the model of the ql-$1$ agent matches the ground truth (note that the high-value region in the upper lane encourages the red car to merge ahead of the white car (a); the low-value region in front of the white car guides it to yield (b)). However, when an agent's true model deviates from the other agent's belief, conflicts may occur. For instance, (c) shows the interaction between two ql-$1$ agents, which results in a dead-lock because both agents prefer to yield: the low-value region in front of the yellow car discourages it from merging although merging is safe. Similarly, when two ql-$2$ agents interact (d), collisions may occur as both agents think their opponents will likely yield. 
Note that the high-value region in the upper lane encourages the yellow car to merge even if it is not safe.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{picture}(200.0, 50)\n\\put(-25, 30){\\epsfig{file=media\/level_2_vs_level_1.png,width = 0.5 \\linewidth, height = 1.0cm,angle=0, trim=1cm 0.0cm 4cm 0.0cm,clip}}\n\\put(100, 30){\\epsfig{file=media\/level_1_vs_level_1_2.png,width = 0.5 \\linewidth, height = 1.0cm, angle=0, trim=1cm 0.0cm 4cm 0.0cm,clip}}\n\\put(-25, 0){\\epsfig{file=media\/level_1_vs_level_1.png,width = 0.5 \\linewidth, height = 1.0cm,angle=0, trim=1cm 0.0cm 4cm 0.0cm,clip}}\n\\put(100, 0){\\epsfig{file=media\/level_2_vs_level_2_3.png,width = 0.5 \\linewidth, height = 1.0cm, angle=0, trim=1cm 0.0cm 4cm 0.0cm,clip}}\n\\small\n\\put(-25,20){(c)}\n\\put(-25,47){(a)}\n\\put(100,20){(d)}\n\\put(100,47){(b)}\n\\normalsize\n\\end{picture}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption{\\textbf{Interactions between ql-$k$ agents}. (a-b): ql-1 (white) vs. ql-2 (red); (c): ql-1 vs. ql-1; (d): ql-2 vs. ql-2. All agents have $\\lambda =1$, the same initial longitudinal position, and the same initial speed $12 [m\/s]$.}\n\\label{fig: ql-k illustration - optimal}\n\\end{figure}\n\\vspace{-0.3cm}\n\n\n\n\\vspace{-0.3cm}\n\\subsection{Case Studies and Quantitative Results}\n\\label{sec: case studies}\n\nWe compare our planner against the baselines via simulations. The human-driven car is modeled as a ql-$k$ agent, and is also controlled by the hierarchical planning and control scheme described in Sec.~\\ref{sec: implementation details}, with behavioral commands obtained directly from the corresponding ql-$k$ policies.\n\n\\noindent\\textbf{Hypotheses.} We state the following two hypotheses: 1) active exploration improves the efficiency of the robot's planning; 2) 
our planner is robust to the human's heterogeneous intelligence levels and irrational behaviors.\n\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{picture}(200.0, 65)\n\\put(-25, 34){\\epsfig{file=media\/curious_car_with_cautious_human.png,width = 0.5 \\linewidth, height = 1.25cm,angle=0, trim=4.5cm 0.0cm 8.5cm 0.5cm,clip}}\n\\put(-25, 0){\\epsfig{file=media\/level_2_car_with_cautious_human.png,width = 0.5 \\linewidth, height = 1.25cm,angle=0, trim=4.5cm 0.0cm 8cm 0.5cm,clip}}\n\n\\put(100, 34){\\epsfig{file=media\/curious_car_with_aggressive_human_ahead.png,width = 0.5 \\linewidth, height = 1.25cm,angle=0, trim=4cm 0.0cm 9cm 0.5cm,clip}}\n\\put(100, 0){\\epsfig{file=media\/leader_car_with_aggressive_human_ahead.png,width = 0.5 \\linewidth, height = 1.25cm,angle=0, trim=4cm 0.0cm 9cm 0.5cm,clip}}\n\\small\n\\put(-20,68){(a) Ours}\n\\put(-20,34){(b) BLP-1}\n\\put(105,68){(c) Ours}\n\\put(105,34){(d) BLP-2}\n\\normalsize\n\\end{picture}\n\\end{center}\n\\vspace{-0.8cm}\n\\caption{\\textbf{Planner comparison.} (a-b) show the interactions between a simulated \\textit{cautious} human-driven car (white) and the autonomous car (orange); (c-d) show those between a simulated \\textit{aggressive} human-driven car (red) and the autonomous car (orange). 
Initial speeds are $12 ~[m\/s]$.} \n\\label{fig: scenario-1}\n\\vspace{-0.4cm}\n\\end{figure}\n\n\\begin{figure*}[thp!]\n\\centering\n\\begin{picture}(200.0, 75.0)\n\\put(-52, 0){\\epsfig{file=media\/cautious_human_inference_result.png,width = 0.16 \\linewidth, trim=0.1cm 0.1cm 0cm 0.5cm,clip}}\n\\put(25, 0){\\epsfig{file=media\/aggressive_human_inference_result.png,width = 0.16 \\linewidth, trim=0.1cm 0.1cm 0cm 0.5cm,clip}}\n\\put(105, 12){\\epsfig{file=media\/TTM_compare.png,width = 0.245 \\linewidth,angle=0, trim=0cm 0.5cm 0cm 0.3cm,clip}}\n\\small\n\\put(-30,68){(a)}\n\\put(45,68){(b)}\n\\put(115,68){(c)}\n\\put(130,0){\\tiny\\textbf{Scenario 1}}\n\\put(180,0){\\tiny\\textbf{Scenario 2}}\n\\normalsize\n\\end{picture}\n\\begin{picture}(200.0, 75.0)\n\\put( 18, 7){\\epsfig{file=media\/cautious_human_env_RS_compare.png,width = 0.24 \\linewidth, trim=0.0cm 0.0cm 0cm 0.3cm,clip}} \n\\put( 130, 7){\\epsfig{file=media\/aggressive_human_env_RS_compare.png,width = 0.24 \\linewidth, trim=0.0cm 0.0cm 0cm 0.3cm,clip}} \n\\small\n\\put(20,68){(d)}\n\\put(135,68){(e)}\n\\normalsize\n\\end{picture}\n\\vspace{-0.3cm}\n \\caption{\\textbf{Evaluation result.} (a-b): Human latent states inference results for (a) cautious and (b) aggressive humans. (c): Average TM of the autonomous car; error bars show 95\\% confidence intervals. (d-e): The rates of success under various human rationality coefficients (higher $\\lambda$ yields more rational human behaviors). 
The human-driven car is a simulated cautious driver with intelligence level-$1$ in (d), and an aggressive driver with intelligence level-$2$ in (e).}\n \\label{fig: sim-success}\n\\vspace{-0.5cm}\n\\end{figure*}\n\nWe first show two case studies, highlighting the benefits that naturally emerge from our planner.\n\n\\noindent \\textbf{Scenario 1.} Fig.~\\ref{fig: scenario-1}(a-b) show a scenario where the autonomous car and a simulated cautious human-driven car (ql-$1$ agent) with rationality coefficient $\\lambda=0.8$ start at the same initial speed and longitudinal position. It can be observed that with our planner, the autonomous car actively indicates an intention to merge by nudging into the target lane, as it predicts that the human's \\textit{reactions} triggered by the ``probing\" action can help disambiguate the human's latent states. With the baseline planner (BLP-1), the autonomous car takes a longer time to merge, since passive inference requires additional observations to infer the latent states.\n\n\n\n\n\\noindent \\textbf{Scenario 2.} \\cref{fig: scenario-1}(c-d) show a case that is similar to Scenario 1, but the human-driven car is simulated by an aggressive ql-$2$ agent who starts behind the autonomous car. With our planner, the autonomous car nudges in to explore the human's latent states, then quickly decides to yield after observing the human's aggressive reactions. With BLP-2, the autonomous car initiates a dangerous merge as BLP-2 incorrectly assumes the human-driven car will likely yield. 
Note that when humans behave under heterogeneous cognitive states, it is critical for a robot to reason about such heterogeneity to better predict human behaviors.\n\n\n\n\\noindent\n\\textbf{Metrics for quantitative results.} In quantitative studies, we use two metrics for validating the hypotheses: 1) the rate of success (RS), which measures the percentage of simulations in which no collision or dead-lock occurs; 2) the time used by the autonomous car to complete the merge (TM).\n\n\n\\noindent\n\\textbf{Results.} We run $50$ simulations for each of the scenarios in the case studies using our planner and the two baseline planners. We first compare our planner against BLP-1 in terms of inference performance (BLP-2 has no inference capability). In \\cref{fig: sim-success} (a-b), we show the time history of the autonomous car's belief in the ground-truth latent states of the human-driven car. It can be observed that our planner can more effectively identify the human's latent states by exploiting the mutual influence to make its human partner reveal his\/her hidden states. In \\cref{fig: sim-success} (c), we show the average TM for the two scenarios. Note that our planner achieves the lowest TM, due to the active inference. {Furthermore, our planner achieves the highest confidence on TM since it strategically generates actions that aim to trigger the most informative reactions from the human, which regulates human behaviors in a game-theoretic sense (such a phenomenon is also observed in the user studies, \\cref{fig: human-experiment}(b)).}\n\n\n\nNext, we evaluate the robustness of our planner under various human intelligence levels and degrees of irrationality. We let the simulated ql-$k$ human take his\/her rationality coefficient from $\\Lambda$ and run $100$ simulations for each ($\\lambda, k$) combination. The human starts at a random position within $10$ m of the autonomous car. 
In \\cref{fig: sim-success}(d-e), we show the RS of each planner for each scenario. Note that in all cases, our planner and BLP-1 achieve more than 95\\% RS. This is attributed to reasoning about the human's bounded intelligence and irrationality, and enforcing the safety chance constraint. BLP-2 shows satisfactory RS when the simulated human is a ql-$1$ agent, but the RS decreases as the human becomes more irrational. When the human is a ql-$2$ agent, BLP-2 performs noticeably worse. The results indicate that our planner enables the robot to learn the human's latent states, plan more effectively, and be robust to the human's various intelligence levels and irrational behaviors.\n\n\n\\subsection{User Study}\n\\label{sec: user study}\n\n\\noindent\n\\textbf{Objective.} We conduct user studies in which we let the autonomous car interact with a real human driver in a simulator (\\cref{fig: intro_fig}), showcasing the effectiveness of our planner.\n\n\\noindent\n\\textbf{Experiment setup.} We recruited 10 human participants. For each participant, we ran Scenario 1 three times with each of the planners. Here, the accelerations of the upper-lane car are provided by human participants directly through a pedal. 
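The rationality coefficient $\lambda$ used for the simulated ql-$k$ drivers can be read as the temperature of a Boltzmann (quantal-response) action model. A minimal sketch of that standard model follows; the function name and the illustrative Q-values are our own, not the paper's implementation:

```python
import math

def quantal_policy(q_values, lam):
    """Boltzmann (quantal-response) distribution over actions.

    lam is the rationality coefficient: lam -> 0 yields uniformly random
    actions, while a large lam approaches the perfectly rational argmax policy.
    """
    exps = [math.exp(lam * q) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

# Illustrative Q-values for three actions (yield, keep speed, accelerate).
q = [1.0, 0.5, -0.2]
p_irrational = quantal_policy(q, lam=0.1)  # nearly uniform
p_rational = quantal_policy(q, lam=5.0)    # concentrated on the best action
```

A level-$k$ (ql-$k$) agent would compute the Q-values by best-responding to a simulated level-$(k-1)$ model of the other driver; only the $\lambda$-softmax step is shown here.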
\n\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{picture}(200.0, 56)\n\\put(-23, 0){\\epsfig{file=media\/human_exp_RS_compare.png,width = 0.33 \\linewidth,angle=0, trim=0cm 0cm 0.0cm 0.0cm,clip}}\n\\put(53, -2){\\epsfig{file=media\/AV_merge_reward_hist_human_experiment.png,width = 0.4 \\linewidth,angle=0, trim=0cm 0.0cm 0.5cm 0.0cm,clip}}\n\\put(150, -5){\\epsfig{file=media\/human_inference.png,width = 0.29 \\linewidth,angle=0, trim=0.cm 0.0cm 0.0cm 0.5cm,clip}}\n\\small\n\\normalsize\n\\end{picture}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption{\\textbf{User study results}: (a) the rates of success of the planners; (b) the time histories of the step reward of the autonomous car; the thick line represents the mean and the shaded area represents the 95\\% confidence tube of the data; (c) the distribution of inferred intelligence levels and rationality coefficients of the human participants.}\n\\label{fig: human-experiment}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\noindent\n\\textbf{Results.} \\cref{fig: human-experiment}(a) shows the RS of each planner. It can be observed that both our planner and BLP-1 outperform BLP-2 in terms of safety. \\cref{fig: human-experiment}(b) shows the time histories of the step reward collected by the autonomous car. Note that although the autonomous car pays more costs in the first few steps using our planner (due to the probing actions), it is able to complete the task much more effectively and safely compared with the baselines. In \\cref{fig: human-experiment}(c), we show the distributions of inferred intelligence levels and rationality coefficients. It can be observed that roughly 60\\% of the participants are identified as ql-$1$ agents, which is consistent with experimental studies on other forms of games \\cite{costa2009comparing}. 
In addition, \\cref{fig: human-experiment}(c) suggests that, under the reward function assumed by the autonomous car, most of the participants demonstrated behaviors that align with the behaviors produced by the rationality coefficient $\\lambda=0.8$.\n\\vspace{-0.1cm}\n\\section{Conclusion}\n\nWe proposed an anytime game-theoretic planning framework that integrates iterative reasoning models, POMDP, and chance-constrained Monte-Carlo belief tree search for robot behavioral planning. Our planner enables a robot to safely and actively reason about its human partner's latent cognitive states in real-time to maximize its utility more effectively. We applied the proposed approach to an autonomous driving domain where our behavioral planner and a low-level motion controller hierarchically control an autonomous car to negotiate traffic merges. Both simulation and user study results demonstrated the effectiveness of our planner compared with baseline planners.\n\n\\newpage\n\n\n\n\n\n\\IEEEpeerreviewmaketitle\n\n\n\n\\vspace{-0.3cm}\n\\setstretch{0.98}\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\n\\section{Jet-filtrations on representations}\n\nLet $\\rho :\\mathfrak{g\\rightarrow gl(}V)$ be a representation of a (finite\ndimensional) Lie algebra $\\mathfrak{g.}$ Suppose we have an ascending\nfiltration of subspaces\n\n\\begin{equation}\n\\{0\\}\\varsubsetneqq V_{1}\\varsubsetneqq V_{2}\\varsubsetneqq\n....\\varsubsetneqq V_{k}\\varsubsetneqq V\n\\end{equation}\nsatisfying\n\n\\begin{equation}\nV_{i}\\varsubsetneqq \\rho (\\mathfrak{g})(V_{i})\\subset V_{i+1}\\text{ \\ \\ \\ \\\n\\ }1\\leq i\\leq k\n\\end{equation}\nwhere 
$\\rho (\\mathfrak{g})(V_{i})\\overset{def}{=}\\{\\rho (x)(v)\\mid x\\in \n\\mathfrak{g,}$ $v\\in V_{i}\\}$ and $V_{k+1}=V$.\n\n\\begin{definition}\nA filtration (1) satisfying (2) is a jet-filtration on the representation $\n\\rho .$ The integer $k$ is the length of the jet-filtration (1).\n\\end{definition}\n\nWe can refine (1): For instance, if there exists some $V_{i}\\varsubsetneqq\nW_{i}\\varsubsetneqq V_{i+1}$ satisfying $V_{i}\\varsubsetneqq \\rho\n(\\mathfrak{g})(V_{i})\\subset W_{i}\\varsubsetneqq \\rho (\\mathfrak{g})(W_{i})\\subset\nV_{i+1}, $ then we can insert $W_{i}$ into (1) and obtain a finer\njet-filtration. In particular, we define a maximally refined jet-filtration\nin the obvious way. In the opposite direction, we can omit some term(s) from\n(1) and the remaining terms define a coarser jet-filtration.\n\n\\begin{definition}\nThe jet-order of the representation $\\rho :\\mathfrak{g\\rightarrow gl(}V)$ is\nthe maximum of the lengths of all (maximally refined) jet-filtrations (1).\n\\end{definition}\n\nThree natural questions:\n\n\\bigskip\n\n\\textbf{Q1. }What does (1) have to do with jets?\n\n\\bigskip\n\n\\textbf{Q2. }How do we construct jet-filtrations?\n\n\\bigskip\n\n\\textbf{Q3. }Why are jet-filtrations relevant?\n\n\\bigskip\n\nLet us start with \\textbf{Q2. }\n\n\\section{Irreducible representations}\n\nSuppose that $\\rho :\\mathfrak{g\\rightarrow gl(}V)$ is irreducible. We choose\nsome $0\\neq v\\in V$ and define $V_{1}\\overset{def}{=}Span\\{v\\}.$ If $\nV_{1}=V, $ we stop. If not, then $\\rho (\\mathfrak{g)}(V_{1})\\nsubseteqq\nV_{1} $ by irreducibility. In this case, we define $V_{2}\\overset{def}{=}\nSpan\\{\\rho (\\mathfrak{g)}(V_{1})\\cup V_{1}\\}$ and get $V_{1}\\varsubsetneqq\nV_{2}$ and $V_{1}\\varsubsetneqq \\rho (\\mathfrak{g)}(V_{1})\\subset V_{2}.$ If \n$V_{2}=V$ we stop. If not, then $\\rho (\\mathfrak{g)}(V_{2})\\nsubseteqq V_{2}$\nby irreducibility. 
In this case, we define $V_{3}\\overset{def}{=}Span\\{\\rho (\n\\mathfrak{g)}(V_{2})\\cup V_{2}\\},$ and get $V_{2}\\varsubsetneqq V_{3}$ and $\nV_{2}\\varsubsetneqq \\rho (\\mathfrak{g)}(V_{2})\\subset V_{3}.$ Continuing\nthis process, we finally get a jet-filtration (1) such that $\nV_{k-1}\\varsubsetneqq \\rho (\\mathfrak{g)}(V_{k})=V.$ Note that this\njet-filtration is maximally refined by construction and its length depends\non the vector $v$ we start with.\n\n\\begin{definition}\nA vector which maximizes the lengths of all such jet-filtrations is called a\nmaximal vector of the irreducible representation $\\rho .$\n\\end{definition}\n\nThe above construction gives an algorithm for constructing jet-filtrations\nusing an arbitrary irreducible representation. Since irreducible\nrepresentations of semi simple Lie algebras are well known, it is\ninstructive at this stage to look at a concrete example. So let $\\mathfrak{g}\n=\\mathfrak{sl}(2,\\mathbb{R})$ with the well known basis\n\\begin{equation}\ne=\\left[ \n\\begin{array}{cc}\n0 & 1 \\\\ \n0 & 0\n\\end{array}\n\\right] ,\\text{ \\ \\ }f=\\left[ \n\\begin{array}{cc}\n0 & 0 \\\\ \n1 & 0\n\\end{array}\n\\right] ,\\text{ \\ \\ }h=\\left[ \n\\begin{array}{cc}\n1 & 0 \\\\ \n0 & -1\n\\end{array}\n\\right]\n\\end{equation}\nand $V=\\mathbb{R}_{k}\\mathbb{[}x,y]=$ polynomials in the variables $x,y$ of\ntotal degree $\\leq k.$ Now $\\dim V=k+1$ and $V$ has the basis $v_{1}=y^{k},$ $\nv_{2}=y^{k-1}x,...,v_{k}=yx^{k-1},v_{k+1}=x^{k}.$ We define the linear map $\n\\rho :\\mathfrak{sl}(2,\\mathbb{R})\\rightarrow End(\\mathbb{R}_{k}\\mathbb{[}\nx,y])$ by giving its values on this basis as\n\n\\begin{equation}\n\\rho (e)\\overset{def}{=}x\\frac{\\partial }{\\partial y},\\text{ \\ \\ }\\rho (f)\n\\overset{def}{=}y\\frac{\\partial }{\\partial x},\\text{ \\ \\ }\\rho (h)\\overset{\ndef}{=}x\\frac{\\partial }{\\partial x}-y\\frac{\\partial }{\\partial y}\n\\end{equation}\nand check by a straightforward computation that (4) gives a representation 
\n\\rho :\\mathfrak{sl}(2,\\mathbb{R})\\rightarrow \\mathfrak{gl}(\\mathbb{R}_{k}\n\\mathbb{[}x,y])$. Now $\\rho (e)v_{1}=kv_{2},$ $\\rho (e)v_{2}=(k-1)v_{3},...,\\rho\n(e)v_{k}=v_{k+1}$ and $\\rho (e)v_{k+1}=0.$ We define $V_{i}=Span\n\\{v_{1},v_{2},...,v_{i}\\}$ and get the filtration\n\n\\begin{equation}\n\\{0\\}\\varsubsetneqq V_{1}\\varsubsetneqq V_{2}\\varsubsetneqq\n....\\varsubsetneqq V_{k}\\varsubsetneqq V\n\\end{equation}\nsatisfying\n\n\\begin{equation}\nV_{i}\\varsubsetneqq \\rho (e)(V_{i})\\subset V_{i+1}\n\\end{equation}\n\nThe key fact now is that both $\\rho (f)$ and $\\rho (h)$ preserve the\nfiltration (5) because $\\rho (f)v_{i}=(i-1)v_{i-1}$ and $\\rho (h)v_{i}=\\lambda\n_{i}v_{i}$ for some constant $\\lambda _{i}.$ Therefore we can replace $e$ in\n(6) by the whole $\\mathfrak{sl}(2,\\mathbb{R})$ and (5) becomes a\njet-filtration on the irreducible representation $\\rho .$ Thus for any\npositive integer $k,$ the above well known irreducible representation of $\n\\mathfrak{sl}(2,\\mathbb{R})$ of degree $k+1$ has the jet-filtration (5) with\njet-order $k$ and has $v_{1}$ as a maximal vector$.$\n\nIt is well known that irreducible representations of semi simple Lie\nalgebras are \"made up\" of the representations of $\\mathfrak{sl}(2,\\mathbb{R}\n).$ (see, for instance, [H]). Without going into the technical details, we\nwill state here the following proposition whose proof is now almost trivial\nfor anyone familiar with the representation theory of semi simple Lie\nalgebras.\n\n\\begin{proposition}\nLet $\\mathfrak{g}$ be any semi simple Lie algebra and $k$ be any positive\ninteger. Then there exists an irreducible representation of $\\mathfrak{g}$\nwhose jet-order is greater than $k.$\n\\end{proposition}\n\nWe now come to \\textbf{Q1. 
}\n\n\\section{Klein geometries}\n\nLet $(\\mathfrak{h,h}_{0})$ be an infinitesimal and effective Klein geometry,\ni.e., $\\mathfrak{h}$ is a finite dimensional Lie algebra, $\\mathfrak{h}\n_{0}\\subset \\mathfrak{h}$ is a subalgebra which does not contain any ideals\nof $\\mathfrak{h}$ other than $\\{0\\}$ (see [O1], [O3] for details). Any such\nKlein geometry defines the filtration (called the Weissfeiler filtration in\nsome works)\n\n\\begin{equation}\n\\mathfrak{h\\supsetneqq h}_{0}\\supsetneqq \\mathfrak{h}_{1}\\supsetneqq\n....\\supsetneqq \\mathfrak{h}_{m}\\supsetneqq \\mathfrak{h}_{m+1}=\\{0\\}\n\\end{equation}\nwhere $\\mathfrak{h}_{i+1}\\overset{def}{=}\\{x\\in \\mathfrak{h}_{i}\\mid \\lbrack\nx,\\mathfrak{h]\\subset h}_{i}\\},$ $0\\leq i\\leq m.$ We called the integer $m+1$\nthe infinitesimal order of $(\\mathfrak{h,h}_{0})$ and denoted it by $ord(\n\\mathfrak{h,h}_{0}).$ Now let $(G,H)$ be an effective Klein geometry\ninducing $(\\mathfrak{h,h}_{0}),$ i.e., $G$ is a Lie group with Lie algebra $\n\\mathfrak{h},$ $H\\subset G$ a closed subgroup with Lie algebra $\\mathfrak{h}\n_{0}. $ Now $G$ acts on $G\/H=M$ as a transitive transformation group with the\nstabilizer $H.$ If $g(x)=y,$ then the transformation $g\\in G$ is locally\ndetermined near $x$ (sometimes globally on $M!)$ by its $m$'th order jet at $\nx$ where $m=ord(\\mathfrak{h,h}_{0}).$ Therefore, in order to answer \\textbf{\nQ1, }we must relate Klein geometries to representations.\n\nNow given some representation $\\rho :\\mathfrak{g\\rightarrow gl(}V),$ we\ndefine the operation $[,]$ on $\\mathfrak{g\\times }V$ by\n\n\\begin{equation}\n\\lbrack (g_{1},v_{1}),(g_{2},v_{2})]\\overset{def}{=}([g_{1},g_{2}],\\rho\n(g_{1})(v_{2})-\\rho (g_{2})(v_{1}))\n\\end{equation}\n\nClearly $[,]$ is bilinear and skew symmetric and a straightforward\nverification shows that it also satisfies the Jacobi identity. 
Therefore $(\n\\mathfrak{g\\times }V,[,])$ is a Lie algebra which we will denote by $\n\\mathfrak{g\\times }_{\\rho }V$. This construction is a special case of a more\ngeneral one described in [J] (see pg. 17). We identify $\\mathfrak{g}$ with\nits image $(\\mathfrak{g},0)\\subset \\mathfrak{g\\times }_{\\rho }V$ and\nidentify $V$ with the abelian ideal $(0,V)\\subset \\mathfrak{g\\times }_{\\rho\n}V.$ From (8) we deduce the important formula\n\n\\begin{equation}\n\\lbrack \\mathfrak{g},V]=[(\\mathfrak{g},0),(0,V)]=(0,\\rho (\\mathfrak{g}\n)V)=\\rho (\\mathfrak{g})V\n\\end{equation}\n\nNow suppose that $\\rho :\\mathfrak{g\\rightarrow gl(}V)$ is irreducible with\nthe jet-filtration (5). To be consistent with the notation in (7), we\nrewrite the ascending filtration (5) as a descending filtration (writing $\nV_{k-i}$ for $V_{i})$\n\n\\begin{equation}\nV\\supsetneqq V_{0}\\supsetneqq V_{1}\\supsetneqq ....\\supsetneqq\nV_{k}\\supsetneqq \\{0\\}\n\\end{equation}\n\nWe define\n\n\\begin{equation}\n\\mathfrak{h}\\overset{def}{=}\\mathfrak{g\\times }_{\\rho }V\n\\end{equation}\n\n\\begin{equation}\n\\mathfrak{h}_{0}\\overset{def}{=}V_{0}\\varsubsetneqq V\\subset \\mathfrak{h}\n\\end{equation}\nand consider the Klein geometry $(\\mathfrak{h,h}_{0})$ which is effective by\n(9) and irreducibility of $\\rho .$ By the definition of (7), we have $[\n\\mathfrak{h}_{i+1},\\mathfrak{h}]\\subset \\mathfrak{h}_{i}$ and $\\mathfrak{h}\n_{i+1}$ is the largest subalgebra of $\\mathfrak{h}_{i}$ satisfying $[\n\\mathfrak{h}_{i+1},\\mathfrak{h}]\\subset \\mathfrak{h}_{i}.$ For $(\\mathfrak{\nh,h}_{0})$ defined by (11), (12), we have $[V_{i+1},\\mathfrak{h]\\subset \nV_{i}$ by (9) and therefore $V_{i}\\subset \\mathfrak{h}_{i}.$ It follows that \n$m\\geq k.$ If (10) is maximally refined, we conclude $V_{i}=\\mathfrak{h}_{i}$\nand $m=k.$ Hence we obtain\n\n\\begin{proposition}\nLet $\\rho :\\mathfrak{g\\rightarrow gl(}V)$ be an irreducible representation\nwith some maximally refined jet-filtration 
(10) of length $k.$ Then the\nKlein geometry $(\\mathfrak{h,h}_{0})$ defined by (11), (12) is effective\nwith $ord(\\mathfrak{h,h}_{0})=k+1.$\n\\end{proposition}\n\nProposition 5 gives a partial answer to the first fundamental problem of jet\ntheory (FP1) posed in [O1]. It is easy to show that $[\\mathfrak{h}_{i},\n\\mathfrak{h}_{j}]\\subset \\mathfrak{h}_{i+j}$ in (7) and therefore $\\mathfrak{\nh}_{i}$ is abelian if $2i\\geq m+1=ord(\\mathfrak{h,h}_{0}),$ i.e., the \"half\ntail\" of the Weissfeiler filtration always consists of abelian ideals of $\n\\mathfrak{h}_{0}.$ In the examples we construct by (11), (12), note that $\n\\mathfrak{h}_{0}$ is always abelian! It is easy to modify our construction\nby defining $\\mathfrak{h}_{0}\\overset{def}{=}$ $\\mathfrak{r\\times }_{\\rho\n}V_{0}$ (rather than by (12)) for some \"carefully chosen\" subalgebra $\n\\mathfrak{r\\subset g}$ which will make $\\mathfrak{h}_{0}$ nonabelian (but\nsolvable!) and still give the conclusion of Proposition 5. However, we do\nnot know the degree of freedom we have for $\\mathfrak{h}_{0}.$\n\nTo conclude this section, we would like to express our belief that the\nmodern representation theory of Lie algebras can be reconstructed, possibly\nwith some new insights and perspectives, as a special case of the theory of\nKlein geometries.\n\nWe now come to \\textbf{Q3 }(which is partially answered by the above\nparagraph)\\textbf{.}\n\n\\section{Stiffening of symmetries and integrability}\n\nAny introductory book on differential geometry starts by defining the\ntangent space, vector fields, tensor fields, differential forms...etc. These\nare all first order objects, i.e., a transformation acts on these objects by\nits first order derivatives. 
One is naturally led to ask the following\nadmittedly vague question:\n\n\\textbf{Q: }Do the \"higher order derivatives\" play any role in the global\nstructure of smooth manifolds?\n\nIf the answer to \\textbf{Q }is affirmative, then \"higher order derivatives\"\nare surely quite relevant since understanding the structures of manifolds is\na fundamental problem in differential geometry. We do not know of any\nattempt in the literature to answer \\textbf{Q} other than the striking\nunpublished preprint [Ol].\n\nNow suppose some Lie group $G$ (rather, a pseudogroup of finite order) acts\ntransitively on a smooth manifold $M.$ This situation expresses the fact\nthat $M$ is \"symmetric\" in some particular way. If some proper subgroup $\nG^{\\prime }\\subset G$ also acts transitively on $M,$ i.e., if the action of\nthe transformation group $G$ can be stiffened to the transformations of $\nG^{\\prime },$ we can speculate that $M$ is \"more symmetric\" in the same way.\nAs an extreme, if $G^{\\prime \\prime }\\subset G^{\\prime }\\subset G$ acts\nsimply transitively, then $M$ is a Lie group, the most symmetric object from\nthis standpoint. However, if we allow the transformation group to be \"too\nlarge\" like $Diff(M),$ then all manifolds become symmetric since $Diff(M)$\nacts transitively on $M.$\n\nNow the key fact: For some fixed $M,$ if a Lie group $G,$ whose dimension is\nmuch greater than the dimension of $M,$ acts transitively and effectively on \n$M,$ in which case the dimension of the stabilizer $H$ must be large too\nsince $\\dim G\/H=$ $\\dim G-\\dim H$ $=\\dim M,$ then the Klein geometry $(G,H)$\nmust have high order as defined above, i.e., we need higher order\nderivatives of the transformations of $G$ to determine them. 
The reason is\nthat $H$ injects into the $k$'th order jet group $G_{k}(n)$ where $n=\\dim M.$\nFor instance, if $k=1,$ then we must have $\\dim H\\leq \\dim G_{1}(n)=n^{2}.$\nIf $\\dim H\\succneqq \\dim G_{1}(n),$ then we need at least second order\nderivatives to inject $H$ into $G_{2}(n),$ if $\\dim H\\succneqq \\dim\nG_{2}(n),$ then we need at least third order derivatives...etc. In short,\n\"large groups act transitively and effectively on small spaces with large\norder\". Now suppose $(G,H)$ has order $k+1$ and the action of $G$ can not be\nstiffened to a proper subgroup of order $k.$ This indicates an obstruction\ncoming from $k$'th order derivatives and the problem is, of course, to\nformulate this obstruction in precise mathematical terms. Now $(G,H)$\ndefines a flat pre-homogeneous geometry (PHG) of order $k+1$ ([O1], [O2])\nwhich will \"contain\" many PHG's of order $k$ and none of these PHG's can\nintegrate, i.e., have vanishing curvature, for otherwise they would give a\nstiffening.\n\nRemarkably, the stiffening problem has a purely algebraic counterpart: If $\n(G,H)$ stiffens $(G^{\\prime },H^{\\prime }),$ then $(\\mathfrak{\ng,h)\\preccurlyeq (g}^{\\prime },\\mathfrak{h}^{\\prime }),$ i.e., $\\mathfrak{g+h\n}^{\\prime }=\\mathfrak{g}^{\\prime },$ $\\mathfrak{h=g\\cap h}^{\\prime }$ (see\n[O1], pg. 131 for details). Note that $\\dim \\mathfrak{g-}\\dim \\mathfrak{h=}\n\\dim \\mathfrak{g}^{\\prime }-\\dim \\mathfrak{h}^{\\prime }.$ Now $\\preccurlyeq $\nis a partial order on Klein geometries and the problem is to find the\nminimal (also maximal!) 
elements.\n\nThe above speculative scenario has been one of the motivations for us to\nbecome interested in Klein geometries and geometric structures of high order\nand we hope it may be inspiring also for some others.\n\n\\bigskip\n\n\\textbf{References}\n\n\\bigskip\n\n[H] B.C.Hall: Lie Groups, Lie Algebras, and Representations: An Elementary\nIntroduction, Graduate Texts in Mathematics, Book 222, Springer, 2015\n\n[J] N. Jacobson: Lie Algebras, Dover Publications Inc., 1962\n\n[Ol] P.J.Olver: Differential hyperforms, I, preprint, University of\nMinnesota, 1982\n\n[O1] E.H.Orta\\c{c}gil: An Alternative Approach to Lie Groups and Geometric\nStructures, OUP, 2018\n\n[O2] E.H.Orta\\c{c}gil: Curvature without connection,\narXiv:2003.06593, 2020\n\n[O3] E.H.Orta\\c{c}gil: Nilpotency and higher order\nderivatives in differential geometry, arXiv:2105.07254, 2021\n\n\\bigskip\n\nErc\\\"{u}ment H. Orta\\c{c}gil\n\nortacgile@gmail.com\n\n\\end{document}\n\n\n\\section{Introduction}\n\\subsection{Price prediction based on machine learning}\nThe traditional perception of the stock price is mainly based on technical analysis and fundamental analysis. However, with the development of big data, people increasingly realize that these two methods do not make full use of public market information. Therefore, asset price prediction based on machine learning has become a hot topic recently.\\par\nAdebiyi A. Ariyo et al. predict the stock price based on the ARIMA model, which has been proven to have strong short-term prediction potential and can compete with existing stock price prediction techniques\\cite{9}. Carson Kai-Sang Leung et al. proposed structural support vector machines (SSVMs) and used them to predict positive or negative changes in share prices\\cite{10}. 
Recently, with the continuous development of deep learning technology and computing hardware, stock price prediction based on deep learning has become a hot topic. Sreelekshmy Selvin et al. carried out stock price prediction experiments based on LSTM, CNN, and RNN models, and proposed a sliding window to predict short-term future values\\cite{11}. \n\n\n\n\\subsection{Our work}\nThis article is based on two popular assets, gold and bitcoin, which are considered to be good hedging assets. We will only make predictions based on the price series itself, although this is not common, because stock analysts usually refer to a large number of factors. We will focus on the application of neural networks in time series, and hope to extend it to general time series data sets.\\par\n\nTo provide an effective trading strategy, we develop both a prediction model and a programming model to tackle the problem.\\\\\n\\indent Since the only data we can use to formulate the strategy is the daily prices of assets over five years, we first conduct feature engineering to excavate attributes through economical and mathematical methods. A sliding window is also introduced to lengthen the training set for the following prediction.\\\\\n\\indent Then we constantly improve the model based on the basic LSTM model, combining various optimization algorithms and mechanisms. After proving that our At-BiLSTM model is accurate, we calculate the effective duration of the model and predict the rate of return based on the duration.\\\\\n\n\n\\section{Data pre-processing}\nTo preserve the effectiveness of prediction and strategies, the continuity and authenticity of the trading data must be guaranteed. However, not all the data we have are complete. 
To ameliorate the condition of the data set, four methods have been proposed to improve the data as shown below:\n\\begin{itemize}\n\\item For transaction date, we convert different time forms into the same data type.\n\\item On days the market is closed, we take the gold price of the previous trading day within the available time as the price of that day, to obtain the corresponding prices of the two assets on each trading day within five years.\n\\item While training the model, we will set up a sliding window and continuously make predictions with the latest available historical data as input information.\n\\item The Lagrange interpolation method is used to fill missing data.\n\\end{itemize}\n\\subsection{Lagrange interpolation method}\nThe Lagrange interpolation method is a polynomial interpolation method. The method selects several suitable values around the interpolation point to construct a simple interpolation polynomial $y=L(x)$ which passes through the interpolation points. The simple but accurate interpolation polynomial $L(x)$ utilizes the existing data and computes the missing data accordingly.\n\\begin{equation}\nL(x)=\\sum_{j=0}^l G_j(x)y_j\n\\end{equation}\nwhere $G_j(x)$ is the weighting coefficient function.\n\\begin{equation}\nG_j{(x)}=\\prod_{k=0,k\\neq j}^l \\frac{x-x_k}{x_j-x_k}=\\frac{x-x_0}{x_j-x_0}\\cdots \\frac{x-x_{j-1}}{x_j-x_{j-1}}\n\\frac{x-x_{j+1}}{x_j-x_{j+1}}\\cdots \\frac{x-x_l}{x_j-x_l}\n\\end{equation}\nWe fill in the missing prices of gold over the five years using python.\n\\subsection{Feature engineering}\nSince the only data we can use are the daily prices of two assets over five years, we conduct feature engineering to acquire more attributes to be used for analysis. By decomposing and aggregating raw data, we can better analyse the essence of the problem. 
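The Lagrange-interpolation fill described above can be sketched in a few lines of Python (a minimal sketch; the window of four neighboring prices and the price values are illustrative, not taken from the data set):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial L(x) = sum_j G_j(x) * y_j
    through the points (xs[j], ys[j])."""
    total = 0.0
    for j, yj in enumerate(ys):
        # G_j(x): product over k != j of (x - x_k) / (x_j - x_k)
        gj = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                gj *= (x - xk) / (xs[j] - xk)
        total += gj * yj
    return total

# Fill a hypothetical missing price at day 2 from four surrounding trading days.
days = [0.0, 1.0, 3.0, 4.0]
prices = [1324.6, 1321.5, 1319.8, 1322.0]
missing = lagrange_interpolate(days, prices, 2.0)
```

In practice only a few nearby points are used, since high-degree Lagrange polynomials can oscillate strongly at the edges of the window.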
In this part, two methods are applied to unearth new clusters of attributes.\n\\subsubsection{Economical attributes}\nReferring to the real financial market, we calculate the following eight economical attributes:\n\\begin{itemize}\n\\item \\textbf{Rate of return:} Measure the price change over two days by the equation below:\n\\begin{equation}\\label{shou}\nReturn_t=\\frac{P_t-P_{t-1}}{P_t}\n\\end{equation}\nFor the convenience of prediction, we will predict the daily rate of return in the following section \\ref{Predict}.\n\\item \\textbf{Price variance:} Measure the deviation of asset prices. We generate 10-day variance and 20-day variance.\n\\item \\textbf{Moving average index:} Measure the average asset prices over a period of time. We generate 10-day MA and 30-day MA.\n\\item \\textbf{Boll index:} Created by John Bollinger,\\cite{2}\n who used statistical principles to find out the standard deviation and confidence interval to determine the volatility range and future trend of the stock price. We generate high, middle and low boll values through the equation below:\n\\begin{equation}\nBoll_t = \\frac{\\sum\\limits_{n=1}^{N}p_{t-n}}{N}\\pm 2\\sigma_B\n\\end{equation}\nwhere $\\sigma_B$ is the price standard deviation.\n\\item \\textbf{Psychological index:} Measure the sentiment index of investors' psychological fluctuations in the asset market.\n\\begin{equation}\nPsy_t=\\frac{N_{up}}{N}\n\\end{equation}\nwhere $N_{up}$ is the number of rising days among the last $N$ days.\n\\item \\textbf{Technical indicator:} We also introduce a popular indicator, RSI, in financial markets to expand our economical attributes. 
With reference to the definitions of the indicator in the stock market, we can calculate the corresponding asset attribute.\n\\begin{equation}\nRSI=100-\\frac{100}{1+(\\frac{1}{n}\\sum\\limits ^n_{i=0}r_{t-i})\/(\\frac{1}{n}\\sum\\limits ^n_{i=0}f_{t-i})}\n\\end{equation}\nwhere $r_{t-i}$ is the total price change on rising days and $f_{t-i}$ is the total price change on falling days.\n\\end{itemize}\n\\subsubsection{Mathematical attributes}\nEmpirical research shows that most financial data have the characteristics of sharp peaks and thick tails, so they cannot be characterized by a normal distribution. The independence assumption of the sequence is generally no longer valid and the prices of financial assets present autocorrelation and volatility clustering in time series. To capture the asymmetrical data, we choose the GARCH model to describe the conditional heteroskedasticity over time series and introduce attributes:\n\\begin{itemize}\n\\item \\textbf{Stochastic disturbance term $\\mu_t$}\n\\item \\textbf{Conditional variance $\\sigma^2_t$}\n\\end{itemize}\n\n\\begin{figure*}[H]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{FATG.png}\n\\caption{Price return of gold}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{FATB.png}\n\\caption{Price return of bitcoin}\n\\end{minipage}\n\\end{figure*}\n\nThe GARCH model is a widely used time series analysis method proposed by Bollerslev (1986), which is an extension of the earlier work on ARCH models by Engle (1982).\\cite{1}\nThe core idea of the ARCH model is that the error variance at time $t$ depends on the past squared errors.\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\ny_t=X^{`}_t\\phi+\\mu_t, \\mu_t \\sim N(0,\\sigma^2_t)\\\\\n\\sigma^2_t=\\alpha_0+\\sum\\limits_{i=1}^p\\alpha_i\\mu^2_{t-i}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\mu_t$ represents the stochastic disturbance term without 
serial correlation and $\\sigma^2_t$ represents the variance of the stochastic disturbance.\\\\\n\\indent Then we generate the GARCH model by adding the lag of $\\sigma_t$ to the ARCH model:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\ny_t=X^{`}_t\\phi+\\mu_t, \\mu_t \\sim N(0,\\sigma^2_t)\\\\\n\\sigma^2_t=\\alpha_0+\\sum\\limits_{i=1}^p\\alpha_i\\mu^2_{t-i}+\\sum\\limits_{i=1}^q\\beta_i\\sigma^2_{t-i}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $q$ and $p$ represent the lags in the GARCH term and the ARCH term respectively.\\\\\n\\indent In order to use the GARCH model, we need to verify that the acquired series is stationary before extracting features. Due to the huge amount of data, the ADF method is used in the test.\n\\begin{table*}[htbp]\n\\centering\n\\caption{ADF test}\n\\begin{tabular}{c|cccccccc}\n\\hline\n\\textbf{Asset} & \\multicolumn{4}{c}{\\textbf{gold}} & \\multicolumn{4}{c}{\\textbf{bitcoin}} \\\\ \\hline\n\\multicolumn{1}{l|}{} & \\textbf{tau1} & \\textbf{tau3} & \\textbf{phi2} & \\textbf{phi3} & \\textbf{tau1} & \\textbf{tau3} & \\textbf{phi2} & \\textbf{phi3} \\\\ \\hline\n\\multicolumn{1}{l|}{} & -29.3978 & -29.665 & 293.3379 & 440.0067 & -23.5781 & -23.6103 & 185.8161 & 278.724 \\\\\n\\textbf{1pct} & -2.58 & -3.96 & 6.09 & 8.27 & -2.58 & -3.96 & 6.09 & 8.27 \\\\\n\\textbf{5pct} & -1.95 & -3.41 & 4.68 & 6.25 & -1.95 & -3.41 & 4.68 & 6.25 \\\\\n\\textbf{10pct} & -1.62 & -3.12 & 6.25 & 5.34 & -1.62 & -3.12 & 6.25 & 5.34 \\\\ \\hline\n\\end{tabular}\n\\end{table*}\nThe ADF test shows that at the $1\\%$ significance level, we can reject the null hypothesis and consider the series of gold and bitcoin as stationary.\\\\\n\\indent The GARCH(1,1) model has few parameters and is often used in practice. Since it is quite difficult to determine the order of a GARCH model, we choose the commonly used GARCH(1,1) model to extract attributes. 
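The GARCH(1,1) variance recursion can be sketched as follows. This is a minimal sketch with assumed, not estimated, parameter values; a real fit would estimate $\alpha_0,\alpha_1,\beta_1$ by maximum likelihood (e.g. with the `arch` Python package):

```python
def garch_variances(residuals, alpha0, alpha1, beta1):
    """GARCH(1,1) recursion: sigma^2_t = alpha0 + alpha1*mu^2_{t-1} + beta1*sigma^2_{t-1}."""
    # Initialize with the unconditional variance alpha0 / (1 - alpha1 - beta1).
    sigma2 = [alpha0 / (1.0 - alpha1 - beta1)]
    for mu in residuals[:-1]:
        sigma2.append(alpha0 + alpha1 * mu ** 2 + beta1 * sigma2[-1])
    return sigma2

# Illustrative residuals (demeaned daily returns) and illustrative parameters.
mus = [0.01, -0.02, 0.015, -0.005, 0.03]
sig2 = garch_variances(mus, alpha0=1e-5, alpha1=0.1, beta1=0.85)
```

Stationarity of the variance process requires $\alpha_1+\beta_1<1$, which the assumed values satisfy.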
From GARCH(1,1), we acquire two attributes $\\mu_t$ and $\\sigma_t^2$, referring to the stochastic disturbance term and the conditional variance respectively.\n\\begin{equation}\n\\mu_t=y_t-x_t\\phi\n\\end{equation}\n\\begin{equation}\n\\sigma^2_t=\\alpha_0+\\alpha_1\\mu^2_{t-1}+\\beta_1\\sigma^2_{t-1}\n\\end{equation}\nwhere $y_t$ is the asset price in time $t$, $x_t$ is the asset price in time $t-1$, $\\mu_t$ is the stochastic disturbance term, $\\sigma^2_t$ is the variance.\\\\\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{gs.png}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{gm.png}\n\\end{minipage}\n\\caption{Attribute distribution of gold}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{cs.png}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{cm.png}\n\\end{minipage}\n\\caption{Attribute distribution of bitcoin}\n\\end{figure*}\nAs the figures show, the attribute $\\mu_t$ coincides closely with the asset return and the attribute $\\sigma^2_t$ is always positive. So they are suitable for describing asset returns.\n\\section{Prediction model}\\label{Predict}\nSince there are many factors affecting the volatility of stock prices, deep learning models such as neural networks can be used for prediction. The most popular model used for time series prediction is the RNN model. The chain-type structure of the RNN is particularly suitable for processing discrete data series. However, since the information in a hidden layer comes only from the current input and the previous layer, the RNN has no long-term memory, which leads to the vanishing gradient problem. 
In this context, LSTM (Long-short term memory) model has emerged.\n\\subsection{LSTM model}\nCells were introduced to LSTM model, which utilizes three gates, forget, input and output, to maintain and control information. The forget gate combines the information of the previous hidden layer $s_{t-1}$ and input $x_{t}$. Under the sigmoid function, it decides whether the old information to be discard or not. The input gate and tanh function filter the activation information and work out new cell $\\tilde{c_{t}}$. The output gate finally output the information after processing. In the model, the decay propagation of the gradient information can be solved to keep the network in memory for a relative long time.\n\t\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{LSTM.pdf}\n\t\\caption{The structure of LSTM}\n\t\\end{figure}\nThe structure of the \"gate\" realizes the addition and removal of information, and determines whether the model will save or forget the information during training. With this \"gate structure\", LSTM model effectively alleviates the problem of gradient change and selectively stores information. For a given feature at $t$, the network will have the ability to backward predict the target information, enabling investors to predict the future price at some time, so as to make more accurate decisions.\n\\subsubsection{Two-layer LSTM with Batch Normalization}\nThe research conducted by Li(2021)\\cite{7} shows that the structure of two-layer LSTM can effectively improve the accuracy of the model prediction and the study presented by Sergey Ioffe(2015)\\cite{5} shows that Batch Normalization could accelerate deep Network Training by Reducing Internal Covariate Shift. So we make some improvement in the traditional model correspondingly.\\\\\n\\indent Batch Normalization is an efficient data normalization method used to activate layers in deep neural networks. 
It can speed up the convergence rate of the model training, make the training process more stable, avoid gradient explosion or gradient disappearance and play a certain regularization role.\n\\begin{equation}\n\\begin{array}{l}\nInput:B=\\{x_{1\\cdots m}\\}\\quad \\gamma,\\beta(parameters)\\\\\nOutput:\\{y_i=BN_{\\gamma,\\beta}(x_i)\\}\\\\\n\\mu_B \\gets \\frac{1}{m}\\sum\\limits^m_{i=1}x_i\\\\\n\\sigma^2_B\\gets \\frac{1}{m}\\sum\\limits^m_{i=1}(x_i-\\mu_B)^2\\\\\n\\tilde{x_{i}}\\gets \\frac{(x_i-\\mu_B)^2}{\\sqrt{\\sigma^2_B+\\varepsilon}}\\\\\ny_i\\gets \\gamma \\tilde{x_{i}}+\\beta\n\\end{array}\n\\end{equation}\nwhere B is the set of input values, a learnable parameter, which can be used to normalize with and restore the data.\\\\\n\\indent The data is normalized to a unified interval to reduce the data divergence and the learning complexity of the network. After normalization, using $\\gamma$ and $\\beta$ as the parameters preserves the distribution of the original data. To further avoid overfitting, we add \"Dropout layers\" in the model. The layers can temporarily discard some neural from the network during the training process of a deep learning network refereed to the probability. The mechanism helps find sparse network containing only a fraction of the neurons based on the original complex network, greatly reducing the possibility of overfitting.\\cite{6}\n\n\\subsection{Bidirectional LSTM model}\nTo further improve the efficiency of data use and enhance the robustness of the model, we combine a forward LSTM model with a backward LSTM model to construct Bidirectional LSTM model. 
Bi-LSTM can effectively use the forward and backward feature information of the input and improve the accuracy when the parameters such as the training rounds are unchanged.\n\\subsection{Attention mechanism}\nWhen assessing a set of information, our neurons automatically scan the global image and then focus on the target based on past experience and current goals, that is, the focus of attention. Once the focus is determined, we pay more attention to more details in that area and suppress other useless information. For example, in the field of image processing, when scanning journal images, people mainly focus on the picture and the title, in line with our life experience.\\\\\n\\indent Weight factors are assigned to each factor to measure its contribution in attention mechanism. Firstly, two hidden layer, Encoder and Decoder, equivalent to the input and output of the structure are employed. Points are then accumulated with the Decoder hidden layer and each Encoder hidden layer. The results are recorded as \"Score\". All scores are then sent to the softmax layer so that the higher score the layer achieves, the greater the probability would be. Thus the mechanism suppresses invalid or noise information.\n\\begin{figure}[htbp]\n\\centering\n\n\\centering\n\\includegraphics[width=0.6\\linewidth]{attLayer.png}\n\\caption{Structure of attention mechanism}\n\\end{figure}\n\\begin{figure}\n\n\n\\centering\n\\includegraphics[width=0.6\\linewidth]{attstructure.png}\n\\caption{Process of attention mechanism}\n\n\\end{figure}\nwhere the softmax function is:\n\\begin{equation}\nsoftmax(z)_i = \\frac{exp(z_i)}{ {\\textstyle \\sum_{j}^{}}exp(z_j) }\n\\end{equation}\n\\indent For time series data, due to high-dimensional variables and multi-step time steps, if trained with equal weights, it will lead to unclear information focus, eventually resulting in overfitting noise information. Therefore, we introduce attention mechanism to the our model. 
We apply attention mechanism in the time step dimension of the input variable, considering the past trading data as significant factors during the prediction of future return.\n\\subsection{Results}\n\\subsubsection{RMSProp optimization algorithm}\nWhile training the model, we choose RMSProp optimization algorithm to adjust model parameters.\n\\begin{equation}\ng\\gets \\frac{1}{m} \\bigtriangledown_\\theta \\sum\\nolimits_i L(f(x^{(i)};\\theta),y^{(i)})\n\\end{equation}\n\\begin{equation}\nr_{t}\\leftarrow \\rho r_{t-1}+(1-\\rho)g\\odot g\n\\end{equation}\n\\begin{equation}\n\\Delta\\theta=-\\frac{\\epsilon}{\\sqrt{\\delta+r}}\\odot g\n\\end{equation}\nwhere $\\rho$ is used to control how much historical information is acquired.\\\\\n\\indent Given that the loss functions of neural networks are all non-convex, RMSProp\\cite{3} presents better in non-convex conditions, changing the cumulative gradient to the exponential decay moving average to discard distant past history.\n\\indent Based on the above analysis, we finally determined the optimal parameter portfolio as follows:\n\\begin{table}[htbp]\n\\centering\n\\caption{Optimal parameter portfolio}\n\\begin{tabular}{@{}c|cccc@{}}\n\\toprule\n & \\textbf{Units} & \\textbf{batch\\_size} & \\textbf{Learning rate} & \\textbf{epochs} \\\\ \\midrule\n\\textbf{LSTM} & 128 & 128 & 0.01 & 300 \\\\\n\\textbf{BiLSTM} & 64 & 128 & 0.001 & 300 \\\\\n\\textbf{At-BiLSTM} & 32 & 128 & 0.01 & 300 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\nWe input two assets datasets into the three models respectively and compare the accuracy of the three models between the training set and the test set.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{3acc.png}\n\\caption{Accuracy of three models}\n\\end{figure}\nThe accuracy of the three models gradually increase for both assets, indicating that AT-BiLSTM model is prominent in prediction. 
In addition to the above three models, we compared the accuracy of several common prediction models and obtained the results as follows:\n\nConsidering the above analysis, we finally decide to use the AT-BiLSTM model with high predictive ability for return prediction during the asset trading period. Using the equation \\ref{shou}, we can work out a five-year forecast for the prices of two assets. In order to show the difference between prediction and reality more directly, we also work out the local price trend.\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{goldPre.png}\n\\caption{Prediction of Gold}\n\\end{minipage\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{bitPre.png}\n\\caption{Prediction of Bitcoin}\n\\end{minipage}\n\\end{figure*}\n\\section{Firm offer training strategy}\nThe prediction of return by the above models is a global train, assuming that we have all the trading data during the period. However, when we develop strategies, we cannot know the \"future\" data. For example, we suppose the trader were at 9\/11\/2016, the beginning of the dataset. At this point, the trader would not have any trading data, namely that there is no data for the trader to train the model. That means we must take out a considerable trading period at the beginning, during which we temporarily do not participate in the transaction. Only if the accuracy of the model rises to a certain extent, can we use the model to predict and make investment decisions. Therefore, we should divide the five-year period into training and trading periods.\\\\\n\\indent To save the training time, we update the model using an incremental training pattern. Taking gold trading as an example, the training error decreases over time. As the figure shown above, after 300 days the error plateaus, indicating the model is fully learned. 
So we select the $300^{th}$ day of the trading period as the point we start our strategy. When we start our strategy, we still train the prediction model with incremental training pattern, so that the model always learns the latest data. As can be seen from the attention mechanism above, the latest trading data presents greater learning value.\\par\nWe also compared the attention bilstm model with other traditional models. Finally, the experimental and prediction results are shown in the TABLE III. It can be seen that the attention bilstm model achieves the best prediction effect. It also can be seen in the Fig 9 and Fig 10, that the model has a good fit for the price.\n\n\\begin{table*}[htbp]\n\\caption{accuracy of common models}\n\\centering\n\\begin{tabular}{@{}cccccc@{}}\n\\toprule\n\\textbf{Indicator} & \\textbf{Assets} & \\textbf{LinearModel} & \\textbf{SVM} & \\textbf{Decision tree} & \\textbf{Arima Model} \\\\ \\midrule\n{\\textbf{Acc.(\\%)}} & \\textbf{Bitcoin} & 0.5646 & 0.5505 & 0.5489 & 0.5316 \\\\\n & \\textbf{Gold} & 0.5723 & 0.5589 & 0.5501 & 0.5469 \\\\ \\midrule\n{\\textbf{AUC}} & \\textbf{Bitcoin} & 0.6241 & 0.5004 & 0.5208 & \\textless{}0.5 \\\\\n & \\textbf{Gold} & 0.6405 & \\textless{}0.5 & \\textless{}0.5 & \\textless{}0.5 \\\\ \\midrule\n\\textbf{Indicator} & \\textbf{Assets} & \\textbf{LSTM} & \\textbf{AT-LSTM} & \\textbf{BiLSTM} & \\textbf{AT-BiLSTM} \\\\ \\midrule\n{\\textbf{Acc.(\\%)}} & \\textbf{Bitcoin} & 0.5357 & 0.6239 & 0.5854 & 0.6578 \\\\\n & \\textbf{Gold} & 0.5821 & 0.6433 & 0.6315 & 0.6697 \\\\ \\midrule\n{\\textbf{AUC}} & \\textbf{Bitcoin} & 0.6755 & 0.7016 & 0.6988 & 0.7194 \\\\\n & \\textbf{Gold} & 0.6817 & 0.7132 & 0.7102 & 0.7303 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\n\n\n\n\\section{Conclusion, Discussion and Future Work}\nThis paper have developed an automatic model that can predict the price, judge the trading point and determine the trading volume. 
This model is based on Bi-LSTM and adds attention mechanism. The timing and volume strategy based on dynamic programming and entropy weight method is also added to make trading decision. After our test, it has good prediction accuracy and strong robustness. In our simulation back test, it has achieved an annualized yield of $170\\%$. The value of $\\$1000$ investment would be $\\$8542.3$. \\par\nAlthough our strategy can achieve high annual yield, ironically, its yield is not even as good as buying bitcoin at the beginning of the trading period and holding it all the time. This also shows that the model performs generally for one-sided markets. Besides, Although the attention Bi-LSTM model has been proved to be the best in our experiments, it can be seen that the performance gap between it and other models is not large. There is a possibility that the attention Bi-LSTM model does not perform well in other data sets, which also proves that there is no free lunch theorem\\cite{8} from another side.\\par\nThis paper only considers the prediction of price, and does not explore in depth how to place an asset order. For example, when our model predicts that prices will rise tomorrow, how many assets should we buy? At the same time, we also did not consider the matching of trading books, liquidity and other issues that may occur in the real trading process. These problems need to be further studied.\n\n\n\\section{Introduction}\n\\subsection{Price prediction based on machine learning}\nThe traditional perception of the stock price is mainly based on technical analysis and fundamental analysis. However, with the development of big data, people increasingly realize that these two methods do not make full use of market public information. Therefore, asset price prediction based on machine learning has become a hot topic recently.\\par\nAdebiyi A. Ariyo, etc. 
predicted the stock price with the ARIMA model, which has been shown to have strong short-term prediction potential and to compete with existing stock price prediction techniques\\cite{9}. Carson Kai-Sang Leung et al. proposed structural support vector machines (SSVMs) and used them to predict positive or negative changes in share prices\\cite{10}. Recently, with the continuous development of deep learning technology and computing hardware, stock price prediction based on deep learning has become a hot topic. Sreelekshmy Selvin et al. carried out stock price prediction experiments based on LSTM, CNN and RNN, and proposed a sliding window to predict short-term future values\\cite{11}. \n\n\n\n\\subsection{Our work}\nThis article is based on two popular assets, gold and bitcoin, which are considered to be good hedging assets. We make predictions based on the price series itself alone; although this is not common, since stock analysts usually consult a large number of factors, it lets us focus on the application of neural networks to time series, which we hope to extend to general time series data sets.\\par\n\nTo provide an effective trading strategy, we combine a prediction model with a programming model.\\\\\n\\indent Since the only data we can use to formulate the strategy are the daily prices of the two assets over five years, we first conduct feature engineering to extract attributes through economic and mathematical methods. A sliding window is also introduced to lengthen the training set for the subsequent prediction.\\\\\n\\indent We then progressively improve on the basic LSTM model, combining various optimization algorithms and mechanisms. 
After verifying that our AT-BiLSTM model is accurate, we calculate the effective duration of the model and predict the rate of return based on that duration.\\\\\n\n\n\\section{Data pre-processing}\nTo preserve the effectiveness of the prediction and the strategies, the continuity and authenticity of the trading data must be guaranteed. However, not all of the available data are complete. To improve the condition of the data set, the following four methods are applied:\n\\begin{itemize}\n\\item For transaction dates, we convert different time formats into the same data type.\n\\item On days the market is closed, we take the gold price of the previous available trading day as the price of that day, so as to obtain the corresponding prices of the two assets on each trading day within the five years.\n\\item While training the model, we set up a sliding window and continuously make predictions with the latest available historical data as input.\n\\item The Lagrange interpolation method is used for data padding.\n\\end{itemize}\n\\subsection{Lagrange interpolation method}\nLagrange interpolation is a polynomial interpolation method. It selects several suitable values around the interpolation point to construct a simple interpolation function $y=G_j{(x)}$ that passes through the interpolation points. 
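As a concrete illustration, the interpolation just described can be sketched in Python. This is a hedged sketch rather than the paper's actual padding script, and the sample day indices and prices below are made-up values.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[j], ys[j]) at x.

    Each weight is G_j(x) = prod_{k != j} (x - x_k) / (x_j - x_k),
    and the interpolated value is L(x) = sum_j G_j(x) * y_j.
    """
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        weight = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                weight *= (x - xk) / (xj - xk)
        total += weight * yj
    return total

# Fill a missing day's price from neighbouring trading days
# (day indices and prices are illustrative, not real data).
days = [1, 2, 4, 5]
prices = [100.0, 102.0, 103.0, 101.0]
missing_price = lagrange_interpolate(days, prices, 3)
```

Because the polynomial passes exactly through the given points, known prices are reproduced exactly, while the missing day receives a smooth estimate.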
The simple but accurate interpolation function $G_j{(x)}$ utilizes the existing data and computes the missing values accordingly.\n\\begin{equation}\nL(x)=\\sum_{j=0}^l G_j(x)y_j\n\\end{equation}\nwhere $G_j(x)$ is the weighting coefficient function\n\\begin{equation}\nG_j{(x)}=\\prod_{k=0,k\\neq j}^l \\frac{x-x_k}{x_j-x_k}=\\frac{x-x_0}{x_j-x_0}\\cdots \\frac{x-x_{j-1}}{x_j-x_{j-1}}\n\\frac{x-x_{j+1}}{x_j-x_{j+1}}\\cdots \\frac{x-x_l}{x_j-x_l}\n\\end{equation}\nWe fill in the missing gold prices over the five years using Python.\n\\subsection{Feature engineering}\nSince the only data we can use are the daily prices of the two assets over five years, we conduct feature engineering to acquire more attributes for the analysis. By decomposing and aggregating the raw data, we can better analyse the essence of the problem. In this part, two methods are applied to unearth new clusters of attributes.\n\\subsubsection{Economical attributes}\nReferring to the real financial market, we calculate the following economical attributes:\n\\begin{itemize}\n\\item \\textbf{Rate of return:} Measures the price change over two days by the equation below:\n\\begin{equation}\\label{shou}\nReturn_t=\\frac{P_t-P_{t-1}}{P_t}\n\\end{equation}\nFor the convenience of prediction, we will predict the daily rate of return in section \\ref{Predict}.\n\\item \\textbf{Price variance:} Measures the deviation of asset prices. We generate the 10-day and 20-day variances.\n\\item \\textbf{Moving average index:} Measures the average asset price over a period of time. We generate the 10-day and 30-day moving averages.\n\\item \\textbf{Boll index:} Created by John Bollinger \\cite{2}, who used statistical principles to derive a standard deviation band and confidence interval that determine the volatility range and future trend of the stock price. 
We generate the high, middle and low Boll values through the equation below:\n\\begin{equation}\nBoll_t = \\frac{\\sum\\limits_{n=1}^{N}p_{t-n}}{N}\\pm 2\\sigma_B\n\\end{equation}\nwhere $\\sigma_B$ is the price standard deviation.\n\\item \\textbf{Psychological index:} Measures the sentiment of investors' psychological fluctuations in the asset market.\n\\begin{equation}\nPsy_t=\\frac{N_{up}}{N}\n\\end{equation}\nwhere $N_{up}$ is the number of rising days during the last $N$ days.\n\\item \\textbf{Technical indicator:} We also introduce a popular indicator from financial markets, the RSI, to expand our economical attributes. With reference to the definition of this indicator in the stock market, we can calculate the corresponding asset attribute.\n\\begin{equation}\nRSI=100-\\frac{100}{1+(\\frac{1}{n}\\sum\\limits ^n_{i=0}r_{t-i})\/(\\frac{1}{n}\\sum\\limits ^n_{i=0}f_{t-i})}\n\\end{equation}\nwhere $r_{t-i}$ is the price change on rising days and $f_{t-i}$ is the absolute price change on falling days.\n\\end{itemize}\n\\subsubsection{Mathematical attributes}\nEmpirical research shows that most financial data exhibit sharp peaks and heavy tails, so they cannot be characterized by a normal distribution. The independence assumption for the sequence generally no longer holds, and the prices of financial assets exhibit autocorrelation and volatility clustering over time. 
To capture these asymmetric features of the data, we choose the GARCH model to describe the conditional heteroskedasticity of the time series and introduce two attributes:\n\\begin{itemize}\n\\item \\textbf{Stochastic disturbance term $\\mu_t$}\n\\item \\textbf{Conditional variance $\\sigma^2_t$}\n\\end{itemize}\n\n\\begin{figure*}[H]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{FATG.png}\n\\caption{Price return of gold}\n\\end{minipage}%\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{FATB.png}\n\\caption{Price return of bitcoin}\n\\end{minipage}\n\\end{figure*}\n\nThe GARCH model is a widely used time series analysis method proposed by Bollerslev (1986), which is an extension of the earlier work on ARCH models by Engle (1982)\\cite{1}.\nThe core idea of the ARCH model is that the variance of the error at time $t$ depends on the previous squared errors:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\ny_t=X'_t\\phi+\\mu_t, \\quad \\mu_t \\sim N(0,\\sigma^2_t)\\\\\n\\sigma^2_t=\\alpha_0+\\sum\\limits_{i=1}^p\\alpha_i\\mu^2_{t-i}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\mu_t$ represents the stochastic disturbance term without serial correlation and $\\sigma^2_t$ represents the variance of the stochastic disturbance.\\\\\n\\indent We then obtain the GARCH model by adding lags of $\\sigma^2_t$ to the ARCH model:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\ny_t=X'_t\\phi+\\mu_t, \\quad \\mu_t \\sim N(0,\\sigma^2_t)\\\\\n\\sigma^2_t=\\alpha_0+\\sum\\limits_{i=1}^p\\alpha_i\\mu^2_{t-i}+\\sum\\limits_{i=1}^q\\beta_i\\sigma^2_{t-i}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $p$ and $q$ are the lags in the ARCH term and the GARCH term respectively.\\\\\n\\indent In order to use the GARCH model, we need to verify the stationarity and normality of the acquired series before extracting features. 
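Given fitted parameters, the GARCH(1,1) variance recursion above can be sketched in a few lines. This is an illustrative sketch only: the parameter values below (alpha0, alpha1, beta1, and the seed variance) are assumptions, whereas in practice they are estimated by maximum likelihood from the return series.

```python
def garch11_variance(residuals, alpha0, alpha1, beta1, sigma2_init):
    """Conditional variances from the GARCH(1,1) recursion:
    sigma^2_t = alpha0 + alpha1 * mu_{t-1}^2 + beta1 * sigma^2_{t-1}.

    `residuals` holds the disturbance terms mu_t; `sigma2_init` seeds
    the recursion.  Parameter values are illustrative, not fitted ones.
    """
    sigma2 = [sigma2_init]
    for mu_prev in residuals[:-1]:
        sigma2.append(alpha0 + alpha1 * mu_prev ** 2 + beta1 * sigma2[-1])
    return sigma2

# Illustrative residuals and parameters (made-up numbers).
variances = garch11_variance([0.1, -0.2, 0.05], alpha0=0.01,
                             alpha1=0.1, beta1=0.8, sigma2_init=0.02)
```

Note that with positive coefficients the recursion keeps every conditional variance strictly positive, which is why the extracted attribute $\sigma^2_t$ is always positive.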
Due to the huge amount of data, the ADF method is used for the test.\n\\begin{table*}[htbp]\n\\centering\n\\caption{ADF test}\n\\begin{tabular}{c|cccccccc}\n\\hline\n\\textbf{Asset} & \\multicolumn{4}{c}{\\textbf{gold}} & \\multicolumn{4}{c}{\\textbf{bitcoin}} \\\\ \\hline\n\\multicolumn{1}{l|}{} & \\textbf{tau1} & \\textbf{tau3} & \\textbf{phi2} & \\textbf{phi3} & \\textbf{tau1} & \\textbf{tau3} & \\textbf{phi2} & \\textbf{phi3} \\\\ \\hline\n\\multicolumn{1}{l|}{\\textbf{statistic}} & -29.3978 & -29.665 & 293.3379 & 440.0067 & -23.5781 & -23.6103 & 185.8161 & 278.724 \\\\\n\\textbf{1pct} & -2.58 & -3.96 & 6.09 & 8.27 & -2.58 & -3.96 & 6.09 & 8.27 \\\\\n\\textbf{5pct} & -1.95 & -3.41 & 4.68 & 6.25 & -1.95 & -3.41 & 4.68 & 6.25 \\\\\n\\textbf{10pct} & -1.62 & -3.12 & 6.25 & 5.34 & -1.62 & -3.12 & 6.25 & 5.34 \\\\ \\hline\n\\end{tabular}\n\\end{table*}\nThe ADF test shows that at the $1\\%$ significance level we can reject the null hypothesis and consider the gold and bitcoin series stationary.\\\\\n\\indent The GARCH(1,1) model has few parameters and has been widely validated in practice. Since it is quite difficult to determine the order of a GARCH model, we choose the commonly used GARCH(1,1) model to extract attributes. 
From the GARCH(1,1) model, we acquire two attributes, $\\mu_t$ and $\\sigma_t^2$, referring to the stochastic disturbance term and the conditional variance respectively.\n\\begin{equation}\n\\mu_t=y_t-x_t\\phi\n\\end{equation}\n\\begin{equation}\n\\sigma^2_t=\\alpha_0+\\alpha_1\\mu^2_{t-1}+\\beta_1\\sigma^2_{t-1}\n\\end{equation}\nwhere $y_t$ is the asset price at time $t$, $x_t$ is the asset price at time $t-1$, $\\mu_t$ is the stochastic disturbance term and $\\sigma^2_t$ is the conditional variance.\\\\\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{gs.png}\n\\end{minipage}%\n\\begin{minipage}[t]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{gm.png}\n\\end{minipage}\n\\caption{Attribute distribution of gold}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{cs.png}\n\\end{minipage}%\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{cm.png}\n\\end{minipage}\n\\caption{Attribute distribution of bitcoin}\n\\end{figure*}\nAs the figures show, the attribute $\\mu_t$ coincides closely with the asset return and the attribute $\\sigma^2_t$ is always positive, so both are suitable for describing asset returns.\n\\section{Prediction model}\\label{Predict}\nSince many factors affect the volatility of stock prices, deep learning models such as neural networks can be used for prediction. The most popular model for time series prediction is the RNN. Its chain-type structure makes it particularly suitable for processing discrete data series. However, since the information in a hidden layer comes only from the current input and the previous layer, a plain RNN has no long-term memory, which leads to the vanishing gradient problem. 
In this context, the LSTM (long short-term memory) model emerged.\n\\subsection{LSTM model}\nThe LSTM model introduces a cell state and utilizes three gates (forget, input and output) to maintain and control information. The forget gate combines the information of the previous hidden state $s_{t-1}$ and the input $x_{t}$ and, through a sigmoid function, decides whether the old information should be discarded. The input gate and a tanh function filter the activation information and compute the new candidate cell state $\\tilde{c}_{t}$. The output gate finally outputs the processed information. With this design, the decay of gradient information during propagation is mitigated, so the network retains its memory over a relatively long time.\n\t\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{LSTM.pdf}\n\t\\caption{The structure of LSTM}\n\t\\end{figure}\nThe \"gate\" structure realizes the addition and removal of information, and determines whether the model saves or forgets information during training. With this \"gate structure\", the LSTM model effectively alleviates the gradient problem and stores information selectively. Given the features up to time $t$, the network has the ability to predict the target information, enabling investors to forecast the future price and thus make more accurate decisions.\n\\subsubsection{Two-layer LSTM with Batch Normalization}\nThe research conducted by Li (2021)\\cite{7} shows that a two-layer LSTM structure can effectively improve prediction accuracy, and the study presented by Sergey Ioffe (2015)\\cite{5} shows that Batch Normalization can accelerate deep network training by reducing internal covariate shift. We therefore make corresponding improvements to the traditional model.\\\\\n\\indent Batch Normalization is an efficient normalization method applied to the activations of layers in deep neural networks. 
It can speed up the convergence of model training, make the training process more stable, avoid gradient explosion or vanishing gradients, and play a certain regularization role.\n\\begin{equation}\n\\begin{array}{l}\nInput:B=\\{x_{1\\cdots m}\\}\\quad \\gamma,\\beta(parameters)\\\\\nOutput:\\{y_i=BN_{\\gamma,\\beta}(x_i)\\}\\\\\n\\mu_B \\gets \\frac{1}{m}\\sum\\limits^m_{i=1}x_i\\\\\n\\sigma^2_B\\gets \\frac{1}{m}\\sum\\limits^m_{i=1}(x_i-\\mu_B)^2\\\\\n\\tilde{x_{i}}\\gets \\frac{x_i-\\mu_B}{\\sqrt{\\sigma^2_B+\\varepsilon}}\\\\\ny_i\\gets \\gamma \\tilde{x_{i}}+\\beta\n\\end{array}\n\\end{equation}\nwhere $B$ is the mini-batch of input values and $\\gamma$, $\\beta$ are learnable parameters used to rescale and shift the normalized data.\\\\\n\\indent The data are normalized to a unified interval to reduce the data divergence and the learning complexity of the network. After normalization, the parameters $\\gamma$ and $\\beta$ allow the distribution of the original data to be restored. To further avoid overfitting, we add \"Dropout\" layers to the model. These layers temporarily drop some neurons from the network during training, according to a given probability. This mechanism helps find a sparse sub-network, containing only a fraction of the neurons of the original complex network, which greatly reduces the possibility of overfitting\\cite{6}.\n\n\\subsection{Bidirectional LSTM model}\nTo further improve the efficiency of data use and enhance the robustness of the model, we combine a forward LSTM with a backward LSTM to construct the Bidirectional LSTM (Bi-LSTM) model. 
Bi-LSTM can effectively use both the forward and the backward feature information of the input and improves accuracy while parameters such as the number of training rounds are unchanged.\n\\subsection{Attention mechanism}\nWhen assessing a set of information, our neurons automatically scan the global image and then focus on a target based on past experience and current goals; this is the focus of attention. Once the focus is determined, we pay more attention to the details in that area and suppress other useless information. For example, in the field of image processing, when scanning journal pages people mainly focus on the pictures and the titles, in line with everyday experience.\\\\\n\\indent In the attention mechanism, weight factors are assigned to each input to measure its contribution. First, two hidden layers, the Encoder and the Decoder, corresponding to the input and the output of the structure, are employed. Dot products are then computed between the Decoder hidden state and each Encoder hidden state; the results are recorded as \"scores\". All scores are then passed through a softmax layer, so that the higher the score, the greater the resulting probability. In this way the mechanism suppresses invalid or noisy information.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{attLayer.png}\n\\caption{Structure of attention mechanism}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.6\\linewidth]{attstructure.png}\n\\caption{Process of attention mechanism}\n\\end{figure}\nHere the softmax function is\n\\begin{equation}\n\\operatorname{softmax}(z)_i = \\frac{\\exp(z_i)}{\\sum_{j}\\exp(z_j)}\n\\end{equation}\n\\indent For time series data, with high-dimensional variables and many time steps, training with equal weights leads to an unclear information focus and eventually to overfitting noisy information. Therefore, we introduce the attention mechanism into our model. 
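The score, softmax and weighting pipeline just described can be sketched as follows. This is a simplified dot-product-attention sketch under our own naming, not the exact network code used in the experiments.

```python
import math

def softmax(scores):
    """softmax(z)_i = exp(z_i) / sum_j exp(z_j), as in the equation above."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, encoder_states):
    """Dot-product scores between the decoder state and each encoder state,
    normalized by softmax into weights, then combined into a context vector."""
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    weights = softmax(scores)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(len(encoder_states[0]))]
    return weights, context
```

Higher-scoring encoder states receive larger weights, so noisy time steps contribute less to the context vector.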
We apply the attention mechanism along the time-step dimension of the input variables, treating the past trading data as significant factors in the prediction of future returns.\n\\subsection{Results}\n\\subsubsection{RMSProp optimization algorithm}\nWhile training the model, we choose the RMSProp optimization algorithm to adjust the model parameters.\n\\begin{equation}\ng\\gets \\frac{1}{m} \\nabla_\\theta \\sum\\nolimits_i L(f(x^{(i)};\\theta),y^{(i)})\n\\end{equation}\n\\begin{equation}\nr_{t}\\leftarrow \\rho r_{t-1}+(1-\\rho)g\\odot g\n\\end{equation}\n\\begin{equation}\n\\Delta\\theta=-\\frac{\\epsilon}{\\sqrt{\\delta+r}}\\odot g\n\\end{equation}\nwhere $\\rho$ controls how much historical information is retained.\\\\\n\\indent Given that the loss functions of neural networks are non-convex, RMSProp\\cite{3} performs better in non-convex settings, replacing the cumulative gradient with an exponentially decaying moving average so as to discard the distant past.\n\\indent Based on the above analysis, we finally determined the optimal parameter portfolio as follows:\n\\begin{table}[htbp]\n\\centering\n\\caption{Optimal parameter portfolio}\n\\begin{tabular}{@{}c|cccc@{}}\n\\toprule\n & \\textbf{Units} & \\textbf{batch\\_size} & \\textbf{Learning rate} & \\textbf{epochs} \\\\ \\midrule\n\\textbf{LSTM} & 128 & 128 & 0.01 & 300 \\\\\n\\textbf{BiLSTM} & 64 & 128 & 0.001 & 300 \\\\\n\\textbf{At-BiLSTM} & 32 & 128 & 0.01 & 300 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\nWe input the two asset datasets into the three models and compare their accuracy on the training and test sets.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{3acc.png}\n\\caption{Accuracy of three models}\n\\end{figure}\nThe accuracy increases across the three models for both assets, indicating that the AT-BiLSTM model is the strongest predictor. 
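For reference, a single RMSProp update following the three equations above can be sketched as below. The hyperparameter values are common defaults, not necessarily the settings used in our experiments.

```python
import math

def rmsprop_step(theta, grad, r, rho=0.9, lr=0.001, delta=1e-6):
    """One RMSProp update:
        r     <- rho * r + (1 - rho) * g (x) g
        theta <- theta - lr / sqrt(delta + r) (x) g
    where (x) is the element-wise product, matching the equations above.
    """
    r_new = [rho * ri + (1 - rho) * g * g for ri, g in zip(r, grad)]
    theta_new = [t - lr / math.sqrt(delta + ri) * g
                 for t, g, ri in zip(theta, grad, r_new)]
    return theta_new, r_new
```

Because the accumulator decays with factor rho, gradients from the distant past are gradually discarded, which is the behaviour described above for non-convex losses.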
In addition to the above three models, we also compared the accuracy of several common prediction models and obtained the results reported in Table III.\n\nConsidering the above analysis, we finally decided to use the AT-BiLSTM model, with its high predictive ability, for return prediction during the asset trading period. Using equation \\ref{shou}, we can work out a five-year forecast of the prices of the two assets. To show the difference between prediction and reality more directly, we also work out the local price trend.\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{goldPre.png}\n\\caption{Prediction of Gold}\n\\end{minipage}%\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1\\linewidth]{bitPre.png}\n\\caption{Prediction of Bitcoin}\n\\end{minipage}\n\\end{figure*}\n\\section{Live trading training strategy}\nThe return prediction by the above models is a global training exercise, which assumes that we have all the trading data for the whole period. However, when we develop strategies, we cannot know \"future\" data. For example, suppose the trader is at 9\/11\/2016, the beginning of the dataset. At this point, the trader would not have any trading data, i.e., there is nothing with which to train the model. This means we must set aside a considerable period at the beginning, during which we temporarily do not trade. Only once the accuracy of the model has risen sufficiently can we use it to predict and make investment decisions. Therefore, we divide the five-year period into a training period and a trading period.\\\\\n\\indent To save training time, we update the model with an incremental training pattern. Taking gold trading as an example, the training error decreases over time. As the figure shows, after 300 days the error plateaus, indicating that the model is fully trained. 
We therefore select the $300^{th}$ day of the trading period as the point at which we start our strategy. When we start our strategy, we still train the prediction model with the incremental training pattern, so that the model always learns from the latest data. As can be seen from the attention mechanism above, the latest trading data carries greater learning value.\par\nWe also compared the AT-BiLSTM model with other traditional models. The experimental and prediction results are shown in Table~III. It can be seen that the AT-BiLSTM model achieves the best prediction performance. It can also be seen in Figs.~9 and 10 that the model fits the price well.\n\n\begin{table*}[htbp]\n\caption{Accuracy of common models}\n\centering\n\begin{tabular}{@{}cccccc@{}}\n\toprule\n\textbf{Indicator} & \textbf{Assets} & \textbf{LinearModel} & \textbf{SVM} & \textbf{Decision tree} & \textbf{ARIMA Model} \\ \midrule\n{\textbf{Acc.(\%)}} & \textbf{Bitcoin} & 0.5646 & 0.5505 & 0.5489 & 0.5316 \\\n & \textbf{Gold} & 0.5723 & 0.5589 & 0.5501 & 0.5469 \\ \midrule\n{\textbf{AUC}} & \textbf{Bitcoin} & 0.6241 & 0.5004 & 0.5208 & \textless{}0.5 \\\n & \textbf{Gold} & 0.6405 & \textless{}0.5 & \textless{}0.5 & \textless{}0.5 \\ \midrule\n\textbf{Indicator} & \textbf{Assets} & \textbf{LSTM} & \textbf{AT-LSTM} & \textbf{BiLSTM} & \textbf{AT-BiLSTM} \\ \midrule\n{\textbf{Acc.(\%)}} & \textbf{Bitcoin} & 0.5357 & 0.6239 & 0.5854 & 0.6578 \\\n & \textbf{Gold} & 0.5821 & 0.6433 & 0.6315 & 0.6697 \\ \midrule\n{\textbf{AUC}} & \textbf{Bitcoin} & 0.6755 & 0.7016 & 0.6988 & 0.7194 \\\n & \textbf{Gold} & 0.6817 & 0.7132 & 0.7102 & 0.7303 \\ \bottomrule\n\end{tabular}\n\end{table*}\n\n\section{Conclusion, Discussion and Future Work}\nThis paper has developed an automatic model that can predict the price, judge the trading point and determine the trading volume. 
The model is based on Bi-LSTM with an added attention mechanism. A timing and volume strategy based on dynamic programming and the entropy weight method is also incorporated to make trading decisions. In our tests, it shows good prediction accuracy and strong robustness. In our simulated back test, it achieved an annualized yield of $170\%$; the value of a $\$1000$ investment would grow to $\$8542.3$. \par\nAlthough our strategy can achieve a high annual yield, ironically, its yield is not even as good as buying Bitcoin at the beginning of the trading period and holding it throughout. This also shows that the model performs only moderately in one-sided markets. Besides, although the attention Bi-LSTM model proved to be the best in our experiments, the performance gap between it and the other models is not large. It is possible that the attention Bi-LSTM model would not perform well on other data sets, which illustrates the no-free-lunch theorem\cite{8} from another angle.\par\nThis paper only considers the prediction of price, and does not explore in depth how to place an asset order. For example, when our model predicts that prices will rise tomorrow, how many assets should we buy? At the same time, we did not consider order-book matching, liquidity and other issues that may occur in the real trading process. These problems need to be further studied.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n \label{intro}\n\nSuperconducting transition-edge sensors (TESs) are highly sensitive\nthermometers widely used as radiation detectors over an energy range from\nthe near infrared to hard x-rays.\nTESs can be operated in both the dc and ac bias modes and in both\ncases the detector response can be\nmodelled in great detail \cite{Swetz12,JvdKuur11}. 
It has recently been demonstrated that TES-based devices behave\nas weak links due to the proximity effect from the superconducting leads \cite{Sadleir10}. A detailed experimental\ninvestigation of the weak-link effects in dc biased x-ray\nmicrocalorimeters is ongoing \cite{Sadleir11,Smith12} and a\ntheoretical framework for modelling the resistive state of a TES under dc bias has been\ndeveloped \cite{Kozorezov11}. Evidence of weak-link effects in ac biased\nTES microcalorimeters was reported\cite{Gottardi12xray}, but\nan adequate experimental and theoretical investigation is still missing.\n In this report we present a detailed characterization of a TES-based\n low-$G$ bolometer developed for the Short Wavelength Band detector of the instrument\n SAFARI on board the Japanese infrared mission SPICA \cite{Safari}. A\n comparison between the performance of the TES under dc and\n ac bias is reported. \n\n\section{Experimental set-up}\n \label{Setup}\n\nFor the ac measurements described below we use a Frequency Domain\nMultiplexer (FDM) system \cite{GottardiSPIE2012} working in the\nfrequency range from 1 to $5 \mathrm{MHz}$.\nThe read-out is done using a two-stage\nSQUID amplifier with on-chip linearization from PTB and high-$Q$\nlithographic $LC$ resonators \cite{Gottardi12bolo}. \nThe TES array chip is mounted on a copper bracket inside a\nsuperconducting Helmholtz coil used to generate a\nuniform perpendicular magnetic field over the whole pixel array. \nThe external magnetic field is shielded using a Nb can covered by a few layers of metallic glass tape. \n\nThe device under test is a low-$G$ bolometer based on a Ti\/Au (16\/60 nm)\nbilayer, deposited on a $0.5 \mu \mathrm{m}$ thick, $130\times 70 \mu\n\mathrm{m}^2$ suspended $Si_3N_4$ island connected to the thermal bath via\n4 $Si_3N_4$ cross-shaped supporting legs that are $2 \mathrm{\mu m}$ wide and $400 \mathrm{\mu m}$ long.\nThe TES area is $50\times 50 \mu \mathrm{m^2}$. 
It has a critical\ntemperature of $T_C=85\,\mathrm{mK}$, a normal state resistance of\n$R_N=98\,\mathrm{m\Omega}$, a measured $G=0.27 \mathrm{pW\/K}$ and a calculated\nNoise Equivalent Power (NEP) of $2.3\times 10^{-19} \mathrm{W\/\sqrt{Hz}}$.\nAn $8\mathrm{nm}$ thick Ta absorber with an area of $70\times70 \mathrm{\mu m}^2$ is deposited close to the TES. \n\n\begin{figure}[htbp]\n \centering\n \includegraphics[height=5.2cm]{Fig1_small.eps}\n \caption[example] \n { \label{fig:FDMscheme} The schematic drawing ({\bf a.}) of the\n Frequency Domain Multiplexer and pictures of the experimental set-up ({\bf b.}) and the TES bolometer with cross-shaped supporting legs ({\bf c.}).}\n \end{figure} \n\n\nThe electrical contact to the bolometer is realized by $90\mathrm{nm}$ thick\nNb leads deposited on top of the SiN legs.\nThe sensor was previously characterised\nunder dc bias \cite{Pourya12,Hijmering12} and showed a power plateau of\n$9.4\mathrm{fW}$ and a dark NEP of $4.8\times 10^{-19}\n\mathrm{W\/\sqrt{Hz}}$ at $30\mathrm{mK}$.\nBelow we report the results for the TES ac biased at a frequency of\n$2.4\mathrm{MHz}$.\nThe FDM set-up and a picture of the TES bolometer are shown in Fig.~\ref{fig:FDMscheme}.\n\n\n\section{Experimental results}\n \label{results}\n\nTo characterize the detector under ac bias we studied the dependence of\nthe TES current on the voltage, the bath temperature and the applied\nmagnetic field.\nIn \figref{fig:ivpv} we show the TES current-to-voltage ($IV$) and\npower-to-voltage ($PV$) \ncharacteristics as a function of the bath temperature. \nThe TES current shows regular oscillating structures that are better\nresolved when looking at the measured power. 
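As a rough cross-check of the calculated phonon-noise NEP quoted above, one can evaluate $\sqrt{\gamma\, 4 k_B T_c^2 G}$ with the numbers given in the text; the non-equilibrium reduction factor $\gamma \approx 0.5$ is an assumption on our part (typical for a steep thermal gradient along the legs), not a value stated in the paper.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_c = 0.085          # critical temperature from the text, K
G = 0.27e-12         # thermal conductance from the text, W/K
gamma = 0.5          # assumed non-equilibrium reduction factor (our assumption)

# phonon (thermal-fluctuation) noise limit of the bolometer
NEP_phonon = math.sqrt(gamma * 4 * k_B * T_c**2 * G)
print(f"{NEP_phonon:.2e} W/sqrt(Hz)")
```

With $\gamma=0.5$ this evaluates to roughly $2.3\times 10^{-19}\,\mathrm{W/\sqrt{Hz}}$, consistent with the quoted value.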
The maxima of those\nstructures occur at relatively constant TES voltages for all the bath\ntemperatures.\n\begin{figure}[ht]\n\center\n\includegraphics[height=4.7cm]{Fig2.eps}\n\caption{\label{fig:ivpv} Measured TES current ({\bf a.}) and\n power ({\bf b.}) as a function of the ac bias voltage for\n several bath temperatures}\n\end{figure}\n\nThe dependence of the TES current on the applied magnetic field is\nshown in \figref{fig:IvsB}. Under ac bias (colored-dotted lines) with\nthe TES in transition we\nobserved a Fraunhofer-like oscillating pattern, typical of a superconducting\nweak-link structure. \nUnder dc bias (dark line) the current oscillations are generally much\nsmaller and more difficult to see, as was previously reported\n\cite{Hijmering12}. Another interesting difference between the ac and dc\nbias measurements is that at large applied magnetic field the ac\ncurrent does not decrease as one would expect from the Meissner\neffect. Moreover, a sudden increase of the TES current is observed at\ncertain magnetic fields. \n\n\begin{figure}[ht]\n\center\n\includegraphics[height=8cm,angle=-90]{Fig3.eps}\n\caption{\label{fig:IvsB} Measured TES current as a function of\n applied magnetic field.}\n\end{figure}\nDespite the observed structure in the $IV$ characteristics, \nthe TES bolometer dark NEP at zero magnetic field was measured to be\nonly a factor 1.4 higher than the theoretical value. \n\nIn \figref{fig:nepacdc} the dark NEP \nspectra taken under ac and dc bias at a TES resistance\n$R_{tes}=0.3R_{N}$ and \nbath temperatures of\n$T_{bath}=40 \mathrm{mK}$ and $T_{bath}=20 \mathrm{mK}$, respectively,\nare shown. 
The dark NEP was calculated by dividing the\ncurrent noise by the responsivity at low frequency, which can be\napproximated by $\frac{1}{I_O(R_{tes}-Z_{th})}$, where $I_O$ is the\neffective bias current, $R_{tes}$ is the TES resistance and $Z_{th}$\nis the Thevenin impedance in the bias circuit as derived from the\ncalibration of the $IV$ curves.\n\n\begin{figure}[ht]\n\center\n\includegraphics[height=7.5cm,angle=-90]{Fig4.eps}\n\caption{\label{fig:nepacdc} Measured dark NEP at low frequency under\n ac and dc bias. The spectra were taken at TES resistance\n $R_{tes}=0.3R_{N}$ and at bath temperatures of \n$T_{bath}=40 \mathrm{mK}$ and $T_{bath}=20 \mathrm{mK}$,\n respectively. In the ac bias case a double light-tight box was used.}\n\end{figure}\n\nThe dark NEP measured at low frequency ($f \lesssim 40 \mathrm{Hz}$) at\nthe optimal bias point in the transition and at zero magnetic field was about $(3.2\pm 0.13)\times 10^{-19} \mathrm{W\/\sqrt{Hz}}$\n and $(4.8 \pm 0.2)\times 10^{-19} \mathrm{W\/\sqrt{Hz}}$ for\nthe ac and dc bias cases respectively. \nThe NEP was independent of the bath temperature for temperatures\nbelow approximately $50\mathrm{mK}$.\nThe noise measured under ac bias is only \n1.4 times higher than the expected theoretical NEP of $2.3\times\n10^{-19} \mathrm{W\/\sqrt{Hz}}$ and was obtained after mounting the\nlight-tight FDM set-up in a second light-absorbing box.\nWe suspect that even this approach was not sufficient to reduce the effect of the stray radiation to below $10^{-19} \mathrm{W\/\sqrt{Hz}}$ and a further improvement of our experimental set-up is needed.\n \nWe remark that the measurements under dc bias were done using a\nsingle light-tight box. They are affected at low frequency by an\nexcess of power leaking through the box and by the $1\/f$-noise from the SQUID\namplifier.\n\nThe observed TES current responses reported above are\n likely due to the weak-link behaviour of the bolometer. 
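The low-frequency NEP estimate described at the start of this section amounts to multiplying the current noise by $I_O(R_{tes}-Z_{th})$. The sketch below uses the power plateau and bias point from the text, but the current-noise density and Thevenin impedance are hypothetical placeholders, included purely to show the arithmetic.

```python
import math

# power plateau and bias point from the text; S_I and Z_th below are hypothetical
P_plateau = 9.4e-15           # W, dc power plateau
R_tes = 0.3 * 98e-3           # ohm, bias point at 0.3 R_N
Z_th = 5e-3                   # ohm, illustrative Thevenin impedance
S_I = 19e-12                  # A/sqrt(Hz), illustrative current-noise density

I_0 = math.sqrt(P_plateau / R_tes)            # effective bias current
responsivity = 1.0 / (I_0 * (R_tes - Z_th))   # low-frequency approximation
NEP = S_I / responsivity                       # = S_I * I_0 * (R_tes - Z_th)
```

With these placeholder values the result lands in the few $\times 10^{-19}\,\mathrm{W/\sqrt{Hz}}$ range reported in the text.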
A detailed\n explanation of the Josephson effects in an ac-biased TES will be reported in a\nforthcoming paper. It can be shown that in order to compare the performance of the TES under ac and dc bias, only the current in phase with the applied voltage should be considered\n in the ac bias case. The rms value of the in-phase ac current is equivalent to the dc current value measured in a dc\n biased TES. This has been experimentally verified and the results are shown in\n Fig~\ref{fig:pivacdc}.a. \n\begin{figure*}[ht]\n\includegraphics[width=11.5cm]{Fig5.eps}\n\caption{\label{fig:pivacdc} $IV$ ({\bf a}) and $PV$ ({\bf b})\n characteristics of the TES-based bolometer measured under ac and dc bias.}\n\end{figure*}\nThe current-to-voltage characteristics measured under ac and dc bias\nshow good agreement within the experimental uncertainties due to the\ncalibration procedure. Under ac bias the quasiparticle current\n$I_{qp}$, being in phase with the applied voltage, is the only current\nwhich contributes to the power dissipated in the TES. The data sets\nplotted in Fig~\ref{fig:pivacdc}.b are the measured $PV$\ncharacteristics of the bolometer under ac and dc bias respectively.\nThe $PV$ curves are consistent with each other within the calibration\nerrors. We clarify here that the TES current is calibrated using the\nnominal SQUID parameters and standard SQUID signal calibration\ntechniques, while to obtain the TES voltage we assumed the TES normal state resistance $R_N$ to be known and performed a\nlinear fit of the superconducting and normal branches of the measured\n$IV$ curves. The TES $R_N$ has been measured on test TiAu structures with four-point measurements in a dedicated test\nset-up. The error on $R_N$ is about $1\%$. This calibration procedure\nis particularly sensitive to how accurately the normal branch of the\n$IV$ curve has been measured. 
\nIn both the ac and dc bias cases we estimate the total\nuncertainties to be about $2\%$ and $4\%$ in the current and power,\nrespectively.\n\n\n\section{Conclusion}\n\n\nWe observed weak-link behaviour in the ac biased TES-based low-$G$\nbolometer developed for SAFARI. A complete modeling of the\nweak-link effect in an ac biased detector is under development. When\nlooking only at the current component in phase with the applied\nbias voltage, the TES current and power responses under ac bias are\ncomparable with the dc data. \n A dark NEP of $(3.2\pm 0.13) \cdot 10^{-19} \n\mathrm{W\/\sqrt{Hz}}$ was observed with the pixel ac-biased at $2.4\n\mathrm{MHz}$. The measured dark NEP is only a factor of $1.4$ higher than the\nexpected theoretical value and is very likely affected by an environment that is not yet free of stray light. \n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\label{sec:Intro}\n\nConsider a smooth connected manifold $M$ of dimension $n\geq 3$ equipped with a sub-Riemannian structure $(\Delta,g)$ which consists of a totally nonholonomic smooth distribution $\Delta$ of rank $m\u00a0\rho \qquad \forall v \in \bar{B}_1,\n\]\nwhere $\bar{B}_1$ stands for the closed unit ball in $\mathbb R^n$. Then, for any $\epsilon >0$, there exist $z\in B_{\epsilon}(x)$ and $\zeta \in \partial^-_Ph(z)$ such that\n\begin{eqnarray*}\n|f(z)-f(x)| < \epsilon \quad\n\mbox{and} \quad \zeta \cdot v > \rho \qquad \forall v \in \bar{B}_1.\n\end{eqnarray*}\n\end{lemma}\nTo prove Proposition \ref{PROPidem}, we consider a point $x\in \mathcal{O}$, a co-vector $p\in \partial^-f(x)$ and a neighborhood $\mathcal{W}$ of $(x,p)\in T^*M$. 
In fact, up to taking a chart, we can assume that we work in $\mathbb R^n$ with a function $f$ defined on an open set $O\subset \mathbb R^n$, with $x\in O$ and $p\in (\mathbb R^n)^*$, and with a neighborhood $\mathcal{W}$ of $(x,p)\in T^*(\mathbb R^n)$ which contains a set of the form $B_{\delta}(x) \times B_{\delta}^*(p)$ with $\delta >0$. Let $h: O \rightarrow \mathbb R$ be the continuous function defined by \n\[\nh(y):=f(y)-p\cdot y \qquad \forall y \in O.\n\]\nSince $p\in \partial^-f(x)$, we easily check that \n\[\nDh(x;v) \geq 0 \qquad \forall v \in \bar{B}_1.\n\]\nThus, by applying Lemma \ref{LEMSubbotin} with $\rho=-\delta$ and $\epsilon=\delta$, there are $z \in B_{\delta}(x)$ and $\zeta \in \partial^-_Ph(z)$ such that $|h(z)-h(x)| < \delta$ and\n\begin{eqnarray*}\n \zeta \cdot v > -\delta \qquad \forall v \in \bar{B}_1.\n\end{eqnarray*}\nWe infer that $|\zeta|<\delta$ and $p+\zeta \in \partial_P^-f(z)$. \n\end{proof}\n\n\subsection{Lipschitz points}\n\nWe assume in this section that $M$ is equipped with a smooth Riemannian metric $h$ whose geodesic distance is denoted by $d^h$ and for which, at every $x\in M$, the associated norm in $T_xM$ is denoted by $|\cdot|_x$ and the norm of some $p\in T_x^*M$ is defined by $|p|_x:=|v|_x$ where $p=h_x(v,\cdot)$ (we refer the reader to \cite{sakai96} for further details on Riemannian geometry). Then, for every $x\in M$, the pointed distance $d^h_x:=d^h(x,\cdot)$ is $1$-Lipschitz with respect to $d^h$, there is an open neighborhood $\mathcal{V}$ of $x$ such that $d^h_x$ is smooth in $\mathcal{V} \setminus \{x\}$ with a differential of norm $1$, and we have\n\begin{eqnarray}\label{22avril1}\n\partial^-d_x^h(x) =\Bigl\{ p\in T_x^*M \, \vert \, |p|_x\leq 1\Bigr\}.\n\end{eqnarray}\nAs shown by the following result, the Lipschitzness of $f$ is controlled by the size of the co-vectors in $\partial^- f$. 
We recall that a set $\\mathcal{C}\\subset M$ is said to be {\\it convex} with respect to $h$ if any minimizing geodesic between two points of $\\mathcal{C}$ is contained in $\\mathcal{C}$.\n\n\\begin{proposition}\\label{PROPSubLip}\nLet $\\mathcal{C} \\subset \\mathcal{O}$ be an open convex set (with respect to $h$) and $K\\geq 0$ be fixed, then the following properties are equivalent:\n\\begin{itemize}\n\\item[(i)] $f$ is $K$-Lipschitz (with respect to $d^h$) on $\\mathcal{C}$.\n\\item[(ii)] For every $x\\in \\mathcal{C}$ and every $p\\in \\partial^-f(x)$, $|p|_x\\leq K$. \n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPSubLip}]\nAssume that (i) is satisfied and fix $x\\in \\mathcal{C}$ and $p\\in \\partial^-f(x)$ (if $\\partial^-f(x)$ is nonempty). By assumption $f$ admits a support function from below $\\varphi:\\mathcal{V}\\rightarrow \\mathbb R$ with $d\\varphi(x)=p$ and by $K$-Lipschitzness of $f$ we have\n\\[\n\\varphi(y) \\leq f(y) \\leq f(x) + Kd^h(x,y) = \\varphi(x) + Kd_x^h(y) \\qquad \\forall y\\in \\mathcal{V} \\cap \\mathcal{C}.\n\\] \nWe infer that $p=d\\varphi(x)$ belongs to $\\partial^- (Kd_x^h)(x)=K\\partial^-d_x^h(x)$, which by (\\ref{22avril1}) gives $|p|_x\\leq K$. \\\\\n\nLet us now assume that (ii) is satisfied and fix $\\bar{x}\\in \\mathcal{C}$, $\\bar{\\delta}>0$ with $\\bar{B}^h(\\bar{x},\\bar{\\delta})\\subset \\mathcal{C}$ and some constant $K'>K$. 
Pick a smooth convex function $\\phi : [0,\\bar{\\delta}) \\rightarrow [0,+\\infty)$ such that \n\\begin{eqnarray}\\label{22sept1}\n\\phi(t) = K't \\qquad \\forall t\\in [0,\\bar{\\delta}\/4]\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{22sept2}\n\\phi(5\\bar{\\delta}\/16) + \\min_{x\\in \\bar{B}^h(\\bar{x},\\bar{\\delta})} \\left\\{f(x)\\right\\} \\geq f(\\bar{x})+ \\phi(\\bar{\\delta}\/8),\n\\end{eqnarray} \nfix $y,z$ in $B(\\bar{x},\\bar{\\delta}\/8)$, and define $g:B^h(\\bar{x},\\bar{\\delta}\/2) \\rightarrow \\mathbb R$ by\n\\[\ng(x):= f(x)+\\phi \\left(d^h(x,y) \\right) \\qquad \\forall x \\in B^h(\\bar{x},\\bar{\\delta}\/2).\n\\]\nThe function $g$ is continuous on $B(\\bar{x},\\delta\/2)$ and satisfies (by (\\ref{22sept2}) and the fact that $\\phi$ is increasing) for every $x\\in B^h(\\bar{x},\\bar{\\delta}\/2) \\setminus B^h(\\bar{x},7\\bar{\\delta}\/16)$,\n\\[\ng(x) > f(x) + \\phi(5\\bar{\\delta}\/16) \\geq f(\\bar{x})+ \\phi(\\bar{\\delta}\/8) \\geq g(\\bar{x}),\n\\]\nhence it attains a minimum at some point $x_g\\in B(\\bar{x},\\bar{\\delta}\/2)$. If $x_g\\neq y$, since $x\\mapsto \\phi(d^h(x,y))$ is $C^1$ on $B(\\bar{x},\\bar{\\delta}\/2)\\setminus \\{y\\}$ (with differential of norm $1$), this means that \n\\[\n-\\phi' \\left(d^h(x_g,y)\\right) dd_y^h(x_g) \\in \\partial^-f(x_g),\n\\]\nthus $|\\phi'(d(x_g,y))|\\leq K$ which contradicts the properties satisfied by $\\phi$ ((\\ref{22sept1}) and the convexity). In consequence $x_g=y$. Therefore, \n\\[\nf(y)=g(x_g) \\leq g(z) =f(z)+\\phi \\left(d^h(z,y)\\right) \\leq f(z) +K'd^h(z,y),\n\\]\nwhere we have used that $d^h(y,z)\\leq \\bar{\\delta}\/8$ and (\\ref{22sept1}). Since $y,z$ are arbitrary points in $B(\\bar{x},\\bar{\\delta}\/8)$ and $K'$ any constant $>K$, we are done. 
\n\\end{proof}\n\nWe say that $f$ is {\\it Lipschitz} at some point $x\\in \\mathcal{O}$ if there is a smooth Riemannian metric on $M$ such that $f$ is Lipschitz, with respect to that metric, on an open neighborhood of $x$, and we denote by $\\mbox{Lip}\\,(f)$ the set of such points. Of course, if $f$ is Lipschitz at some point $x\\in \\mathcal{O}$ with respect to some metric then it is Lipschitz with respect to any other metric. Proposition \\ref{PROPSubLip} yields the following characterization:\n\n\\begin{proposition}\\label{PROPSubLipEQ}\nFor every $x\\in \\mathcal{O}$, the following properties are equivalent:\n\\begin{itemize}\n\\item[(i)] $x \\in \\mbox{Lip}\\,(f)$.\n\\item[(ii)] $\\partial^-f$ is bounded in a neighborhood of $x$, that is, there is a neighborhood $\\mathcal{V}\\subset \\mathcal{O}$ of $x$ such that the set of $(y,q)$ with $y\\in \\mathcal{V}, q\\in \\partial^-f(y)$ is relatively compact in $T^*M$.\n\\end{itemize}\n\\end{proposition}\n\nFinally, we note that by construction the set $\\mbox{Lip}(f)$ is open and $f$ is locally Lipschitz on $\\mbox{Lip}(f)$, thus Rademacher's Theorem (see {\\it e.g.} \\cite{eg15,federer69}) implies that $f$ is differentiable almost everywhere on $\\mbox{Lip}(f)$.\n\n\\subsection{Lipzchitz points from below and limiting subdifferentials}\n\nWe say that $f$ is {\\it Lipschitz from below} at some point $x\\in M$ if it admits a support function from below at $x\\in M$ which is Lipschitz on its domain, and we denote by $\\mbox{Lip}^-(f)$ the set of such points. Moreover, we call {\\it limiting subdifferential} of $f$ at $x$, denoted by $\\partial^-_Lf(x)$, the set of $p\\in T_x^*M$ for which there is a sequence $\\{(x_k,p_k)\\}_{k\\in \\mathbb N}$ converging to $(x,p)$ in $T^*M$ such that $p_k\\in \\partial^-f(x_k)$ for all $k\\in \\mathbb N$. 
As shown by the following result, Lipschitz points from below belong to the domain of the limiting subdifferential.\n\n\begin{proposition}\label{PROPLip-Lim}\nLet $h$ be a smooth Riemannian metric on $M$, $K>0$ and $x\in \mathcal{O}$ be such that $f$ admits a support function from below at $x\in M$ which is $K$-Lipschitz (with respect to $d^h$) on its domain, then there is $p\in \partial^-_Lf(x)$ with $|p|_x\leq K$.\n\end{proposition}\n\n\begin{proof}[Proof of Proposition \ref{PROPLip-Lim}]\nLet $x\in \mathcal{O}$ be such that $f$ admits a support function from below $\varphi$ at $x\in M$ which is $K$-Lipschitz with respect to a smooth Riemannian metric $h$. Up to considering a chart and extending properly the restrictions of $f_{\vert \mathcal{V}}$ and $\varphi_{\vert \mathcal{V}}$ on a small open neighborhood $\mathcal{V}$ of $x\in \mathbb R^n$ to the whole $\mathbb R^n$, we may assume that we work in $\mathbb R^n$ with $\tilde{f}:\mathbb R^n \rightarrow \mathbb R$ continuous such that $\tilde{f}_{\vert \mathcal{V}}=f_{\vert \mathcal{V}}$ (so that $\partial^-_L\tilde{f}(x)=\partial^-_Lf(x)$) and a support function $\tilde{\varphi}:\mathbb R^n \rightarrow \mathbb R$ of $\tilde{f}$ (or $f$) at $x$ which is $\tilde{K}$-Lipschitz with respect to the Euclidean metric for some $\tilde{K}>K$ as close to $K$ as we want. 
Thus, we have by assumption\n\\begin{eqnarray}\\label{11mai1}\n\\tilde{f}(x) = \\tilde{\\varphi}(x) \\quad \\mbox{and} \\quad \\tilde{f}(y) - \\tilde{\\varphi}(y) \\geq 0 \\quad \\forall y \\in \\mathbb R^n.\n\\end{eqnarray}\nFor every positive integer $k$, define the function $\\psi_k :\\mathbb R^n \\times \\mathbb R^n \\rightarrow \\mathbb R$ by\n\\[\n\\psi_k(y,z) := \\tilde{f}(y) - \\tilde{\\varphi}(z) + k |y-z|^2 + |z-x|^2 \\qquad \\forall (y,z) \\in \\mathbb R^n \\times \\mathbb R^n\n\\]\nand set\n\\[\nm_k := \\inf \\Bigl\\{ \\psi_k(y,z) \\, \\vert \\, (y,z) \\in \\mathbb R^n \\times \\mathbb R^n \\Bigr\\}.\n\\]\nBy $\\tilde{K}$-Lipschitzness of $\\tilde{\\varphi}$ and (\\ref{11mai1}), we have for every $k\\in \\mathbb N^*$ and any $y,z\\in \\mathbb R^n$, \n\\begin{eqnarray*}\n\\psi_k(y,z) & = & \\tilde{f}(y) - \\tilde{\\varphi}(y) + \\left( \\tilde{\\varphi}(y) - \\tilde{\\varphi}(z) \\right) + k |y-z|^2 + |z-x|^2 \\\\\n& \\geq & -\\tilde{K} |y-z| + k |y-z|^2 + |z-x|^2.\n\\end{eqnarray*}\nThus, since $m_k\\leq \\psi_k(x,x)=0$, we infer that for every $k\\in \\mathbb N^*$ the infimum in the definition of $m_k$ is attained at some $(y_k,z_k)$, {\\it i.e.} $m_k = \\psi_k(y_k,z_k)$, and there holds\n\\[\n\\lim_{k\\rightarrow +\\infty} y_k = \\lim_{k\\rightarrow +\\infty} z_k = x. 
\n\\]\nGiven $k\\in \\mathbb N^*$, we note that $\\psi_{k}(y_k,\\cdot) \\geq m_k$ gives \n\\[\n\\tilde{f}(y_k) - \\tilde{\\varphi}(z) + k |y_k-z|^2 + |z-x|^2 \\geq \\tilde{f}(y_k) - \\tilde{\\varphi} (z_k) + k |y_k-z_k|^2 + |z_k-x|^2\n\\]\nfor all $z\\in \\mathbb R^n$, which means that the function $- \\tilde{\\varphi}$ admits the smooth function\n\\[\n z \\longmapsto - \\tilde{\\varphi}(z_k) + k |y_k-z_k|^2 - k |y_k-z|^2 + |y_k-x|^2 - |z-x|^2\n \\]\n as a support function from below at $z_k$ and so yields \n \\[\n -2k\\left( z_k-y_k\\right)^* -2 \\left( z_k-x\\right)^* \\in \\partial^- (-\\tilde{\\varphi})(z_k).\n \\]\nSimilarly, $\\psi_{k}(\\cdot,z_k) \\geq m_k$ gives \n\\[\n p_k:= -2k\\left( y_k-z_k\\right)^* \\in \\partial^- \\tilde{f}(y_k) \\qquad \\forall k \\in \\mathbb N^*.\n \\]\nSince $-\\tilde{\\varphi}$ is $\\tilde{K}$-Lipschitz with respect to the Euclidean metric, we have by Proposition \\ref{PROPSubLip}\n\\[\n\\left|-2k\\left( z_k-y_k\\right)^* -2 \\left( z_k-x\\right)^* \\right| \\leq \\tilde{K} \\quad \\mbox{for all } k.\n\\]\nIn conclusion, we have shown that for all $k$, we have $p_k \\in \\partial^- \\tilde{f}(y_k)$ with $\\lim_{k\\rightarrow +\\infty} y_k=x$ and $|p_k| \\leq \\tilde{K} +o(1)$ for $k$ large. Since $\\tilde{f}_{\\vert \\mathcal{V}}=f_{\\vert \\mathcal{V}}$, this implies that $p_k \\in \\partial^- f(y_k)$ for $k$ large enough, which by compactness gives some $\\tilde{p} \\in \\partial^-_L f(x)$ with $|p|\\leq \\tilde{K}$. We conclude by letting $\\tilde{K}$ tend to $K$. 
\n\\end{proof}\n\nThe following result is an easy consequence of Rademacher's Theorem, it shows that viscosity subdifferentials of $f$ are nonempty almost everywhere over $\\mbox{Lip}^-(f)$.\n\n\\begin{proposition}\\label{PROPLip-Sub}\nThere is a set $\\mathcal{N}\\subset \\mbox{Lip}^-(f)$ of Lebesgue measure zero in $M$ such that $\\partial^-f(x)\\neq \\emptyset$ for every $x\\in \\mbox{Lip}^-(f) \\setminus \\mathcal{N}$.\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPLip-Sub}]\nWithout loss of generality, up to considering charts, we may assume that $M=\\mathbb R^n$. Pick a sequence $\\{x_k\\}_{k\\in \\mathbb N^*}$ which is dense in $\\mathbb R^n$ and set for every $k,l \\in \\mathbb N^*$,\n\\[\nA_{k,l} :=\\Bigl\\{ x\\in \\mbox{Lip}^-(f) \\cap B(x_k,1\/l) \\, \\vert \\, f(y)-f(x) \\geq -l|y-x|, \\, \\forall y\\in B(x_k,1\/l)\\Bigr\\}.\n\\]\nSince $\\mbox{Lip}^-(f)=\\cup_{l,k\\in \\mathbb N^*}A_{l,k}$, it is sufficient to show that for every $k,l\\in \\mathbb N^*$, $\\partial^-f(x)\\neq \\emptyset$ for almost every $x\\in A_{k,l}$. Fix $k,l\\in \\mathbb N^*$ such that $A_{k,l} \\neq \\emptyset$ and define the function $\\phi_{k,l}:A_{k,l} \\rightarrow \\mathbb R$ by\n\\[\n\\phi_{k,l}(x) := \\sup \\Bigl\\{\\phi(x) \\, \\vert \\, \\phi: B(x_k,1\/l)\\rightarrow \\mathbb R \\mbox{ is $l$-Lipschitz and } \\phi\\leq f \\mbox{ on } B(x_k,1\/l)\\Bigr\\}.\n\\] \nBy construction, $\\phi_{k,l}$ is finite (because $A_{k,l} \\neq \\emptyset$), $l$-Lipschitz and equal to $f$ on $A_{k,l}$. 
By Rademacher's Theorem (see {\\it e.g.} \\cite{eg15,federer69}), we infer that for almost every $x\\in A_{k,l}$, $\\phi_{k,l}$ is differentiable at $x$ which means that $\\partial^-\\phi_{k,l}(x)$ is a singleton and $\\phi_{k,l}$ admits a support function from below differentiable at $x$ and shows that $\\partial^- f(x)$ is nonempty.\n\\end{proof}\n\n\\subsection{On the domain of the limiting subdifferential}\\label{SECAlter}\n\nWe may wonder whether limiting subdifferentials are nonempty almost everywhere. The following result gives a positive answer in one dimension: \n\n\\begin{proposition}\\label{PROPDichotomy}\nLet $a,b\\in \\mathbb R$ with $a\\varphi(y)$, so that\n\\[\n\\min \\left\\{\\varphi(z) \\, \\vert \\, z \\in [x,e]\\right\\} \\leq \\varphi(y) < \\varphi(x) \\leq \\varphi(e),\n\\]\nwhich contradicts the assumption. The rest of the proof is left to the reader.\n\\end{proof}\nThe set $S$ of $x\\in (a,b)$, for which there is a sequence $\\{x_k\\}_{k\\in \\mathbb N}$ converging to $x$ such that $0\\in \\partial^-\\varphi(x_k)$ for all $k\\in \\mathbb N$, is closed in $(a,b)$. We need to show that $\\varphi$ is differentiable almost everywhere in $(a,b)\\setminus S$. Suppose for contradiction that there is a set $E \\subset (a,b)\\setminus S$ of positive Lebesgue measure such that $\\varphi$ is not differentiable at any $x\\in E$ and fix $\\bar{x}$ a density point of $E$. We claim that there are $c, d, e \\in (a,b)\\setminus S$ with $\\bar{x}\\in (c,d) \\subset (a,b)\\setminus S$ and $e\\in [c,d]$ such that $\\varphi$ is monotone on $[c,e]$ and $[e,d]$. 
As a matter of fact, otherwise, for every $k\in \mathbb N^*$ large enough, the assumption of Lemma \ref{LEMDim1} is not satisfied over the interval $I_k:=[\bar{x}-1\/k,\bar{x}+1\/k] \subset (a,b)\setminus S$ so there are $x_k, y_k, z_k\in I_k$ with $x_k< z_k< y_k$ such that\n\[\n\varphi(z_k) = \min \left\{\varphi(z) \, \vert \, z \in [x_k,y_k] \right\} < \min \left\{\varphi(x_k),\varphi(y_k)\right\},\n\]\nwhich means that $\varphi$ attains a local minimum at $z_k$ so that $0\in \partial^-\varphi(z_k)$ with $z_k$ converging to $\bar{x}$ as $k$ tends to $+\infty$, a contradiction. In conclusion, since monotone functions are differentiable almost everywhere, we infer that $\varphi$ is differentiable almost everywhere in a neighborhood of $\bar{x}$, which contradicts the fact that $\bar{x}$ is a density point of $E$. \n\end{proof}\n\n\begin{remark}\label{RemDYS}\nThe Denjoy-Young-Saks Theorem allows us to make assertion (ii) more precise. Given $\varphi$ as in Proposition \ref{PROPDichotomy}, the {\it Dini derivatives} $D^+\varphi, D_+\varphi,D^-\varphi, D_-\varphi:(a,b) \rightarrow \mathbb R\cup \{\pm \infty\}$ of $\varphi$ at $x\in (a,b)$ are defined by \n \[\n D^+\varphi(x) = \limsup_{h\rightarrow 0^+} \frac{\varphi(x+h)-\varphi(x)}{h}, \quad D_+\varphi(x) = \liminf_{h\rightarrow 0^+} \frac{\varphi(x+h)-\varphi(x)}{h}\n \] \n\[\n D_-\varphi(x) = \liminf_{h\rightarrow 0^+} \frac{\varphi(x)-\varphi(x-h)}{h}, \quad D^-\varphi(x) = \limsup_{h\rightarrow 0^+} \frac{\varphi(x)-\varphi(x-h)}{h}.\n\]\nDenjoy-Young-Saks' Theorem (see \cite{hanson34} and references therein) asserts that for almost every $x\in (a,b)$, one of the following assertions holds:\n\begin{itemize}\n\item[(1)] $D^+\varphi(x)= D_+\varphi(x)=D^-\varphi(x)= D_-\varphi(x) \in \mathbb R$, {\it i.e.} $\varphi$ is differentiable at $x$,\n\item[(2)] $D^+\varphi(x)=D^-\varphi(x)=+\infty$ and $D_+\varphi(x)=D_-\varphi(x)=-\infty$,\n\item[(3)] 
$D^+f(x)=+\\infty, D_-f(x)=-\\infty$ and $D_+f(x)=D^-f(x)\\in \\mathbb R$,\n\\item[(4)] $D^-f(x)=+\\infty, D_+f(x)=-\\infty$ and $D_-f(x)=D^+f(x)\\in \\mathbb R$.\n\\end{itemize}\nAs a consequence, we may also suppose in Proposition \\ref{PROPDichotomy} (ii) that one of the above assertions (2), (3), (4) is satisfied.\n \\end{remark}\n\nProposition \\ref{PROPDichotomy} implies that the limiting subdifferential of a continuous function in dimension one is nonempty almost everywhere, we do not know if this result holds true in higher dimension (see Section \\ref{SEC8jan}). \n\n\\subsection{Projective limiting subdifferentials}\n\nWe now introduce an object that allows to capture limits of elements of $\\partial^-f$ going to infinity. For every $x\\in \\mathcal{O}$, we call {\\it projective limiting subdifferential} of $f$ at $x$, denoted by $\\partial^-_{PL}f(x)$, the set of $p\\in T_x^*M\\setminus \\{0\\}$ for which there are sequences $\\{(x_k,p_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\lambda_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying\n\\[\n\\lim_{k\\rightarrow +\\infty} \\left|p_k\\right|_{x_k} = +\\infty, \\quad \\lim_{k\\rightarrow +\\infty} \\left( x_k,\\lambda_k p_k\\right) = (x,p)\n\\quad\n\\mbox{and} \\quad p_k \\in \\partial^-f(x_k) \\quad \\forall k \\in \\mathbb N.\n\\]\nBy construction, the set $\\partial^-_{PL}f(x)$ is a positive cone, that is, if $p\\in \\partial^-_{PL}f(x)$ then $\\lambda p\\in \\partial^-_{PL}f(x)$ for all $\\lambda >0$. Proposition \\ref{PROPSubLipEQ} implies the following:\n\n\\begin{proposition}\\label{PROPLip-PL}\nFor every $x\\in \\mathcal{O}$, $\\partial^-_{PL}f(x)=\\emptyset$ if and only if $x\\in \\mbox{Lip}(f)$.\n\\end{proposition}\n\nAs we shall see in the next section, viscosity and limiting subdifferentials of pointed sub-Riemannian distances come along with minimizing normal extremals while projective limiting subdifferentials are associated with minimizing abnormal extremals. 
\n\n\\section{Minimizing Sard Conjecture, subdifferentials and Lipschitzness}\\label{SEC3}\n\nThroughout this section we consider a point $x\\in M$ and fix a smooth Riemannian metric $h$ on $M$. Then, we denote by $d_{SR}^x:=d_{SR}(x,\\cdot)$ the pointed sub-Riemannian distance and we define $f_x:M\\rightarrow \\mathbb R$ by \n\\begin{eqnarray}\\label{19oct1}\nf_x(y) := \\frac{1}{2} d_{SR}^x(y)^2 \\qquad \\forall y \\in M.\n\\end{eqnarray}\nOne easily checks from the definition of the viscosity subdifferential that we have, for every $y\\in M\\setminus \\{x\\}$ and every $p\\in T_y^*M$, \n\\begin{eqnarray}\np \\in \\partial^-d_{SR}^x(y) \\quad \\Longleftrightarrow \\quad d_{SR}(x,y) p \\in \\partial^-f_x(y).\n\\end{eqnarray}\n\nThe aim of this section is to explain the link between subdifferentials of $f_x$ and minimizing geodesics and eventually to provide several characterizations of the Minimizing Sard Conjecture.\n\n\\subsection{Abnormal and normal extremals}\\label{SECExtremals}\n\nWe explain below how the notions of abnormal and normal extremals emerge in sub-Riemannian geometry and introduce several notations that will be used in the next sections; we refer the reader to \\cite{riffordbook} for further details. \\\\\n\nLet us consider $\\bar{y}\\neq x$ in $M$ and $\\bar{\\gamma} : [0,1] \\rightarrow M$ a minimizing geodesic from $x$ to $\\bar{y}$, that is, a horizontal path that minimizes ($|v|^2=g_x(v,v)$)\n\\[\n\\mbox{energy}^g(\\gamma)= \\int_0^1 |\\dot{\\gamma}(t)|^2\\, dt,\n\\]\namong all paths $\\gamma\\in \\Omega_x^{\\Delta}$ verifying $\\gamma(1)=\\bar{y}$.
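\nLet us recall why minimizing the energy with fixed endpoints amounts, up to reparametrization, to minimizing the length: by the Cauchy-Schwarz inequality, every path $\\gamma\\in \\Omega_x^{\\Delta}$ satisfies\n\\[\n\\left( \\int_0^1 \\left|\\dot{\\gamma}(t)\\right| dt \\right)^2 \\leq \\int_0^1 \\left|\\dot{\\gamma}(t)\\right|^2 dt = \\mbox{energy}^g(\\gamma),\n\\]\nwith equality if and only if $|\\dot{\\gamma}|$ is constant almost everywhere. Hence the minimum of the energy over all $\\gamma \\in \\Omega_x^{\\Delta}$ with $\\gamma(1)=\\bar{y}$ equals $d_{SR}(x,\\bar{y})^2$, and it is attained exactly by the minimizing geodesics parametrized with constant speed $|\\dot{\\gamma}(t)|=d_{SR}(x,\\bar{y})$.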
Then, consider an orthonormal family $\\mathcal{F} = \\{X^1, \\ldots, X^m\\}$ of smooth vector fields which parametrizes $\\Delta$ on an open neighborhood $\\mathcal{V}\\subset M$ of $\\bar{\\gamma}([0,1])$ and denote by $E$ the end-point mapping associated with $x$ and $\\mathcal{F}$ defined by\n$$\nE(u) := \\gamma^u(1) \\qquad \\forall u \\in L^2([0,1],\\mathbb R^m),\n$$\nwhere $\\gamma^u$ is the curve in $\\Omega_x^{\\Delta}$ solution to the Cauchy problem\n$$\n\\dot{\\gamma}^u (t) = \\sum_{i=1}^m u_i(t) \\, X^i \\left(\\gamma^u(t)\\right) \\, \\mbox{ for a.e. } t \\in [0,1], \\quad \\gamma^u(0)=x.\n$$\nBy construction, there is a control $\\bar{u} \\in L^2([0,1],\\mathbb R^m)$ such that $\\bar{\\gamma}=\\gamma^{\\bar{u}}$ and $E$ is well-defined and smooth on an open set $U\\subset L^2([0,1],\\mathbb R^m)$ containing $\\bar{u}$. Then, we define the smooth mappings $C: U \\rightarrow \\mathbb R$ and $F: U \\rightarrow M\\times \\mathbb R$ by \n\\begin{eqnarray}\\label{10oct1}\nC(u):= \\frac{1}{2} \\| u\\|_{L^2}^2 \\quad \\mbox{and} \\quad F(u) := \\left( E (u), C(u) \\right) \\qquad \\forall u \\in U\n\\end{eqnarray}\nand note that for every $u\\in U$ we have, because $\\mathcal{F}$ is orthonormal,\n\\[\nC(u) = \\frac{1}{2} \\mbox{energy}^g(\\gamma^u).\n\\]\nIn particular, by construction we have \n\\[\nC(\\bar{u}) = \\frac{1}{2} \\mbox{energy}^g(\\bar{\\gamma}) = \\frac{1}{2} d_{SR}(x,\\bar{y})^2 \n\\quad\n\\mbox{and} \\quad \\left| \\bar{u}(t) \\right| = d_{SR}(x,\\bar{y}) \\quad \\mbox{for a.e. 
} t \\in [0,1].\n\\]\nFurthermore, we observe that since $\\bar{\\gamma}$ has no self-intersection, we may assume, by shrinking $\\mathcal{V}$ if necessary, that $\\mathcal{V}$ is smoothly diffeomorphic to the open unit ball $B^n(0,1) \\subset \\mathbb R^n$ through a diffeomorphism $\\Phi: \\mathcal{V} \\rightarrow B^n(0,1)$ satisfying\n\\begin{eqnarray}\\label{ESTDiffeo}\n\\frac{1}{K}|p| \\leq \\left| p \\cdot d_z\\Phi \\right|^*_z \\leq K |p| \\qquad \\forall z \\in \\mathcal{V}, \\, \\forall p \\in (\\mathbb R^n)^*\n\\end{eqnarray}\nfor some constant $K>0$ depending on $\\bar{\\gamma}$, so that we may assume from now on that we are in $\\mathbb R^n$. \n\nLet us now consider a local minimizer $u\\in U$ of $C$ with end-point $y:=E(u)$, that is, such that\n\\[\nC(u) = \\min \\Bigl\\{C(u') \\, \\vert \\, u' \\in U, \\, E(u') = y \\Bigr\\}.\n\\]\nBy the above construction, this means that the horizontal path $\\gamma^u$ minimizes the energy $\\mbox{energy}^g(\\gamma)$ among all paths $\\gamma\\in \\Omega_x^{\\Delta}$ sufficiently close to $\\gamma^u$ verifying $\\gamma(1)=y$.
By the Lagrange Multiplier Theorem, there are $\\bar{p}^u \\in T_y^*M$ and $\\bar{p}_0^u \\in \\{0,1\\}$ with $(\\bar{p}^u,\\bar{p}_0^u) \\neq (0,0)$ such that (we denote here the differentials of smooth functions with the uppercase letter ``D'')\n\\begin{eqnarray}\\label{Lagrange}\n\\bar{p}^u \\cdot D_{u} E = \\bar{p}_0^u \\, D_{u} C,\n\\end{eqnarray}\nwhere the differentials of $E$ and $C$ at $u$ are respectively given by \n\\begin{eqnarray}\\label{dE}\nD_{u}E(v) = \\int_0^1 S^u(1) S^u(t)^{-1} B^u(t) v(t)\\, dt \\qquad \\forall v \\in L^2( [0,1],\\mathbb R^m)\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{dC}\nD_{u}C(v) = \\int_0^1\\langle u(t),v(t)\\rangle \\, dt \\qquad \\forall v \\in L^2( [0,1],\\mathbb R^m),\n\\end{eqnarray}\nwhere we have defined $B^u : [0,1] \\rightarrow M_{n,m}(\\mathbb R)$ by\n\\begin{eqnarray}\\label{Bu}\nB^u(t) := \\left(X^{1}(\\gamma^{u}(t)),\\cdots,X^{m}(\\gamma^{u}(t)) \\right) \\qquad \\forall t \\in [0,1]\n\\end{eqnarray}\nand $S^u:[0,1] \\rightarrow M_n(\\mathbb R)$ as the solution to the Cauchy problem\n\\begin{eqnarray}\\label{5avril3}\n\\dot{S}^u(t)=A^u(t)S^u(t) \\quad \\mbox{for a.e. } t \\in [0,1], \\quad S^u(0)=I_{n},\n\\end{eqnarray}\nwith $A^u:[0,1] \\rightarrow M_n(\\mathbb R)$ given by ($J_{X^i}$ is the Jacobian matrix of $X^i$)\n\\begin{eqnarray}\\label{5avril4}\nA^u(t) := \\sum_{i=1}^m u_{i}(t) J_{X^{i}}\\left(\\gamma^u(t)\\right) \\qquad \\mbox{for a.e.
} t \\in [0,1].\n\\end{eqnarray} \nThen we define the extremal $\\psi^{u,\\bar{p}^u} : [0,1] \\rightarrow T^*M$ by \n\\begin{eqnarray}\\label{formulaextremal}\n\\psi^{u,\\bar{p}^u}(t) := \\left(\\gamma^u(t),p^{u,\\bar{p}^u}(t)\\right) := \\left(\\gamma^u(t), \\bar{p}^u \\cdot S^u(1)S^u(t)^{-1}\\right) \\qquad \\forall t \\in [0,1].\n\\end{eqnarray}\nBy construction, $\\psi^{u,\\bar{p}^u}$ is a lift of $\\gamma^u$ verifying $\\psi^{u,\\bar{p}^u}(1)=(y,\\bar{p}^u)$ and we have the following result:\n\n\\begin{proposition}\\label{PROPp0}\nDepending on the value of $\\bar{p}_0^u \\in \\{0,1\\}$ in (\\ref{Lagrange}), we have:\n\\begin{itemize}\n\\item[(i)] If $\\bar{p}_0^u=0$, then $\\psi^{u,\\bar{p}^u}$ is an abnormal extremal (and $\\gamma^{u}$ is singular).\n\\item[(ii)] If $\\bar{p}_0^u=1$, then $\\psi^{u,\\bar{p}^u}$ is a normal extremal and we have \n\\[\nu(t) = B^u(t)^{*} p^{u,\\bar{p}^u}(t)^{*} \\qquad \\mbox{for a.e. } t \\in [0,1].\n\\]\n\\end{itemize}\n\\end{proposition}\n\nThis being said, we explain in the next section the link between subdifferentials of $f_x$ and extremals. We keep the same notations as above. \n\n\\subsection{Subdifferentials and extremals}\n\nThe key result connecting subdifferentials of $f_x$ to normal extremals, due to Tr\u00e9lat and the author \\cite{rt05}, is the following:\n \n\\begin{proposition}\\label{PROPsub1}\nLet $y\\in M\\setminus \\{x\\}$, $p\\in \\partial^- f_x(y)$ and $u\\in U$ be a local minimizer of $C$ with end-point $E(u)=y$, then we have \n\\begin{eqnarray}\\label{26sept1}\np\\cdot D_{u}E = D_{u}C.\n\\end{eqnarray}\nIn particular, there is a unique minimizing geodesic from $x$ to $y$; it is given by the projection of the normal extremal $\\psi:[0,1] \\rightarrow T^*M$ satisfying $\\psi(1)=(y,p)$.
Moreover, if $\\psi(0)$ is not a critical point of the exponential mapping $\\exp_x$, then $y$ does not belong to $\\mbox{Abn}^{min}(x)$.\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPsub1}]\nLet $y\\in M\\setminus \\{x\\}$, $p\\in \\partial^- f_x(y)$, and $u\\in U$ be a local minimizer of $C$ with end-point $E(u)=y$ and let $\\varphi:M\\rightarrow \\mathbb R$ be a support function from below of class $C^1$ with $d_y\\varphi=p$. By assumption, we have \n\\[\nC(u) = f_x(y)=\\varphi(y) \\quad \\mbox{and} \\quad C(u') \\geq f_x\\left(E(u')\\right) \\geq \\varphi \\left(E(u')\\right) \\quad \\forall u'\\in U,\n\\]\nwhich means that the function $C-\\varphi\\circ E$ attains a local minimum at $u$. Hence we have (\\ref{26sept1}) and by Proposition \\ref{PROPp0} we infer that $\\gamma^u$ has to be the projection of the normal extremal $\\psi^{u,\\bar{p}^{u}}$ with $\\bar{p}^{u}:=d_{y}\\varphi =p$. If $\\psi(0)$ is not a critical point of $\\exp_x$, then the horizontal path $\\gamma^u$ is not singular and there is no other minimizing geodesic from $x$ to $y$; we infer that $y\\notin \\mbox{Abn}^{min}(x)$.\n\\end{proof}\n\n\\begin{remark}\\label{REM28oct}\nWe note that if $y\\in M\\setminus \\{x\\}$, $p\\in \\partial^-_P f_x(y)$ and $u\\in U$ is a local minimizer of $C$ with end-point $E(u)=y$, then we have (\\ref{26sept1}) and moreover, since the function $C-\\varphi\\circ E$, which attains a local minimum at $u$, is of class $C^2$ (where $\\varphi:M\\rightarrow \\mathbb R$ is a support function from below of class $C^2$ with $d_y\\varphi=p$), we have\n\\begin{eqnarray*}\n D^2_uC (v) - p\\cdot D^2_uE(v) \\geq 0 \\qquad \\forall v \\in \\mbox{\\rm Ker}(D_uE),\n\\end{eqnarray*}\nwhere $D^2_uC$ and $D^2_uE$ stand for the quadratic forms defined respectively by the second order differentials of $C$ and $E$ at $u$.\n\\end{remark}\n\nAs a consequence, Proposition \\ref{PROPsub1} allows us to associate with each $p\\in \\partial^-_L f_x(y)$ a normal extremal whose
projection is minimizing.\n \n\\begin{proposition}\\label{PROPsub2}\nLet $y\\in M\\setminus \\{x\\}$ and $p\\in \\partial^-_L f_x(y)$, then the projection of the normal extremal $\\psi:[0,1] \\rightarrow T^*M$ satisfying $\\psi(1)=(y,p)$ is a minimizing geodesic from $x$ to $y$.\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPsub2}]\nLet $y\\in M\\setminus \\{x\\}$, $p\\in \\partial^-_L f_x(y)$ and $\\{(y_k,p_k)\\}_{k\\in \\mathbb N}$ be a sequence converging to $(y,p)$ in $T^*M$ such that $p_k\\in \\partial^-f_x(y_k)$ for all $k\\in \\mathbb N$. By Proposition \\ref{PROPsub1}, for every $k\\in \\mathbb N$, the projection $\\gamma_k:[0,1] \\rightarrow M$ of the normal extremal $\\psi_k:[0,1] \\rightarrow T^*M$ verifying $\\psi_k(1)=(y_k,p_k)$ is a minimizing geodesic from $x$ to $y_k$. By regularity of the Hamiltonian flow, the sequence $\\{\\psi_k\\}_{k\\in \\mathbb N}$ converges to the normal extremal $\\psi:[0,1] \\rightarrow T^*M$ verifying $\\psi(1)=(y,p)$ and its projection is minimizing as a limit of minimizing geodesics from $x$ to $y_k$ with $y_k$ converging to $y$.\n\\end{proof}\n\nIf however we consider a covector $p$ in $\\partial^-_{PL} f_x(y)$ then Proposition \\ref{PROPsub1} allows us to obtain a minimizing singular geodesic associated with $p$; more precisely, we have:\n\n\\begin{proposition}\\label{PROPsub3}\nLet $y\\in M\\setminus \\{x\\}$ and $p\\in \\partial^-_{PL} f_x(y)$, then there is a singular minimizing geodesic from $x$ to $y$.
Moreover, for all sequences $\\{(y_k,p_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\lambda_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying\n\\[\n\\lim_{k\\rightarrow +\\infty} \\left|p_k\\right|_{y_k} = +\\infty, \\quad \\lim_{k\\rightarrow +\\infty} \\left( y_k,\\lambda_k p_k\\right) = (y,p)\n\\quad\n\\mbox{and} \\quad p_k \\in \\partial^-f_x(y_k) \\quad \\forall k \\in \\mathbb N,\n\\]\nany uniformly convergent subsequence of the sequence of minimizing geodesics $\\{\\gamma_k\\}_{k\\in \\mathbb N}$ given by the projections of the normal extremals $\\psi_k:[0,1] \\rightarrow T^*M$ verifying $\\psi_k(1)=(y_k,p_k)$, converges to a singular minimizing geodesic admitting an abnormal extremal $\\psi:[0,1] \\rightarrow T^*M$ such that $\\psi(1)=(y,p)$.\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPsub3}]\nLet $y\\in M\\setminus \\{x\\}$, $p\\in \\partial^-_{PL} f_x(y)$ and two sequences $\\{(y_k,p_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\lambda_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying the properties given in the statement. Assume that some subsequence $\\{\\gamma_{k_l}\\}_{l\\in \\mathbb N}$ of the sequence $\\{\\gamma_k\\}_{k\\in \\mathbb N}$ given by the projections of the normal extremals $\\psi_k:[0,1] \\rightarrow T^*M$ verifying $\\psi_k(1)=(y_k,p_k)$ converges uniformly (in $W^{1,2}([0,1],M)$) to some minimizing geodesic $\\bar{\\gamma}$. Using the notations of the previous section, we write $\\bar{\\gamma}=\\gamma^{\\bar{u}}$ with $\\bar{u} \\in L^2( [0,1],\\mathbb R^m)$ and notice that for an index $l$ large enough, $l\\geq L$, we can write $\\gamma_{k_l}$ as $\\gamma^{u_l}$ for some $u_l\\in U$ where the subsequence $\\{u_l\\}_{l\\geq L}$ tends to $\\bar{u}$ in $L^2( [0,1],\\mathbb R^m)$ as $l$ tends to $\\infty$.
Then we have for every index $l\\geq L$,\n\\[\np_{k_l}\\cdot D_{u_l}E = D_{u_l}C\n\\]\nwhich gives\n\\[\n\\lambda_{k_l} p_{k_l}\\cdot D_{u_l}E = \\lambda_{k_l} D_{u_l}C \\qquad \\forall l \\geq L,\n\\]\nwhere\n\\[\n\\lim_{l\\rightarrow +\\infty} \\lambda_{k_l} p_{k_l} = p \\quad \\mbox{and} \\quad \\lim_{l\\rightarrow +\\infty} \\lambda_{k_l} =0.\n\\]\nBy passing to the limit, we obtain $p \\cdot D_{\\bar{u}}E=0$ which, by Proposition \\ref{PROPp0}, shows that $\\bar{\\gamma}=\\gamma^{\\bar{u}}$ is a singular minimizing geodesic admitting an abnormal extremal $\\psi:[0,1] \\rightarrow T^*M$ such that $\\psi(1)=(y,p)$.\n\\end{proof}\n\n\\subsection{Non-Lipschitz points admit Goh abnormals}\\label{SECGoh}\n\nFor every $x\\in M$, $\\mbox{Goh-Abn}^{min}(x)$ stands for the set of $y\\in M$ for which there is a minimizing path $\\gamma \\in \\Omega^{\\Delta}_x$ joining $x$ to $y$ which is singular and admits an abnormal lift $\\psi:[0,1] \\rightarrow \\Delta^{\\perp}$ satisfying the Goh condition\n\\begin{eqnarray}\\label{Goh26sept}\n\\psi(t) \\cdot [\\Delta,\\Delta](\\gamma(t)) = 0 \\qquad \\forall t \\in [0,1].\n\\end{eqnarray}\nBy construction, $\\mbox{Goh-Abn}^{min}(x)$ is a closed set satisfying \n\\[\n\\mbox{Goh-Abn}^{min}(x) \\subset \\mbox{Abn}^{min}(x).\n\\]\nAgrachev and Sarychev \\cite{as99} showed that any singular minimizing geodesic with no normal extremal lift does admit an abnormal lift satisfying the Goh condition, and Agrachev and Lee \\cite{al09} proved that the absence of Goh abnormal extremals implies Lipschitzness properties for $f_x$. In some sense, the following result combines these two results. \n\n\\begin{proposition}\\label{PROPGohLip}\nFor every $x\\in M$, we have $M\\setminus \\mbox{Lip}(f_x) \\subset \\mbox{Goh-Abn}^{min}(x)$.
Moreover, for any $y\\in M \\setminus \\mbox{Lip}(f_x) $, any $p \\in \\partial^-_{PL}f_x(y)$ and any sequences $\\{(y_k,p_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\lambda_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying\n\\[\n\\lim_{k\\rightarrow +\\infty} \\left|p_k\\right|_{y_k} = +\\infty, \\quad \\lim_{k\\rightarrow +\\infty} \\left( y_k,\\lambda_k p_k\\right) = (y,p)\n\\quad \\mbox{and} \\quad p_k \\in \\partial^-f_x(y_k) \\quad \\forall k \\in \\mathbb N,\n\\]\nany minimizing horizontal path from $x$ to $y$, obtained as the uniform limit of a subsequence of the sequence of minimizing geodesics $\\{\\gamma_k\\}_{k\\in \\mathbb N}$ given by the projections of the normal extremals $\\psi_k:[0,1] \\rightarrow T^*M$ verifying $\\psi_k(1)=(y_k,p_k)$, does admit an abnormal extremal $\\psi:[0,1] \\rightarrow T^*M$ with $\\psi(1)=(y,p)$ which satisfies the Goh condition.\n\\end{proposition}\n\nLet us fix $\\bar{y}\\neq x$ in $M$ and a minimizing geodesic $\\bar{\\gamma} : [0,1] \\rightarrow M$ from $x$ to $\\bar{y}$ and, using the notations introduced in Section \\ref{SECExtremals} (in particular (\\ref{10oct1})), let us write $\\bar{\\gamma}=\\gamma^{\\bar{u}}$ for some $\\bar{u}\\in U$. Proposition \\ref{PROPGohLip} will be an easy consequence of the following result (we refer the reader to Appendix \\ref{SECSecondOrder} for further details on negative indices of quadratic forms ($ \\mbox{ind}_-$)): \n\n\\begin{proposition}\\label{PROPGohNormal}\nAssume that $\\|\\bar{u}\\|_{L^{\\infty}}<L$ for some constant $L>0$.
Then for every $\\kappa>0$ and every $N\\in \\mathbb N$, there are $\\rho, \\Lambda>0$ such that the following property is satisfied: For every $u\\in U$ such that\n\\begin{eqnarray}\\label{HypGohNormal1}\n\\left\\|u-\\bar{u}\\right\\|_{L^2}< \\rho, \\quad \\left\\|u\\right\\|_{L^{\\infty}}< 2L,\n\\end{eqnarray}\nand every $\\bar{p}^u \\in T_y^*M$ and $\\bar{p}_0^u \\in \\mathbb R$, with $y:=E(u)$ and $(\\bar{p}^u,\\bar{p}_0^u) \\neq (0,0)$, satisfying \n\\begin{eqnarray}\\label{Lagrange2}\n\\bar{p}^u \\cdot D_{u} E = \\bar{p}_0^u \\, D_{u} C,\n\\end{eqnarray}\n together with\n\\begin{eqnarray}\\label{HypGohNormal2}\n \\left| \\bar{p}^u\\right|_y \\geq \\Lambda \\, \\left| \\bar{p}_0^{u}\\right|\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{HypGohNormal3}\n \\mbox{\\em ind}_- \\left( \\left(\\bar{p}^u,-\\bar{p}_0^u\\right) \\cdot \\left( D^2_{u} F \\right)_{\\vert \\mbox{\\em Ker} (D_{u}F)} \\right) < N,\n\\end{eqnarray}\nthe extremal $\\psi^{u,\\bar{p}^u} = (\\gamma^u(t),p^{u,\\bar{p}^u}(t)): [0,1] \\rightarrow T^*M$ defined by (\\ref{formulaextremal}) satisfies \n\\begin{eqnarray}\\label{AlmostGoh}\n\\left| p^{u,\\bar{p}^u}(t) \\cdot \\left[ X^i,X^j\\right] \\left(\\gamma^u(t)\\right)\\right| \\leq \\kappa \\left| \\bar{p}^u\\right|_y \\qquad \\forall t \\in [0,1], \\, \\forall i,j = 1, \\ldots, m.\n\\end{eqnarray}\n\\end{proposition} \n \n\\begin{proof}[Proof of Proposition \\ref{PROPGohNormal}]\nThe formula for $D_uE$ was recalled in (\\ref{dE}) and the quadratic forms defined respectively by the second order differentials of $C$ and $E$ are given by \n\\begin{eqnarray}\\label{D2uC}\nD_{u}^2C(v)= \\|v\\|_{L^2}^2 \\qquad \\forall v \\in L^2( [0,1],\\mathbb R^m)\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{d2uE}\nD_{u}^2 E(v) = 2 \\int_0^1 S^u(1) S^u(t)^{-1} \\left[ A^u_v(t)+D^u_v(t)\\right] \\, dt \\qquad \\forall v \\in L^2( [0,1],\\mathbb R^m),\n\\end{eqnarray}\nwhere for every $v\\in L^2([0,1],\\mathbb R^m)$ the functions $A^u_v, D^u_v:[0,1]
\\rightarrow M_n(\\mathbb R)$ are defined for almost every $t\\in [0,1]$ by\n\\begin{eqnarray}\\label{5avril6}\nA^u_v(t) := \\sum_{i=1}^m v_i(t) D_{\\gamma^u(t)} X^i \\left( \\delta_v^1(t)\\right), \\quad D_v^u(t) := \\frac{1}{2} \\sum_{i=1}^m u_i(t) D^2 _{\\gamma^u(t)} X^i \\left( \\delta_v^1(t)\\right)\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\\label{5avril7}\n\\delta_v^1(t) := \\int_0^t S^u(t) S^u(s)^{-1} B^u(s) v(s)\\, ds.\n\\end{eqnarray}\nWe refer the reader to \\cite{riffordbook} for the above formulas. The proof of Proposition \\ref{PROPGohNormal} will follow from the following lemma (compare \\cite[Lemma 2.21 p. 63]{riffordbook}): \n\n\\begin{lemma}\\label{LEM8sept}\nThere are $\\rho, K>0$ such that for any $u\\in U$ with $\\|u-\\bar{u}\\|_{L^2}< \\rho$, $\\|u\\|_{L^{\\infty}}< 2L$ and any $\\bar{t}, \\delta>0$ with $[\\bar{t},\\bar{t}+\\delta]\\subset [0,1]$, the following property holds: For every $v\\in \\mbox{\\em Ker} (D_{u} E)$ with $\\mbox{\\em Supp} (v)\\subset [\\bar{t},\\bar{t}+\\delta]$ and every $\\bar{p}^u\\in T_y^*M$, we have\n\\begin{eqnarray}\\label{8sept3}\n\\left| \\bar{p}^u \\cdot D_{u}^2 E (v) - Q^{u,\\bar{p}^u}_{\\bar{t},\\delta} (v) \\right| \\leq K \\, \\|v\\|_{L^2}^2 \\, \\left|\\bar{p}^u\\right|_{y} \\, \\delta^2, \n\\end{eqnarray}\nwhere $Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}: L^2 \\left( [0,1],\\mathbb R^m\\right) \\rightarrow \\mathbb R$ is the quadratic form defined by \n\\begin{eqnarray}\\label{8septQ}\nQ^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) := \\int_{\\bar{t}}^{\\bar{t}+\\delta} \\int_{\\bar{t}}^t \\langle v(s), M_{\\bar{t}}^{u,\\bar{p}^u}v(t)\\rangle \\, ds \\, dt \\qquad \\forall v\\in L^2 \\left( [0,1],\\mathbb R^m\\right)\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\\label{27avril1}\n\\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{i,j} = 2 \\, p^{u,\\bar{p}^u} (\\bar{t}) \\cdot D_{\\gamma^u(\\bar{t})} X^i \\left( X^j\\bigl(\\gamma^u(\\bar{t}) \\bigr)\\right) \\qquad \\forall i,j =1,
\\ldots,m.\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{LEM8sept}]\nLet $\\bar{t}, \\delta>0$ with $[\\bar{t},\\bar{t}+\\delta]\\subset [0,1]$ and $v\\in \\mbox{Ker} (D_{u} E)$ with $\\mbox{Supp} (v)\\subset [\\bar{t},\\bar{t}+\\delta]$ be fixed. By (\\ref{formulaextremal}) and (\\ref{d2uE}), we have\n\\begin{eqnarray}\\label{6avril33}\n \\bar{p}^u \\cdot \\left( D^2_{u} E \\right)_{\\vert \\mbox{Ker} (D_{u} E)} (v) = 2 \\int_0^1 p^{u,\\bar{p}^u}(t) \\cdot \\left[ A^u_v(t)+ D^u_v(t)\\right] \\, dt.\n\\end{eqnarray}\nSince $v\\in \\mbox{Ker} (D_{u} E)$ and $\\mbox{Supp} (v)\\subset [\\bar{t},\\bar{t}+\\delta]$, we have $\\delta_v^1(t)=0$ for every $t\\in [0,\\bar{t}] \\cup [\\bar{t}+\\delta,1]$ and by the Cauchy-Schwarz inequality, we have for every $t\\in [\\bar{t},\\bar{t}+\\delta]$,\n\\begin{eqnarray*}\n\\left| \\delta_v^1(t) \\right| \\leq \\sup_{s \\in [0,1]} \\Bigl\\{ \\bigl\\| S^u(t)S^u(s)^{-1}B^u(s) \\bigr\\| \\Bigr\\} \\, \\sqrt{t-\\bar{t}} \\, \\|v\\|_{L^2} \\leq K_1 \\, \\|v\\|_{L^2} \\, \\sqrt{\\delta},\n\\end{eqnarray*}\nwhere $K_1$ is a constant depending only upon the sizes of $S^u, (S^u)^{-1}, B^u$ for $u$ close to $\\bar{u}$. Then we have \n $$\nA^u_v(t)= D^u_v(t)=0 \\qquad \\forall t \\in \\bigl[0,\\bar{t}\\bigr] \\cup \\bigl[\\bar{t}+\\delta,1\\bigr],\n $$\n and\n $$\n \\bigl| D^u_v (t) \\bigr| \\leq K_2 \\, \\|v\\|_{L^2}^2 \\, \\bigl\\| u\\bigr\\|_{L^{\\infty}} \\, \\delta \\qquad \\forall t\\in \\bigl[\\bar{t},\\bar{t}+\\delta\\bigr],\n $$\n which gives\n\\begin{eqnarray}\\label{6avril1}\n \\left| \\int_0^1 p^{u,\\bar{p}^u}(t) \\cdot D^u_v(t)\\, dt \\right| \\leq K_3 \\, \\|v\\|_{L^2}^2 \\left|\\bar{p}^u\\right|_y \\, \\delta^{2} ,\n\\end{eqnarray}\nwhere $K_2, K_3$ are some constants depending on $K_1$, on the size of the $D^2X^j$'s and on $L$ (remember that $\\|u\\|_{L^{\\infty}}<2L$).
Note that since we can write for every $t \\in[\\bar{t},\\bar{t}+\\delta]$,\n\\begin{eqnarray*}\n& \\quad & \\delta_v^1 (t) -\\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j \\bigl(\\gamma^u(\\bar{t})\\bigr) \\, ds \\\\\n& = & \\delta_v^1 (t) -\\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j \\bigl(\\gamma^u(s)\\bigr) \\, ds + \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) \\left[ X^j \\bigl(\\gamma^u(s)\\bigr) - X^j \\bigl(\\gamma^u(\\bar{t})\\bigr)\\right] \\, ds \\\\\n& = & \\int_0^t S^u(t) S^u(s)^{-1} B^u(s) v(s) - B^u(s) v(s) \\, ds \\\\\n& \\quad & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) \\left[ X^j \\bigl(\\gamma^u(s)\\bigr) - X^j \\bigl(\\gamma^u(\\bar{t})\\bigr)\\right] \\, ds \\\\\n& = & \\int_{\\bar{t}}^t \\left( S^u(t) -S^u(s) \\right) S^u(s)^{-1} B^u(s) v(s) \\, ds \\\\\n& \\quad & \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) \\left[ X^j \\bigl(\\gamma^u(s)\\bigr) - X^j \\bigl(\\gamma^u(\\bar{t})\\bigr)\\right] \\, ds,\n \\end{eqnarray*}\nwe have (since $\\|u\\|_{L^{\\infty}}< 2L$, $S^u$ and $\\gamma^u$ are both Lipschitz)\n\\begin{multline}\\label{10sept4}\n\\left| \\delta_v^1 (t) -\\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j \\bigl(\\gamma^u(\\bar{t})\\bigr) \\,ds \\right| \\leq K_4 \\, \\|v\\|_{L^2} \\, \\delta^{\\frac{3}{2}}, \\quad \\forall t \\in \\bigl[\\bar{t},\\bar{t}+\\delta\\bigr],\n\\end{multline} \nwhere $K_4$ is a constant depending only upon the sizes of $S^u, (S^u)^{-1}, B^u$ (for $u$ close to $\\bar{u}$), upon the Lipschitz constants of the $X^j$'s in a neighborhood of the curve $\\bar{\\gamma}$ and upon $L$. 
Moreover, by noting that $Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}$ satisfies for every $v\\in L^2([0,1],\\mathbb R^m)$ (by (\\ref{8septQ})-(\\ref{27avril1})),\n\\[\nQ^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) = 2 \\int_{\\bar{t}}^{\\bar{t}+\\delta} p^{u,\\bar{p}^u}(t) \\cdot \\sum_{i=1}^m v_i(t) D_{\\gamma^u(\\bar{t})} X^i \\left( \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j(\\gamma^u(\\bar{t})) \\, ds \\right) \\, dt\n\\]\nthe first equality in (\\ref{5avril6}) gives\n\\begin{eqnarray*}\n& \\quad & 2 \\int_{0}^{1} p^{u,\\bar{p}^u}(t) \\cdot A^u_v(t)\\, dt - Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) \\\\\n& = & 2 \\int_{\\bar{t}}^{\\bar{t}+\\delta} p^{u,\\bar{p}^u}(t) \\cdot \\left( \\sum_{i=1}^m v_i(t) D_{\\gamma^u(t)} X^i \\left[\\delta_v^1(t)\\right] \\right. \\\\\n& \\quad & \\qquad \\qquad \\qquad \\qquad \\left. - \\sum_{i=1}^m v_i(t) D_{\\gamma^u(\\bar{t})} X^i \\left[ \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j\\bigl(\\gamma^u(\\bar{t}) \\bigr) \\, ds \\right] \\right) \\, dt \\\\\n& = & 2 \\int_{\\bar{t}}^{\\bar{t}+\\delta} p^{u,\\bar{p}^u}(t) \\cdot \\left( \\sum_{i=1}^m v_i(t) D_{\\bar{\\gamma}(t)} X^i \\right) \\left[ \\delta_v^1(t) - \\int_{\\bar{t}}^t \\sum_{j=1}^m v_j(s) X^j\\bigl(\\gamma^u(\\bar{t}) \\bigr) \\, ds \\right] \\, dt.\n\\end{eqnarray*}\nBy (\\ref{10sept4}), we infer that \n$$\n\\left| 2 \\int_{0}^{1} p^{u,\\bar{p}^u}(t) \\cdot A^u_v(t)\\, dt - Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) \\right| \\leq K_5 \\, \\|v\\|_{L^2}^2 \\, \\left|\\bar{p}^u\\right|_{y} \\, \\delta^{2},\n$$\nfor some constant $K_5$ depending on the data, which by (\\ref{6avril1}) gives \n$$\n\\left| 2 \\int_0^1 p^{u,\\bar{p}^u}(t) \\cdot \\left[ A^u_v(t)+ D^u_v(t)\\right] \\, dt - Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) \\right| \\leq K_6 \\, \\|v\\|_{L^2}^2 \\, \\left|\\bar{p}^u\\right|_{y} \\, \\delta^{2},\n$$\nfor some constant $K_6$ depending on the data.
We conclude easily by (\\ref{6avril33}).\n\\end{proof}\n\nReturning to the proof of Proposition \\ref{PROPGohNormal}, we fix $\\kappa>0$, $N\\in \\mathbb N$, $u\\in U$ with $\\|u-\\bar{u}\\|_{L^2}< \\rho$, $\\|u\\|_{L^{\\infty}}< 2L$ and we suppose for contradiction that there are $\\bar{t} \\in (0,1)$ and $\\bar{i}\\neq \\bar{j} \\in \\{1,\\cdots ,m\\}$ such that \n\\begin{eqnarray}\\label{27oct1}\nP_{\\bar{i},\\bar{j}} (\\bar{t}) & := & \\frac{ p^{u,\\bar{p}^{u}}( \\bar{t})}{|\\bar{p}^u|_{y}} \\cdot \\left[X^{\\bar{i}},X^{\\bar{j}}\\right] \\bigl(\\gamma^u(\\bar{t}) \\bigr) > \\kappa.\n\\end{eqnarray}\nLet us consider $\\delta>0$ with $[\\bar{t},\\bar{t}+\\delta] \\subset [0,1]$ to be chosen later. We note that we have for every $v \\in L^2\\bigl([0,1],\\mathbb R^m\\bigr)$,\n\\begin{eqnarray}\\label{EQ27avril}\nQ^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) & = & \\int_{\\bar{t}}^{\\bar{t}+\\delta} \\int_{\\bar{t}}^t \\langle v(s), M_{\\bar{t}}^{u,\\bar{p}^u}v(t)\\rangle \\, ds \\, dt \\nonumber\\\\\n& = & \\int_{\\bar{t}}^{\\bar{t}+\\delta} \\langle w^v(t), M_{\\bar{t}}^{u,\\bar{p}^u}v(t)\\rangle \\, dt \\nonumber\\\\\n& = & \\int_{\\bar{t}}^{\\bar{t}+\\delta} \\sum_{i,j=1}^m w^v_i(t) \\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{i,j} v_j(t) \\, dt,\n\\end{eqnarray}\nwith\n\\[\nw^v(t) := \\int_{\\bar{t}}^t v(s) \\, ds \\qquad \\forall t \\in [\\bar{t},\\bar{t}+\\delta].\n\\]\nSet $\\bar{N}:=N+n+1$ and denote by $L= L_{\\bar{t},\\delta}$ the vector space in $L^2\\bigl( [0,1],\\mathbb R^m\\bigr)$ of all controls $v$ for which there are real numbers $a_1, \\ldots, a_{\\bar{N}}$ such that \n\\[\n\\left\\{\n\\begin{array}{rcl}\nv_{\\bar{i}}(t) & = & \\sum_{k=1}^{\\bar{N}} a_k \\cos \\left(k \\frac{(t-\\bar{t}) 2\\pi}{\\delta}\\right) \\\\\n v_{\\bar{j}}(t) & = & \\sum_{k=1}^{\\bar{N}} a_k \\sin \\left(k \\frac{(t-\\bar{t}) 2\\pi}{\\delta}\\right)\n \\end{array}\n \\right.\n \\qquad \\forall t \\in \\bigl[ \\bar{t},\\bar{t}+\\delta\\bigr],\n\\]\n\\[\n\\mbox{and} \\quad
v_i(t) =0, \\quad \\forall i\\neq \\bar{i},\\bar{j} \\qquad \\forall t \\in [0,1].\n\\]\nThen, we have for every $v\\in L$,\n\\[\n\\left\\{\n\\begin{array}{rcl}\nw^v_{\\bar{i}}(t) & = & \\frac{\\delta}{2\\pi} \\, \\sum_{k=1}^{\\bar{N}} \\frac{a_k}{k} \\sin \\left(k \\frac{(t-\\bar{t}) 2\\pi}{\\delta}\\right) \\\\\nw^v_{\\bar{j}}(t) & = & \\frac{\\delta}{2\\pi} \\, \\sum_{k=1}^{\\bar{N}} \\frac{a_k}{k} \\Bigl(1 - \\cos \\left(k \\frac{(t-\\bar{t}) 2\\pi}{\\delta}\\right) \\Bigr),\n\\end{array}\n \\right.\n \\qquad \\forall t \\in \\bigl[ \\bar{t},\\bar{t}+\\delta\\bigr],\n\\]\n\\[\n\\mbox{and} \\quad w^v_i(t) =0, \\quad \\forall i\\neq \\bar{i},\\bar{j} \\qquad \\forall t \\in [0,1],\n\\]\nwhich gives\n \\[\n\\int_{\\bar{t}}^{\\bar{t}+\\delta} w_{\\bar{i}}^v(t) v_{\\bar{j}}(t) \\, dt = \\sum_{k=1}^{\\bar{N}} \\frac{\\delta^2 a_k^2}{4 \\pi k}, \\quad \n\\int_{\\bar{t}}^{\\bar{t}+\\delta} w_{\\bar{j}}^v(t) v_{\\bar{i}}(t) \\, dt = - \\sum_{k=1}^{\\bar{N}} \\frac{\\delta^2 a_k^2}{4 \\pi k}\n\\]\nand\n\\[\n\\int_{\\bar{t}}^{\\bar{t}+\\delta} w_{\\bar{i}}^v (t) v_{\\bar{i}}(t) \\, dt = \\int_{\\bar{t}}^{\\bar{t}+\\delta} w_{\\bar{j}}^v (t) v_{\\bar{j}}(t) \\, dt=0.\n\\]\nIn conclusion, by (\\ref{EQ27avril}) and by noting that\n\\[\nP_{\\bar{i},\\bar{j}} (\\bar{t}) = \\frac{1}{2|\\bar{p}^u|_{y}} \\cdot \\left( \\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{\\bar{j},\\bar{i}} - \\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{\\bar{i},\\bar{j}} \\right)\n\\]\nwe infer that\n\\begin{eqnarray}\\label{1oct1}\nQ^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v)\n& = & \\int_{\\bar{t}}^{\\bar{t}+\\delta} w^v_{\\bar{i}}(t) \\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{\\bar{i},\\bar{j}} v_{\\bar{j}}(t) + w^v_{\\bar{j}}(t) \\left( M_{\\bar{t}}^{u,\\bar{p}^u}\\right)_{\\bar{j},\\bar{i}} v_{\\bar{i}}(t) \\, dt\\nonumber\\\\\n & = & - \\sum_{k=1}^{\\bar{N}} \\frac{\\delta^2 a_k^2|\\bar{p}^u|_{y}}{2 \\pi k} P_{\\bar{i},\\bar{j}} (\\bar{t}),\n\\end{eqnarray}\nwhere we 
have\n\\begin{eqnarray}\\label{1oct2}\n\\|v\\|_{L^2}^2 = \\delta \\sum_{k=1}^{\\bar{N}} a_k^2.\n\\end{eqnarray}\nLet us now distinguish between the cases $\\bar{p}_0^u=0$ and $\\bar{p}_0^u\\neq0$.\\\\\n\n\\noindent Case 1: $\\bar{p}_0^u=0$.\\\\\nIf $\\delta>0$ is small enough then (\\ref{8sept3}), (\\ref{1oct1}) and (\\ref{1oct2}) imply that \n\\[\n \\bar{p}^u \\cdot D_{u}^2 E (v) < 0 \\qquad \\forall v \\in L\\setminus\\{0\\},\n\\]\nwhere the vector space $L\\subset L^2\\bigl([0,1],\\mathbb R^m\\bigr)$ has dimension $\\bar{N}=N+n+1$. Since $\\bar{p}_0^{u}=0$ and $\\mbox{codim}(\\mbox{Ker} (D_{u}E))\\leq n$, this contradicts (\\ref{HypGohNormal3}). \\\\\n\n\\noindent Case 2: $\\bar{p}_0^{u}\\neq 0$.\\\\\nBy (\\ref{27oct1}), (\\ref{1oct1}) and (\\ref{1oct2}) and by noticing that\n\\[\n\\sum_{k=1}^{\\bar{N}} a_k^2 \\leq \\bar{N} \\sum_{k=1}^{\\bar{N}} \\frac{a_k^2}{k}, \n\\]\nwe note that we have for every $v\\in L\\setminus \\{0\\}$,\n\\[\n\\frac{Q^{u,\\bar{p}^u}_{\\bar{t},\\delta}(v) - \\bar{p}_0^u \\|v\\|_{L^2}^2}{\\|v\\|_{L^2}^2 |\\bar{p}^u|_{y} \\delta^2} = \\frac{- P_{\\bar{i},\\bar{j}} \\bigl( \\bar{t}\\bigr) }{2\\pi} \\frac{\\sum_{k=1}^{\\bar{N}} \\frac{a_k^2}{k}}{ \\delta \\sum_{k=1}^{\\bar{N}} a_k^2} - \\frac{\\bar{p}_0^u}{|\\bar{p}^u|_{y} \\delta^2} \\leq -\\frac{\\kappa}{2\\pi\\bar{N}\\delta} + \\frac{\\left|\\bar{p}_0^u\\right|}{|\\bar{p}^u|_{y} \\delta^2}.\n\\]\nTake $\\delta>0$ small such that\n\\[\n- \\frac{\\kappa}{2\\pi \\bar{N}\\delta} +K <0\n\\]\nand suppose that \n\\[\n\\frac{\\left|\\bar{p}_0^u\\right|}{|\\bar{p}^u|_{y} \\delta^2}< \\left| - \\frac{\\kappa }{2\\pi \\bar{N}\\delta} +K\\right| \\quad \\Leftrightarrow \\quad |\\bar{p}^u|_y > \\left| K\\delta^2 - \\frac{\\kappa \\delta}{2\\pi \\bar{N}} \\right|^{-1} \\, \\left| \\bar{p}_0^u\\right|.\n\\]\nThen by (\\ref{8sept3}) and the above inequality, since $D^2_uC=\\|\\cdot\\|_{L^2}^2$, we infer that\n\\[\n \\bar{p}^u \\cdot D_{u}^2 E (v) - \\bar{p}_0^u D_u^2C(v)< 0 \\qquad \\forall v \\in
L\\setminus\\{0\\},\n\\]\nwhere the vector space $L\\subset L^2\\bigl([0,1],\\mathbb R^m\\bigr)$ has dimension $\\bar{N}=N+n+1$. Since $\\mbox{codim}(\\mbox{Ker} (D_{u}F))\\leq n+1$, this contradicts (\\ref{HypGohNormal3}).\n\\end{proof}\n\n\\begin{remark}\\label{20oct1}\nWe note that in the case where $u=\\bar{u}$ and $\\bar{p}_0^u=0$, Proposition \\ref{PROPGohNormal} asserts that if we have \n\\begin{eqnarray*}\n \\mbox{\\em ind}_- \\left( \\bar{p}^{\\bar{u}} \\cdot \\left( D^2_{\\bar{u}} E \\right)_{\\vert \\mbox{\\em Ker} (D_{\\bar{u}}E)} \\right) < +\\infty\n\\end{eqnarray*}\nfor some $ \\bar{p}^{\\bar{u}}\\neq 0$ such that $\\bar{p}^{\\bar{u}} \\cdot D_{\\bar{u}} E =0$, that is, $\\bar{p}^{\\bar{u}}\\in \\mbox{Im}(D_{\\bar{u}}E)^{\\perp}$, then the corresponding abnormal extremal $\\psi^{\\bar{u},\\bar{p}^{\\bar{u}}}$ satisfies the Goh condition. This is the classical Goh Condition for Abnormal Geodesics due to Agrachev and Sarychev, see \\cite[Proposition 3.6 p. 389]{as99} and \\cite[Theorem 2.20 p. 61]{riffordbook}.\n\\end{remark}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPGohLip}]\nLet $x\\in M$, $y\\in M \\setminus \\mbox{Lip}(f_x) $, $p \\in \\partial^-_{PL}f_x(y)$, two sequences $\\{(y_k,p_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\lambda_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying\n\\[\n\\lim_{k\\rightarrow +\\infty} \\left|p_k\\right|_{y_k} = +\\infty, \\quad \\lim_{k\\rightarrow +\\infty} \\left( y_k,\\lambda_k p_k\\right) = (y,p)\n\\quad \\mbox{and} \\quad p_k \\in \\partial^-f_x(y_k) \\quad \\forall k \\in \\mathbb N,\n\\]\nand $\\gamma :[0,1] \\rightarrow M$ be a horizontal minimizing path from $x$ to $y$ obtained as the uniform limit of a subsequence of the sequence of minimizing geodesics $\\{\\gamma_k\\}_{k\\in \\mathbb N}$ given by the projections of the normal extremals $\\psi_k:[0,1] \\rightarrow T^*M$ verifying $\\psi_k(1)=(y_k,p_k)$.
By Proposition \\ref{PROPidem}, we may indeed assume that there are two sequences $\\{(\\tilde{y}_k,\\tilde{p}_k)\\}_{k\\in \\mathbb N}$ in $T^*M$ and $\\{\\tilde{\\lambda}_k\\}_{k\\in \\mathbb N}$ in $(0,+\\infty)$ satisfying\n\\[\n\\lim_{k\\rightarrow +\\infty} \\left| \\tilde{p}_k\\right|_{\\tilde{y}_k} = +\\infty, \\quad \\lim_{k\\rightarrow +\\infty} \\left( \\tilde{y}_k,\\tilde{\\lambda}_k \\tilde{p}_k\\right) = (y,p)\n\\quad \\mbox{and} \\quad \\tilde{p}_k \\in \\partial^-_Pf_x(\\tilde{y}_k) \\quad \\forall k \\in \\mathbb N,\n\\]\nand that $\\gamma :[0,1] \\rightarrow M$ is the uniform limit of a subsequence of the sequence of minimizing geodesics $\\{\\tilde{\\gamma}_k\\}_{k\\in \\mathbb N}$ given by the projections of the normal extremals $\\tilde{\\psi}_k:[0,1] \\rightarrow T^*M$ verifying $\\tilde{\\psi}_k(1)=(\\tilde{y}_k,\\tilde{p}_k)$.\nAssume as in Section \\ref{SECExtremals} that $\\gamma=\\gamma ^{\\bar{u}}$ for some $\\bar{u}$ in an open set $U$ of $L^2([0,1],\\mathbb R^m)$ where the end-point mapping $E$ is well-defined and smooth. Then, for $k$ large enough, there is $\\tilde{u}_k \\in U$ such that $\\tilde{\\gamma}_k=\\gamma^{\\tilde{u}_k}$ and moreover we can without loss of generality parametrize $\\gamma^{\\tilde{u}_k}$ by arc-length and assume that $|\\tilde{u}_k(t)|=d_{SR}(x,\\tilde{y}_k)$ almost everywhere in $[0,1]$, so that $\\|\\tilde{u}_k\\|_{\\infty}<\\infty$. 
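Under the arc-length normalization just made, the $L^2$ and $L^{\\infty}$ norms of $\\tilde{u}_k$ are explicit; the following elementary computation, which only uses the parametrization chosen above, records this for later use:\n\\[\n\\left\\|\\tilde{u}_k\\right\\|_{L^2}^2 = \\int_0^1 \\left|\\tilde{u}_k(t)\\right|^2 \\, dt = d_{SR}\\left(x,\\tilde{y}_k\\right)^2 \\quad \\mbox{and} \\quad \\left\\|\\tilde{u}_k\\right\\|_{\\infty} = d_{SR}\\left(x,\\tilde{y}_k\\right) < \\infty.\n\\]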
By Remark \\ref{REM28oct}, we have \n\\[\n D^2_{\\tilde{u}_k}C (v) - \\tilde{p}_k \\cdot D^2_{\\tilde{u}_k}E (v)\\geq 0 \\qquad \\forall v \\in \\mbox{Ker} (D_{\\tilde{u}_k} E)\n\\]\nso that \n\\[\n\\mbox{ind}_- \\left( \\left( -\\tilde{p}_k,1\\right) \\cdot \\left( D^2_{\\tilde{u}_k} F \\right)_{\\vert \\mbox{Ker} (D_{\\tilde{u}_k} F)} \\right) =0.\n\\]\nTherefore, by Proposition \\ref{PROPGohNormal}, we have \n\\[\n\\lim_{k \\rightarrow +\\infty} \\frac{ \\tilde{\\psi}_k(t) }{ |\\tilde{p}_k|_{\\tilde{y}_k} } \\cdot \\left[ X^i,X^j\\right] \\left(\\tilde{\\gamma}_k(t)\\right) = 0.\n\\]\nBy a compactness argument, we conclude that, up to a subsequence, the sequence of extremals $\\{\\tilde{\\lambda}_k\\tilde{\\psi}_k \\}_k$ converges to an abnormal extremal satisfying the required property (see {\\it e.g.} \\cite[Lemma 4.8]{trelat00}). \n\\end{proof}\n\n\\subsection{Characterizations of the Minimizing Sard Conjecture}\n\nWe are now ready to give several different characterizations of the Minimizing Sard Conjecture (recall that $f_x$ has been defined in (\\ref{19oct1})). \n\n\\begin{proposition}\\label{PROPcharac}\nFor every $x\\in M$, the following properties are equivalent:\n\\begin{itemize}\n\\item[(i)] The closed set $\\mbox{Abn}^{min}(x)$ has Lebesgue measure zero in $M$.\n\\item[(ii)] The closed set $\\mbox{Goh-Abn}^{min}(x)$ has Lebesgue measure zero in $M$.\n\\item[(iii)] For almost every $y\\in M$, $\\partial^-_{PL}f_x(y)= \\emptyset$. \n\\item[(iv)] For almost every $y\\in M$, $\\partial^-f_x(y)\\neq \\emptyset$. \n\\item[(v)] The set $\\mbox{Lip}^-(f_x)$ has full Lebesgue measure in $M$. \n\\item[(vi)] The function $d_x$ is differentiable almost everywhere in $M$. \n\\item[(vii)] The open set $\\mbox{Lip}\\,(f_x)$ has full Lebesgue measure in $M$.\n\\item[(viii)] For almost every $y\\in M$, $\\partial^-_Pf_x(y)\\neq \\emptyset$. \n\\item[(ix)] The function $f_x$ is smooth on an open subset of $M$ of full Lebesgue measure in $M$. 
\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPcharac}]\nThe implications (ix) $\\Rightarrow$ (viii) and (ix) $\\Rightarrow$ (vii) are immediate, (viii) $\\Rightarrow$ (iv) follows from the inclusion $\\partial^-_Pf_x\\subset \\partial^-f_x$, (vii) $\\Rightarrow$ (vi) is a consequence of Rademacher's Theorem, (vii) $\\Rightarrow$ (v) follows from the inclusion $\\mbox{Lip}(f_x) \\subset \\mbox{Lip}^-(f_x)$, (vii) $\\Leftrightarrow$ (iii) is a consequence of Proposition \\ref{PROPLip-PL}, (v) $\\Rightarrow$ (iv) is a corollary of Proposition \\ref{PROPLip-Sub}, and (vi) $\\Rightarrow$ (iv) follows by definition of $\\partial^-f_x$. Moreover, (iv) $\\Rightarrow$ (i) follows by Proposition \\ref{PROPsub1} together with Sard's Theorem applied to the mapping $\\exp_x$, (ii) $\\Rightarrow$ (vii) is a consequence of Proposition \\ref{PROPGohLip} and (i) $\\Rightarrow$ (ii) follows from the inclusion $\\mbox{Goh-Abn}^{min}(x) \\subset \\mbox{Abn}^{min}(x)$. Thus it remains to show that (i) $\\Rightarrow$ (ix); the proof of this implication can be found in \\cite[\\S 2.1]{br20}.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{THM}}\\label{SECProofTHM}\n\nLet $x\\in M$ be fixed and $M_x$ be the set of points $y\\in M\\setminus\\{x\\}$ for which all minimizing geodesics from $x$ to $y$ have Goh-rank at most $1$, which by assumption has full Lebesgue measure in $M$. The first step of the proof of Theorem \\ref{THM} consists in showing that we may indeed assume that we work in the open unit ball of $\\mathbb R^n$ with a distribution that admits a global parametrization by $m$ smooth vector fields. For every $y \\in \\mathbb R^n$ and $r>0$, we denote by $B_r(y)$ the open ball in $\\mathbb R^n$ centered at $y$ with radius $r$. 
Moreover, we recall that a point $y\\in \\mathbb R^n$ is a Lebesgue density point (or a point of density $1$) for a measurable set $E\\subset \\mathbb R^n$ if\n\\[\n\\lim_{r\\rightarrow 0} \\frac{\\mathcal{L}^n\\left(B_r(y)\\cap E \\right)}{\\mathcal{L}^n\\left(B_r(y)\\right)}=1,\n\\]\nwhere $\\mathcal{L}^n$ stands for the Lebesgue measure in $\\mathbb R^n$ and that if $E$ has positive Lebesgue measure then almost every point of $E$ is a Lebesgue density point for $E$ (see {\\it e.g.} \\cite{eg15}). Our first result is the following:\n\n\\begin{proposition}\\label{PROPSTEP1}\nIf $\\mbox{\\rm Abn}^{min}(x)$ has positive Lebesgue measure in $M$, then there are a complete sub-Riemannian structure $(\\bar{\\Delta},\\bar{g})$ of rank $m$ on $B_1(0)$ generated by an orthonormal family of smooth vector fields $\\bar{\\mathcal{F}}=\\{\\bar{X}^1, \\ldots, \\bar{X}^m\\}$ on $B_1(0)$ along with a point $\\bar{y} \\in B_1(0)\\setminus \\{0\\}$ such that the following properties are satisfied:\n\\begin{itemize}\n\\item[(i)] all minimizing horizontal paths from $0$ to $\\bar{y}$ have Goh-rank at most $1$,\n\\item[(ii)] $\\bar{y}$ is a Lebesgue density point of $\\mbox{\\rm Abn}^{min}(0)$. \n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPSTEP1}]\nLet us first recall a result of compactness for minimizing geodesics. For every $y\\in M$, we denote by $\\Gamma^y \\subset W^{1,2}([0,1],M)$ the set of minimizing geodesics from $x$ to $y$. 
We refer the reader to \\cite{agrachev01,riffordbook} for the proof of the following result:\n\n\\begin{lemma}\\label{STEP1_LEM1}\nFor every compact set $\\mathcal{K}\\subset M$, the set of $\\gamma \\in \\Gamma^y$ with $y\\in \\mathcal{K}$ is a compact subset of $W^{1,2}([0,1],M)$ and the mapping $y\\in M \\mapsto \\Gamma^y \\in \\mathcal{C}(W^{1,2}([0,1],M))$ has closed graph (here $\\mathcal{C}(W^{1,2}([0,1],M))$ stands for the set of compact subsets of $W^{1,2}([0,1],M)$ equipped with the Hausdorff topology).\n\\end{lemma}\n\nAssume now that $\\mbox{Abn}^{min}(x)$ has positive Lebesgue measure in $M$ and pick a Lebesgue density point $\\bar{y}$ of the set $\\mbox{Abn}^{min}(x)\\cap M_x$. Lemma \\ref{STEP1_LEM1} along with the parametrization that can be made along every minimizing geodesic as shown in Section \\ref{SECExtremals} yields the following result:\n\n\\begin{lemma}\\label{STEP1_LEM2}\nThere are an open neighborhood $\\mathcal{B}$ of $\\bar{y}$, a positive integer $N$, $N$ minimizing geodesics $\\bar{\\gamma}^1, \\ldots$, $\\bar{\\gamma}^N \\in \\Gamma^{\\bar{y}}$, $N$ open sets $\\mathcal{V}^1, \\ldots, \\mathcal{V}^N \\subset M$ diffeomorphic to $B_1(0)$ containing respectively $\\bar{\\gamma}^1([0,1]), \\ldots, \\bar{\\gamma}^N([0,1])$, $N$ orthonormal families of smooth vector fields $\\mathcal{F}^1 = \\{X^{1,1}, \\ldots, X^{1,m}\\}, \\ldots, \\mathcal{F}^N = \\{X^{N,1}, \\ldots, X^{N,m}\\}$ defined respectively on $\\mathcal{V}^1, \\ldots, \\mathcal{V}^N$ and $N$ open sets $\\mathcal{W}^1, \\ldots, \\mathcal{W}^N$ containing $\\mathcal{B}$ with \n\\[\n\\bar{\\gamma}^1([0,1]) \\subset \\overline{\\mathcal{W}}^1 \\subset \\mathcal{V}^1, \\ldots, \\bar{\\gamma}^N([0,1]) \\subset \\overline{\\mathcal{W}}^N \\subset \\mathcal{V}^N\n\\]\nsuch that the following properties are satisfied:\n\\begin{itemize}\n\\item[(i)] For every $k=1, \\ldots, N$ and every $z\\in \\mathcal{V}^k$, $\\Delta(z) = \\mbox{Span}\\{X^{k,1}(z), \\ldots, X^{k,m}(z)\\}$. 
\n\\item[(ii)] For every $y\\in \\mathcal{B}$ and every $\\gamma \\in \\Gamma^y$, there is $k_{\\gamma} \\in \\{1, \\ldots,N\\}$ such that \n\\[\n \\gamma([0,1]) \\subset \\mathcal{W}^{k_{\\gamma}}.\n\\]\n\\end{itemize}\n\\end{lemma}\n\nWe now need to consider a family of sub-Riemannian distances from $x$ whose minimum over $\\mathcal{B}$ coincides with $d_{SR}(x,\\cdot)$ (the sub-Riemannian distance with respect to $(\\Delta,g)$). For this purpose, we state the following lemma whose proof is left to the reader.\n\n\\begin{lemma}\\label{STEP1_LEM3}\nFor every $k=1, \\ldots,N$, there is a smooth function $\\psi^k : \\mathcal{V}^k \\rightarrow [1,+\\infty)$ satisfying the following properties:\n\\begin{itemize}\n\\item[(i)] $\\psi^k(z)=1$ for all $z\\in \\mathcal{W}^k$.\n\\item[(ii)] $\\psi^k(z)>1$ for all $z\\in \\mathcal{V}^k \\setminus \\overline{\\mathcal{W}}^k$.\n\\item[(iii)] The sub-Riemannian structure $(\\Delta,g^k:=\\psi^k g)$ on $\\mathcal{V}^k$ is complete. \n\\end{itemize}\n\\end{lemma}\n\nFor every $k=1, \\ldots,N$, we define the function $F^k:\\mathcal{V}^k \\rightarrow [0,+\\infty)$ by\n\\[\nF^k(y):= \\frac{1}{2} d_{SR}^{g^k}(x,y)^2 \\qquad \\forall y \\in \\mathcal{V}^k,\n\\]\nwhere $d_{SR}^{g^k}$ stands for the sub-Riemannian distance associated with $(\\Delta,g^k)$. Since each sub-Riemannian structure $(\\Delta,g^k)$ is complete on $\\mathcal{V}^k$, each $F^k$ is continuous. 
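For later use, let us also record the elementary comparison which gives one half of the identity stated below: since $\\psi^k \\geq 1$, every horizontal path contained in $\\mathcal{V}^k$ has $g^k$-length at least equal to its $g$-length, so that\n\\[\nF^k(y) = \\frac{1}{2} d_{SR}^{g^k}(x,y)^2 \\geq \\frac{1}{2} d_{SR}(x,y)^2 \\qquad \\forall y \\in \\mathcal{V}^k.\n\\]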
Moreover, since $\\mathcal{B}$ is contained in all $\\mathcal{W}^k$, we have thanks to Lemma \\ref{STEP1_LEM2} (ii) and Lemma \\ref{STEP1_LEM3} (i),\n\\[\nf_x(y) := \\frac{1}{2} d_{SR}(x,y)^2 = \\min \\Bigl\\{ F^1(y), \\ldots, F^N(y) \\Bigr\\} \\qquad \\forall y \\in \\mathcal{B}.\n\\]\nBy permuting the indices $1, \\ldots, N$ if necessary, we may indeed assume that there is $\\bar{N} \\in \\{1, \\ldots, N\\}$ such that \n\\[\nf_x(\\bar{y}) = F^1(\\bar{y})= \\cdots = F^{\\bar{N}}(\\bar{y}),\n\\]\nso that there is an open neighborhood $\\mathcal{B}'\\subset \\mathcal{B}$ of $\\bar{y}$ satisfying\n\\[\nf_x(y)= \\min \\Bigl\\{ F^1(y), \\ldots, F^{\\bar{N}}(y) \\Bigr\\} \\qquad \\forall y \\in \\mathcal{B}'.\n\\]\nFurthermore, if for every $y\\in \\mathcal{B}'$ and every $k\\in \\{1,\\ldots, \\bar{N}\\}$, we denote by $\\Gamma^y_k \\subset W^{1,2}([0,1],M)$ the set of minimizing geodesics from $x$ to $y$ with respect to $(\\Delta,g^k)$, then we have (by Lemma \\ref{STEP1_LEM2} (ii) and Lemma \\ref{STEP1_LEM3} (i)-(ii))\n\\[\n\\Gamma^{\\bar{y}}_k = \\Gamma^{\\bar{y}} \\qquad \\forall k=1, \\ldots, \\bar{N},\n\\]\nand in addition, since $\\bar{y}\\in M_x$, all minimizing geodesics from $x$ to $\\bar{y}$ have Goh-rank at most $1$. 
As a consequence, since $\\Gamma^{\\bar{y}}$ is a compact subset of $W^{1,2}([0,1],M)$ and the mappings $y\\in \\mathcal{B}' \\mapsto \\Gamma^y_k \\in \\mathcal{C}(W^{1,2}([0,1],M))$, with $k=1, \\ldots, \\bar{N}$, have closed graph and since the mapping $\\gamma \\in \\Omega^{\\Delta}_x \\mapsto \\mbox{Goh-rank}(\\gamma)\\in \\mathbb N$ (recall that $\\Omega^{\\Delta}_x$ stands for the set of horizontal paths $\\gamma:[0,1] \\rightarrow M$ such that $\\gamma(0)=x$) is upper semi-continuous, we infer that there is an open neighborhood $\\mathcal{B}''\\subset \\mathcal{B}'$ of $\\bar{y}$ such that \n\\[\n\\mbox{Goh-rank}(\\gamma)\\leq 1 \\qquad \\forall y \\in \\mathcal{B}'', \\, \\forall k=1, \\ldots,\\bar{N}, \\, \\forall \\gamma \\in \\Gamma^{y}_k. \n\\]\nWe are ready to complete the proof of Proposition \\ref{PROPSTEP1}. We claim that there is $\\bar{k}$ in $\\{1,\\ldots, \\bar{N}\\}$ such that $\\mathcal{B}'' \\setminus \\mbox{Lip}^-(F^{\\bar{k}})$ has positive Lebesgue measure. As a matter of fact, otherwise there is a set $\\tilde{\\mathcal{B}}$ of full Lebesgue measure in $\\mathcal{B}''$ such that every point in $\\tilde{\\mathcal{B}}$ belongs to $\\mbox{Lip}^-(F^k)$ for all $k=1, \\ldots, \\bar{N}$. Then, for every $y\\in \\tilde{\\mathcal{B}}$, each $F^k$ admits a support function $\\varphi^k$ from below at $y$ which is Lipschitz on its domain and so the function $\\min \\{\\varphi^k\\, \\vert \\, k=1,\\ldots,\\bar{N}\\}$ is a support function from below for $f_x$ at $y$ which is Lipschitz on its domain (given by the intersection of the domains of $\\varphi^1, \\ldots, \\varphi^{\\bar{N}}$). Therefore, $\\tilde{\\mathcal{B}}$ is contained in $\\mbox{Lip}^-(f_x)$ and as a consequence the proof of Proposition \\ref{PROPcharac} shows that $\\mbox{Abn}^{min}(x)\\cap \\mathcal{B}''$ has Lebesgue measure zero. This contradicts the fact that $\\bar{y}\\in \\mathcal{B}$ is a Lebesgue density point of $\\mbox{Abn}^{min}(x)$. 
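The support-function argument used above can be spelled out as follows: each $\\varphi^k$ satisfies $\\varphi^k \\leq F^k$ near $y$ with equality at $y$, so that\n\\[\n\\min_{k} \\varphi^k \\leq \\min_{k} F^k = f_x \\quad \\mbox{near } y \\qquad \\mbox{and} \\qquad \\min_{k} \\varphi^k (y) = \\min_{k} F^k(y) = f_x(y),\n\\]\nand a minimum of finitely many functions which are Lipschitz on their domains is Lipschitz on the intersection of these domains.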
We conclude the proof by considering a Lebesgue density point $\\hat{y}$ of the set \n\\[\n\\mathcal{B}'' \\setminus \\mbox{Lip}^-(F^{\\bar{k}})\\subset \\mathcal{B}'' \\setminus \\mbox{Lip}(F^{\\bar{k}}) \\subset \\mbox{Abn}_{\\bar{k}}^{min}(x)\\cap \\mathcal{B}''\n\\]\n(the inclusions follow from Proposition \\ref{PROPLip-PL} and Proposition \\ref{PROPsub3}, and $\\mbox{Abn}_{\\bar{k}}^{min}(x)$ stands for the set of points that can be reached from $x$ through singular minimizing horizontal paths with respect to $(\\Delta,g^{\\bar{k}})$)\nand by pushing forward the sub-Riemannian structure $(\\Delta,g^{\\bar{k}})$, the family of vector fields $\\mathcal{F}^{\\bar{k}} = \\{X^{\\bar{k},1}, \\ldots, X^{\\bar{k},m}\\}$ on $\\mathcal{V}^{\\bar{k}}$ and $\\hat{y}$ to $B_1(0)$ by using the diffeomorphism from $\\mathcal{V}^{\\bar{k}}$ to $B_1(0)$ (note that we have to multiply each vector field by $1\/\\sqrt{\\psi^{\\bar{k}}}$ to obtain an orthonormal family).\n\\end{proof}\n\nNow, we suppose for contradiction that $\\mbox{Abn}^{min}(x)$ has positive Lebesgue measure in $M$ and we consider the sub-Riemannian structure $(\\bar{\\Delta},\\bar{g})$ on $B_1(0)$, the orthonormal family of smooth vector fields $\\bar{\\mathcal{F}}$ on $B_1(0)$ and the Lebesgue density point $\\bar{y}$ (of $\\mbox{Abn}^{min}(0)$) given by Proposition \\ref{PROPSTEP1}. 
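Concerning the rescaling used in the push-forward at the end of the proof of Proposition \\ref{PROPSTEP1}, note that if the family $\\{X^{\\bar{k},i}\\}_{i}$ is orthonormal with respect to $g$, then the rescaled fields $X^{\\bar{k},i}\/\\sqrt{\\psi^{\\bar{k}}}$ are orthonormal with respect to $g^{\\bar{k}}=\\psi^{\\bar{k}} g$, since\n\\[\ng^{\\bar{k}} \\left( \\frac{X^{\\bar{k},i}}{\\sqrt{\\psi^{\\bar{k}}}}, \\frac{X^{\\bar{k},j}}{\\sqrt{\\psi^{\\bar{k}}}} \\right) = \\frac{\\psi^{\\bar{k}}}{\\psi^{\\bar{k}}} \\, g \\left( X^{\\bar{k},i}, X^{\\bar{k},j} \\right) = \\delta_{ij}.\n\\]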
We define the continuous function $F:B_1(0)\\rightarrow [0,+\\infty)$ by \n\\[\nF(y):= \\frac{1}{2} d_{SR} (0,y)^2 \\qquad \\forall y \\in B_1(0),\n\\]\nwhere $d_{SR}$ stands for the sub-Riemannian distance associated with $(\\bar{\\Delta},\\bar{g})$ and we denote respectively by $\\exp_0$ the exponential mapping associated with $(\\bar{\\Delta},\\bar{g})$, by $\\mathcal{C}_0$ the set of its critical values, by $M_0$ the set of points $y\\in B_1(0)\\setminus\\{0\\}$ for which all minimizing geodesics from $0$ to $y$ have Goh-rank at most $1$, and by $\\Gamma^y \\subset W^{1,2}([0,1],B_1(0))$ the set of minimizing geodesics from $0$ to $y\\in B_1(0)$. It follows from the proof of Proposition \\ref{PROPcharac} and the upper semi-continuity of the mapping $\\gamma \\in \\Omega^{\\bar{\\Delta}}_0 \\mapsto \\mbox{Goh-rank}(\\gamma)\\in \\mathbb N$ (where $\\Omega^{\\bar{\\Delta}}_0$ stands for the set of horizontal paths $\\gamma:[0,1] \\rightarrow B_1(0)$ such that $\\gamma(0)=0$) that the set $\\mathcal{A} \\subset B_1(0)$ defined by \n\\begin{eqnarray}\\label{20janv3}\n\\mathcal{A} := M_0 \\setminus \\left( \\mbox{\\rm Lip}^-(F) \\cup \\mathcal{C}_0 \\right)\n\\end{eqnarray}\nhas positive Lebesgue measure. We pick a Lebesgue density point $\\hat{y}$ of $\\mathcal{A}$. \n\nThen, we denote by $U$ the open set of $u \\in L^2([0,1],\\mathbb R^m)$ for which the solution $\\gamma^u:[0,1] \\rightarrow B_1(0)$ to the Cauchy problem \n\\[\n\\dot{\\gamma}^u (t) = \\sum_{i=1}^m u_i(t) \\, \\bar{X}^i \\left(\\gamma^u(t)\\right) \\, \\mbox{ for a.e. 
} t \\in [0,1], \\quad \\gamma^u(0)=0\n\\]\nis well-defined, we denote by $E:U \\rightarrow B_1(0)$ the end-point mapping associated with $0$ and $\\bar{\\mathcal{F}}$ defined by\n\\[\nE(u) := \\gamma^u(1) \\qquad \\forall u \\in U,\n\\]\nand we recall that the function $C: L^2([0,1],\\mathbb R^m)\\rightarrow [0,+\\infty)$ has been defined in Section \\ref{SECExtremals} by \n\\[\nC(u) :=\\frac{1}{2} \\|u\\|_{L^2}^2 \\qquad \\forall u \\in L^2([0,1],\\mathbb R^m).\n\\]\nMoreover, for every vector space $V \\subset\\mathbb R^n$, we denote by $\\mbox{Proj}_{V}^{\\perp}:\\mathbb R^n \\rightarrow V$ the orthogonal projection onto $V$ (with respect to the Euclidean metric) and for every $y\\in \\mathbb R^n$ we define the affine space $V(y)$ by\n\\[\nV(y) := \\{y\\} + V.\n\\]\nThe second step of the proof of Theorem \\ref{THM} consists in proving the following result, which prepares the next step.\n\n\\begin{proposition}\\label{PROPSTEP2}\nThere are $\\delta, r, \\rho \\in (0,1)$, $K>0$, a control $\\hat{u}$ in $U$ with $\\gamma^{\\hat{u}}\\in \\Gamma^{\\hat{y}}$, a linear hyperplane $V\\subset \\mathbb R^n$ and a smooth function $h:[0,\\delta^2) \\rightarrow [0,+\\infty)$ such that the following properties are satisfied: \n\\begin{itemize}\n\\item[(i)] For every $u\\in U$ with $\\|u-\\hat{u}\\|_{L^2} < \\delta$ and every $z\\in V(\\gamma^u(1)) \\cap B_{\\rho}(\\gamma^u(1))$, there is $v \\in U$ such that \n \\begin{eqnarray}\\label{PROPSTEP2_1}\n \\mbox{\\em Proj}_{V(\\gamma^u(1))}^{\\perp} \\left(\\gamma^{v}(1)\\right)=z,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{PROPSTEP2_2} \n \\left| \\gamma^v(1)-z \\right| \\leq K \\left| z-\\gamma^u(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{PROPSTEP2_3}\n C(v) \\leq C(u) + K \\left|z - \\gamma^u(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{PROPSTEP2_4}\n \\left| \\| v-\\hat{u}\\|_{L^2}^2 - \\| u-\\hat{u}\\|_{L^2}^2 \\right| \\leq K \\left| z-\\gamma^u(1)\\right|.\n \\end{eqnarray}\n 
\\item[(ii)] The function $h$ is smooth, nondecreasing and satisfies \n \\[\nh(\\alpha) = 0 \\, \\, \\forall \\alpha \\in [0,\\delta^2\/4], \\quad h(\\alpha) > 0 \\, \\,\\forall \\alpha \\in (\\delta^2\/4,\\delta^2), \\quad \\lim_{\\alpha \\rightarrow \\delta^2} h(\\alpha)= +\\infty.\n\\]\n\\item[(iii)] The function $W: B_r(\\hat{y}) \\rightarrow [0,+\\infty)$ defined by\n\\[\nW(y) := \\inf \\Bigl\\{ C(u) + h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right) \\, \\vert \\, u \\in U \\mbox{ s.t. } \\gamma^{u}(1)=y \\Bigr\\} \\qquad \\forall y \\in B_r(\\hat{y}) \n\\]\nis continuous and the set \n\\[\n\\mathcal{K} := \\Bigl\\{y\\in B_{r\/2}(\\hat{y})\\setminus \\mbox{\\rm Lip}^-(W)\\, \\vert \\, W(y)=F(y) \\Bigr\\} \n\\]\nhas positive Lebesgue measure. \n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}[Proof of Proposition \\ref{PROPSTEP2}]\nWe start with the following lemma which is an easy consequence of the construction of $\\hat{y}$. \n\n\\begin{lemma}\\label{STEP2_LEM1}\nFor every $\\gamma \\in \\Gamma^{\\hat{y}}$, one of the following situations occurs: \n\\begin{itemize}\n\\item[(i)] $\\gamma$ is not a singular horizontal path. \n\\item[(ii)] $\\gamma$ is a singular horizontal path and $\\mbox{Goh-rank}(\\gamma)=1$. \n\\end{itemize} \n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{STEP2_LEM1}]\nLet $\\gamma \\in \\Gamma^{\\hat{y}}$ be fixed. If $\\gamma$ is the projection of a normal extremal (w.r.t. $(\\bar{\\Delta},\\bar{g})$), then there is $p\\in T_0^*B_1(0)=(\\mathbb R^n)^*$ such that $\\hat{y}=\\exp_0(p)$. Since $\\hat{y}\\in \\mathcal{A}\\subset B_1(0) \\setminus \\mathcal{C}_0$, the mapping $\\exp_0$ is a submersion at $p$ and as a consequence $\\gamma$ is not a singular horizontal path. Otherwise, $\\gamma$ is not the projection of a normal extremal, so it is singular and admits an abnormal lift satisfying the Goh condition (see Remark \\ref{20oct1}). Since $\\hat{y}$ belongs to $M_0$, we have $\\mbox{Goh-rank}(\\gamma)=1$. 
\n\\end{proof}\n\nWe need to consider all minimizing geodesics in $\\Gamma^{\\hat{y}}$ and so to distinguish between the cases (i) and (ii) of Lemma \\ref{STEP2_LEM1}. The following result follows easily from the Inverse Function Theorem (see {\\it e.g.} \\cite[Proof of Theorem 3.14 p. 99]{riffordbook}). \n\n\\begin{lemma}\\label{STEP2_LEM2}\nLet $u \\in U$ with $\\gamma^u \\in \\Gamma^{\\hat{y}}$ and $\\gamma^u$ non-singular be fixed. Then there are $\\delta, \\rho \\in (0,1)$ and $K>0$ such that the following property is satisfied: For every $v\\in U$ with $\\|v-u\\|_{L^2} < \\delta$ and every $z\\in B_{\\rho}(\\gamma^v(1))$, there is $w \\in U$ such that\n \\begin{eqnarray}\\label{STEP2_LEM2_1}\n \\gamma^w(1) =z, \n \\end{eqnarray}\n \\begin{eqnarray}\\label{STEP2_LEM2_2}\n C(w) \\leq C(v) + K \\left|z-\\gamma^v(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{STEP2_LEM2_3}\n \\| w-v\\|_{L^2} \\leq K \\left|z - \\gamma^v(1)\\right|,\n\\end{eqnarray}\nwhere the last inequality implies\n \\begin{eqnarray}\\label{STEP2_LEM2_4}\n \\left|\\| w-u\\|_{L^2}^2 - \\| v-u\\|_{L^2}^2\\right| \\leq K(K+2) \\left| z-\\gamma^v(1)\\right|.\n \\end{eqnarray}\n\\end{lemma}\n\nFor every singular minimizing geodesic $\\gamma^u \\in \\Gamma^{\\hat{y}}$ with $u\\in U$ (since $\\hat{y}\\in M_0$, all those $\\gamma^u$ verify $\\mbox{Goh-rank}(\\gamma^u)= 1$), we denote by $P^u$ the line (one-dimensional vector space) of covectors $p\\in (\\mathbb R^n)^*$ for which there is an abnormal lift $\\psi$ of $\\gamma^u$ satisfying the Goh condition and $\\psi(1)=(\\hat{y},p)$, and we define the hyperplane $V^u\\subset \\mathbb R^n$ by \n\\begin{eqnarray}\\label{10oct2}\nV^{u} := \\left( P^{u}\\right)^{\\perp} := \\Bigl\\{ v \\in \\mathbb R^n \\, \\vert \\, p\\cdot v=0, \\, \\forall p \\in P^{u} \\Bigr\\}.\n\\end{eqnarray}\nNote that since any co-vector in $P^{u}$ annihilates the image of the end-point mapping $E$ from $0$ associated with $\\bar{\\mathcal{F}}$ in $U$ (see Section \\ref{SECExtremals}), we have 
\n\\begin{eqnarray}\\label{21oct7}\n\\mbox{Im} \\left(D_{u}E\\right) \\subset V^{u}.\n\\end{eqnarray}\nThe following lemma follows from an extension of a theorem of Agrachev and Sachkov \\cite{as04} providing a sufficient condition for local openness of a mapping at second order; we refer the reader to Appendix \\ref{SECSecondOrder} for the statement of the result and its proof. \n\n\\begin{lemma}\\label{STEP2_LEM3}\n Let $u \\in U$ with $\\gamma^u \\in \\Gamma^{\\hat{y}}$ and $\\gamma^u$ singular of Goh-rank $1$, be fixed. Then there are $\\delta, \\rho \\in (0,1)$ and $K>0$ such that the following property is satisfied: For every $v\\in U$ with $\\|v-u\\|_{L^2} < \\delta$ and every $z\\in V^{u}(\\gamma^v(1)) \\cap B_{\\rho}(\\gamma^v(1))$, there is $w \\in U$ such that \n \\begin{eqnarray}\\label{STEP2_LEM3_1}\n \\mbox{\\em Proj}_{V^{u}(\\gamma^v(1))}^{\\perp} \\left(\\gamma^{w}(1)\\right)=z, \n \\end{eqnarray}\n \\begin{eqnarray}\\label{STEP2_LEM3_2}\n \\left| \\gamma^w(1)-z \\right| \\leq K \\left| z-\\gamma^v(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{STEP2_LEM3_3}\n C(w) \\leq C(v) + K \\left|z - \\gamma^v(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{STEP2_LEM3_4}\n \\left|\\| w-u\\|_{L^2}^2 - \\| v-u\\|_{L^2}^2\\right| \\leq K \\left| z-\\gamma^v(1)\\right|.\n \\end{eqnarray}\n \\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{STEP2_LEM3}]\n Let $u \\in U$, with $\\gamma^u \\in \\Gamma^{\\hat{y}}$ singular of Goh-rank $1$, be fixed and let $\\mathcal{E}:U \\rightarrow V^u(\\hat{y})$ be the smooth mapping defined by\n\\[\n\\mathcal{E}(v) := \\mbox{Proj}_{V^u(\\hat{y})}^{\\perp} \\left( E(v) \\right) \\qquad \\forall v \\in U.\n\\]\nWe need to consider two different cases. 
\\\\\n\n\\noindent First case: $ \\mbox{Im} ( D_{u} \\mathcal{E})=V^{u}$.\\\\\nThere are $u^1, \\ldots, u^{n-1} \\in L^2([0,1],\\mathbb R^m)$ such that the linear operator\n\\[\n\\begin{array}{rcl}\n\\mathbb R^{n-1} & \\longrightarrow & V^{u} \\\\\n\\alpha & \\longmapsto & \\sum_{i=1}^{n-1} \\alpha_i D_u\\mathcal{E} \\left(u^{i}\\right)\n\\end{array}\n\\] \nis invertible. Thus, by the Inverse Function Theorem, the smooth mapping $\\mathcal{G}^u: \\mathbb R^{n-1} \\rightarrow V^{u}(\\hat{y})$ defined in a neighborhood of the origin by\n\\[\n\\mathcal{G}^u (\\alpha) := \\mathcal{E} \\left( u + \\sum_{i=1}^{n-1} \\alpha_i u^{i}\\right)\n\\]\nadmits an inverse $\\mathcal{H}^u$ of class $C^1$ on an open neighborhood of $\\mathcal{E}(u)$. In fact, by $C^1$ regularity of $E$, this property holds uniformly in a neighborhood of $u$: there are $\\delta, \\rho \\in (0,1)$ and $K>0$ such that for every $v\\in U$ with $\\|v-u\\|_{L^2}< \\delta$, the smooth mapping $\\mathcal{G}^v: \\mathbb R^{n-1} \\rightarrow V^{u}(\\hat{y})$ defined in a neighborhood of the origin by\n\\[\n\\mathcal{G}^v (\\alpha) := \\mathcal{E} \\left( v + \\sum_{i=1}^{n-1} \\alpha_i u^{i}\\right)\n\\]\nadmits an inverse $\\mathcal{H}^v=(\\mathcal{H}^v_1,\\ldots, \\mathcal{H}^v_{n-1}): V^u(\\hat{y}) \\cap B_{\\rho}(\\mathcal{E}(v)) \\rightarrow \\mathbb R^{n-1}$ of class $C^1$ such that \n\\[\n\\left| \\mathcal{H}^v(z)-\\mathcal{H}^v(z') \\right| \\leq K |z-z'| \\qquad \\forall z,z' \\in V^u(\\hat{y}) \\cap B_{\\rho}(\\mathcal{E}(v)).\n\\]\nAs a consequence, if $v$ belongs to $U$ with $\\|v-u\\|_{L^2} < \\delta$ and $z$ belongs to $V^{u}(\\gamma^v(1)) \\cap B_{\\rho}(\\gamma^v(1))$, then we have \n\\[\nz' := \\mbox{Proj}_{V^{u}(\\hat{y})}^{\\perp} (z) \\in V^u(\\hat{y}) \\cap B_{\\rho}(\\mathcal{E}(v)).\n\\]\nHence, by the above result, the control $w\\in U$ defined by \n\\begin{eqnarray}\\label{20janv1}\nw:=v + \\sum_{i=1}^{n-1} \\mathcal{H}_i^v(z') u^{i}\n\\end{eqnarray}\nsatisfies 
\n\\[\n\\mbox{Proj}_{V^u(\\hat{y})}^{\\perp} \\left( \\gamma^w(1)\\right)= \\mathcal{E}(w) =z',\n\\]\nwhich implies\n\\[\n\\mbox{Proj}_{V^u(\\gamma^v(1))}^{\\perp} \\left( \\gamma^w(1)\\right)= z,\n\\]\nand, by noting that \n\\[\n\\mathcal{H}^v(\\mbox{Proj}_{V^u(\\hat{y})}^{\\perp}\\gamma^v(1))= \\mathcal{H}^v(\\mathcal{E}(v))=0 \\quad \\mbox{and} \\quad \\left|z'-\\mathcal{E}(v)\\right| = \\left| z-\\gamma^v(1)\\right|,\n\\]\nwe also have \n\\begin{eqnarray*}\n \\|w-v\\|_{L^2} & = & \\left\\| \\sum_{i=1}^{n-1} \\mathcal{H}_i^v(z') u^{i} - \\sum_{i=1}^{n-1} \\mathcal{H}^v_i(\\mathcal{E}(v)) u^i\\right\\|_{L^2} \\\\\n& \\leq & \\left| \\mathcal{H}^v(z') - \\mathcal{H}^v(\\mathcal{E}(v))\\right| \\sum_{i=1}^{n-1} \\left\\|u^i\\right\\|_{L^2} \\\\\n& \\leq & K \\left|z' -\\mathcal{E}(v) \\right| \\sum_{i=1}^{n-1} \\left\\|u^i\\right\\|_{L^2} \\\\\n& = & K \\left|z -\\gamma^v(1)\\right| \\sum_{i=1}^{n-1} \\left\\|u^i\\right\\|_{L^2} =: KK' \\left|z -\\gamma^v(1)\\right|.\n\\end{eqnarray*}\nSo, by considering local Lipschitz constants $L_E, L_C$ respectively for $E$ and $C$, we infer that for any $v\\in U$ with $\\|v-u\\|_{L^2} < \\delta$ and any $z\\in V^{u}(\\gamma^v(1)) \\cap B_{\\rho}(\\gamma^v(1))$, the control $w$ given by (\\ref{20janv1}) satisfies (\\ref{STEP2_LEM3_1}), \n\\begin{eqnarray*}\n\\left| \\gamma^w(1)-z \\right| & \\leq & \\left| \\gamma^w(1)-\\gamma^v(1) \\right| + |\\gamma^v(1)-z| \\\\\n& \\leq & L_E \\|w-v\\|_{L^2} + |\\gamma^v(1)-z| \\\\\n& \\leq & (L_EKK'+1) \\left|z -\\gamma^v(1)\\right|\n\\end{eqnarray*}\nwhich gives (\\ref{STEP2_LEM3_2}) (for a certain constant), \n\\[\nC(w) \\leq C(v) + L_C \\|w-v\\|_{L^2} \\leq C(v) + L_CK K' \\left|z - \\gamma^v(1)\\right|\n\\]\nwhich gives (\\ref{STEP2_LEM3_3}) (for a certain constant), and (because $\\|v-u\\|_{L^2} < \\delta<1$ and $|z-\\gamma^v(1)|<\\rho<1$)\n\\begin{eqnarray*}\n\\left| \\| w- u\\|_{L^2}^2 - \\| v-u\\|_{L^2}^2 \\right| & = & \\left| \\| w-v\\|_{L^2}^2 + 2 \\langle 
w-v,v-u\\rangle_{L^2}\\right| \\\\\n& \\leq & \\| w-v\\|_{L^2}^2 + 2 \\| w-v\\|_{L^2}\\\\\n& \\leq & KK' (KK'+2)\\left|z - \\gamma^v(1)\\right|\n\\end{eqnarray*}\nwhich gives (\\ref{STEP2_LEM3_4}) (for a certain constant). We complete the proof by considering the maximum of the constants above.\\\\\n\n\\noindent Second case: $ \\mbox{Im} ( D_{u} \\mathcal{E})\\neq V^{u}$.\\\\\nWe claim that\n\\begin{eqnarray}\\label{21oct8}\n\\mbox{ind}_- \\left( \\lambda \\cdot \\left( D^2_{u} \\mathcal{E} \\right)_{\\vert \\mbox{Ker} (D_{u} \\mathcal{E})} \\right) =+\\infty \\qquad \\forall \\lambda \\in \\left( \\mbox{Im} \\left( D_{u} \\mathcal{E} \\right)\\right)^{\\perp^{V^u}} \\setminus \\{0\\},\n\\end{eqnarray} \nwhere $( \\mbox{Im} ( D_{u} \\mathcal{E}))^{\\perp^{V^u}}$ stands for the set of linear forms on $V^u$ which annihilate $ \\mbox{Im} ( D_{u} \\mathcal{E})$. To prove the claim, we observe that if (\\ref{21oct8}) is false then there is a linear form $\\lambda \\neq 0$ in $( \\mbox{Im} ( D_{u} \\mathcal{E}))^{\\perp^{V^u}}$ such that\n\\[\n\\mbox{ind}_- \\left( \\lambda \\cdot \\left( D^2_{u} \\mathcal{E} \\right)_{\\vert \\mbox{Ker} (D_{u} \\mathcal{E})} \\right) < +\\infty.\n\\]\nExtend $\\lambda$ into a non-zero linear form $\\tilde{\\lambda}$ on $\\mathbb R^n$ by setting $\\tilde{\\lambda} := \\lambda \\cdot \\mbox{Proj}_{V^u}^{\\perp}$. Since $\\mbox{Im} \\left(D_{u}E\\right) \\subset V^{u}$ (by (\\ref{21oct7})), the linear form $\\tilde{\\lambda}$ belongs to $( \\mbox{Im} ( D_{u}E))^{\\perp}$ and we have $\\mbox{Ker}(D_{u} \\mathcal{E})=\\mbox{Ker}(D_{u}E)$. 
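This identity of kernels can be checked directly: the affine projection $\\mbox{Proj}_{V^u(\\hat{y})}^{\\perp}$ has linear part $\\mbox{Proj}_{V^u}^{\\perp}$, which is the identity on $V^u \\supset \\mbox{Im}(D_uE)$, so that\n\\[\nD_u \\mathcal{E} = \\mbox{Proj}_{V^u}^{\\perp} \\circ D_u E = D_u E, \\quad \\mbox{and in particular} \\quad \\mbox{Ker} \\left( D_u \\mathcal{E} \\right) = \\mbox{Ker} \\left( D_u E \\right).\n\\]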
As a consequence, since\n\\[\nD^2_{u} E = D^2_{u} \\mathcal{E} + \\left( D^2_{u} E -D^2_{u} \\mathcal{E} \\right),\n\\]\nwhere the image of $D^2_{u} E -D^2_{u} \\mathcal{E} $ is orthogonal to $V^{u}$, we infer that\n\\[\n\\mbox{ind}_- \\left( \\lambda \\cdot \\left( D^2_{u} \\mathcal{E} \\right)_{\\vert \\mbox{Ker} (D_{u} \\mathcal{E})} \\right) = \\mbox{ind}_- \\left( \\tilde{\\lambda} \\cdot \\left( D^2_{u} E \\right)_{\\vert \\mbox{Ker} (D_{u} E)} \\right).\n\\]\nThus if the claim is false, then by Remark \\ref{20oct1}, the horizontal path $\\gamma^{u}$ admits an abnormal lift $\\psi$ satisfying the Goh condition with $\\psi(1)=(\\hat{y},\\tilde{\\lambda})$. Since $\\tilde{\\lambda}\\notin P^{u}$, this is a contradiction. \\\\\nWe can now conclude the proof of the second case by applying Theorem \\ref{THMopenquant} at $\\bar{u}=u$ with \n\\[\n(X,\\|\\cdot\\|)=\\left(L^2([0,1],\\mathbb R^m),\\|\\cdot\\|_{L^2}\\right), \\quad F=\\mathcal{E}:U \\longrightarrow V^u(\\hat{y}),\n\\]\n\\[ \n\\mbox{and} \\quad G=(E,C,\\bar{C}): U \\longrightarrow \\mathbb R^{n+2} \\quad \\mbox{with} \\quad \\bar{C}=\\|\\cdot-u\\|_{L^2}^2: U \\longrightarrow \\mathbb R.\n\\]\n By (\\ref{21oct8}), there exist $\\delta, \\rho\\in (0,1) $ and $K>0$ such that for any $v \\in U$ and any $z'\\in V^u(\\hat{y})$ with\n\\[\n\\left\\| v - u\\right\\|_{L^2} < \\delta \\quad \\mbox{and} \\quad |z'-\\mathcal{E}(v)|<\\rho,\n\\]\nthere are $w_1, w_2 \\in L^2([0,1],\\mathbb R^m)$ such that $v+w_1+w_2 \\in U$,\n\\[\nz'=\\mathcal{E}(v+w_1+w_2),\n\\]\n\\[\nw_1 \\in \\mbox{Ker}\\left(D_v\\mathcal{E}\\right) \\cap \\mbox{Ker}\\left(D_vG\\right) \n\\]\n\\[\n\\mbox{and} \\quad \\|w_1\\|_{L^2} \\leq K \\sqrt{\\left|z'-\\mathcal{E}(v)\\right|}, \\quad \\|w_2\\|_{L^2} \\leq K \\left|z'-\\mathcal{E}(v)\\right|.\n\\]\nIn what follows, we set $w:=v+w_1+w_2$, so that $\\mathcal{E}(w)=z'$. 
Therefore, if $z'$ is given by\n\\[\nz' = \\mbox{Proj}_{V^{u}(\\hat{y})}^{\\perp} (z) \\in V^u(\\hat{y}) \\cap B_{\\rho}(\\mathcal{E}(v)) \\quad \\mbox{with} \\quad z \\in V^{u}(\\gamma^v(1)) \\cap B_{\\rho}(\\gamma^v(1)),\n\\]\nthen we have (\\ref{STEP2_LEM3_1}), if we denote by $L_E$ a local Lipschitz constant for $E$ then we have (because $|z'-\\mathcal{E}(v)|=|z-\\gamma^v(1)|$)\n\\begin{eqnarray*}\n\\left| \\gamma^w(1)-z \\right| & \\leq & \\left| \\gamma^w(1)-\\gamma^v(1) \\right| + |\\gamma^v(1)-z| \\\\\n& \\leq & (L_EKK'+1) \\left|z -\\gamma^v(1)\\right|\n\\end{eqnarray*}\nwhich gives (\\ref{STEP2_LEM3_2}) (for a certain constant), by noting that $w_1 \\in \\mbox{Ker}(D_vG)\\subset \\mbox{Ker}(D_vC)$ and $|z'-\\mathcal{E}(v)|=|z-\\gamma^v(1)|<\\rho < 1$ we have\n\\begin{eqnarray*}\n2C(w) & = &\\|v+w_1+w_2\\|_{L^2}^2 \\\\\n& = & 2C(v) + 2 \\langle v,w_1+w_2\\rangle_{L^2} + \\|w_1+w_2\\|_{L^2}^2 \\\\\n& = & 2C(v) + 2 \\langle v,w_2\\rangle_{L^2} + \\|w_1+w_2\\|_{L^2}^2\\\\\n& \\leq & 2C(v) + 2 \\|v\\|_{L^2} \\|w_2\\|_{L^2} + \\left( \\|w_1\\|_{L^2} + \\|w_2\\|_{L^2}\\right)^2 \\\\\n& \\leq & 2C(v) + 2 \\left(\\|u\\|_{L^2}+\\delta\\right) K \\left|z-\\gamma^v(1)\\right| + \\left(2K\\sqrt{\\left|z-\\gamma^v(1)\\right|}\\right)^2\\\\\n& = & 2C(v) + K \\left(2 \\|u\\|_{L^2}+2+4K\\right) \\left|z-\\gamma^v(1)\\right|,\n\\end{eqnarray*}\nwhich gives (\\ref{STEP2_LEM3_3}) (for a certain constant), and finally by noting that $w_1 \\in \\mbox{Ker}(D_vG)\\subset \\mbox{Ker}(D_v\\bar{C})$ we also have\n\\begin{eqnarray*}\n\\left| \\| w-u\\|_{L^2}^2 - \\| v-u\\|_{L^2}^2\\right| & = &\\left| \\| w-v\\|_{L^2}^2 + 2 \\langle w-v,v-u\\rangle_{L^2} \\right| \\\\\n& \\leq & \\| w_1+w_2 \\|_{L^2}^2 + 2 \\left|\\langle w_2 ,v-u\\rangle_{L^2}\\right| \\\\\n& \\leq & \\left(2K\\sqrt{\\left|z-\\gamma^v(1)\\right|}\\right)^2 + 2K \\left|z-\\gamma^v(1)\\right|\\\\\n& = & 2K(2K+1)\\left|z - \\gamma^v(1)\\right|.\n\\end{eqnarray*}\nThe proof of Lemma \\ref{STEP2_LEM3} is 
complete.\n\\end{proof}\n \nThe compactness results of Lemma \\ref{STEP1_LEM1} together with the results of Lemmas \\ref{STEP2_LEM2} and \\ref{STEP2_LEM3} yield the following result:\n\n\\begin{lemma}\\label{STEP2_LEM4}\nThere are $\\delta, r, \\rho \\in (0,1)$, $K>0$, a positive integer $N$, $N$ controls $u^1, \\ldots, u^{N}$ in $U$ with $\\gamma^{u^l}\\in \\Gamma^{\\hat{y}}$ for $l=1, \\ldots, N$ and $N$ linear hyperplanes $V^{u^1}, \\ldots, V^{u^{N}}$ in $\\mathbb R^n$ such that the following properties are satisfied: \n\\begin{itemize}\n\\item[(i)] For every $l\\in \\{1, \\ldots, N\\}$, every $v\\in U$ with $\\|v-u^l\\|_{L^2} < \\delta$ and every $z\\in V^{u^l}(\\gamma^v(1)) \\cap B_{\\rho}(\\gamma^v(1))$, there is $w \\in U$ such that \n \\begin{eqnarray}\\label{24oct1}\n \\mbox{\\em Proj}_{V^{u^l}(\\gamma^v(1))}^{\\perp} \\left(\\gamma^{w}(1)\\right)=z, \n \\end{eqnarray}\n \\begin{eqnarray}\\label{24oct1bis}\n \\left| \\gamma^w(1)-z \\right| \\leq K \\left| z-\\gamma^v(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{24oct2}\n C(w) \\leq C(v) + K \\left|z - \\gamma^v(1)\\right|,\n \\end{eqnarray}\n \\begin{eqnarray}\\label{24oct3}\n \\left| \\| w-u^l\\|_{L^2}^2 - \\| v-u^l\\|_{L^2}^2 \\right| \\leq K \\left| z-\\gamma^v(1)\\right|.\n \\end{eqnarray}\n\\item[(ii)] For any $y \\in B_r(\\hat{y})$ and $v \\in \\Gamma^y$, there is $l\\in \\{1, \\ldots, N\\}$ such that $\\|v-u^l\\|_{L^2} < \\delta\/2$.\n\\end{itemize}\n \\end{lemma}\n\nPick a smooth nondecreasing function $h:[0,\\delta^2) \\rightarrow [0,+\\infty)$ such that \n\\[\nh(\\alpha) = 0 \\, \\forall \\alpha \\in [0,\\delta^2\/4], \\quad h(\\alpha) > 0 \\, \\forall \\alpha \\in (\\delta^2\/4,\\delta^2) \\quad \\mbox{and} \\quad \\lim_{\\alpha \\rightarrow \\delta^2} h(\\alpha)= +\\infty.\n\\]\nThen, for every $l=1, \\ldots, N$, define the function $W^l: B_r(\\hat{y}) \\rightarrow [0,+\\infty)$ by\n\\[\nW^l(y) := \\inf \\left\\{ C(u) + h\\left(\\|u-u^l\\|_{L^2}^2\\right) \\, \\vert \\, u 
\\in U \\mbox{ s.t. } \\bar{\\gamma}^{u}(1)=y \\right\\} \\qquad \\forall y \\in B_r(\\hat{y}). \n\\]\nBy construction, the functions $W^1, \\ldots, W^{N}$ are continuous on their domain (remember that the end-point mapping $E:U \\rightarrow M$ is open, see {\\it e.g.} \\cite[\\S 1.4]{riffordbook}) and we have (by Lemma \\ref{STEP2_LEM4} (ii) and the construction of $h$)\n\\[\nF(y) = \\min \\left\\{W^1(y), \\ldots, W^{N}(y)\\right\\} \\qquad \\forall y \\in B_r(\\hat{y})\n\\]\nand\n\\[\nF(\\hat{y}) = W^1(\\hat{y}) = \\cdots = W^{N}(\\hat{y}). \n\\]\nThen, for every set $I\\subset \\{1, \\ldots, N\\}$, we define the set $\\mathcal{Z}^{I}\\subset B_{r\/2}(\\hat{y})$ by\n\\[\n\\mathcal{Z}^{I} := \\Bigl\\{ y \\in B_{r\/2}(\\hat{y}) \\, \\vert \\, F(y)=W^k(y) \\, \\forall k \\in I \\mbox{ and } F(y) < W^k(y) \\, \\forall k \\notin I \\Bigr\\}.\n\\]\n\n\\begin{proposition}\\label{PROPSTEP3}\nThere is $\\check{K}>0$ such that for every $a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})$ and every $\\check{y} \\in P_a$ the following property is satisfied: If $W_{|P_a}$ admits a support function from below $\\varphi$ at $\\check{y}$ which is Lipschitz on its domain (with Lipschitz constant $\\mbox{\\rm Lip}(\\varphi)$), then there is $\\nu>0$ such that\n\\[\nW(y) \\geq W(\\check{y}) - \\check{K} \\left(1+\\mbox{\\rm Lip}(\\varphi)\\right) \\left|y-\\check{y}\\right| \\qquad \\forall y \\in B_{\\nu}(\\check{y});\n\\] \nin particular, $\\check{y}$ belongs to $\\mbox{\\rm Lip}^-(W)$.\n \\end{proposition}\n\n \\begin{proof}[Proof of Proposition \\ref{PROPSTEP3}]\n First, we note that since $W$ is well-defined and continuous on the closed ball $\\bar{B}_{r\/2}(\\hat{y})$, there is $A>0$ such that $W(y)\\leq A$ for all $y\\in \\bar{B}_{r\/2}(\\hat{y})$. 
As a consequence, if we consider some $y\\in B_{r\/2}(\\hat{y})$ and $u\\in U$ such that \n\\begin{eqnarray}\\label{17janv1}\nW(y) = C(u) + h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right) \\quad \\mbox{and} \\quad \\gamma^u(1)=y,\n\\end{eqnarray}\nthen we have $h(\\|u-\\hat{u}\\|_{L^2}^2)\\leq A$, hence (because $h(\\alpha)$ goes to $+\\infty$ as $\\alpha$ increases to $\\delta^2$) there is $\\bar{\\delta}\\in (0,\\delta)$ such that $\\|u-\\hat{u}\\|_{L^2}< \\bar{\\delta}$. We denote by $L>0$ the Lipschitz constant of $h$ on the set \n$[0,\\check{\\delta}^2]$ with $\\check{\\delta}^2:=(\\bar{\\delta}^2+\\delta^2)\/2$ (remember that $h$ is smooth on its domain). \n\n\nLet $a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})$, $\\check{y} \\in P_a$ and $\\varphi$ a function which is Lipschitz (with Lipschitz constant $\\mbox{Lip}(\\varphi)$) on an open segment of $P_a$ containing $\\check{y}$ such that $W_{|P_a}$ admits $\\varphi$ as support function from below at $\\check{y}$. Given $y \\in B_{\\rho}(\\check{y}) \\cap B_{r\/2}(\\hat{y})$, we consider a control $u\\in U$ satisfying (\\ref{17janv1}) and we set (see Figure \\ref{fig1})\n\\[\nz:=\\mbox{Proj}_{P_a}^{\\perp}(y).\n\\]\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width=7cm]{pic1}\n\\caption{$z$ belongs to $P_a\\cap V(y)$ \\label{fig1}}\n\\end{center}\n\\end{figure}\n\nBy construction, we have $|y-z| \\leq |y-\\check{y}| < \\rho $ with $y=\\gamma^u(1)$ and since $h(\\|u-\\hat{u}\\|_{L^2}^2)<+\\infty$ we have $\\|u-\\hat{u}\\|_{L^2}<\\delta$. 
Therefore, by Proposition \\ref{PROPSTEP2}, there is $v\\in U$ such that\n \\[\n \\mbox{Proj}_{V(y)}^{\\perp} \\left(\\gamma^{v}(1)\\right)=z, \\quad \n \\left|\\gamma^v(1)-z \\right| \\leq K \\left| z-y\\right|,\n \\]\n \\[\n C(v) \\leq C(u) + K |z - y| \\quad \\mbox{and} \\quad \\left| \\| v-\\hat{u}\\|_{L^2}^2 - \\| u-\\hat{u}\\|_{L^2}^2 \\right| \\leq K \\left| z-y\\right|.\n \\]\n The properties $\\mbox{Proj}_{V(y)}^{\\perp}(\\gamma^{v}(1))=z$ and $z\\in P_a$ imply that $\\gamma^{v}(1)\\in P_a$. Moreover, we note that if $y$ belongs to the ball centered at $\\check{y}$ with radius less than $(\\check{\\delta}^2-\\bar{\\delta}^2)\/K$ then we have \n \\[\n \\| u-\\hat{u}\\|_{L^2}^2 \\leq \\bar{\\delta}^2 \\leq \\check{\\delta}^2 \\quad\n \\mbox{and} \\quad \\|v-\\hat{u}\\|_{L^2}^2 \\leq \\| u-\\hat{u}\\|_{L^2}^2+K |z-y| \\leq \\bar{\\delta}^2+K |y-\\check{y}| \\leq \\check{\\delta}^2,\n \\]\n so that\n \\[\n h\\left(\\|v-\\hat{u}\\|_{L^2}^2\\right)\\leq h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right) + L \\left| \\| v-\\hat{u}\\|_{L^2}^2 - \\| u-\\hat{u}\\|_{L^2}^2 \\right| \\leq h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right) + KL |z-y|.\n \\]\nConsequently, for every $y$ in a ball centered at $\\check{y}$ with radius less than $(\\check{\\delta}^2-\\bar{\\delta}^2)\/K$, we have\n \\begin{eqnarray*}\n W_{|P_a} \\left(\\gamma^v(1)\\right) & \\leq & C(v) + h\\left(\\|v-\\hat{u}\\|_{L^2}^2\\right) \\\\\n & \\leq & C(u) + K |z -y| + h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right) + KL |z-y|\\\\\n & \\leq & W (y) + K(L+1) \\left|z - y\\right|.\n \\end{eqnarray*}\nMoreover, there is $\\nu >0$ such that if $y$ belongs to $B_{\\nu}(\\check{y})$ then $\\gamma^v(1)$ belongs to the domain of $\\varphi$ (because $|\\gamma^v(1)-\\check{y}| \\leq |\\gamma^v(1)-z| +|z-y| + |y-\\check{y}| \\leq K | z-y| +2|y-\\check{y}| \\leq (K+2) |y-\\check{y}|$), thus if $y\\in B_{\\nu}(\\check{y})$ then we have\n \\begin{eqnarray*}\n \\varphi \\left( \\gamma^v(1)\\right)\\leq W_{|P_a} \\left(\\gamma^v(1)\\right) \\leq W (y) + K(L+1) \\left|z - y\\right|.\n 
\\end{eqnarray*}\nIn conclusion, we obtain that for any $y$ sufficiently close to $\\check{y}$, there holds (we set $\\hat{K}:=K(L+1)$)\n \\begin{eqnarray*}\n W(y) & \\geq & \\varphi \\left(\\gamma^v(1)\\right) - \\hat{K} \\left| z-y\\right|\\\\\n & \\geq & \\varphi \\left( \\check{y}\\right) - \\mbox{Lip}(\\varphi) \\left| \\gamma^v(1)-\\check{y}\\right| - \\hat{K} \\left| z-y\\right|\\\\\n & \\geq & \\varphi \\left( \\check{y}\\right) - \\mbox{Lip}(\\varphi)\\left( \\left|\\gamma^v(1)-z\\right| + \\left|z-y\\right| + \\left|y-\\check{y}\\right| \\right)- \\hat{K} \\left| z-y\\right|\\\\\n & \\geq & \\varphi \\left( \\check{y}\\right) - \\mbox{Lip}(\\varphi)( \\hat{K}+2) \\left|y-\\check{y}\\right|- \\hat{K} \\left|y-\\check{y}\\right|\\\\\n & \\geq & \\varphi \\left( \\check{y}\\right) - \\check{K} \\left( 1+\\mbox{Lip}(\\varphi) \\right) \\left|y-\\check{y}\\right|,\n \\end{eqnarray*}\nby setting $\\check{K}:=\\hat{K}+2$. \n\\end{proof}\n\nProposition \\ref{PROPSTEP3} allows us to distinguish between two cases. 
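For the reader's convenience, we recall the notions of subdifferential used in the dichotomy below; we assume the standard conventions for the Fr\u00e9chet (viscosity) subdifferential $\\partial^-$ and its limiting version $\\partial^-_L$, namely, for a continuous function $f$,\n\\[\n\\partial^- f(y) := \\Bigl\\{ p \\in (\\mathbb R^n)^* \\, \\Bigm\\vert \\, \\liminf_{z \\rightarrow y} \\frac{f(z)-f(y)-p\\cdot (z-y)}{|z-y|} \\geq 0 \\Bigr\\}\n\\]\nand\n\\[\n\\partial^-_L f(y) := \\Bigl\\{ \\lim_{k\\rightarrow +\\infty} p_k \\, \\Bigm\\vert \\, p_k \\in \\partial^- f(y_k), \\, \\lim_{k\\rightarrow +\\infty} y_k = y \\Bigr\\}.\n\\]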
For each $a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})$ we denote by $\\mathcal{K}_a$ the intersection of $\\mathcal{K}$ (introduced in Proposition \\ref{PROPSTEP2} (iii)) with $P_a$, that is,\n\\[\n\\mathcal{K}_a := \\mathcal{K} \\cap P_a \\quad \\forall a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y}).\n\\]\nBy Proposition \\ref{PROPSTEP2} (iii), the set $\\mathcal{K}\\subset B_{r\/2}(\\hat{y})$ has positive Lebesgue measure; moreover, by Proposition \\ref{PROPDichotomy}, for every $a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})$ there are two measurable sets $\\mathcal{K}_{a}^{i}, \\mathcal{K}_{a}^{ii} \\subset \\mathcal{K}_{a}$ with \n\\[\n\\mathcal{L}^1 \\left( \\mathcal{K}_{a}^{i}\\cup \\mathcal{K}_{a}^{ii}\\right) = \\mathcal{L}^1\\left( \\mathcal{K}_{a}\\right)\n\\]\nsatisfying the following properties:\n\\begin{itemize}\n\\item[(i)] For every $y\\in \\mathcal{K}_{a}^{i}$, the function $W_{|P_a}$ is differentiable at $y$.\n\\item[(ii)] For every $y \\in \\mathcal{K}_{a}^{ii}$, the function $W_{|P_a}$ is not differentiable at $y$ and there is a sequence $\\{y_k\\}_{k\\in \\mathbb N}$ in $P_a$ converging to $y$ such that $0\\in \\partial^-W_{|P_a}(y_k)$ for all $k\\in \\mathbb N$; in particular, we have $0\\in \\partial^-_L W_{|P_a}(y)$.\n\\end{itemize}\nWe set\n\\[\n\\mathcal{K}^{i}:= \\bigcup_{a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})} \\mathcal{K}_{a}^{i} \\quad \\mbox{and} \\quad \\mathcal{K}^{ii}:= \\bigcup_{a\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})} \\mathcal{K}_{a}^{ii}\n\\]\nand we note that by Fubini's Theorem, we have\n\\[\n\\mathcal{L}^n (\\mathcal{K})= \\mathcal{L}^n\\left(\\mathcal{K}^{i}\\cup \\mathcal{K}^{ii}\\right) >0.\n\\]\nTwo different cases have to be distinguished: either $ \\mathcal{L}^n(\\mathcal{K}^{i})>0$, or $ \\mathcal{L}^n(\\mathcal{K}^{i})=0$ and $ \\mathcal{L}^n(\\mathcal{K}^{ii})>0$. 
The first case ($\\mathcal{L}^n(\\mathcal{K}^{i})>0$) leads easily to a contradiction because $\\mathcal{L}^n(\\mathcal{K}^{i})>0$ together with Proposition \\ref{PROPSTEP3} imply that any point of $\\mathcal{K}^{i}\\subset \\mathcal{K}$ belongs to $\\mbox{Lip}^-(W)$, which contradicts Proposition \\ref{PROPSTEP2} (iii). Therefore, we assume from now on that\n\\[\n\\mathcal{L}^n(\\mathcal{K}^{i})=0 \\quad \\mbox{and} \\quad \\mathcal{L}^n(\\mathcal{K}^{ii})>0\n\\]\nand we explain how to get a contradiction. \\\\\n\nBy Fubini's Theorem, there is $\\check{a}\\in B_{r\/2}(\\hat{y})\\cap V(\\hat{y})$ such that \n\\begin{eqnarray}\\label{lastcontradiction}\n\\mathcal{L}^1 \\left( \\mathcal{K}_{\\check{a}}^{ii}\\right) >0. \n\\end{eqnarray}\nWe pick a Lebesgue density point $\\check{y}$ of $\\mathcal{K}_{\\check{a}}^{ii}$ in $P_{\\check{a}}$ (w.r.t. $\\mathcal{L}^1$) and we notice that by construction of $\\mathcal{A}$ (see (\\ref{20janv3})) the point $\\check{y}$ does not belong to $\\mathcal{C}_0$, the set of critical values of $\\exp_0$. Then we pick a unit vector $\\vec{v} \\in \\mathbb R^n$ tangent to $P_{\\check{a}}$ and we define the continuous function $W^{\\check{a}}: (-r\/10,r\/10) \\rightarrow \\mathbb R$ by \n\\[\nW^{\\check{a}}(t) := W_{|P_{\\check{a}}} \\left( \\check{y} + t \\vec{v}\\right)- W_{|P_{\\check{a}}} \\left( \\check{y}\\right)\\qquad \\forall t \\in (-r\/10,r\/10).\n\\]\nThe next proposition follows essentially from the construction of $\\check{y}$, Proposition \\ref{PROPSTEP3}, the Inverse Function Theorem and the Denjoy-Young-Saks Theorem (see Remark \\ref{RemDYS}). 
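In the statements and proofs below, $D^+$, $D_+$, $D^-$ and $D_-$ denote the four Dini derivatives; we assume the standard conventions, namely, for a function $f$ of one real variable,\n\\[\nD^+ f(t) := \\limsup_{s \\rightarrow 0^+} \\frac{f(t+s)-f(t)}{s}, \\qquad D_+ f(t) := \\liminf_{s \\rightarrow 0^+} \\frac{f(t+s)-f(t)}{s},\n\\]\n\\[\nD^- f(t) := \\limsup_{s \\rightarrow 0^-} \\frac{f(t+s)-f(t)}{s}, \\qquad D_- f(t) := \\liminf_{s \\rightarrow 0^-} \\frac{f(t+s)-f(t)}{s}.\n\\]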
\n\n\\begin{proposition}\\label{PROPSTEP4}\nThere are $\\mu, \\sigma >0$ with $B_{\\mu}(\\check{y}) \\subset B_{r\/2}(\\hat{y})$ such that the following properties are satisfied: \n\\begin{itemize}\n\\item[(i)] For any $t\\in (-\\mu,\\mu)$ and any $p \\in \\partial^-W^{\\check{a}}(t)$, there holds\n\\begin{eqnarray*}\n|p| \\leq 1 \\quad \\Longrightarrow \\quad \\Bigl( W^{\\check{a}}(s) \\leq W^{\\check{a}}(t) + p (s-t)+\\sigma (s-t)^2 \\quad \\forall s \\in (-\\mu,\\mu)\\Bigr).\n\\end{eqnarray*} \n\\item[(ii)] $D^-W^{\\check{a}}(0)=+\\infty, D_+W^{\\check{a}}(0)=-\\infty$ and $D_-W^{\\check{a}}(0)=D^+W^{\\check{a}}(0)\\in \\mathbb R$.\n\\end{itemize}\n\\end{proposition} \n\n\\begin{proof}[Proof of Proposition \\ref{PROPSTEP4}]\n\nFor every $y\\in M$, we denote by $\\Gamma^y_W$ the set of controls $u \\in L^2([0,1],\\mathbb R^m)$ such that \n\\[\nW(y) = C(u) + h\\left(\\|u-\\hat{u}\\|_{L^2}^2\\right).\n\\]\nThe proof of the following result is a consequence of the continuity of $W$ and the fact that $h$ is nondecreasing.\n\n\\begin{lemma}\\label{STEP4_LEM1}\nFor every compact set $\\mathcal{K}\\subset B_{r\/2}(\\hat{y})$, the set of controls $u \\in \\Gamma^y_W$ with $y\\in \\mathcal{K}$ is a compact subset of $L^{2}([0,1],\\mathbb R^m)$ and the mapping $y\\in B_{r\/2}(\\hat{y}) \\mapsto \\Gamma^y_W \\in \\mathcal{C}(L^{2}([0,1],\\mathbb R^m))$ has closed graph (here $\\mathcal{C}(L^{2}([0,1],\\mathbb R^m))$ stands for the set of compact subsets of $L^{2}([0,1],\\mathbb R^m)$ equipped with the Hausdorff topology).\n\\end{lemma}\n\n\n\\begin{proof}[Proof of Lemma \\ref{STEP4_LEM1}]\nLet $\\mathcal{K}$ be a compact subset of $B_{r\/2}(\\hat{y})$ and $\\{u_k\\}_{k\\in \\mathbb N}, \\{y_k\\}_{k\\in \\mathbb N}$ be two sequences respectively in $L^{2}([0,1],\\mathbb R^m)$ and $\\mathcal{K}$ such that $u_k\\in \\Gamma_W^{y_k}$ for all $k\\in \\mathbb N$. 
The sequence $\\{y_k\\}_{k\\in \\mathbb N}$ takes values in the compact set $\\mathcal{K}$, and since $W$ is bounded on $\\mathcal{K}$ (because it is continuous) and\n\\[\nW(y_k) = \\frac{1}{2} \\|u_k\\|_{L^2}^2 + h\\left( \\|u_k-\\hat{u}\\|_{L^2}^2\\right) \\quad \\forall k\\in \\mathbb N,\n\\]\nthe sequence $\\{u_k\\}_{k\\in \\mathbb N}$ is bounded in $L^2([0,1],\\mathbb R^m)$, so there is an increasing subsequence $\\{k_l\\}_{l\\in \\mathbb N}$ such that $\\{y_{k_l}\\}_{l\\in \\mathbb N}$ converges to some $\\bar{y}\\in \\mathcal{K}$ and $\\{u_{k_l}\\}_{l\\in \\mathbb N}$ weakly converges to some $\\bar{u}\\in L^{2}([0,1],\\mathbb R^m)$. We note that by the Hahn-Banach Separation Theorem (see {\\it e.g.} \\cite{clarke13}) we have \n\\[\n\\|\\bar{u}\\|_{L^2} \\leq \\liminf_{l\\rightarrow +\\infty} \\|u_{k_l}\\|_{L^2} \\quad \\mbox{and} \\quad \\|\\bar{u}-\\hat{u}\\|_{L^2} \\leq \\liminf_{l\\rightarrow +\\infty} \\|u_{k_l}-\\hat{u}\\|_{L^2},\n\\]\ntherefore, since $h$ is nondecreasing, we obtain\n\\begin{eqnarray*}\n\\frac{1}{2} \\|\\bar{u}\\|_{L^2}^2 + h\\left( \\|\\bar{u}-\\hat{u}\\|_{L^2}^2\\right) & \\leq & \\liminf_{l\\rightarrow +\\infty} \\frac{1}{2} \\|u_{k_l}\\|_{L^2}^2 + \\liminf_{l\\rightarrow +\\infty} h\\left(\\|u_{k_l}-\\hat{u}\\|_{L^2}^2\\right)\\\\\n& \\leq & \\liminf_{l\\rightarrow +\\infty} \\left\\{\\frac{1}{2} \\|u_{k_l}\\|_{L^2}^2 + h\\left(\\|u_{k_l}-\\hat{u}\\|_{L^2}^2\\right)\\right\\}\\\\\n& = & \\lim_{l\\rightarrow +\\infty} W(y_{k_l}) = W(\\bar{y}). \n\\end{eqnarray*}\nThis shows that $\\bar{u}$ belongs to $\\Gamma_W^{\\bar{y}}$ and that, up to a subsequence, the sequence $\\{u_{k_l}\\}_{l\\in \\mathbb N}$ converges strongly to $\\bar{u}$ in $L^2([0,1],\\mathbb R^m)$ (because $\\{u_{k_l}\\}_{l\\in \\mathbb N}$ converges weakly to $\\bar{u}$ and $\\|\\bar{u}\\|_{L^2} = \\liminf_{l\\rightarrow +\\infty} \\|u_{k_l}\\|_{L^2}$), which concludes the proof of the first part. The second part is left to the reader. 
\n\\end{proof}\n\nWe now consider the set $\\Theta \\subset \\Gamma_W^{\\check{y}} \\subset L^2([0,1],\\mathbb R^m)$ defined by\n\\begin{multline*}\n\\Theta := \\left\\{ u \\in \\Gamma_W^{\\check{y}}\\, \\vert \\, \\exists \\{y_k\\}_{k\\in \\mathbb N} \\mbox{ in } B_{r\/2}(\\hat{y}), \\{p_k\\}_{k\\in \\mathbb N} \\mbox{ in } (\\mathbb R^n)^*, \\{u_k\\}_{k\\in \\mathbb N} \\mbox{ in } L^2([0,1],\\mathbb R^m) \\mbox{ s.t. } \\right. \\\\\n\\left. \\lim_{k\\rightarrow +\\infty}y_k = \\check{y}, \\, \\lim_{k\\rightarrow +\\infty}u_k = u \\mbox{ and } u_k\\in \\Gamma_W^{y_k}, \\, p_k \\in \\partial^- W(y_k), \\, |p_k|\\leq 2\\check{K}+1 \\, \\forall k \\in \\mathbb N\\right\\}.\n\\end{multline*}\n In the following result, the first part follows from the property (ii) above together with Lemma \\ref{STEP4_LEM1}, and the second part is due to the fact that $\\check{y}$ is in $\\mathcal{K}_{\\check{a}}^{ii}$ and so does not belong to $\\mathcal{C}_0$.\n\n\\begin{lemma}\\label{STEP4_LEM2}\nThe set $\\Theta$ is a nonempty compact subset of $L^2([0,1],\\mathbb R^m)$ and for every $u\\in \\Theta$ there are $v^1, \\ldots, v^n \\in L^2([0,1],\\mathbb R^m)$ such that the linear mapping \n\\[\n\\lambda =(\\lambda_1, \\ldots, \\lambda_n) \\in \\mathbb R^n \\, \\longmapsto \\, D_uE \\left( \\sum_{i=1}^n \\lambda_i v^{i}\\right) \\in \\mathbb R^n\n\\]\nis invertible.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{STEP4_LEM2}]\nBy construction of $\\check{y}$ and the property (ii) above, there is a sequence $\\{\\bar{y}_k\\}_{k\\in \\mathbb N}$ in $P_{\\check{a}}$ converging to $\\check{y}$ such that $0\\in \\partial^-W_{|P_{\\check{a}}}(\\bar{y}_k)$ for all $k\\in \\mathbb N$. Note that this property has to be understood as $0\\in \\partial^-W^{\\check{a}}(\\bar{t}_k)$ for all $k\\in \\mathbb N$, where the $\\bar{t}_k$'s are defined by $\\check{y}+\\bar{t}_k\\vec{v}=\\bar{y}_k$. 
Proposition \\ref{PROPSTEP3} shows that, in fact, for any $k\\in \\mathbb N$ the function $W$ admits a support function from below at $\\bar{y}_k$ which is $(2\\check{K})$-Lipschitz on its domain. Thus, Proposition \\ref{PROPLip-Lim} gives for every $k\\in \\mathbb N$ a co-vector $\\bar{p}_k\\in \\partial^-_LW(\\bar{y}_k)$ such that $|\\bar{p}_k| \\leq 2\\check{K}$. By definition of $\\partial^-_LW$ we infer that there is a sequence $\\{y_k\\}_{k\\in \\mathbb N}$ in $B_{r\/2}(\\hat{y})$ along with a sequence $\\{p_k\\}_{k\\in \\mathbb N}$ in $(\\mathbb R^n)^*$ such that \n\\[\n \\lim_{k\\rightarrow +\\infty}y_k = \\check{y} \\quad \\mbox{and} \\quad p_k \\in \\partial^- W(y_k), \\, |p_k|\\leq 2\\check{K}+1 \\quad \\forall k \\in \\mathbb N.\n \\]\nBy taking a control $u_k$ in each $\\Gamma_W^{y_k}$ and applying Lemma \\ref{STEP4_LEM1}, we conclude that $\\Theta$ is not empty. The compactness of $\\Theta$ is an easy consequence of Lemma \\ref{STEP4_LEM1}. \n\nLet us now prove the last part of Lemma \\ref{STEP4_LEM2} and fix some $u\\in \\Theta$. 
By definition, there are sequences $\\{y_k\\}_{k\\in \\mathbb N}$, $ \\{p_k\\}_{k\\in \\mathbb N}$, $ \\{u_k\\}_{k\\in \\mathbb N}$ respectively in $B_{r\/2}(\\hat{y})$, $(\\mathbb R^n)^*$ and $L^2([0,1],\\mathbb R^m)$ such that \n\\[\n \\lim_{k\\rightarrow +\\infty}y_k = \\check{y}, \\, \\lim_{k\\rightarrow +\\infty}u_k = u \\mbox{ and } u_k\\in \\Gamma_W^{y_k}, \\, p_k \\in \\partial^- W(y_k), \\, |p_k|\\leq 2\\check{K}+1 \\, \\forall k \\in \\mathbb N.\n \\]\n Thus, for each $k\\in \\mathbb N$, there is a support function from below $\\varphi_k: \\mathcal{U}_k \\rightarrow \\mathbb R$ of class $C^1$ on its domain $\\mathcal{U}_k \\subset B_{r\/2}(\\hat{y})$ with $d\\varphi_k(y_k)=p_k$ such that (we define $\\hat{C}:U\\rightarrow \\mathbb R$ by $\\hat{C}(u):=\\|u-\\hat{u}\\|_{L^2}^2$ for all $u\\in U$)\n\\begin{multline*}\nC(u_k) + h\\bigl( \\hat{C}(u_k)\\bigr) = W(y_k) = \\varphi_k(y_k) \\quad \\\\\n\\mbox{and} \\quad C(v) + h\\bigl( \\hat{C}(v)\\bigr) \\geq W\\left(E(v)\\right) \\geq \\varphi_k \\left(E(v)\\right) \\quad \\forall v\\in U_k,\n\\end{multline*}\nwhere $U_k$ is an open neighborhood of $u_k$ in $U$ such that $E(U_k)\\subset \\mathcal{U}_k$. Then, we infer that \n\\[\np_k\\cdot D_{u_k}E = D_{u_k}C + h'\\left(\\|u_k-\\hat{u}\\|_{L^2}^2\\right) \\cdot D_{u_k}\\hat{C} \\qquad \\forall k \\in \\mathbb N.\n\\]\nBy compactness (all $p_k$ satisfy $|p_k|\\leq 2\\check{K}+1$) and up to a subsequence, $\\{p_k\\}_{k\\in \\mathbb N}$ converges to some $p\\in \\partial^-_LW(\\check{y})$ and in addition \n$u_k-\\hat{u}$ converges in $L^2([0,1],\\mathbb R^m)$ to $u-\\hat{u}$ which satisfies $h'(\\|u-\\hat{u}\\|_{L^2}^2)=h(\\|u-\\hat{u}\\|_{L^2}^2)=0$ (by Proposition \\ref{PROPSTEP2} (ii) and because $u\\in \\Gamma_W^{\\check{y}}$ and $W(\\check{y})=F(\\check{y})$). 
Then, by passing to the limit, we obtain\n \\[\n p\\cdot D_uE = D_uC.\n \\]\nBy Proposition \\ref{PROPsub1}, we infer that $\\check{y}$ belongs to the image of $\\exp_0$ and since $\\check{y}\\notin \\mathcal{C}_0$, the result follows. \n\\end{proof}\n\nThe following result is an easy consequence of the Inverse Function Theorem and Lemma \\ref{STEP4_LEM2}; its proof is left to the reader.\n\n\\begin{lemma}\\label{STEP4_LEM3}\nThere are $\\check{\\delta}, \\check{\\rho} \\in (0,1)$, $\\check{M}>0$, a positive integer $N$ and $N$ controls $u^1, \\ldots, u^{N}$ in $\\Theta$ such that the following properties are satisfied: For every $l\\in \\{1, \\ldots, N\\}$ and every $v\\in U$ with $\\|v-u^l\\|_{L^2} < \\check{\\delta}$, there is a mapping $\\mathcal{G}^{l,v}:B_{\\check{\\rho}}(\\gamma^v(1)) \\rightarrow U$ such that\n\\[\nE\\left( \\mathcal{G}^{l,v} (y)\\right) = y \\qquad \\forall y \\in B_{\\check{\\rho}}(\\gamma^v(1))\n\\]\nand\n\\[\n\\bigl\\| \\mathcal{G}^{l,v} \\bigr\\|_{C^2} \\leq \\check{M}.\n\\]\n\\end{lemma}\n\nWe are ready to complete the proof of Proposition \\ref{PROPSTEP4}. By construction of $\\Theta$, there is $\\mu>0$ such that for any $y\\in P_{\\check{a}} \\cap B_{\\mu}(\\check{y})$ for which $\\partial^-W_{|P_{\\check{a}}}(y)$ contains a co-vector of norm $\\leq 1$, any $v$ in $\\Gamma_W^y$ satisfies $\\|v-u^l\\|_{L^2} < \\check{\\delta}$ for some $l\\in \\{1, \\ldots, N\\}$. 
Therefore, if we consider $y\\in P_{\\check{a}} \\cap B_{\\mu}(\\check{y})$, $p \\in \\partial^-W_{|P_{\\check{a}}}(y)$ with $|p|\\leq 1$, $v \\in \\Gamma_W^y$ and $l\\in \\{1, \\ldots, N\\}$ such that $\\|v-u^l\\|_{L^2} < \\check{\\delta}$, then we have \n\\[\nW(z) \\leq C\\left( \\mathcal{G}^{l,v}(z)\\right) + h\\left( \\hat{C}\\left( \\mathcal{G}^{l,v}(z)\\right)\\right) \\qquad \\forall z \\in B_{\\check{\\rho}}(y)\n\\]\nand moreover if $\\varphi: \\mathcal{I} \\rightarrow \\mathbb R$ is a support function from below for $W_{|P_{\\check{a}}}$ at $y$, which is $C^1$ on an open segment containing $y$ in $P_{\\check{a}}$ and verifies $d\\varphi(y)=p$, then we also have \n\\[\n\\varphi (z) \\leq W_{|P_{\\check{a}}}(z) \\leq C\\left( \\mathcal{G}^{l,v}(z)\\right) + h\\left( \\hat{C}\\left( \\mathcal{G}^{l,v}(z)\\right)\\right) \\qquad \\forall z \\in P_{\\check{a}} \\cap \\mathcal{I} \\cap B_{\\check{\\rho}}(y).\n\\]\nWe infer that the differential at $y$ of the function \n\\[\nz \\longmapsto C\\left( \\mathcal{G}^{l,v}(z)\\right) + h\\left( \\hat{C}\\left( \\mathcal{G}^{l,v}(z)\\right)\\right)\n\\]\nis equal to $p$ and we conclude easily by noting that Lemma \\ref{STEP4_LEM3} allows us to obtain an upper bound for the $C^2$-norm of that function.\n\nTo prove (ii) we note that, by the property (ii) above, there is a sequence $\\{t_k\\}_{k\\in \\mathbb N}$ in $(-\\mu,\\mu)$ converging to $0$ such that $0\\in \\partial^-W^{\\check{a}}(t_k)$ for all $k\\in \\mathbb N$, which by (i) yields\n\\[\nW^{\\check{a}}(s) \\leq W^{\\check{a}}(t_k) +\\sigma (s-t_k)^2 \\quad \\forall s \\in (-\\mu,\\mu), \\, \\forall k \\in \\mathbb N.\n\\]\nHence, by passing to the limit, we infer that we have (note that $W^{\\check{a}}(0)=0$)\n\\begin{eqnarray}\\label{27janv1}\nW^{\\check{a}}(s) \\leq \\sigma s^2 \\quad \\forall s \\in (-\\mu,\\mu).\n\\end{eqnarray}\nBy Denjoy-Young-Saks' Theorem (see Remark \\ref{RemDYS}), since $W^{\\check{a}}$ is not differentiable at $0$, one of the following properties is 
satisfied:\n\\begin{itemize}\n\\item[(2)] $D^+W^{\\check{a}}(0)=D^-W^{\\check{a}}(0)=+\\infty$ and $D_+W^{\\check{a}}(0)=D_-W^{\\check{a}}(0)=-\\infty$,\n\\item[(3)] $D^+W^{\\check{a}}(0)=+\\infty, D_-W^{\\check{a}}(0)=-\\infty$ and $D_+W^{\\check{a}}(0)=D^-W^{\\check{a}}(0)\\in \\mathbb R$,\n\\item[(4)] $D^-W^{\\check{a}}(0)=+\\infty, D_+W^{\\check{a}}(0)=-\\infty$ and $D_-W^{\\check{a}}(0)=D^+W^{\\check{a}}(0)\\in \\mathbb R$.\n\\end{itemize}\nBut the properties (2)-(3) are prohibited by (\\ref{27janv1}), so the proof is complete. \n\\end{proof}\n \nThe final contradiction will be a consequence of the following:\n\n\\begin{proposition}\\label{PROPSTEP5}\nLet $\\epsilon, \\sigma>0$, $a,b \\in \\mathbb R$ with $b>a$ be such that\n\\begin{eqnarray}\\label{9dec0}\n\\epsilon \\geq \\frac{(b-a)\\sigma}{4}\n\\end{eqnarray}\nand let $h:[a,b]\\rightarrow \\mathbb R$ with $h(a)=h(b)=0$ be a continuous function such that for any $s\\in (a,b)$ and $p\\in \\partial^-h(s)$ there holds\n\\begin{eqnarray}\\label{9dec1}\n|p| \\leq \\epsilon \\quad \\Longrightarrow \\quad \\Bigl( h(s') \\leq h(s) + p (s'-s)+\\sigma (s'-s)^2 \\quad \\forall s' \\in [a,b]\\Bigr).\n\\end{eqnarray} \nThen we have \n\\begin{eqnarray}\\label{9dec2}\nh(s) \\geq D(s):= \\max\\Bigl\\{-\\epsilon (s-a), \\epsilon (s-b)\\Bigr\\} \\qquad \\forall s \\in [a,b].\n\\end{eqnarray}\n\\end{proposition}\n\\begin{proof}[Proof of Proposition \\ref{PROPSTEP5}]\nSuppose for contradiction that (\\ref{9dec2}) does not hold and consider a global minimum $\\bar{s} \\in (a,b)$ of the function $h-D$ on $[a,b]$. So, we have $h(\\bar{s}) < D(\\bar{s})$.\n\\end{proof}\n\nBy Arzela-Ascoli's Theorem, the sequence $\\{\\gamma_{p_k}\\}_k$ converges uniformly on compact sets, up to subsequences, to horizontal paths on $[0,+\\infty)$ starting from $x$. We denote by $\\Gamma_x^{\\infty}$ the set of all such paths and we call it the {\\it normal container at infinity} from $x$. By construction, any path of $\\Gamma_x^{\\infty}$ is singular. 
\n\nWe call {\\it minimizing normal container at infinity} from $x$, denoted by $\\Gamma_x^{\\infty,min}$, the set of minimizing horizontal paths $\\gamma:[0,1] \\rightarrow M$ obtained as uniform limits of paths $\\gamma_{p_k}:[0,1] \\rightarrow M$ where $\\{p_k\\}_k$ is a sequence in $T_x^*M$ such that \n \\[\n p_k \\in \\mathcal{P}_x^{min} \\quad \\forall k \\quad \\mbox{and} \\quad \\lim_{k\\rightarrow + \\infty} |p_k|_{x} = +\\infty.\n\\]\nBy construction, we have \n\\[\n\\Gamma_x^{\\infty,min}([0,1]) \\subset \\Gamma_x^{\\infty} \\left([0,+\\infty)\\right).\n\\] \nBy Proposition \\ref{PROPcharac}, the set $\\mbox{Abn}^{min}(x)$ has Lebesgue measure zero in $M$ if and only if there holds $\\partial^-_{PL}f_x(y)= \\emptyset$ for almost every $y\\in M$. Moreover, we infer easily from Proposition \\ref{PROPGohLip} that for any $(y,p) \\in T^*M$ with $p \\in \\partial^-_{PL}f_x(y)$, there is a minimizing horizontal path $\\gamma \\in \\Gamma_x^{\\infty,min}$ such that $\\gamma(1)=y$ and $\\gamma$ satisfies the Goh condition. Those results suggest that a fine study of the normal containers at infinity $\\Gamma_x^{\\infty}$ and $\\Gamma_x^{\\infty,min}$ may help in the understanding of the minimizing Sard conjecture. \n\n\\subsection{Measure contraction properties}\n\nMeasure contraction properties consist in comparing the contraction of volumes along geodesics from a given point with what happens in classical models of Riemannian geometry. Unlike other notions of Ricci curvature (bounded from below) on measured metric spaces, which are not relevant in sub-Riemannian geometry (see \\cite{juillet21}), measure contraction properties have been shown to be satisfied for several types of sub-Riemannian structures (see \\cite{juillet09,rifford13,al14,lee16,lcz16,rizzi16,br18,br20}), none of which admits strictly abnormal minimizing horizontal paths. 
The present paper provides new examples of sub-Riemannian structures which may have strictly abnormal minimizing horizontal paths and for which Ohta's definition of measure contraction property makes sense (see \\cite{ohta07,rifford13}), it is thus natural to wonder whether they might enjoy measure contraction properties.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nKnowledge graphs (KGs) have evolved to be the building blocks of many intelligent systems~\\cite{KG_survey}.\nDespite the importance, KGs are usually costly to construct~\\cite{paulheim2018much} and naturally suffer from incompleteness~\\cite{KBCompleteness}. \nHence, merging multiple KGs through entity alignment can lead to mutual enrichment of their knowledge \\cite{KENS}, and provide downstream applications with more comprehensive knowledge representations~\\cite{LinkNBed,KENS}. Entity alignment seeks to discover identical entities in different KGs, such as English entity \\texttt{Thailand} and its French counterpart \\texttt{Tha\u00eflande}. \nTo tackle this important problem, literature has attempted with the embedding-based entity alignment methods~\\cite{MTransE,GCN_Align,MuGNN,GraphMatch_iclr20,NMN_acl20,AttrGNN,HyperKA}. \nThese methods jointly embed different KGs and put similar entities at close positions in a vector space, where the nearest neighbor search can retrieve entity alignment.\nDue to its effectiveness, embedding-based entity alignment has drawn extensive attention in recent years \\cite{OpenEA}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.96\\linewidth]{figs\/overlap.pdf}\n\t\\caption{Illustration of entity alignment between two KGs with dangling cases. 
Paired red and black squares in the overlap region denote entity alignment while others are dangling entities without counterparts.}\n\t\\label{fig:na_ea}\n\\end{figure}\n\nNonetheless, to practically support the alignment of KGs as a real-world task, existing studies share a common limitation: they are unable to identify entities without alignment across KGs (called \\emph{dangling entities}).\nSpecifically, current methods are all built upon the assumption that any source entity has a counterpart in the target KG~\\cite{OpenEA}, and are accordingly developed with learning resources that enforce the same assumption.\nHence, given any entity in a source KG, a model always tends to predict a counterpart via nearest neighbor search in the embedding space.\nHowever, since each KG may be independently created based on separate corpora~\\cite{DBpedia} or contributed by different crowds~\\cite{speer2017conceptnet,carlson2010toward},\nit is natural for KGs to possess different sets of entities \\cite{Integration}, as illustrated in \\Cref{fig:na_ea}. \nEssentially, this problem, overlooked in prior studies, causes existing methods to fall short of distinguishing between matchable and dangling entities,\nand hence hinders such methods from aligning KGs in a real-world scenario.\n\nTowards more practical solutions of entity alignment for KGs,\nwe provide a redefinition of the task with the incorporation of dangling cases (\\Cref{sect:task}), as the \\textit{first contribution} of this work. \nGiven a source entity, our setting does not assume, as previous studies do, that it must have a counterpart in the target KG. 
\nInstead, conducting entity alignment also involves identifying whether the counterpart of an entity actually exists in another KG.\nHence, a system tackling this realistic problem setting of entity alignment is also challenged by the requirement of justifying the validity of its predictions.\n\nTo facilitate research on the new problem,\nthe \\textit{second contribution} of this work is to construct a new dataset DBP2.0\\xspace for entity alignment with dangling cases (\\Cref{sect:dataset}). \nAs discussed, existing benchmarks for entity alignment, including DBP15K~\\cite{JAPE}, WK3L~\\cite{MTransE} and the more recent OpenEA~\\cite{OpenEA}, are set with the constraint that any entity to be aligned should have a valid counterpart.\nWe use the full DBpedia~\\cite{DBpedia} to build a new dataset, and the key challenge lies in guaranteeing that the selected dangling entities actually have no counterparts. \nWe first extract two subgraphs with one-to-one entity alignment (i.e., all entities have counterparts). \nThen, we randomly remove some entities so that the counterparts left behind in the peer KG become dangling.\n\nAlthough embedding-based entity alignment has been investigated for several years, handling dangling entities has not been studied yet. \nAs the \\textit{third contribution}, we present a multi-task learning framework for the proposed task (\\Cref{sect:model}). \nIt consists of two jointly optimized modules for \\emph{entity alignment} and \\emph{dangling entity detection}, respectively. 
\nWhile the entity alignment module can basically incorporate any existing techniques from prior studies~\cite{OpenEA},\nin this paper, we experiment with two representative techniques, i.e., a relational embedding based~\cite{MTransE} and a neighborhood aggregation based~\cite{AliNet} method.\nFor dangling entity detection, our framework incorporates an auxiliary learning objective, which seeks to learn a confidence metric for the inferred entity alignment.\nThe principle behind such metric learning is that the embeddings of dangling entities should be isolated and distant from others. \nAccording to this principle, we exploit several techniques to distinguish between matchable and dangling entities based on the distribution of their distances to neighbors (\Cref{sect:model}), including nearest neighbor classification, marginal ranking and background ranking~\cite{Network_Agnostophobia}.\n\nWe conduct comprehensive experiments on the new DBP2.0\xspace dataset, which demonstrate that the proposed techniques address the dangling entity detection problem to different extents.\nMoreover, we observe that training the dangling detection model (marginal ranking) provides effective indirect supervision that improves the alignment of matchable entities. \nWe hope our task, dataset and framework can foster further investigation of entity alignment techniques in the suggested real-world scenario, leading to more effective and practical solutions to this challenging but important problem.\n\n\section{Task and Dataset}\nWe hereby describe the problem setting of our task and introduce the new dataset.\n\n\subsection{Task Definition}\label{sect:task}\nA KG is a set of relational triples $\mathcal{T} \subseteq \mathcal{E}\times \mathcal{R}\times\mathcal{E}$, where $\mathcal{E}$ and $\mathcal{R}$ denote vocabularies of entities and relations, respectively.
Without loss of generality, we consider entity alignment between two KGs, i.e., a source KG $\mathcal{K}_1\!=\!(\mathcal{T}_1, \mathcal{E}_1, \mathcal{R}_1)$ and a target KG $\mathcal{K}_2\!=\!(\mathcal{T}_2,\mathcal{E}_2, \mathcal{R}_2)$. Given a small set of seed entity alignment $\mathcal{A}_{12}=\{(e_1, e_2) \in \mathcal{E}_1\times\mathcal{E}_2\mid e_1\equiv e_2\}$ along with a small set of source entities $\mathcal{D}\subset\mathcal{E}_{1}$ known to have no counterparts as training data, \nthe task seeks to find the remaining entity alignment.\nDifferent from the conventional entity alignment setting \cite{JAPE},\na portion (with an anticipated quantity) of entities in $\mathcal{E}_1$ and $\mathcal{E}_2$ may have no counterparts.\nOur training and inference stages take such dangling entities into consideration.\n\n\subsection{Dataset Construction}\label{sect:dataset}\n\nAs discussed, previous testbeds for entity alignment do not contain dangling entities \cite{JAPE,KDCoE,OpenEA}.\nTherefore, we first create a new dataset to support the study of the proposed problem setting.\nAs with the widely used benchmark DBP15K \cite{JAPE}, we choose DBpedia 2016-10\footnote{Downloaded from \url{https:\/\/wiki.dbpedia.org\/downloads-2016-10}. The latest 2020 version had not provided updated data for some languages other than English at the time this study was conducted.} as the raw data source.\nFollowing DBP15K, we also use the English (EN), French (FR), Japanese (JA) and Chinese (ZH) versions of DBpedia to build three entity alignment settings: ZH-EN, JA-EN and FR-EN.\nFor each monolingual KG, the triples are extracted from the Infobox Data of DBpedia, where relations are not mapped to a unified ontology.
\nThe reference entity alignment data comes from the inter-language links (ILLs) of DBpedia across these three language bridges.\nSuch reference data is later used as alignment labels for training and testing, and also serves as references to recognize dangling entities.\n\n\begin{table}[!t]\t\n\t\centering\n\t{\small\n\t\setlength{\tabcolsep}{4pt}\n\t\t\begin{tabular}{clrrrr}\n\t\t\t\toprule\t\t\t\multicolumn{2}{c}{Datasets} & \# Entities & \# Rel. & \# Triples & \# Align. \\ \midrule\n\t\t\t\multirow{2}{*}{ZH-EN} \n\t\t\t& ZH & 84,996 & 3,706 & 286,067 &\multirow{2}{*}{33,183} \\\n\t\t\t& EN & 118,996 & 3,402 & 586,868 & \\ \midrule\n\t\t\t\multirow{2}{*}{JA-EN} \n\t\t\t& JA & 100,860 & 3,243 & 347,204 &\multirow{2}{*}{39,770} \\\n\t\t\t& EN & 139,304 & 3,396 & 668,341 & \\ \midrule\n\t\t\t\multirow{2}{*}{FR-EN}\n\t\t\t& FR & 221,327 & 2,841 & 802,678 &\multirow{2}{*}{123,952} \\\n\t\t\t& EN & 278,411 & 4,598 & 1,287,231 & \\\n\t\t\t\bottomrule\n\t\end{tabular}}\n\t\caption{\label{tab:dataset}Statistics of the DBP2.0\xspace dataset.}\n\end{table}\n\n\stitle{Construction} \nThe key challenge of building our dataset lies in ensuring that the selected dangling entities are indeed without counterparts. \nSpecifically, we cannot simply regard entities without ILLs as dangling ones, since the ILLs are also incomplete \cite{MTransE}. \nUnder this circumstance, we use a two-step dataset extraction process, which first samples two subgraphs whose entities all have counterparts based on ILLs, \nand then randomly removes disjoint sets of entities from the source and target graphs to make their counterparts dangling. \nFor the first step, we iteratively delete unlinked entities and their triples from the source and target KGs until the two remaining subgraphs are one-to-one aligned.
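The first extraction step can be sketched as follows; this is a simplified, hypothetical reimplementation in plain Python (function and variable names are ours, not from the released dataset tooling):

```python
def prune_to_aligned_subgraphs(triples1, triples2, ills):
    """Iteratively delete entities without inter-language links (ILLs), and
    their triples, until the two remaining subgraphs are one-to-one aligned.

    triples1 / triples2: sets of (head, relation, tail) triples.
    ills: dict mapping each KG1 entity to its KG2 counterpart.
    """
    linked1 = set(ills)                      # KG1 entities with an ILL
    linked2 = set(ills.values())             # KG2 entities with an ILL
    while True:
        # Keep only triples whose head and tail are both still linked.
        triples1 = {t for t in triples1 if t[0] in linked1 and t[2] in linked1}
        triples2 = {t for t in triples2 if t[0] in linked2 and t[2] in linked2}
        ents1 = {t[0] for t in triples1} | {t[2] for t in triples1}
        ents2 = {t[0] for t in triples2} | {t[2] for t in triples2}
        # An ILL survives only if both endpoints still occur in some triple.
        survivors = {e for e in linked1 if e in ents1 and ills[e] in ents2}
        if survivors == linked1:             # fixed point: one-to-one aligned
            return triples1, triples2, {e: ills[e] for e in linked1}
        linked1 = survivors
        linked2 = {ills[e] for e in linked1}
```

Each deletion can strand further entities, which is why the pruning must iterate until a fixed point is reached.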
\nIn the second step, we remove entities such that the removed sets are disjoint between the two KGs, and the proportion of removed entities \ncomplies with the proportion of unaligned entities in each KG.\n\n\stitle{Statistics and evaluation} \n\Cref{tab:dataset} lists the statistics of our dataset. \nThe three entity alignment settings have different data scales, and each is much larger than its counterpart in DBP15K, \nand can thus better support scalability analysis of models.\nFor dangling entity detection, we split $30\%$ of dangling entities for training, $20\%$ for validation and the rest for testing. \nThe splits of reference alignment follow the same partition ratio,\nwhich is also consistent with that of DBP15K to simulate the weak alignment nature of KGs \cite{MTransE,JAPE}. \nWe also compare the degree distribution of matchable and dangling entities in our dataset against DBP15K in \Cref{fig:degree} of \Cref{appendix:degree}. \nWe find that the matchable and unlabeled entities in DBP15K have biased degree distributions, which has an adverse effect on dangling entity detection and leads to unrealistic evaluation. By contrast, in DBP2.0\xspace, matchable and dangling entities have similar degree distributions.\n\section{Entity Alignment with Dangling Cases}\n\label{sect:model}\n\nWe propose a multi-task learning framework for entity alignment with dangling cases, as illustrated in \Cref{fig:framework}. \nIt has two jointly optimized modules, i.e., entity alignment and dangling entity detection. \nThe entity alignment module takes as input relational triples of two KGs (for KG embedding) and seed entity alignment (for alignment learning). \nAs for the detection of dangling entities, the module uses a small number of labeled dangling entities\nto jump-start the learning of a confidence metric for distinguishing between matchable and dangling entities.
\nIn the inference stage for entity alignment, \nour framework is able to first identify and remove dangling entities, and then predict alignment for those that are decided to be matchable. \n\n\subsection{Entity Alignment}\nOur framework can incorporate any entity alignment technique.\nFor the sake of generality, we consider two representative techniques in our framework.\nOne technique is based on MTransE \cite{MTransE}, which is among the earliest studies for embedding-based entity alignment. \nIt employs the translational model TransE \cite{TransE} to embed KGs in separate spaces, and meanwhile jointly learns a linear transformation\nbetween the embedding spaces to match entity counterparts.\nSpecifically, given an entity pair $(x_1,x_2)\in\mathcal{A}_{12}$, let $\mathbf{x}_1$ and $\mathbf{x}_2$ be their embeddings learned by the translational model. \nMTransE learns the linear transformation induced by a matrix $\mathbf{M}$ by minimizing $\|\mathbf{M}\mathbf{x}_1\!-\!\mathbf{x}_2\|$, where $\|\!\cdot\!\|$ denotes the $L_1$ or $L_2$ norm.\n\nThe other technique is from AliNet \cite{AliNet}, which is one of the SOTA methods based on graph neural networks. \nAliNet encodes entities by performing multi-hop neighborhood aggregation, seeking to cope with the heteromorphism of their neighborhood structures. \nFor alignment learning, different from MTransE that only minimizes the transformed embedding distance, AliNet additionally optimizes a margin-based ranking loss\nfor entity counterparts with negative samples. \nSpecifically, let $x$ be a matchable source entity in the seed entity alignment, and let $x'$ be a randomly-sampled entity in the target KG; AliNet attempts to ensure $\|\mathbf{x}-\mathbf{x}'\|>\lambda_1>0$, where $\lambda_1$ is a distance margin.
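To make the two objectives concrete, the following is a minimal, illustrative plain-Python sketch (not the authors' implementation; vectors are plain lists, and the transformation matrix is a nested list):

```python
def l2(v):
    """Euclidean (L2) norm of a vector given as a list of floats."""
    return sum(c * c for c in v) ** 0.5

def transform(M, x):
    """Matrix-vector product Mx for the linear alignment transformation."""
    return [sum(m * c for m, c in zip(row, x)) for row in M]

def mtranse_alignment_loss(M, x1, x2):
    """MTransE-style objective: the distance ||Mx1 - x2|| for a seed pair."""
    return l2([a - b for a, b in zip(transform(M, x1), x2)])

def alinet_ranking_penalty(x, x_neg, margin):
    """AliNet-style margin term: penalize a negative pair (x, x') that is
    closer than the margin lambda_1, i.e. max(0, lambda_1 - ||x - x'||)."""
    return max(0.0, margin - l2([a - b for a, b in zip(x, x_neg)]))
```

The first loss drops to zero only when the transformed source embedding lands on its counterpart, while the second is zero whenever a negative pair is already separated by more than the margin.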
\n\n\begin{figure}[t]\n\t\centering\n\t\includegraphics[width=0.999\linewidth]{figs\/framework.pdf}\n\t\caption{Framework of entity alignment w\/ abstention.}\n\t\label{fig:framework}\n\end{figure}\n\n\subsection{Dangling Entity Detection}\label{sec:dangling}\nWe propose three techniques to implement the dangling detection module based on the distribution of nearest neighbor distances in the embedding space.\n\n\subsubsection{NN Classification}\nThis technique trains a binary classifier to distinguish between dangling entities (labeled $1$, i.e., $y=1$) and matchable ones ($y=0$). \nSpecifically, we experiment with a feed-forward network (FFN) classifier.\nGiven a source entity $x$, its input feature representation is the difference vector between its embedding $\mathbf{x}$ and its transformed NN embedding $\mathbf{x}_\text{nn}$ in the target KG embedding space\footnote{We use \emph{transformed nearest neighbor (NN)} to denote the NN of a source KG entity after it is transformed into the target embedding space.}. \nThe confidence of $x$ being a dangling entity is given by $p(y=1|x) = \text{sigmoid}(\text{FFN}(\mathbf{M}\mathbf{x}-\mathbf{x}_\text{nn}))$. \nLet $\mathcal{D}$ be the training set of dangling source entities and $\mathcal{A}$ denote the set of matchable entities in the training alignment data. \nFor every $x\in\mathcal{D}\cup\mathcal{A}$, we minimize the cross-entropy loss:\n\begin{equation}\label{eq:cross-entropy}\n\begin{split}\n\mathcal{L}_x = - \big(& y_x\log(p(y=1|x)) \\ &+ (1-y_x)\log(1-p(y=1|x))\big),\n\end{split}\n\end{equation}\nwhere $y_x$ denotes the truth label for entity $x$.
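For concreteness, the NNC confidence and a label-weighted variant of the cross-entropy above might look as follows in a minimal sketch (plain Python, with a single linear layer standing in for the full FFN; all names and weights are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dangling_confidence(Mx, x_nn, w, bias):
    """p(y=1|x): sigmoid of a (toy, single-layer) network applied to the
    difference vector between the transformed embedding Mx and its NN x_nn."""
    diff = [a - b for a, b in zip(Mx, x_nn)]
    return sigmoid(sum(wi * di for wi, di in zip(w, diff)) + bias)

def weighted_cross_entropy(p, y, w_pos=1.0, w_neg=1.0):
    """Cross-entropy over the dangling confidence p, with optional label
    weights to counter an unbalanced dangling/matchable distribution."""
    return -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1 - p))
```

With a zero difference vector (entity and NN coincide) the confidence is exactly 0.5, i.e., the classifier is maximally uncertain.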
In a real-world entity alignment scenario, the dangling entities and matchable ones usually differ greatly in quantity, leading to an unbalanced label distribution.\nIn that case, we apply label weights \cite{huang2016learning} to balance the losses for the two labels.\n\n\subsubsection{Marginal Ranking}\nConsidering that dangling entities are noise for finding entity alignment based on embedding distance, we are motivated to let dangling entities have solitary representations in the embedding space, i.e., they should keep a distance away from their surrounding embeddings. \nHence, we seek to put a distance margin between dangling entities and their sampled NNs. \nFor every input dangling entity $x\in\mathcal{D}$, we minimize the following loss:\n\begin{equation}\label{eq:margin}\n\mathcal{L}_x = \max(0, \lambda - \|\mathbf{M}\mathbf{x}-\mathbf{x}_\text{nn}\|),\n\end{equation}\nwhere $\lambda$ is a distance margin. \nThis loss and the entity alignment loss (e.g., that of MTransE) jointly conduct learning-to-rank, i.e., the distance between unaligned entities should be larger than that between aligned entities, while dangling entities should rank low in the candidate list of any source entity.\n\n\subsubsection{Background Ranking}\nIn the two aforementioned techniques,\nsearching for the NN of an entity is time-consuming.\nFurthermore, selecting an appropriate value for the distance margin of the second technique is not trivial. \nBased on empirical studies, we find that the margin has a significant influence on the final performance.\nHence, we would like to find a more efficient and self-driven technique. \nInspired by the open-set classification approach \cite{Network_Agnostophobia} that lets a classifier equally penalize the output logits for samples of classes that are unknown to training (i.e. 
\emph{background classes}), \nwe follow a similar principle and let the model equally enlarge the distance of a dangling entity from any sampled target-space entities.\nThis method treats all dangling entities as the ``background" of the embedding space, since they should be distant from matchable ones.\nWe also decrease the scale of the dangling entity embeddings to further provide a separation between the embeddings of matchable and dangling entities.\nFor a dangling entity $x\in\mathcal{D}$, let $X^v_x$ be the set of randomly-sampled target entities of size $v$. The loss is defined as\n\begin{equation}\label{eq:background}\n\mathcal{L}_x = \sum_{x'\in X^v_x} \big|\lambda_x - \|\mathbf{M}\mathbf{x}-\mathbf{x}'\|\big| + \alpha\|\mathbf{x}\|,\n\end{equation}\nwhere $|\cdot|$ denotes the absolute value and $\alpha$ is a weight hyper-parameter for balance. $\lambda_x$ is the average distance, i.e., $\lambda_x = \frac{1}{v} \sum_{x'\in X^v_x} \| \mathbf{M}\mathbf{x}-\mathbf{x}'\|$. \nThis objective can push the relatively close entities away from the source entity without requiring a pre-defined distance margin.\n\n\subsection{Learning and Inference}\nThe overall learning objective of the proposed framework is a combination of the entity alignment loss (e.g., MTransE's loss) and one of the dangling entity detection losses described above.
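As a reference point, the marginal ranking and background ranking detection losses can be sketched in plain Python as follows (illustrative only; a real implementation would operate on batched tensors):

```python
def l2(v):
    """Euclidean (L2) norm of a vector given as a list of floats."""
    return sum(c * c for c in v) ** 0.5

def marginal_ranking_loss(Mx, x_nn, margin):
    """Marginal ranking: push a dangling entity at least `margin` away
    from its transformed nearest neighbor x_nn."""
    return max(0.0, margin - l2([a - b for a, b in zip(Mx, x_nn)]))

def background_ranking_loss(Mx, sampled_targets, x_raw, alpha):
    """Background ranking: equalize the distances to v sampled target
    entities around their mean (the induced margin lambda_x), plus an
    alpha-weighted norm that shrinks the raw dangling embedding x_raw."""
    dists = [l2([a - b for a, b in zip(Mx, x)]) for x in sampled_targets]
    lam = sum(dists) / len(dists)            # induced, data-driven margin
    return sum(abs(lam - d) for d in dists) + alpha * l2(x_raw)
```

Note that the background ranking loss vanishes when all sampled distances are equal and the raw embedding has zero norm, which mirrors its design goal of equal separation without a pre-defined margin.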
\nThe two losses are optimized in alternate batches.\nMore training details are presented in \Cref{sec:config}.\n\nLike the training phase,\nthe inference phase is also separated into dangling entity detection and entity alignment.\nThe way of inferring dangling entities differs depending on the employed technique.\nThe NN classification uses the jointly trained FFN classifier to estimate whether an input entity is a dangling one.\nThe marginal ranking takes the preset margin value in training as a confidence threshold, and decides whether an entity is a dangling one based on whether its transformed NN distance is higher than the threshold.\nThe inference of background ranking is similar to that of marginal ranking, with the only difference that, by design, the confidence threshold is set as the average NN distance of entities in the target embedding space.\nAfter detecting dangling entities, the framework finds alignment among the remaining entities based on the transformed NN search over the matchable entities in the embedding space of the target KG.\n\n\stitle{Accelerated NN search}\nThe first and second techniques need to search for NNs. We use the efficient similarity search library Faiss \cite{faiss} for fast NN retrieval in a large embedding space. \nWe also maintain a cache that stores the NNs of entities in the background and update it every ten training epochs.\n\n\section{Experiments}\nIn this section, we report our experimental results.\nWe start by describing the experimental setups (\Cref{sec:setting}).
Next, we separately present the experimentation under two different evaluation settings (\Cref{sec:relaxed}-\Cref{sec:consolidated}), followed by an analysis of the similarity score distribution of the obtained representations for matchable and dangling entities (\Cref{sect:viz}).\nTo facilitate the use of the contributed dataset and software, we have incorporated these resources into the OpenEA benchmark\footnote{\url{https:\/\/github.com\/nju-websoft\/OpenEA}}~\cite{OpenEA}.\n\n\begin{table*}[!t]\n\t\centering\n\n\n\t{\small\n\t\setlength{\tabcolsep}{1pt}\n\t\t\begin{tabular}{lcccccccccccccccccc}\n\t\t\t\toprule\n\t\t\t\multirow{2}{*}{Methods} &\n\t\t\t\multicolumn{3}{c}{ZH-EN} & \multicolumn{3}{c}{EN-ZH} & \multicolumn{3}{c}{JA-EN} & \multicolumn{3}{c}{EN-JA} & \multicolumn{3}{c}{FR-EN} & \multicolumn{3}{c}{EN-FR}\\\n\t\t\t\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} \cmidrule(lr){14-16} \cmidrule(lr){17-19}\n\t\t\t& H@1 & H@10 & MRR & H@1 & H@10 & MRR & H@1 & H@10 & MRR & H@1 & H@10 & MRR & H@1 & H@10 & MRR & H@1 & H@10 & MRR \\ \n\t\t\t\midrule\n\t\t\tMTransE & {.358} & {.675} & {.463} & .353 & {.670} & {.461} & {.348} & {.661} & {.453} & .342 & {.670} & .452 & {.245} & {.524} & {.338} & {.247} & {.531} & {.342} \\\n\t\t \;\;w\/ NNC & .350 & .668 & .457 & .356 & .664 & .460 & .340 & .657 & .441 & .336 & .630 & .445 & .253 & .539 & .343 & .251 & .536 & .343 \\\n\t\t \n\t\t\t\;\;w\/ MR & \textbf{.378} & \textbf{.693} & \textbf{.487} & \textbf{.383}& \textbf{.699} & \textbf{.491} & \textbf{.373} & \textbf{.686} & \textbf{.476} & \textbf{.374} & \textbf{.707} & \textbf{.485} & \textbf{.259} & \textbf{.541} & \textbf{.348} & \textbf{.265} & \textbf{.553} & \textbf{.360} \\\n\t\t\t\;\;w\/ BR & .360 & .678 & .468 & .357 & .675 & .465 & .344 & .660 & .451 & .346 & .675 & .456 & .251 & .525 & .342 & .249 & .531 & .343 \\\n\t\t\t\midrule\n\t\t\tAliNet & .332 & .594 & .421 & 
{.359} & .629 & .451 & .338 & .596 & .429 & {.363} & .630 & {.455} & .223 & .473 & .306 & .246 & .495 & .329 \\\n\t\t\t\;\;w\/ NNC & .321 & .598 & .415 & .335 & .608 & .428 & .330 & .602 & .422 & .344 & .627 & .439 & .212 & .467 & .294 & .230 & .476 & .312 \\\n\t\t\t\;\;w\/ MR & \textbf{.343} & \textbf{.606} & \textbf{.433} & \textbf{.364} & \textbf{.637} & \textbf{.459} & \textbf{.349} & \textbf{.608} & \textbf{.438} & \textbf{.377} & \textbf{.646} & \textbf{.469} & \textbf{.230} & \textbf{.477} & \textbf{.312} & \textbf{.252} & \textbf{.502} & \textbf{.335}\\\n\t\t\t\;\;w\/ BR & .333 & .599 & .426 & .357 & .632 & .451 & .341 & \textbf{.608} & .431 & .369 & .636 & .461 & .214 & .468 & .298 & .238 & .487 & .321 \\\n\t\t\t\bottomrule\n\t\end{tabular}}\n\t\caption{Entity alignment results (relaxed setting) of MTransE and AliNet on DBP2.0\xspace.}\n\t\label{tab:synthetic_ent_alignment}\n\end{table*}\n\n\subsection{Experimental Settings}\label{sec:setting}\nWe consider two evaluation settings. \nOne setting is for the proposed problem setting with dangling entities, which we refer to as the \emph{consolidated evaluation setting}. \nWe first detect and remove the dangling source entities and then search alignment for the remaining entities.\nFor this evaluation setting, we also separately assess the performance of the dangling detection module.\nThe other, simplified setting follows that of previous studies \cite{JAPE,OpenEA} where the source entities in the test set all have counterparts in the target KG, so no dangling source entities are considered. \nIn this \emph{relaxed evaluation setting}, we seek to evaluate the effect of dangling entity detection on entity alignment and make our results comparable to previous work. \n\n\stitle{Evaluation Protocol}\nFor the \emph{relaxed evaluation setting}, given each source entity, the candidate counterpart list is selected via NN search in the embedding space.
\nThe widely-used metrics on the ranking lists are Hits@$k$ ($k=1,10$, H@$k$ for short) and mean reciprocal rank (MRR).\nHigher H@$k$ and MRR indicate better performance.\n\nFor the \emph{consolidated setting},\nwe report precision, recall and F1 for dangling entity detection.\nAs for assessing the eventual performance of realistic entity alignment, since the dangling entity detection may not be perfect,\nit is inevitable that some dangling entities are incorrectly sent to the entity alignment module for alignment, while some matchable ones may be wrongly excluded.\nIn this case, H@$k$ and MRR are not applicable for the consolidated entity alignment evaluation. \nFollowing a relevant evaluation setting for entity resolution in databases \cite{DL4ER,EmbedER}, we also use precision, recall and F1 as metrics.\nMore specifically, if a source entity is dangling and is not identified by the detection module, the prediction is always regarded as incorrect. \nSimilarly, if a matchable entity is falsely excluded by the dangling detection module, this test case is also regarded as incorrect since the alignment model has no chance to search for alignment. \nOtherwise, the alignment module searches for the NN of a source entity in the target embedding space and assesses whether the predicted counterpart is correct.
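A minimal sketch of this consolidated scoring scheme (our own illustrative helper, not the official evaluation script) is:

```python
def consolidated_scores(test_cases, is_dangling, predict):
    """test_cases: list of (source, gold) pairs, where gold is the true
    counterpart or None for a dangling source.
    is_dangling(source) -> bool: the dangling detection module.
    predict(source) -> entity: NN search in the target embedding space."""
    correct = attempted = matchable = 0
    for src, gold in test_cases:
        if gold is not None:
            matchable += 1
        if is_dangling(src):
            continue      # excluded; a falsely excluded matchable hurts recall
        attempted += 1    # an undetected dangling entity here hurts precision
        if gold is not None and predict(src) == gold:
            correct += 1
    prec = correct / attempted if attempted else 0.0
    rec = correct / matchable if matchable else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

In this scheme, both detection mistakes (missed dangling sources and falsely excluded matchable ones) lower the alignment scores, which is exactly the coupling the consolidated setting is meant to measure.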
\n\n\begin{figure}[t]\n\t\centering\n\t\includegraphics[width=.96\linewidth]{figs\/data_comp.pdf}\n\t\caption{Average neighbor overlap ratio of aligned entities in DBP15K and our DBP2.0\xspace.}\n\t\label{fig:data_comp}\n\end{figure}\n\n\begin{table*}[!t]\n\t\centering\n\n\n\t{\n\t\small\n\t\setlength{\tabcolsep}{3pt}\n\t\t\begin{tabular}{llcccccccccccccccccc}\n\t\t\t\toprule\n \multicolumn{2}{c}{\multirow{2}{*}{Methods}} &\n\t\t\t\multicolumn{3}{c}{ZH-EN} & \multicolumn{3}{c}{EN-ZH} & \multicolumn{3}{c}{JA-EN} & \multicolumn{3}{c}{EN-JA} & \multicolumn{3}{c}{FR-EN} & \multicolumn{3}{c}{EN-FR}\\\n\t\t\t\cmidrule(lr){3-5} \cmidrule(lr){6-8} \cmidrule(lr){9-11} \cmidrule(lr){12-14} \cmidrule(lr){15-17} \cmidrule(lr){18-20}\n\t\t\t&& Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\ \n\t\t\t\midrule\n\t\t\t\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{{\scriptsize MTransE}}}} \n\t\t\t& NNC & .604 & .485 & .538 & .719 & .511 & .598 & .622 & .491 & .549 & .686 & .506 & .583 & .459 & .447 & .453 & .557 & .543 & .550 \\\n\t\t\t& MR & .781 & .702 & .740 & .866 & .675 & .759 & .799 & .708 & .751 & .864 & .653 & .744 & .482 & .575 & .524 & .639 & .613 & .625 \\\n\t\t\t& BR & \textbf{.811} & \textbf{.728} & \textbf{.767} & \textbf{.892} & \textbf{.700} & \textbf{.785} & \textbf{.816} & \textbf{.733} & \textbf{.772} & \textbf{.888} & \textbf{.731} & \textbf{.801} & \textbf{.539} & \textbf{.686} & \textbf{.604} & \textbf{.692} & \textbf{.735} & \textbf{.713} \\\n\t\t\t\midrule\n\t\t\t\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{{\small AliNet}}}}\n\t\t\t& NNC & .676 & .419 & .517 & .738 & .558 & .634 & .597 & .482 & .534 & .761 & .120 & .207 & .466 & .365 & .409 & .545 & .162 & .250 \\\n\t\t\t& MR & .752 & .538 & .627 & .828 & .505 & .627 & .779 & .580 & .665 & \textbf{.854} & .543 & \textbf{.664} & \textbf{.552} & 
\textbf{.570} & \textbf{.561} & \textbf{.686} & .549 & \textbf{.609} \\\n\t\t\t& BR & \textbf{.762} & \textbf{.556} & \textbf{.643} & \textbf{.829} & \textbf{.515} & \textbf{.635} & \textbf{.783} & \textbf{.591} & \textbf{.673} & .846 & \textbf{.546} & .663 & .547 & .556 & .552 & .674 & \textbf{.556} & \textbf{.609} \\\n\t\t\t\bottomrule\n\t\end{tabular}}\n\t\caption{Dangling entity detection results on DBP2.0\xspace.}\n\t\label{tab:detection}\n\end{table*}\n\n\stitle{Model Configuration}\label{sec:config}\nAs described in \Cref{sec:dangling}, our dangling detection module has three variants, i.e., NN classification (NNC), marginal ranking (MR), and background ranking (BR). \nWe report the implementation details of the entity alignment module (w\/ MTransE or AliNet) in \Cref{appendix:config,appendix:setting}.\nWe initialize KG embeddings and model parameters using the Xavier initializer \cite{Xavier}, and use Adam \cite{Adam} to optimize the learning objectives with the learning rate $0.001$ for MTransE and $0.0005$ for AliNet. \nNote that we do not follow some methods in initializing with machine-translated entity name embeddings \cite{NMN_acl20}.\nAs pointed out by recent studies \cite{JEANS,EVA,AttrGNN}, this is necessary to prevent test data leakage.\nEntity similarity is measured by cross-domain similarity local scaling \cite{CSLS} for reduced hubness effects, consistent with recent studies \cite{AliNet,JEANS}.\nWe use a two-layer FFN in NNC.
\nFor MR, the margin is set as $\\lambda=0.9$ for MTransE and $0.2$ for AliNet.\nBR randomly samples $20$ target entities for each entity per epoch and $\\alpha=0.01$.\nTraining is terminated based on F1 results of entity alignment on validation data.\n\n\\begin{table*}[!t]\n\t\\centering\n\n\n\t{\n\t\\small\n\t\\setlength{\\tabcolsep}{3pt}\n\t\t\\begin{tabular}{llcccccccccccccccccc}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{2}{c}{\\multirow{2}{*}{Methods}} &\n\t\t\t\\multicolumn{3}{c}{ZH-EN} & \\multicolumn{3}{c}{EN-ZH} & \\multicolumn{3}{c}{JA-EN} & \\multicolumn{3}{c}{EN-JA} & \\multicolumn{3}{c}{FR-EN} & \\multicolumn{3}{c}{EN-FR}\\\\\n\t\t\t\\cmidrule(lr){3-5} \\cmidrule(lr){6-8} \\cmidrule(lr){9-11} \\cmidrule(lr){12-14} \\cmidrule(lr){15-17} \\cmidrule(lr){18-20}\n\t\t\t&& Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\\\ \n\t\t\t\\midrule\n\t\t\t\\parbox[t]{2mm}{\\multirow{3}{*}{\\rotatebox[origin=c]{90}{{\\scriptsize MTransE}}}} \n\t\t\t& NNC & .164 & .215 & .186 & .118 & .207 & .150 & .180 & .238 & .205 & .101 & .167 & .125 & .185 & .189 & .187 & .135 & .140 & .138 \\\\\n\t\t\t& MR & .302 & .349 & .324 & .231 & .362 & .282 & .313 & \\textbf{.367} & \\textbf{.338} & .227 & \\textbf{.366} & .280 & .260 & \\textbf{.220} & \\textbf{.238} & .213 & \\textbf{.224} & .218 \\\\\n\t\t\t& BR & \\textbf{.312} & \\textbf{.362} & \\textbf{.335} & \\textbf{.241} & \\textbf{.376} & \\textbf{.294} & \\textbf{.314} & .363 & .336 & \\textbf{.251} & .358 & \\textbf{.295} & \\textbf{.265} & .208 & .233 & \\textbf{.231} & .213 & \\textbf{.222} \\\\\n\t\t\t\\midrule\n\t\t\t\\parbox[t]{2mm}{\\multirow{3}{*}{\\rotatebox[origin=c]{90}{{\\small AliNet}}}} \n\t\t\t& NNC & .121 & .193 & .149 & .085 & .138 & .105 & .113 & .146 & .127 & .067 & .208 & .101 & .126 & .148 & .136 & .086 & .161 & .112 \\\\\n\t\t\t& MR & \\textbf{.207} & \\textbf{.299} & \\textbf{.245} & \\textbf{.159} & \\textbf{.320} & \\textbf{.213} & 
\textbf{.231} & \textbf{.321} & \textbf{.269} & \textbf{.178} & \textbf{.340} & \textbf{.234} & \textbf{.195} & \textbf{.190} & \textbf{.193} & .160 & \textbf{.200} & .178 \\\n\t\t\t& BR & .203 & .286 & .238 & .155 & .308 & .207 & .223 & .306 & .258 & .170 & .321 & .222 & .183 & .181 & .182 & \textbf{.164} & \textbf{.200} & \textbf{.180} \\\n\t\t\t\bottomrule\n\t\end{tabular}}\n\t\caption{Entity alignment results on DBP2.0\xspace.}\n\t\label{tab:ent_alignment}\n\end{table*}\n\n\subsection{Relaxed Evaluation}\label{sec:relaxed}\nWe first present the evaluation under the relaxed entity alignment setting based on \Cref{tab:synthetic_ent_alignment}.\nThis setting only involves matchable source entities in testing entity alignment,\nwhich is an ideal (but less realistic) scenario similar to prior studies \cite{OpenEA}. \nWe also examine whether jointly learning to detect dangling entities can indirectly improve alignment.\n\nAs observed,\nMTransE, even without dangling detection, can achieve promising performance on DBP2.0\xspace. \nThe results are even better than those on DBP15K as reported by \citet{JAPE}.\nWe attribute this phenomenon to the robustness of this simple embedding method and our improved implementation (e.g., more effective negative sampling).\nBy contrast, despite our best efforts in tuning, the latest GNN-based AliNet falls behind MTransE.\nUnlike MTransE, which learns entity embeddings from a first-order perspective (i.e., based on triple plausibility scores), AliNet represents an entity from a high-order perspective by aggregating its neighbor embeddings, and entities with similar neighborhood structures would have similar representations.\nHowever, the dangling entities in DBP2.0\xspace inevitably introduce noise into entity neighborhoods.\nTo further probe into this issue, we count the average neighbor overlap ratio of aligned entities in DBP15K and our DBP2.0\xspace.
Given an entity alignment pair ($x_1,x_2$), let $\pi(x_1)$ and $\pi(x_2)$ be the sets of their neighboring entities,\nrespectively, where we also merge their aligned neighbors into one identity based on reference entity alignment. \nThen the neighbor overlap ratio of $x_1$ and $x_2$ is calculated as $|\pi(x_1)\cap \pi(x_2)|\/|\pi(x_1)\cup \pi(x_2)|$. \nWe average this ratio over both DBP15K and DBP2.0\xspace, as shown in \Cref{fig:data_comp}.\nWe can see that the overlap ratios of the three settings in DBP2.0\xspace are all much lower than those in DBP15K.\nThus, DBP2.0\xspace poses additional challenges compared to DBP15K, specifically for those methods relying on neighborhood aggregation.\nBased on these results and analysis, we argue that methods performing well on the previous synthetic entity alignment dataset may not robustly generalize to the more realistic dataset with dangling cases.\nThe performance of both MTransE and AliNet is relatively poor on FR-EN, \nwhich has more entities (i.e., a larger candidate search space) and a lower neighborhood overlap ratio (which makes it more difficult to match entities based on neighborhood similarity).\n\nMeanwhile,\nwe find that the dangling detection module can affect the performance of entity alignment.\nIn detail, MR consistently leads to improvements for both MTransE and AliNet. \nBR can also noticeably boost entity alignment in most settings.\nThis shows that learning to isolate dangling entities from matchable ones naturally provides indirect help in discriminating the counterpart of a matchable entity from irrelevant ones.\nOn the other hand, such indirect supervision signals may be consumed by the additional trainable parameters in NNC, causing its effect on entity alignment to be negligible.
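For reference, the neighbor overlap ratio used in the analysis above can be sketched as follows (an illustrative helper; names are ours):

```python
def neighbor_overlap_ratio(neighbors1, neighbors2, alignment):
    """Jaccard overlap of an aligned pair's neighbor sets, after merging
    aligned neighbors into one identity via `alignment` (a dict mapping
    KG1 entities to their KG2 counterparts)."""
    merged = {alignment.get(e, e) for e in neighbors1}  # map to KG2 identities
    union = merged | neighbors2
    return len(merged & neighbors2) / len(union) if union else 0.0
```

A neighbor of the source entity only counts toward the intersection if its counterpart also neighbors the target entity, so dangling neighbors drive the ratio down.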
\nOverall, the observations here call for more robust entity alignment methods and dangling detection techniques, and lead to further analysis (\Cref{sec:consolidated}).\n\n\n\begin{figure}[t]\n\t\centering\n\t\includegraphics[width=.96\linewidth]{figs\/accuracy.pdf}\n\t\caption{Accuracy of dangling entity detection.}\n\t\label{fig:accuracy}\n\end{figure}\n\n\begin{figure}[t]\n\t\centering\n\t\includegraphics[width=.96\linewidth]{figs\/time.pdf}\n\t\caption{Average training time (seconds) of one epoch for dangling entity detection (MTransE variants).}\n\t\label{fig:time}\n\end{figure}\n\n\subsection{Consolidated Evaluation}\label{sec:consolidated}\nWe now report the experiments under the more realistic consolidated evaluation setting.\n\Cref{tab:detection} gives the precision, recall and F1 results of dangling entity detection, and the final entity alignment performance is presented in \Cref{tab:ent_alignment}. \nIn addition, \Cref{fig:accuracy} shows the accuracy of dangling entity detection.\nWe analyze the results from the following aspects.\n\n\stitle{Dangling entity detection} \nRegardless of which alignment module is incorporated,\nNNC performs the worst (e.g., low recall and accuracy around $0.5$) \namong the dangling detection techniques, whereas BR generally performs the best. \nNNC determines whether an entity is dangling based on the difference vector between the entity embedding and its NN, instead of directly capturing the embedding distance, which the results of the other two techniques suggest to be more important.\nBy directly pushing dangling entities away from their NNs in the embedding space, both MR and BR offer much better performance. \nBesides, BR outperforms MR in most cases.
\nBy carefully checking their prediction results and the actual distances of NNs, we find that the induced distance margin in BR better discriminates dangling entities from matchable ones than the pre-defined margin.\n\n\stitle{Efficiency} \nWe compare the average epoch time of training the three dangling detection modules for MTransE in \Cref{fig:time}. We conduct the experiment using a workstation with an Intel Xeon E5-1620 3.50GHz CPU and a NVIDIA GeForce RTX 2080 Ti GPU. \nSince NNC and MR need to search for the NNs of source entities, both techniques require much more training time, which is saved by the random sampling in BR. Overall, BR is an effective and efficient technique for dangling entity detection.\n\n\stitle{Entity alignment}\nGenerally, for both MTransE and AliNet variants, MR and BR lead to better entity alignment results than NNC. \nMR and BR obtain higher precision and recall in detecting dangling entities, as listed in \Cref{tab:detection}, resulting in less noise entering the entity alignment stage.\nBy contrast, NNC has a low accuracy and thus introduces much noise.\nAs BR outperforms MR in dangling detection, it also achieves higher entity alignment results than MR in most settings.\nWe also notice that, in a few settings, MR offers comparable or slightly better performance than BR.\nThis is because MR can enhance the learning of alignment modules (see \Cref{sec:relaxed} for detailed analysis), thus delivering improvement to the final performance. \nMTransE variants generally outperform AliNet variants in both entity alignment (see \Cref{tab:synthetic_ent_alignment}) and dangling entity detection (see \Cref{tab:detection}), similar to the observation in \Cref{sec:relaxed}.\n\n\stitle{Alignment direction} \nWe find that the alignment direction makes a difference in both dangling entity detection and entity alignment.
\nUsing the EN KG as the source makes dangling detection easier than in the other languages,\nas the most populated EN KG contributes more dangling entities and triples to training than other KGs.\nAs for entity alignment, we find the observation to be quite the opposite, as using the EN KG as a source leads to noticeable drops in results.\nFor example, the precision of MTransE-BR is $0.312$ on ZH-EN, but only $0.241$ on EN-ZH.\nThis is because the EN KG has a larger portion of dangling entities. \nAlthough the dangling detection module performs better on the EN KG than on others, there are still many more dangling entities entering the alignment search stage, thus reducing the entity alignment precision.\nThis observation suggests that choosing the alignment direction from a less populated KG to the more populated EN KG can be a more effective solution.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=.96\\linewidth]{figs\/histogram.pdf}\n\t\\caption{Kernel density estimate plot of the test {\\color{blue}{matchable}} and {\\color{red}{dangling}} entities' similarity distribution with their nearest target neighbors in ZH-EN.}\n\t\\label{fig:viz}\n\\end{figure}\n\n\\subsection{Similarity Score Distribution}\\label{sect:viz}\nTo illustrate how well the BR technique distinguishes between matchable and dangling entities, we plot in \\Cref{fig:viz} the distribution of similarity scores of each test entity and its NN. 
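Computing such per-entity NN similarity scores is straightforward. A toy sketch with random stand-in embeddings (the matrices, their shapes, and the use of cosine similarity are our assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned embeddings (assumptions, not the paper's data):
# rows are L2-normalised entity vectors of the source and target KGs.
src = rng.normal(size=(100, 32))
tgt = rng.normal(size=(500, 32))
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

sim = src @ tgt.T          # cosine similarity of every source-target pair
nn_sim = sim.max(axis=1)   # each entity's similarity to its nearest neighbour

# A kernel density estimate of nn_sim, split into matchable vs dangling
# entities, yields a plot of the kind described in this section.
print(nn_sim.shape)
```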
\nThe plot illustrates that BR has the expected effect of isolating dangling entities from their NNs, whereas matchable entities are generally placed closer to their NNs.\nYet, we can still see a modest overlap between the two NN similarity distributions of dangling and matchable entities, and a number of dangling entities still have quite a large NN similarity.\nThis also reveals that the proposed problem setting of entity alignment with dangling cases has many remaining challenges that await further investigation.\n\n\\section{Related Work}\nWe discuss two topics of relevant work.\n\n\\subsection{Entity Alignment}\nEmbedding-based entity alignment was first attempted in MTransE \\cite{MTransE}, which jointly learns a translational embedding model and a transform-based alignment model for two KGs.\nLater studies generally follow three lines of improvement.\n(\\romannumeral1) The first line improves the embedding technique to better suit the alignment task, including\ncontextual translation techniques \\cite{transedge}, long-term dependency techniques \\cite{RSN} and neighborhood aggregation (or GNN-based) ones \\cite{GCN_Align,MuGNN,KECG,AliNet,HyperKA,GraphMatch_iclr20}. \n(\\romannumeral2) The second line focuses on effective alignment learning with limited supervision. 
\nSome leverage semi-supervised learning techniques to resolve the training data insufficiency issue, including self-learning \\cite{BootEA,MRAEA} and co-training \\cite{KDCoE}.\n(\\romannumeral3) Another line of research seeks to retrieve auxiliary or indirect supervision signals from profile information or side features of entities, such as entity attributes \\cite{JAPE,AttrE,MultiKE,SEA}, literals \\cite{HGCN,NMN,AttrGNN}, free text \\cite{JEANS}, pre-trained language models \\cite{HMAN,BERTINT} or visual modalities \\cite{EVA}.\nDue to the large body of recent advances, we refer readers to a more comprehensive summarization in the survey \\cite{OpenEA}.\n\n\\subsection{Learning with Abstention}\nLearning with abstention is a fundamental machine learning problem, where the learner can opt to abstain from making a prediction when it lacks sufficient confidence \\cite{BoostingAbstention,Abstention}. \nRelated techniques include thresholding softmax \\cite{Stefano_reject}, selective classification \\cite{SelectiveClassification}, open-set classification with background classes \\cite{Network_Agnostophobia} and out-of-distribution detection \\cite{liang2018enhancing,vyas2018out}. 
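As a concrete illustration of the simplest of these techniques, thresholding softmax can be sketched in a few lines (the threshold value and the abstention marker are arbitrary assumptions for this example):

```python
import numpy as np

def predict_with_abstention(logits, tau=0.8):
    # Thresholding softmax (schematic): abstain whenever the top-class
    # probability falls below a confidence threshold tau.
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    probs = z / z.sum(axis=-1, keepdims=True)
    top = probs.max(axis=-1)
    pred = probs.argmax(axis=-1)
    return np.where(top >= tau, pred, -1)   # -1 marks abstention

logits = np.array([[4.0, 0.0, 0.0],   # confident -> predict class 0
                   [0.6, 0.5, 0.4]])  # uncertain -> abstain
print(predict_with_abstention(logits))  # -> [ 0 -1]
```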
The idea of learning with abstention also has applications in NLP, such as unanswerable QA, where correct answers to some questions are not stated in the given reference text \\cite{SQuAD2_acl2018,AskNAQA_acl2019,ReadVerifyQA}.\n\nTo the best of our knowledge, our task, dataset, and the proposed dangling detection techniques are the first contribution to support learning with abstention for entity alignment and structured representation learning.\n\\section{Conclusion and Future Work}\nIn this paper, we propose and study a new entity alignment task with dangling cases.\nWe construct a dataset to support the study of the proposed problem setting, and design a multi-task learning framework for both entity alignment and dangling entity detection.\nThree types of dangling detection techniques are studied, which are based on nearest neighbor classification, marginal ranking, and background ranking. \nComprehensive experiments demonstrate the effectiveness of the method, and provide insights to foster further investigation on this new problem.\nWe further find that dangling entity detection can, in turn, effectively provide auxiliary supervision signals to improve the performance of entity alignment.\n\nFor future work, we plan to extend the benchmarking on DBP2.0 with results from more base models of entity alignment as well as more abstention inference techniques.\nExtending our framework to support more prediction tasks with abstention, such as entity type inference \\cite{JOIE} \nand relation extraction \\cite{alt2020tacred}, is another direction with potentially broad impact.\n\n\\section*{Acknowledgments} \nWe thank the anonymous reviewers for their insightful comments. \nThis work is supported by the National Natural Science Foundation of China (No. 
61872172), and the Collaborative Innovation Center of Novel Software Technology \\& Industrialization.\nMuhao Chen's work is supported by the National Science Foundation of the United States Grant IIS-2105329.\n\\section{Degree Distribution}\n\\label{appendix:degree}\n\\Cref{fig:degree} shows the degree distribution of the matchable and dangling entities in our dataset against DBP15K. \nAlthough DBP15K contains some entities that are not labeled to have counterparts, by checking the ILLs in the recent update of DBpedia, we find many of these entities to have counterparts in the target KG. Hence, these entities in DBP15K cannot act as dangling entities that are key to the more realistic evaluation protocol being proposed in this work.\nFrom the comparison, we can see that these unlabeled entities in DBP15K have far fewer triples than matchable entities. \nThis biased degree distribution will have an adverse effect on dangling entity detection and lead to unrealistic evaluation. By contrast, in our dataset, matchable and dangling entities have similar degree distributions.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{figs\/degree.pdf}\n\t\\caption{Degree distribution of matchable and dangling entities in DBP15K FR-EN and our FR-EN.}\n\t\\label{fig:degree}\n\\end{figure}\n\n\\section{Configuration of MTransE and AliNet}\n\\label{appendix:config}\nFor entity alignment, we experiment with MTransE \\cite{MTransE} and the SOTA method AliNet \\cite{AliNet}. \nThe implementation of our framework is extended based on OpenEA \\cite{OpenEA}. We adopt the truncated negative sampling method by BootEA \\cite{BootEA} to generate negative triples for MTransE and negative alignment links for AliNet, which leads to improved performance. The embedding size is $128$ for MTransE and $256$ for AliNet. The batch size of MTransE is $20,480$ on ZH-EN and JA-EN, and $102,400$ on FR-EN. The batch size of AliNet is $8,192$ on ZH-EN and JA-EN, and $20,480$ on FR-EN. 
We set $\\lambda_1=1.4$ in AliNet.\n\n\\section{Hyper-parameter Settings}\n\\label{appendix:setting}\nWe select each hyper-parameter setting within a wide range of values as follows:\n\n\\begin{noindlist}\n \\item Learning rate: $\\{0.0001, 0.0002, 0.0005, 0.001\\}$\n \\item Embedding dimension: $\\{64, 128, 256, 512\\}$\n \\item Batch size: $\\{4096, 8192, 10240, 20480, 102400\\}$\n \\item \\# FNN layers: $\\{1, 2, 3, 4\\}$\n \\item \\# Random targets: $\\{1, 10, 20, 30, 40, 50\\}$\n \\item $\\lambda$: $\\{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0\\}$\n\\end{noindlist}\n\n\\section{Recall@10 of Entity Alignment}\n\\label{appendix:recall}\n\\Cref{fig:recall} gives the recall@10 results of the MTransE variants with dangling entity detection in the consolidated evaluation setting. We can see that the recall@10 results on FR-EN are lower than those on ZH-EN and JA-EN, which is similar to the observation for entity alignment in \\Cref{sec:consolidated}. Based on these results, we believe existing embedding-based entity alignment methods are still far from being usable in practice. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=.95\\linewidth]{figs\/recallk.pdf}\n\t\\caption{Recall@10 results of entity alignment.}\n\t\\label{fig:recall}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhora b/data_all_eng_slimpj/shuffled/split2/finalzzhora new file mode 100644 index 0000000000000000000000000000000000000000..ec807116d22f48fb920b0d10e68a90fa4ff33090 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhora @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn the centre of most galaxies lies a supermassive black hole (SMBH) with a mass exceeding $10^5M_{\\odot}$ \\citep{reines14}. It is believed that SMBH binaries in galactic nuclei form by successive coalescences following the merger of their respective host halos, and by accretion of gas present in the central regions of galaxies. 
The mechanisms assumed to form a SMBH binary after the merger of two galaxies are i) binary shrinking due to stellar or gas dynamical processes; followed by ii) coalescence due to low frequency gravitational wave emission ($\\leq 10^{-3}$ Hz) \\citep{mayeretal07, fillouxetal}. If the interacting galaxies are rich in gas, an accretion disc will form from gas rapidly falling towards the black holes \\citep{armitage02}. \n\nThe interaction between the SMBH binary and the surrounding material may be observable. Gas plays a key role in the binary merger process. It also powers the black hole luminosity via accretion. SMBH mergers are thus promising for multi-messenger astronomy since they produce an electromagnetic counterpart to a gravitational wave burst.\n\n \\citet{armitage02} showed, using 1D simulations, that rapid accretion of the inner disc due to the tidal effect of the merging secondary causes an accretion rate exceeding the Eddington limit prior to the merger of the SMBH binary. By contrast, the 2D hydrodynamical simulations of \\citet{baruteal12} suggested that such a `snowplough mechanism' is unlikely to work, because the binary shrinking time driven by gravitational waves is shorter than the viscous time-scale, meaning that fluid elements in the inner disc get funneled to the outer disc across the secondary gap via horseshoe orbits. However, 3D simulations carried out by \\citet{ceriolietal} demonstrated the presence of a squeezing mechanism caused by the compression of the inner disc gas as the secondary companion spirals towards the more massive black hole. The resulting accretion luminosity in the final stages of the merger exceeded the Eddington rate.\n\nRecent detections of gravitational waves have provided tests of General Relativity and direct measurements of compact objects parameters like mass and spin. 
These have been demonstrated in detections of gravitational radiation from stellar-mass black hole binaries (events GW150914, GW151226, GW170104, GW170814 and GW170608) and by the first direct evidence of a link between a neutron star merger and short $\\gamma$-ray bursts (GW170817) detected by LIGO and Virgo observatories \\citep{abbott01, abbott02, abbott03}. However, the detection of gravitational waves from SMBH binaries will be possible only with future space-based detectors, such as LISA \\citep{amaro}. \n\nMost previous simulations of a SMBH binary coalescence in a gaseous environment have modeled aligned disc-binary systems \\citep{ceriolietal}. In the present work, we generalize the analysis to the case of misaligned disc-binary systems. Inclined binaries are expected when the black holes are spinning at an inclined angle with respect to the orbital plane. We model the disc-binary interaction with inclined discs at 1, 5, 10, 20, 30 and 180 degrees, using the 3D Smoothed Particle Hydrodynamics (SPH) code {\\sc phantom} \\citep{lodatoeprice10, priceefede10, price12, priceetal18}.\n\nThe paper is organized as follows: in Section~\\ref{section2} we introduce the interaction process in misaligned disc-binary systems. Section~\\ref{section3} describes the numerical method and initial conditions. Section~\\ref{section4} presents our results for varying disc inclination angles and the corresponding accretion rates. We discuss and conclude in Section~\\ref{section5}.\n\n\\section{ACCRETION DYNAMICS IN MISALIGNED DISC-BINARY SYSTEMS}\n\\label{section2}\n\nPrevious numerical simulations and analytic work have shown that the tidal torques of the binary secondary component can strongly influence the gas disc \\citep{linpapa79, goldtre79, lubowetal15}. The dynamics of a binary embedded in a gas disc depend on the mass ratio of the binary and the gas mass. For binaries with nearly equal-mass black holes, tidal torques carry angular momentum between the gas and the binary orbit. 
However, for binaries with small mass ratios, the secondary tidal force can only be felt when the gas is close enough to the secondary object. Therefore, the angular momentum transfer between the secondary and the gas in the accretion disc can be determined by studying these close encounters in the impulsive approximation. These encounters occur on a timescale much smaller than the typical orbital time \\citep{linpapa79, linpapa7902}.\n\nThe angular momentum exchange with the accretion disc can make the secondary component migrate. However, there are different types of migration depending on the relative mass of the companion. In particular, in this work we consider a low-mass secondary SMBH embedded in a circumprimary disc. The low-mass companion falls towards the supermassive primary. When the binary reaches separations smaller than $10^{-3}$ pc the dominant process for the loss of energy and angular momentum is gravitational wave emission \\citep{peters63}. The energy dissipation results in a negative torque shrinking the binary down to merger. If we assume a circular orbit, the decay rate of the binary separation due to gravitational wave emission is given by \\citep{peters64}\n\\begin{equation}\n\t\\left(\\dfrac{da}{dt}\\right)_{\\rm{gw}} \\simeq -\\dfrac{64}{5} \\dfrac{G^3 M_p^3 q_1}{c^5 a^3},\n\t\\label{eq:decayrategw}\n\\end{equation}\nwhere $M_p$ is the primary SMBH mass, $a$ is the binary separation, the parameter $q_1$ is defined as $q_1$=$q$(1+$q$), and $q$ is the binary mass ratio. The final merger time is\n\\begin{equation}\n \\tau_{\\rm{gw}} \\approx \\dfrac{5}{256} \\dfrac{c^5 a_0^4}{G^3 M_p^3 q_1}\\cdot\n\t\\label{eq:mergertime}\n\\end{equation}\n\nWhen the disc and binary black hole orbits are aligned, gravitational radiation emission causes the binary separation to decrease, producing an increase in luminosity that may be detectable as an electromagnetic precursor. 
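The merger time above can be checked numerically for the binary parameters adopted later in this paper ($M_p = 10^8 M_\odot$, $q = 10^{-3}$, $a_0 = 4.75\,GM_p/c^2$). A minimal sketch follows; integrating the decay only down to the summed accretion radii, rather than to zero separation, is our assumption, and it reproduces the quoted $\tau_{\rm gw} \simeq 9476\,GM_p/c^3 \approx 54$ days:

```python
# Sketch: Peters (1964) inspiral time for the paper's binary, in code units
# (G = c = M_p = 1) and in days. Stopping at a_f (the summed accretion
# radii) is an assumption made here, not stated in the equation itself.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

M_p = 1e8 * M_sun      # primary mass, kg
q = 1e-3               # binary mass ratio
q1 = q * (1.0 + q)     # q1 = q(1+q) as defined above

a0 = 4.75              # initial separation, units of GM_p/c^2
a_f = 2.0 + 0.2        # assumed end point: R_ar,p + R_ar,s

tau_code = (5.0 / 256.0) * (a0**4 - a_f**4) / q1   # units of GM_p/c^3

t_unit = G * M_p / c**3                            # seconds per code time unit
tau_days = tau_code * t_unit / 86400.0
print(f"tau_gw ~ {tau_code:.0f} GM_p/c^3 ~ {tau_days:.0f} days")
```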
Previous investigations for this orbital configuration yielded accretion rates of the order of $10^2$ times the Eddington limit \\citep{armitage02, ceriolietal}.\n\n\\subsection{Exploring misaligned disc-binary systems}\n\n\nFor a SMBH binary in a circular orbit, the interaction between the misaligned disc and the binary is analogous to Lense-Thirring precession acting on an accretion disc around a single, spinning black hole \\citep{lt1918, lodatoepringle06,nealonetal15}. Misaligned disc-binary interaction involves the additional effect of inclination decay due to viscous dissipation effects. For a primary black hole surrounded by an inclined disc, the gas flow is dominated by viscous torques. In this case, tilted disc-binary systems are not expected to produce an electromagnetic counterpart to the binary merger.\n\nBinary interaction in a gaseous environment depends on the disc properties. The decoupling radius refers to the radius at which the gravitational torque becomes comparable to the viscous torque. When the initial distance between the two black holes is much larger than the decoupling radius, the binary can be surrounded by a circumbinary disc. By contrast, when the evolution is dominated by gravitational wave emission (i.e, when the black hole initial separation is small compared to the decoupling radius), then an individual disc surrounds only the more massive black hole. For a circumbinary disc, the angular momentum vector of the disc does not always coincide with that of the binary. The initial disc orientation is set by the angular momentum distribution of the gas rather than by the angular momentum of the binary \\citep{hayasaki13}. \n\n\\citet{lubowetal15} probed the effects of binary-disc inclination on Lindblad resonant tidal torques acting on a circumbinary disc. The authors studied the 2:1 inner Lindblad resonance (for m = 2) that dominates the tidal truncation of coplanar discs by a prograde binary. 
For that resonance, they found a rapid decrease of the torque with inclination angle --- by a factor of about 2 for 30 degrees, by a factor of about 20 for 90 degrees and to zero for 180 degrees (counter-rotating case). \n\nViscous torques can dominate Lindblad resonant torques (for m = 2) if the binary is counter-rotating. They can also dominate for smaller inclination angles, if the disc is sufficiently viscous. In this case, the gas in the disc can be captured by the secondary, flowing either into a circumbinary disc or escaping. In an inclined circumsecondary disc, the weakened tides allow mass transfer from the secondary component to the central one. Inclined discs are expected to be larger than coplanar discs due to the decrease in the resonant tidal torques \\citep{lubowetal00, lubowetal15}. \n\nIn the present work, it is assumed that the most important cause of the misalignment between the disc and the orbital plane is that the direction of the orbital angular momentum vector does not match the spin of the primary \\citep{bardeen}. Therefore, since by the Bardeen-Petterson effect the disc is aligned with the primary spin axis, this results in a misalignment with the orbital plane of the companion.\n\n\\section{NUMERICAL SIMULATIONS}\n\\label{section3}\n\nWe model the evolution of a misaligned disc-binary system using the {\\sc phantom} smoothed particle hydrodynamics (SPH) code. {\\sc phantom} solves the hydrodynamics equations in Lagrangian form using SPH \\citep{price07, lodatoeprice10, priceefede10, price12, priceetal18}. The disc viscosity $\\nu$ is modeled according to the \\citet{ss73} $\\alpha$-prescription, given by\n\\begin{equation}\n \\nu \\simeq \\alpha_{\\rm SS} c_{\\rm s} H,\n\t\\label{eq:sspresciption}\n\\end{equation}\nwhere $\\alpha_{\\rm SS}$ is a dimensionless scale parameter, $c_{\\rm s}$ is the sound speed of the gas in the disc and $H$ = $c_{\\rm s}\\/\\Omega$ is the disc thickness, where $\\Omega$ is the Keplerian angular velocity. 
In order to model the \\citet{ss73} viscosity we use the SPH artificial viscosity formalism set by the relation\n\\begin{equation}\n \\alpha_{\\rm SS} \\simeq \\dfrac{1}{10} \\alpha^{\\rm AV} \\dfrac{\\langle h \\rangle}{H},\n\t\\label{eq:artviscosity}\n\\end{equation}\nwhere $\\alpha^{\\rm AV}$ is the artificial viscosity coefficient and $\\langle h \\rangle$ is the azimuthally averaged smoothing length. SPH employs this artificial viscosity term primarily to resolve shocks, but we use this to represent a source of viscous diffusion following \\citet{lodatoeprice10}. \n\nThe gas in the disc follows a locally isothermal equation of state with the sound speed described by the power law\n\\begin{equation}\n c_{\\rm s} \\simeq c_{{\\rm s},0} R^{-\\beta},\n\t\\label{eq:sppowerlaw}\n\\end{equation}\nwhere $R$ is the radial distance from the centre of mass of the binary (in cylindrical coordinates) and $c_{{\\rm s},0}$ is a normalization that determines the disc thickness. The disc surface density profile is given by\n\\begin{equation}\n \\Sigma \\simeq \\Sigma_0 R^{-\\gamma},\n\t\\label{eq:discsfprofile}\n\\end{equation}\nwhere $\\Sigma_0$ is chosen in order to set the disc mass.\n\nIn equations~(\\ref{eq:sppowerlaw}) and~(\\ref{eq:discsfprofile}), we chose the parameters $\\beta$ = 3\/4 and $\\gamma$ = 3\/2 in order to uniformly resolve the disc, given that the smoothing length $h \\propto \\rho^{-\\frac13}$ (where $\\rho$ is the density) is proportional to the thickness $H \\propto R^{\\frac34}$, so the ratio $\\langle h \\rangle$\\/$H$ in equation~(\\ref{eq:artviscosity}) is constant with respect to radius.\n\n\\subsection{Initial conditions}\n\\label{initialcond}\n\n We set up a binary system of SMBHs with unequal masses on a circular orbit. The binary mass ratio is $q$ = $M_{s}$\\/$M_{p}$ = $10^{-3}$, where $M_p$ = $10^8 M_{\\odot}$ is the mass of the primary and $M_s$ = $10^5 M_{\\odot}$ is the mass of the secondary. 
Based on 3D SPH simulations by \\citet{ceriolietal}, we chose $a_0$ = $4.75 GM_p$\/$c^2$ (in code units) for the binary initial separation, corresponding to $a_0$ $\\simeq$ $2.28 \\cdot 10^{-5}$ pc. Moreover, since we were interested in estimating the mass accretion rate of the primary and secondary black holes, we set their accretion radii to $R_{ar,p}$ = $2 GM_p$\/$c^2$ and $R_{ar,s}$ = $0.2 GM_p$\/$c^2$. We assumed an initial binary orbital separation smaller than the decoupling radius ($a_{dec} \\simeq 36.4 GM_p$\/$c^2$) implying a circumprimary disc only. We adopt the inner disc radius $R_{\\rm in}$ = $R_{{\\rm ar},p}$ = $2 GM_p$\/$c^2$, equal to the Schwarzschild radius, and the outer radius $R_{\\rm out}$ = $4.1 GM_p$\/$c^2$, according to the initial separation of the black holes. With these values for the initial separation and accretion radii, the binary coalescence time by gravitational decay is $\\tau_{\\rm gw} \\simeq$ 9476 $GM_p$\/$c^3$ (in code units), or approximately 54 days.\n\nWe focus on inclined disc models to investigate the effects of the tidal torques by the secondary component embedded in a geometrically thin gas disc. We assume an initial disc mass of $M_{\\rm disc}$ $\\approx$ $1 M_{\\odot}$, flat and misaligned by a specified inclination angle. The disc is truncated at an outer radius related to the intensity of the tidal torques. We neglect the outer disc, i.e. we assume it is frozen behind the companion such that the disc inner edge is located at the decoupling radius (the disc may then viscously move inwards).\n\nWe assume a disc aspect ratio $H$\/$R$ = 0.01 and Shakura-Sunyaev viscosity parameter $\\alpha_{\\rm SS}$ = 0.01. We consider discs inclined by 1, 5, 10, 20, 30 and 180 degrees. Moreover, we also consider a control simulation of the disc without the secondary black hole. We performed seven simulations with $5 \\cdot 10^5$ SPH particles for all angles mentioned above, including the simulation with no companion. 
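The physical scales quoted above follow directly from the choice $M_p = 10^8 M_\odot$. A quick sketch of the conversion (constants rounded to four significant figures):

```python
# Sketch: physical value of the gravitational radius and of the initial
# separation a0 = 4.75 GM_p/c^2 for M_p = 1e8 solar masses.
G, c, M_sun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.086e16

M_p = 1e8 * M_sun
R_g = G * M_p / c**2        # gravitational radius, metres
a0_pc = 4.75 * R_g / pc     # initial separation in parsec
print(f"R_g ~ {R_g:.3e} m, a0 ~ {a0_pc:.2e} pc")
```

This recovers the quoted $a_0 \simeq 2.28 \cdot 10^{-5}$ pc.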
The number of particles was chosen for computational efficiency, but it does not affect the quality of the numerical results, as we will see in the next section. We adopted the following code units: for the length, the gravitational radius ($R_g$=$G M_p$\/$c^2$); for the mass, the primary black hole mass ($M_p$) and for the time, $t$=$GM_p$\/$c^3$. \n\n\\subsection{Physical validity of the disc model in the context of AGN accretion}\n\nThe accretion rates exceeding the Eddington rate presented in this article show that the primary SMBH disc is radiatively efficient. Currently, the mechanism powering Active Galactic Nuclei (AGN) is believed to be gas accretion onto a SMBH. The standard thin accretion disc models are normally associated with radiatively efficient AGNs. A significant fraction of the energy generated by viscous dissipation in the flow is radiated locally and the advection of energy is negligible; as a consequence, the disc is geometrically thin. AGN discs are thus in general quite thin, with H\/R $\\sim 0.001$--$0.01$, and the bolometric luminosities correspond to about 10 per cent of the rest-mass energy of the accreting matter and approximately 40 per cent for rapidly spinning black holes \\citep{ss73}.\n\n\\subsection{Disc aspect ratio}\n\nMotivated by the investigation of electromagnetic precursors just prior to binary merger, we chose the inner disc aspect ratio for the following reasons. \\cite{tl15} estimated the fossil disc mass just prior to a SMBH merger and found that for a $10^8 M_{\\odot}$ primary black hole the inner disc mass at decoupling is of the order of $1 M_{\\odot}$. This result shows that the rapid accretion of the whole circumprimary disc would produce peak luminosities of order 1--20 times Eddington luminosity. 
Merging binaries are expected to have thinner accretion discs and to provide an electromagnetic signature from the squeezing of the inner disc.\n\n\\cite{ceriolietal} performed simulations with thin discs for aligned disc-binary orbital planes and found an increase in luminosity exceeding the Eddington limit. In addition, the authors confirmed that if gas is present between the SMBHs it is squeezed and quickly accreted by the primary black hole during the orbital decay. On the other hand, if the disc is thick, gas can more easily flow outwards through the gap.\n\nBy contrast, \\cite{baruteal12} found that the snowplough effect does not occur for thicker discs. Those authors showed that a small fraction (about 20 per cent) of the disc mass is accreted by the primary SMBH. This quantity does not cause any rise in the luminosity prior to binary merger and it does not excite the formation of peaks in the surface density of the disc.\n\n\\begin{figure*}\n\t\\includegraphics[width=\\columnwidth]{primary} \\includegraphics[width=\\columnwidth]{sec5}\n \\caption{SMBH accretion rates in code units with discs at different inclination angles for simulations with $5 \\cdot 10^5$ particles. Left panel: primary accretion rates as a function of time show super-Eddington peaks in $\\dot{M}$ for discs inclined by 1, 5 and 10 degrees. Right panel: secondary accretion rates as a function of time show the squeezing phenomenon in discs inclined by 1 and 5 degrees only. In both panels the black horizontal line shows the Eddington limit.} \n \\label{fig:allanglesaccrate}\n\\end{figure*}\n\nLong before the merger of black holes embedded in thicker discs, the binary hollows out any gas present and shrinks slowly compared to the viscous timescale of a circumbinary accretion disc. 
Thus only a small fraction of the energy liberated during the merger can go into heating the gas, producing an electromagnetic afterglow \\citep{miphinney}.\n\n\\section{RESULTS}\n\\label{section4}\n\n We investigate the accretion rates induced as the secondary orbit shrinks towards the primary. We compare our results with previous investigations for aligned disc-binary orbital planes.\n\nAfter binary decoupling from the gas disc, the tidal torques dominate over viscous torques. At this moment, gravitational wave emission is the dominant process driving the binary to merger, with a possible electromagnetic signature in the last stages of coalescence. However, for misaligned disc-binary orbital planes, this path is not necessarily followed. Some discs are expected to be inclined, for instance when the black holes are spinning at an inclined angle with respect to the angular momentum vector of the disc. \n\n\n\\subsection{Varying disc inclination angles}\n\nFigure~\\ref{fig:allanglesaccrate} shows primary and secondary accretion rates as a function of time (in code units) for all simulated disc inclination angles. For discs with small inclination angles (1, 5 and 10 degrees) we found an increase in luminosity exceeding the Eddington rate and very close to the aligned disc case \\citep{ceriolietal}. Table~\\ref{tab:accratefinal} presents results for angles up to 10 degrees with two times ($t_1$ and $t_2$) corresponding to the two peaks marked by episodes of strong accretion on the primary (near the binary merger). The times of the first and second spikes differ between simulations because the binary evolves embedded in a gas disc with different tilts. The double peak in the primary accretion rate occurs as the tidal torques dominate viscous torques, because the viscous time-scale is longer than the gravitational decay time-scale. 
The tidal torques provide the disc angular momentum loss that drives the gas accretion onto the primary component.\n\n\\begin{figure} \n\t\\includegraphics[width=\\columnwidth]{inc180_limitEdd}\n \\caption{Primary (black line) and secondary (red line) accretion rates with circumprimary disc inclined at 180 degrees. The black horizontal line shows the Eddington limit.}\n \\label{fig:allanglesprimary}\n\\end{figure}\n\nThe disc inclined at 10 degrees shows a less pronounced first spike in the primary accretion rate ($dM_p$\/$dt$ $\\simeq 10.6 M_{\\odot}$\/$\\rm{yr}$) compared to discs with smaller angles, $d{M}_p$\/$dt$ $\\simeq 23.1 M_{\\odot}$\/$\\rm{yr}$ and $d{M}_p$\/$dt$ $\\simeq 24.7 M_{\\odot}$\/$\\rm{yr}$ (1 and 5 degrees, respectively). In this case, tidal torques decline due to higher inclination (10 degrees) and the primary accretion rate reaches about $d{M}_p$\/$dt$ $\\approx 2.35 \\dot{M}_{\\rm Edd}$. The inclined disc with 1 degree yielded $d{M}_p$\/$dt$ $\\approx 5.13 \\dot{M}_{\\rm Edd}$ and with 5 degrees it obtained $d{M}_p$\/$dt$ $\\approx 5.49 \\dot{M}_{\\rm Edd}$. Moreover, when the disc is inclined at 10 degrees, the evolution between the first and second spikes occurs more slowly. On the other hand, the second peak in the primary accretion rate at 10 degrees shows a narrower shape, reaching at $t_2$ an accretion rate of $d{M}_p$\/$dt$ $\\approx 6.58 \\dot{M}_{\\rm Edd}$, exceeding all other accretion rates (see Table~\\ref{tab:accratefinal}).\n\nWe also obtain two spikes in the secondary black hole accretion rate at early times of the binary evolution, but these spikes occurred only in discs inclined at 1 and 5 degrees. The spikes are marked by the rapid decay of the secondary towards the primary, squeezing the inner disc gas. 
Our choice of a low-mass secondary ($10^5 M_{\\odot}$) resulted in its interaction with the inner disc, implying an exchange of angular momentum between the secondary component and the gas.\n\n\\begin{table*}\n\t\\centering\n\t\\begin{tabular}{lcccccc}\n\t\t\\hline\n\t\tInclination & $t_1$ & $t_2$ & ($d{M}_p$\/$dt$)\/($M_{\\odot}$\/yr) [$t_{1}$] & ($d{M}_p$\/$dt$)\/($M_{\\odot}$\/yr) [$t_2$] & ($d{M}_p$\/$dt$)\/$\\dot{M}_{\\rm Edd}$ [$t_1$] & ($d{M}_p$\/$dt$)\/$\\dot{M}_{\\rm Edd}$ [$t_2$]\\\\\n\t\t\\hline\n\t\n\t\t$\\theta$=$1^{\\circ}$ & 8560 & 8902 & 23.11 & 19.23 & 5.13 & 4.27\\\\\n\t\t$\\theta$=$5^{\\circ}$ & 8569 & 8930 & 24.68 & 25.80 & 5.49 & 5.73\\\\\n $\\theta$=$10^{\\circ}$ & 8513 & 8975 & 10.58 & 29.61 & 2.35 & 6.58\\\\\n \\hline\n \\end{tabular}\n \\caption{Primary black hole accretion rates at the final stages of the binary merger. The first column shows the inclination angle of the disc; the second and third columns show the two times $t_1$ and $t_2$ that correspond to the first and second spikes in the accretion rate, respectively; the fourth and fifth columns show the accretion rates in units of $M_{\\odot}$\/$\\rm{yr}$ for each spike; the sixth and seventh columns show primary accretion rates in units of the Eddington rate ($d{M}_{\\rm Edd}$\/$dt$ = $4.5 M_{\\odot}$\/$\\rm{yr}$) for both times.}\n\t\\label{tab:accratefinal}\n\\end{table*}\n\nFigure~\\ref{fig:allanglesaccrate} also shows the accretion rates with discs tilted at 20 and 30 degrees. However, discs at these two inclinations showed a much less pronounced rise in the mass accretion rate, because the snowplough effect is only important for misalignments up to about 10 degrees (already much larger than $H$\/$R$). In particular, for 20 \ndegrees of inclination we find only the first spike, with a primary accretion rate of order $1.1\\dot{M}_{\\rm Edd}$. Previous work by \\citet{lubowetal15} predicted such results based on analytic calculations of tidal torques acting on misaligned discs. 
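The Eddington-rate normalisation used in the table can be recovered as follows (a sketch; the radiative efficiency $\eta \approx 0.05$ is our assumption, chosen to approximately match the quoted $4.5\,M_{\odot}$/yr for $M_p = 10^8 M_\odot$):

```python
import math

# Sketch: Eddington accretion rate for M_p = 1e8 solar masses, assuming a
# radiative efficiency eta ~ 0.05 (an assumption chosen to roughly match
# the 4.5 Msun/yr normalisation used in the table).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
m_p, sigma_T = 1.673e-27, 6.652e-29   # proton mass, Thomson cross-section
year = 3.156e7                        # seconds per year

M = 1e8 * M_sun
L_edd = 4.0 * math.pi * G * M * m_p * c / sigma_T   # Eddington luminosity, W

eta = 0.05                                          # assumed efficiency
mdot_edd = L_edd / (eta * c**2)                     # kg/s
mdot_msun_yr = mdot_edd * year / M_sun
print(f"L_Edd ~ {L_edd:.2e} W, Mdot_Edd ~ {mdot_msun_yr:.1f} Msun/yr")
```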
By contrast, the primary accretion rate without the secondary, presented in Figure~\ref{fig:allanglesaccrate}, shows no peak in the accretion rate, owing to the absence of the companion black hole and of the snowplough mechanism.\n\nFigure~\ref{fig:allanglesprimary} reports the accretion rates in a circumprimary disc initially inclined at 180 degrees. No peaks are observed in the primary and secondary accretion rates. For a counter-rotating disc the tidal torques decline to zero, because viscous torques dominate over binary gravitational torques. These results are also in accordance with the analytic work by \citet{lubowetal15}.\n\nFigure \ref{fig:residual} shows the residual between the accretion rates of the primary with and without the companion. This figure clearly demonstrates the contribution to the accretion rate from the presence of the secondary SMBH and from the snowplough effect. When comparing the accretion rate of the primary with and without the secondary, it can be seen that they are equal until $t \sim 5000$ (in code units). After this time, the snowplough effect for small inclination angles becomes important and the accretion rate increases with respect to the accretion rate without the companion.\n\nFigure~\ref{fig:allanglessnapshots} shows snapshots of column density in the $xy$ plane. In this plot each row presents the misaligned disc-binary evolution with discs inclined at 1, 5, 10, 20 and 30 degrees, from top to bottom. The second and fourth columns show the times corresponding to the first and second spikes in the primary accretion rate. In particular, the disc inclined at 20 degrees shows only the first peak, at time $t$=9025, reaching a primary accretion rate of the order of the Eddington rate. Discs with inclination angles of 1, 5 and 10 degrees produce an increase in luminosity exceeding the Eddington rate. 
The snapshots shown in Figure~\ref{fig:allanglessnapshots} for misaligned discs with $\theta >10$ degrees show more extended discs compared to those inclined at angles smaller than 5 degrees.\n\nFigure \ref{fig:allangle180snapshots} shows the evolution for a counter-rotating disc ($\theta$=180 degrees), displaying a gap in which the gas moves without falling towards the central black hole. In this case, the final merger takes place in vacuum \citep{miphinney}. \n\n\subsection{Effect of thicker discs}\n\nThe disc thickness can affect the amount of gas ejected outside the orbit of the secondary black hole. In order to investigate whether thicker discs produce electromagnetic signatures just prior to the binary merger, we performed simulations with an increased disc aspect ratio. In these simulations, we used inclination angles of 1 and 5 degrees, H\/R=0.05, $5 \cdot 10^5$ SPH particles and the same initial conditions shown in subsection \ref{initialcond}.\n\nFigure \ref{fig:thickerdisc} shows the accretion rates of the primary and secondary black holes for a thicker disc (H\/R=0.05), with different inclination angles, normalized by the initial disc mass. In both panels, we do not see any effect of the forced compression, independent of the disc inclination. All the mass is swallowed before it can be squeezed by the secondary black hole. These results are in agreement with previous simulations performed by \cite{ceriolietal} for the same value of the disc aspect ratio.\n\n\begin{figure} \n\t\includegraphics[width=\columnwidth]{residual_final2}\n \caption{Residual between the accretion rates of the primary SMBH with and without the secondary.}\n \label{fig:residual}\n\end{figure}\n\n\begin{figure*}\n\t\includegraphics[width=\textwidth]{evolutioninc5}\n \caption{Snapshots of column density (logarithmic scale). Each row indicates the evolution of a misaligned binary-disc system at 1, 5, 10, 20 and 30 degrees, respectively. 
The supermassive black holes are represented by black filled circles with sizes corresponding to the primary ($R_{ar,p}$=2 $GM_p$\/$c^2$) and secondary ($R_{ar,s}$=0.2 $GM_p$\/$c^2$) accretion radii. Time (in code units) is shown in the top right corner. The length unit is the gravitational radius $R_g$=$G M_p$\/$ c^2$, where $M_p$=$10^8 M_{\odot}$ is the primary black hole mass.}\n \label{fig:allanglessnapshots}\n\end{figure*}\n\n\subsection{Numerical resolution}\n\nUnder the SPH formalism, the spatial resolution of a simulation increases in denser regions, owing to the Lagrangian formulation. The choice of initial conditions is largely regulated by the available computational resources. Here, all our simulations used $5 \cdot 10^5$ SPH particles. Using higher resolution does not change the appearance of the peaks in the accretion rates, but it does affect the measured values of the rates. Previous hydrodynamical simulations performed by \citet{ceriolietal}, for the case of an aligned disc, examined different numbers of SPH particles and resolution effects. The authors emphasized that the two spikes in the primary accretion rate are present at all resolutions ($5 \cdot 10^5$, $1 \cdot 10^6$ and $2 \cdot 10^6$ SPH particles), indicating that they are not a numerical artefact. However, the simulation with $2 \cdot 10^6$ particles had the most significant enhancement in the primary accretion rate, with an accretion spike about 4.5 times higher than at a resolution similar to ours. Nevertheless, as seen in Figures \ref{fig:allanglesaccrate} and \ref{fig:allanglesprimary}, the lower resolution did not affect our overall conclusions.\n\n\n\begin{figure*}\n\t\includegraphics[width=\textwidth]{evolutioninc180}\n \caption{Snapshots of column density during the evolution of a circumprimary disc inclined at 180 degrees. 
The code unit for length is the gravitational radius $R_g$=$G M_p$\/$c^2$.}\n \label{fig:allangle180snapshots}\n\end{figure*}\n\n\section{CONCLUSIONS}\n\label{section5}\n\nWe performed SPH simulations of misaligned disc-binary systems, varying the disc inclination angle, in order to investigate a possible electromagnetic precursor during the final inspiral of a binary system of SMBHs. We concentrated on a model with a gas disc surrounding the primary black hole, while the secondary black hole spirals towards the primary. We assumed a binary mass ratio $q$=$10^{-3}$, a thin accretion disc with aspect ratio $H$\/$R$=0.01 and an inner disc mass of the order of $\approx 1 M_{\odot}$. We chose the disc aspect ratio in the inner disc based on the previous work of \citet{tl15}. Those authors estimated the fossil disc mass just prior to a SMBH merger and found that for a $10^8 M_{\odot}$ primary black hole the inner disc mass at decoupling is of the order of $1 M_{\odot}$. Thus merging binaries are expected to have thinner discs.\n\nWe found that discs with inclination angles of 1, 5 and 10 degrees lead to mass accretion rates exceeding the Eddington rate between $t \simeq 8500 - 8975$, i.e., near the binary merger time $\tau_{gw} \simeq 9476$. By contrast, circumprimary discs inclined between 20 and 30 degrees showed a much less pronounced rise in the accretion rates (see Figure \ref{fig:allanglesaccrate}). On the other hand, discs with an inclination of 180 degrees showed no increase in the accretion rates (see Figure \ref{fig:allanglesprimary}). The inner mass of the disc is rapidly accreted just before the merger, leading to an abrupt increase in the mass accretion rate above the Eddington limit. However, for inclinations larger than 10 degrees the accretion rate fell abruptly, with primary accretion rates smaller than the Eddington rate. 
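For reference, the code length unit quoted in the figure captions, $R_g = GM_p$\/$c^2$ with $M_p = 10^8 M_{\odot}$, can be evaluated directly. A short sketch using SI constants (this is an orientation calculation, not part of the simulation code):

```python
# Evaluate the code length unit R_g = G*M_p/c^2 for the 1e8 Msun
# primary black hole, using SI physical constants.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
PC = 3.086e16       # parsec, m

def grav_radius(mass_kg):
    """Gravitational radius G M / c^2 in metres."""
    return G * mass_kg / c**2

R_g = grav_radius(1e8 * M_SUN)
print(f"R_g = {R_g:.2e} m = {R_g / PC:.2e} pc")
```

For a $10^8 M_{\odot}$ primary this gives $R_g \approx 1.5 \times 10^{11}$ m, roughly one astronomical unit.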
These results show that the snowplough effect is important for misalignments up to 10 degrees, much larger than $H$\/$R$.\n\nMoreover, the secondary accretion rates for discs inclined at 1 and 5 degrees presented double peaks, confirming the squeezing phenomenon at the initial times of binary evolution. The companion black hole quickly migrates towards the central black hole, sweeping up the gas in the inner disc. Angles larger than 5 degrees did not present any pronounced rise in the secondary accretion rates during the binary evolution (see the right panel of Figure \ref{fig:allanglesaccrate}), and the misaligned discs are somewhat more extended than those with inclinations smaller than 5 degrees (see Figure \ref{fig:allanglessnapshots}). \n\nOur results agree with previous analytic work by \citet{lubowetal15}, who obtained a decrease in resonant tidal torques with increasing inclination angle; we also compared our results with the electromagnetic precursors studied by \citet{ceriolietal} and found an increase in luminosity for small inclination angles (< 10 degrees), exceeding the Eddington rate. Furthermore, from the simulations we were able to identify the presence of the squeezing mechanism (for 1 and 5 degrees) during the secondary migration, as found by \citet{ceriolietal} for aligned disc-binary systems. It should be noted, however, that we performed low-resolution simulations ($N_{\rm part}$ = $5 \cdot 10^5$), while \citet{ceriolietal} considered different numerical resolutions. In particular, for a high-resolution simulation ($N_{\rm part}$ = $2 \cdot 10^6$) they obtained an enhancement in the accretion rates, of the order of about 10 times the Eddington rate, when compared with our results.\n\nWe also performed simulations with counter-rotating discs, which we found to be in concordance with the results of \citet{lubowetal15}. Those authors found that the tidal torque is zero, because viscous torques dominate over the resonances. 
In this case, since the disc is inclined at 180 degrees, angular momentum cancellation leads to direct gas accretion on a dynamical time-scale: the gas only needs to move into the created gap and not directly on to the primary black hole. The final black hole merger thus occurs in vacuum \citep{miphinney}, as shown in Figure \ref{fig:allangle180snapshots}.\n\nWe ran simulations with a thicker disc (H\/R = 0.05), motivated by the search for electromagnetic signatures just prior to the binary merger. We found no effect of the forced compression, independent of the disc inclination (see Figure \ref{fig:thickerdisc}). These results are in agreement with the previous work of \citet{baruteal12}. Those authors found that the snowplough effect does not occur for thicker discs. In that case, a small fraction of the energy liberated during the merger can instead go into heating the gas, producing an electromagnetic afterglow \citep{miphinney}.\n\nAlthough we have investigated tidal transport only for a small mass ratio $q$=$10^{-3}$, with the secondary component migrating towards the primary via gravitational-wave emission, and only in low-resolution simulations, our results are in agreement with earlier studies. A wider investigation at higher resolution may increase the predicted mass accretion rates for small inclination angles. \n\nWe also conclude that our results can be applied to electromagnetic counterparts from stellar-mass black hole mergers. In that case, if even a fraction of the gas remains in the disc, the accretion rates produced when the gas disc is pushed into the primary black hole by the decaying secondary lead to accretion peaks comparable to or exceeding the Eddington rate. The electromagnetic signature can occur about 8-10 days before the final binary merger. 
\n\n\begin{figure*}\n\t\includegraphics[width=\columnwidth]{hr005inc1} \includegraphics[width=\columnwidth]{hr005inc53}\n \caption{SMBH accretion rates in code units for a thicker disc with disc aspect ratio H\/R=0.05. Left panel: primary and secondary accretion rates as a function of time for a disc inclined at 1 degree. Right panel: primary and secondary accretion rates for a disc inclined at 5 degrees. The black horizontal line shows the Eddington limit.}\n \label{fig:thickerdisc}\n\end{figure*}\n\nA scenario in which mergers of stellar-mass black hole binaries driven by gravitational radiation produce an electromagnetic signature had already been suggested by \citet{mink17}. Those authors argued that this scenario occurs if a low-mass accretion disc survives until coalescence. Moreover, they proposed that the disc responds to some processes of the final evolutionary phases (such as sudden mass loss and gravitational-wave recoil) within hours of the merger. The electromagnetic signal would possibly arise in medium-energy X-rays, perhaps extending to the infrared, and would last at least a few hours. In addition, \citet{martinetal18} investigated the evolution of a disc around a merging stellar-mass black hole binary in the two extreme limits of an accretion disc and a decretion disc. They concluded that the dynamical readjustment of the disc after the black hole merger is likely to release significant energy in electromagnetic form, corroborating the previous results of \citet{mink17}.\n\nIn conclusion, the 3D SPH simulations performed in this work predict the existence of electromagnetic precursors from the primary and from the secondary in a binary system of SMBHs when the angle of misalignment of the circumprimary accretion disc is small (less than 10 degrees), though we have only investigated a limited parameter space. 
This work suggests a link between electromagnetic signals and the gravitational radiation potentially detected by ground-based and future space detectors.\n\t\n\n\\section*{Acknowledgements}\n\nFACP acknowledges financial support from CAPES (88881.133179\/2016-01). \nMESA would like to thank the Brazilian agency FAPESP for financial support (grant 13\/26258-4). DJP acknowledges financial support from the Australian Research Council (FT130100034). All figures in this paper were produced using SPLASH \\citep{price07}. \n\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWhether a universal quantum computer is sufficiently powerful to be able\nto perform quantum field-theoretical computations efficiently has been a\nlong-standing and important open question.\nEfficient quantum algorithms for simulating quantum many-body systems have been\ndeveloped theoretically \\cite{Lloyd_science,Abrams_Lloyd,Zalka} \nand implemented experimentally \\cite{Lanyon:2011,Mueller:2011,Barreiro:2011}, \nbut quantum field theory\npresents additional technical challenges, such as the formally\ninfinite number of degrees of freedom per unit volume. In earlier work\n\\cite{phi4, longversion}, we presented and analyzed a quantum\nalgorithm for simulating a bosonic quantum field theory called\n$\\phi^4$ theory. That algorithm runs in a time that is polynomial in\nthe number of particles, their energy, and the desired precision, and\napplies at both weak and strong coupling. Hence, it offers exponential\nspeedup over existing classical methods at high precision or strong\ncoupling. In this paper, we extend our work to fermionic quantum field\ntheories, exemplified by the massive Gross-Neveu model, a theory in\ntwo spacetime dimensions with quartic interactions. 
Although our\nanalysis is specific to this theory, our algorithm can be adapted to\nother massive fermionic quantum field theories with only minor\nmodification while retaining polynomial complexity.\n\nOur quantum algorithm generates scattering events: it takes (as the\ninput) the momenta of the incoming particles and, sampling from the\nprobability distribution of possible outcomes, returns (as the\noutput) the momenta of the outgoing particles produced by the\nphysical scattering process. Physical quantities of interest, such as\nscattering cross sections, can thus be approximated by repeated\nruns of the simulation, together with statistical data analysis\nsimilar to that used for particle-accelerator experiments.\n\nThe features of fermionic field theories not present in bosonic theories pose\nnew technical problems, the solutions to which require different techniques.\nPerhaps the most obvious difference is the anticommutation, rather than\ncommutation, of fermionic fields.\nThis forces a change in the representation of the state by qubits: we\nuse an encoding method for fermionic mode occupation numbers introduced by\nBravyi and Kitaev \\cite{Bravyi_Kitaev}. In \\cite{longversion}, it was shown \nthat simulation of Hamiltonian time evolution via Suzuki-Trotter formulae\nhas efficiency advantages when applied to spatially local Hamiltonians.\nFermionic anticommutation makes it more difficult to gain efficiency\nby exploiting spatial locality. 
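As a toy illustration of the Suzuki-Trotter product formulae invoked here (two random Hermitian matrices stand in for non-commuting Hamiltonian terms; this is not the paper's lattice Hamiltonian), the first-order decomposition error shrinks as the number of steps grows:

```python
# First-order Trotter formula: (e^{-iA t/n} e^{-iB t/n})^n approximates
# e^{-i(A+B)t}, with error O(1/n) for non-commuting Hermitian A, B.
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def expi(H, t):
    """Unitary e^{-iHt} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

A, B, t = rand_herm(4), rand_herm(4), 1.0
exact = expi(A + B, t)

def trotter(n):
    """n-step first-order Trotter approximation to e^{-i(A+B)t}."""
    step = expi(A, t / n) @ expi(B, t / n)
    return np.linalg.matrix_power(step, n)

errs = [np.linalg.norm(trotter(n) - exact) for n in (4, 16, 64)]
print(errs)  # decreasing with n
```

Higher-order Suzuki formulae improve the scaling in $n$ further, which is what makes the quasi-linear gate counts quoted below possible.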
Nevertheless, we obtain a construction\nthat gives quasi-linear asymptotic scaling in time and the number of\nlattice sites, as in the bosonic case.\n\nIn contrast with bosonic field theories,\ndiscretization of fermionic field theories leads to the well-known\n``fermion doubling'' problem, in which spurious fermion species not in the\ncontinuum theory appear in the discretized theory.\nOne solution used in lattice gauge theory is to add to the action\nthe so-called Wilson term, a second-derivative operator that vanishes\nin the naive continuum limit. The Wilson term can also\nbe accommodated in our quantum algorithm; in particular, we show how\nit can be turned on during the preparation of the ground state.\n\nIn general, state preparation is a demanding task.\nThe algorithm in \\cite{phi4,longversion} uses a three-step procedure.\nFirst, the free vacuum is prepared. For the free scalar theory, this is a\nmultivariate Gaussian wavefunction.\nNext, wavepackets are excited within the free theory. In order that\nonly single-particle states are created, an ancillary qubit is used,\ntogether with a particular Hamiltonian that acts on the enlarged space.\nFinally, the interaction is turned on via a generalization of adiabatic \nstate preparation that can be applied to superpositions of eigenstates.\nThis procedure intersperses backwards time evolutions governed by \ntime-independent Hamiltonians into the turn-on to undo the different \ndynamical phases, which otherwise would cause undesirable propagation \nand broadening of wavepackets.\n\nThe state-preparation method analyzed here differs from that of\n\\cite{phi4,longversion} in two main ways.\nPreparation of the free vacuum requires modification because the vacuum\nof the free fermionic theory is different from that of the free\nbosonic theory. For this purpose, we incorporate a separate adiabatic\nturn-on step. 
Furthermore, sources are used to create particle\nexcitations after the coupling constant is adiabatically turned on,\nrather than before. (This difference is not required by the\nfermionic nature of the theory.) This method has the advantage that it\nworks when bound states are possible, in which case the adiabatic\nwavepacket preparation of \\cite{phi4, longversion} might fail. Another\nconsequence is that the procedure no longer requires the interleaving\nof backwards time evolutions to undo dynamical phases. \nOn the other hand, a disadvantage is that the \npreparation of each particle has a significant probability of producing \nno particle. In the case of two-particle scattering, one can perform \nadditional repetitions of the simulation, and recognize and discard\nsimulations in which fewer than two particles have been created. However,\nthe procedure is not well suited to processes involving\nmore than two incoming particles.\n\nWe analyze two different measurement procedures to be used as the last step\nof the simulation. The first method is to return adiabatically to the free\ntheory and then measure the number operators of the momentum\nmodes. For unbound states, this procedure yields complete information\nabout particle momenta, but is not well-suited to detecting bound\nstates or resolving spatial information. The second procedure is to\nmeasure charge within local regions of space. These measurements can\ndetect charged bound states, although they are blind to neutral\nones. Which of these measurement schemes is preferable depends on the\ndesired application.\n\nThere is a substantial body of work on analog quantum simulation\nof quantum systems, including lattice field theories. (See \\cite{Wiese}\nfor a recent review.) In such work, proposals are made for the\nengineering of experimental systems so that they mimic systems of\ninterest, that is, so that the Hamiltonians of the laboratory systems\napproximate Hamiltonians of interest. 
The proposed quantum simulators\ncan be thought of as specialized quantum computers. In contrast, we\naddress digital quantum algorithms, namely, algorithms to be run on a\nuniversal, fault-tolerant, digital quantum computer. Our work thus\nprobes the fundamental asymptotic computational complexity of quantum\nfield theories. \n\nThere is also an extensive literature on the study of quantum field \ntheories on classical computers via lattice field theory. \n(See Ch.~17 of \\cite{Beringer:1900zz}\nfor a review of its results and status.)\nHowever, classical lattice algorithms rely on analytic continuation\nto imaginary time, $t \\to -i\\tau$. Thus, they are useful for \ncomputing static quantities such as mass ratios, but are unsuitable\nfor calculating dynamical quantities such as scattering cross\nsections. In contrast, our quantum algorithm simulates the dynamics of\nquantum field theories, a problem that is expected to be \n$\\mathsf{BQP}$-complete and thus impossible to solve by \npolynomial-time classical algorithms.\nAlthough our algorithm draws upon some concepts from lattice field \ntheory, new techniques are needed, particularly for state preparation and\nmeasurement. \n\n\nThe work presented in this paper is another step towards the goal of \nobtaining an efficient quantum algorithm for simulating the Standard Model\nof particle physics. Such an algorithm would establish that, except for\nquantum-gravity effects, the standard quantum circuit model suffices to\ncapture completely the computational power of our universe.\n\nThe rest of this paper is organized as follows. Section 2 introduces\nthe massive Gross-Neveu model, gives an overview of our quantum algorithm\nfor computing the theory's scattering amplitudes, and analyzes\nthe algorithm's complexity. Section 3 describes in detail the\nefficient simulation of the Hamiltonian time evolution in the quantum\ncircuit model. Section 4 presents our procedures for state preparation\nand measurement. 
Finally, Section 5 addresses some field-theoretical\naspects, namely, the effects of a non-zero lattice spacing and the\nrenormalization of mass, which are crucial elements in our complexity\nanalysis.\n\n\n\\section{Quantum Algorithm}\n\nIn this section we describe the massive Gross-Neveu model\n(\\sect{sec:MGN}), outline the steps in our algorithm for simulating\nparticle scattering processes within this model (\\sect{sec:alg}), and\ngive an overview of the algorithm's complexity\n(\\sect{sec:complexity}). The run time is polynomial in the inverse of\nthe desired precision and in the momenta of the incoming particles.\nThe detailed analysis of the steps of the algorithm that contribute\nto the overall complexity stated in \\sect{sec:complexity} is given in\nlater sections.\n\n\\subsection{The Massive Gross-Neveu Model}\n\\label{sec:MGN}\n\nThe theory we consider is a generalization of the Gross-Neveu model to\ninclude an explicit mass term in the Lagrangian. The (original) \nGross-Neveu model \\cite{Gross:1974jv} is a quantum field\ntheory in two spacetime dimensions consisting of\n$N$ fermion species with quartic interactions. It has a rich\nphenomenology. Like quantum chromodynamics (QCD), the theory governing\nthe strong interactions, it has the remarkable property of asymptotic\nfreedom, whereby the interaction becomes weaker at higher\nenergies. The theory has a discrete chiral symmetry, \n$\\psi \\rightarrow \\gamma^5 \\psi$, where \n\\begin{equation}\n\\gamma^5 = \\left[ \\begin{array}{cc} 1 & 0 \\\\ 0 & -1\n \\end{array} \\right].\n\\end{equation}\nThis symmetry is spontaneously broken by the\nnon-perturbative vacuum. (The related theory known as the chiral\nGross-Neveu model has a continuous chiral symmetry, $\\psi \\rightarrow\ne^{i\\theta\\gamma^5}\\psi$.) Correspondingly, mass is generated\ndynamically, and the theory admits a topological soliton, the\nCallan-Coleman-Gross-Zee (CCGZ) kink. 
Non-topological\nsolitons also exist \\cite{Dashen:1975xh}.\n\nThese interesting characteristics have attracted intense study\nand led to applications not only in particle physics but also in\ncondensed-matter physics, including studies of\nferromagnetic superconductors \\cite{Machida:1984zz},\nconducting polymers, and systems of strongly correlated electrons\n\\cite{Lin:1998zz}.\n\nThe Gross-Neveu model, together with the chiral Gross-Neveu model,\nwas originally solved in the limit $N \\to \\infty$ \\cite{Gross:1974jv}.\nVia inverse scattering methods \\cite{Neveu:1977cr}, and later through\na generalized Bethe Ansatz \\cite{Andrei:1979sq}, integrability was\ndemonstrated for general values of $N$, a feature related to the existence of\ninfinitely many conserved currents \\cite{Brezin:1979am}.\nThe model's $S$-matrix is factorizable \\cite{Zamolodchikov:1978xm,\nKarowski:1980kq}: the $n$-body $S$-matrix is expressible as the product of\ntwo-body $S$-matrices.\n\nIn contrast, the massive Gross-Neveu model, in which there is an explicit\nbare mass, is thought not to be integrable for arbitrary values of $N$.\nThis theory still exhibits asymptotic freedom, but it does not admit\nsolitons: for any non-zero mass, the CCGZ kink becomes infinitely massive\nand disappears \\cite{Feinberg:1996kr}.\nThe asymptotic freedom and non-zero bare mass make a rigorous perturbative\nconstruction of the theory satisfying the Osterwalder-Schrader axioms\npossible \\cite{Feldman:1985ar,Feldman:1986ax}.\n\n\\smallskip\n\nThe massive $N$-component Gross-Neveu model is given by the following\nLagrangian in two spacetime dimensions:\n\\begin{equation} \\label{eq:MGN}\n{\\cal L} = \\sum_{j=1}^N \\bar{\\psi}_j (i\\gamma^\\mu \\partial_\\mu - m) \\psi_j \n+ \\frac{g^2}{2} \\bigg( \\sum_{j=1}^N \\bar{\\psi}_j\\psi_j \\bigg)^2 \\,,\n\\end{equation}\nwhere each field $\\psi_j(x)$ has two components,\n$\\gamma^\\mu$ is a two-dimensional representation of the Dirac algebra,\nand $\\bar{\\psi} = 
\psi^\dag \gamma^0$.\footnote{\nThe Dirac matrices satisfy \n$\{\gamma^\mu,\,\gamma^\nu\} \equiv \gamma^\mu\gamma^\nu \n+ \gamma^\nu\gamma^\mu = 2 g^{\mu\nu} \mathds{1}$, and $\psi_j(x)$ is a spinor,\nthat is, its Lorentz transformation is such that \eq{eq:MGN} is\nLorentz-invariant. We use the metric $g^{\mu\nu} = \mathrm{diag}(+1,-1)$.} \nWe use the Majorana representation, namely,\n\begin{equation}\n\label{imp}\n\gamma^0 = \left[ \begin{array}{cc} 0 & -i \\\n i & 0 \end{array} \right] \,,\quad \quad \gamma^1 =\n-\left[ \begin{array}{cc} 0 & i \\ i & 0 \end{array} \right]\n\,.\n\end{equation}\nThe components of the field operator associated with the particle species\n$j \in \{1,2,\ldots,N\}$ will be denoted by $\psi_{j,\alpha}$,\n$\alpha \in \{0,1\}$.\nIn units where $\hbar = c = 1$, any quantity has units of some power of \nmass, referred to as the mass dimension.\nWe shall use bold-face to represent spatial vectors, such as $\mathbf{p}$ \nand $\mathbf{x}$, to distinguish them from spacetime vectors $x^\mu =\n(t,\mathbf{x})$ and $p^\mu = (E,\mathbf{p})$. Note, however, that we are\nconsidering 1+1 dimensions; thus, spatial vectors have only one\ncomponent. \n\nThe dimensionless parameter $g$ determines the strength of the interaction. \nWhen $g=0$, the $\psi_j$ are free fields obeying the Dirac equation,\n$(i\gamma^\mu \partial_\mu - m_0) \psi_j(x) = 0$. 
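These algebraic properties are easy to verify numerically. The following sketch checks the Clifford algebra for the Majorana representation \eq{imp} and, at a sample momentum, the free-spinor identities satisfied by the explicit solution \eq{concrete} given below (the numerical values $m_0 = 1$, $\mathbf{p} = 0.7$ are illustrative):

```python
# Check the 2D Majorana representation: {gamma^mu, gamma^nu} = 2 g^{mu nu}
# with g = diag(+1, -1), and the free-spinor identities for u(p), v(p).
import numpy as np

g0 = np.array([[0, -1j], [1j, 0]])            # gamma^0
g1 = -np.array([[0, 1j], [1j, 0]])            # gamma^1
g5 = np.array([[1, 0], [0, -1]])              # gamma^5
I2 = np.eye(2)

assert np.allclose(g0 @ g0, I2)               # (gamma^0)^2 = +1
assert np.allclose(g1 @ g1, -I2)              # (gamma^1)^2 = -1
assert np.allclose(g0 @ g1 + g1 @ g0, 0)      # {gamma^0, gamma^1} = 0
assert np.allclose(g5 @ g0 + g0 @ g5, 0)      # gamma^5 anticommutes
assert np.allclose(g5 @ g1 + g1 @ g5, 0)

m, p = 1.0, 0.7                               # sample mass and momentum
E = np.sqrt(p**2 + m**2)
u = np.array([np.sqrt(E - p), 1j * np.sqrt(E + p)])
v = np.array([np.sqrt(E - p), -1j * np.sqrt(E + p)])
v_minus = np.array([np.sqrt(E + p), -1j * np.sqrt(E - p)])  # v(-p)

H = m * g0 + p * (g0 @ g1)                    # one-particle Dirac Hamiltonian
assert np.allclose(H @ u, E * u)              # positive-energy solution
assert np.allclose((m * g0 - p * g0 @ g1) @ v, -E * v)
assert np.allclose(u.conj() @ u, 2 * E)       # u^dag u = 2 E_p
assert np.allclose(u.conj() @ v_minus, 0)     # u(p)^dag v(-p) = 0
assert np.allclose(u.conj() @ g0 @ u, 2 * m)  # ubar u = 2 m_0
print("Clifford algebra and spinor identities verified")
```

Such checks are useful sanity tests when implementing the lattice Hamiltonian, where the same spinor structure appears mode by mode.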
\nThen one can write\n\\begin{equation}\n\\psi_j (x) = \\int\\frac{d\\mathbf{p}}{2\\pi}\\frac{1}{\\sqrt{2 E_{\\mathbf{p}}}} \n\\left( a_j(\\mathbf{p}) u(\\mathbf{p}) e^{-ip\\cdot x}\n + b_j^\\dag(\\mathbf{p}) v(\\mathbf{p}) e^{ip\\cdot x} \\right)\\,, \n \\label{eq:psi}\n\\end{equation}\nwhere \n\\begin{equation}\nE_{\\mathbf{p}} = \\sqrt{\\mathbf{p}^2 + m_0^2} \\,,\n\\end{equation}\n$ a_j(\\mathbf{p}),\\, b_j^\\dag(\\mathbf{p})$ are creation and annihilation operators,\nand $u,v$ satisfy\n\\begin{eqnarray}\n(m_0 \\gamma^0 + \\mathbf{p} \\gamma^0 \\gamma^1) u(\\mathbf{p}) & = &\n E_{\\mathbf{p}} u(\\mathbf{p}) \\,, \\label{ident1}\\\\\n(m_0 \\gamma^0 - \\mathbf{p} \\gamma^0 \\gamma^1 ) v(\\mathbf{p}) & = & -\n E_{\\mathbf{p}} v(\\mathbf{p}) \\,, \\label{ident2}\\\\\nu^\\dag(\\mathbf{p}) u(\\mathbf{p}) = v^\\dag(\\mathbf{p}) v(\\mathbf{p}) \n& = & 2 E_{\\mathbf{p}} \\,, \\label{ident3}\\\\\nu(\\mathbf{p})^\\dag v(-\\mathbf{p}) & = & 0 \\,, \\label{ident4}\\\\\n\\bar{u}(\\mathbf{p}) u(\\mathbf{p}) = - \\bar{v}(\\mathbf{p})\nv(\\mathbf{p}) & = & 2 m_0 \\,, \\label{ident5}\\\\\n\\bar{u}(\\mathbf{p}) v(\\mathbf{p}) = \\bar{v}(\\mathbf{p})\nu(\\mathbf{p}) & = & 0 \\,. \\label{ident6}\n\\end{eqnarray}\nIn the Majorana representation \\eq{imp}, one \nhas the following concrete solution:\n\\begin{equation}\n\\label{concrete}\nu(\\mathbf{p}) = \\left[ \\begin{array}{c} \\sqrt{E_{\\mathbf{p}} -\n \\mathbf{p}} \\\\\ni \\sqrt{E_{\\mathbf{p}} + \\mathbf{p}} \\end{array} \\right] \\, \\,, \\quad\nv(\\mathbf{p}) = \\left[ \\begin{array}{c} \\sqrt{E_{\\mathbf{p}} -\n \\mathbf{p}} \\\\\n-i \\sqrt{ E_{\\mathbf{p}} + \\mathbf{p}} \\end{array} \\right] \\,.\n\\end{equation}\n\n\n\n\\subsection{Description of Algorithm}\n\\label{sec:alg}\n\nTo represent the field using qubits, we first discretize the quantum\nfield theory, putting it on a spatial lattice. (Discretization errors are\nanalyzed in \\sect{EFT}.) 
Having done that, our algorithm consists of six\nmain steps, which we analyze in subsequent sections.\n\n\\begin{enumerate}\n\\item Prepare the ground state of the Hamiltonian with\n both the interaction term ($g_0^2$) and the\n nearest-neighbor lattice-site interactions turned off. This can be\n done efficiently because the ground state is a tensor product of the\n \n ground states of the individual lattice sites.\n\\item Simulate, via Suzuki-Trotter formulae, the adiabatic turn-on of\n the nearest-neighbor lattice-site interactions, thereby obtaining\n the ground state of the non-interacting theory.\n\\item Adiabatically turn on the interaction term, while\n adjusting the parameter $m_0$ to compensate for the renormalization\n of the physical mass.\n\\item Excite particle wavepackets, by introducing a\n source term in the Hamiltonian. The source term is chosen to be\n sinusoidally varying in time and space so as to select the desired\n mass and momentum of particle excitations by resonance.\n\\item Evolve in time, via Suzuki-Trotter formulae,\n according to the full massive Gross-Neveu Hamiltonian. It is during this\n time evolution that scattering may occur.\n\\item Either use phase estimation to measure local charge observables,\n or adiabatically return to the free theory and then use phase\n estimation to measure number operators of momentum modes. (The\n choice between these forms of measurement depends on the application.)\n\\end{enumerate}\n\n\n\\subsection{Complexity}\n\\label{sec:complexity}\n\nIn this section we bound the asymptotic scaling of the number of gates\nneeded to simulate scattering processes as a function of the momentum\n$p$ of the incoming particles and the precision $\\epsilon$ to which\nthe final results are desired. The effect of discretization, via a \nlattice of spacing $a$, is captured by (infinitely many) terms in the\neffective Hamiltonian that are not present in the continuum massive \nGross-Neveu theory (\\sect{EFT}). 
Truncation of these terms, \nwhich make contributions of $O(a)$ to scattering cross sections,\ntherefore constitutes an error. Thus, \nto ensure any cross section $\\sigma'$ in the discretized quantum field\ntheory matches the continuum value $\\sigma$ to within\n\\begin{equation}\n\\label{criterion}\n(1-\\epsilon) \\sigma \\leq \\sigma' \\leq (1+\\epsilon) \\sigma,\n\\end{equation}\none must choose the scaling $a \\sim \\epsilon$ in the high-precision limit, \nthat is, the limit $\\epsilon \\to 0$. Similarly, in the large-momentum limit, \none must choose the scaling $a \\sim p^{-1}$ in order to ensure that the \nwavelength of each particle is large compared with the lattice spacing.\n\nIt suffices to use an adiabatic\nprocess of duration\n\\begin{equation}\nT = O \\left( \\frac{L^2}{a^4 m^3 \\epsilon} \\right)\n\\end{equation}\n(where $L$ is the length of the spatial dimension and $m$ is the physical\nmass)\nto prepare a state within a distance $\\epsilon$ of the free vacuum \n(\\sect{freeprep}). \nUsing Suzuki-Trotter decompositions of the form described in \\sect{trotter},\nwe can simulate this adiabatic time evolution using a number of quantum gates\nscaling as\n\\begin{eqnarray}\nG_{\\mathrm{prep}} & = & O\\left( \\left( \\frac{TL}{a^2} \\right)^{1+o(1)}\n\\epsilon^{-o(1)} \\right) \\\\\n& = & O \\left( \\left( \\frac{L^3}{a^6 m^3 \\epsilon} \\right)^{1+o(1)}\n\\right) \\,. \\label{gprep}\n\\end{eqnarray}\n\nThe next state-preparation step is to simulate adiabatic turn-on of\nthe coupling, thereby obtaining the interacting vacuum.\nThis can be achieved in a time (\\sect{turnon})\n\\begin{equation}\nT_{\\mathrm{turn-on}} = O \\left( \\frac{L^2}{a^4 m^3 \\epsilon }\n\\right).\n\\end{equation}\nApplying Suzuki-Trotter formulae, one obtains a gate count of\n\\begin{equation}\nG_{\\mathrm{turn-on}} = O \\left( \\left( \\frac{L^3}{a^6 m^3 \\epsilon}\n\\right)^{1+o(1)} \\right). 
\\label{gturnon}\n\\end{equation}\n\nThe final state-preparation step is to excite particle wavepackets. We\ndo this by applying a time-dependent perturbation $\\lambda W(t)$ for\ntime $\\tau$. It is necessary to choose $\\tau$ large enough and\n$\\lambda$ small enough to suppress the production of particle\npairs. The choice of small $\\lambda$ means that there will be a substantial\nprobability that no particle is produced. Let $p_1$ denote the probability\nthat exactly one particle is produced. In a typical simulation one\nwishes to produce an initial state of two spatially separated incoming\nparticles. The probability that both of these are produced is\n$p_1^2$. The simulations in which one or both initial particles has failed \nto be created can be detected at the final measurement stage of the\nsimulation and discarded. This comes at the cost of a factor of\n$1\/p_1^2$ more repetitions of the simulation. \nThe probability $p_1$ is independent of momentum and scales with \nprecision as $p_1 \\sim \\epsilon$ (\\sect{exciting}). \nAlso, in \\sect{exciting} one finds that the total\nnumber of quantum gates needed for the excitation step is\n\\begin{equation}\nG_{\\mathrm{excite}} = \\left\\{ \\begin{array}{ll}\n\\epsilon^{-4-o(1)} \\,, & \\textrm{as} \\,\\,\\, \\epsilon \\to 0 \\,, \\\\\np^{3+o(1)} \\,, & \\textrm{as} \\,\\,\\, p \\to \\infty \\,.\n\\end{array} \\right.\n\\end{equation}\n\nIn both the high-momentum and high-precision limits, the dominant costs in\nthe algorithm are the two adiabatic state preparation steps, whose\ncomplexity is given in \\eq{gprep} and \\eq{gturnon}.\nIn the high-precision limit, to compute physical quantities such as\nscattering cross sections to within a factor of $(1+\\epsilon)$, one\nmust choose $a$ to scale as $\\epsilon$ (\\sect{EFT}). 
Also, in this limit,\nthe complexity contains a further factor of $1\/\\epsilon$ owing to\npostselection of simulations in which both wavepacket excitations have been\nsuccessful (\\sect{exciting}). \nSubstituting\n$a \\sim \\epsilon$ into \\eq{gprep} and including this extra factor of\n$1\/\\epsilon$ yield a total complexity of\n$O(\\epsilon^{-8-o(1)})$. \nIn the high-momentum limit, $a$ must\nscale as $1\/p$ to ensure that the particle wavelength is long compared\nto the lattice spacing, and $L$ must scale as $p$ to accommodate the\nexcitation step (\\sect{exciting}). \nIn summary, we obtain\n\\begin{equation}\nG_{\\mathrm{total}} = \\left\\{ \\begin{array}{ll}\nO(\\epsilon^{-8-o(1)}) \\,, & \\textrm{as} \\,\\,\\, \\epsilon \\to 0 \\,, \\\\\nO(p^{9+o(1)}) \\,, & \\textrm{as} \\,\\,\\, p \\to \\infty \\,. \\end{array} \\right.\n\\end{equation}\nNote that these are only upper bounds on the complexity, and it may be\npossible to improve them by using more detailed analysis, such as more\nspecialized adiabatic theorems.\n\n\n\\section{Qubits and Quantum Gates}\n\nWe divide the problem of simulating Hamiltonian time evolutions\nin the massive Gross-Neveu model into three subproblems. \nThe first subproblem is to represent the state of the field with qubits. \nWe do this by choosing a complete set of commuting observables and encoding \ntheir eigenvalues with strings of bits (\\sect{rep}). The second subproblem\nis to simulate local fermionic gates on the degrees of freedom defined by\nthe commuting observables. Achieving this in an efficient manner is\nnon-trivial because of the fermionic statistics. For this purpose, we employ\na technique due to Bravyi and Kitaev \\cite{Bravyi_Kitaev}, which\nimplements fermionic statistics with only logarithmic overhead in the\nnumber of lattice sites (\\sect{BK}). The third subproblem is to decompose the\ntime evolution governed by the massive Gross-Neveu Hamiltonian into a product\nof local fermionic gates. 
We do this using high-order\nSuzuki-Trotter formulae \\cite{Suzuki90} with optimizations tailored to\nthe fermionic statistics and the spatially local nature of the Hamiltonian \n(\\sect{trotter}). The local unitary transformations act on at most\n$2^{2N}$-dimensional Hilbert spaces and can therefore be efficiently\ndecomposed into elementary gates for any constant number of particle\nspecies, $N$, via the Solovay-Kitaev algorithm \\cite{Kitaev97, Dawson_Nielsen}.\n\n\n\\subsection{Representation by Qubits}\n\\label{rep}\n\nFirst, we put the massive Gross-Neveu model on a spatial lattice\n\\begin{equation}\n\\Omega = a \\mathbb{Z}_{\\hat{L}} \\,.\n\\end{equation}\nFor simplicity, we impose periodic\nboundary conditions, so that $\\Omega$ can be considered a circle of\ncircumference $L = a \\hat{L}$. \nThe Hamiltonian is\n\\begin{equation}\n\\label{h}\nH = H_0 + H_g + H_W \\,,\n\\end{equation}\nwhere\n\\begin{eqnarray}\nH_0 & = & \\sum_{\\mathbf{x} \\in \\Omega} a \\sum_{j=1}^N \\bar{\\psi}_j(\\mathbf{x}) \n\\left[ -i \\gamma^1 \\frac{\\psi_j(\\mathbf{x} + a)\n- \\psi_j(\\mathbf{x}-a)}{2a} + m_0 \\psi_j(\\mathbf{x}) \\right] \\label{h0} \\,, \\\\\nH_g & = & -\\frac{g_0^2}{2} \\sum_{\\mathbf{x} \\in \\Omega} a \\bigg( \\sum_{j=1}^N\n\\bar{\\psi}_j (\\mathbf{x}) \\psi_j(\\mathbf{x}) \\bigg)^2 \\label{hg} \\,, \\\\\nH_W & = & \\sum_{\\mathbf{x} \\in \\Omega} a \\sum_{j=1}^N \\left[ - \\frac{r}{2a} \n\\bar{\\psi}_j(\\mathbf{x}) \\left( \\psi_j(\\mathbf{x}+a) - 2 \\psi_j(\\mathbf{x}) + \\psi_j(\\mathbf{x}-a) \\right) \n\\right] \\,. \n\\label{hw} \n\\end{eqnarray}\nHere, $H_g$ is the interaction term, and $H_W$ is the Wilson term, used to\nprevent fermion doubling \\cite{Wilson74}. Correspondingly, $0 < r\n\\leq 1$ is called the Wilson parameter. 
$H$ is spatially local in the sense \nthat it consists only of single-site and nearest-neighbor terms on the\nlattice.\n\nLet $\Gamma$ denote the momentum-space lattice corresponding to\n$\Omega$, namely,\n\begin{equation}\n\Gamma = \frac{2\pi}{L} \mathbb{Z}_{\hat{L}} \,.\n\end{equation}\nWe can deduce the spectrum of $H_0 + H_W$ using\n\begin{eqnarray}\n\label{psij}\n\psi_j(\mathbf{x}) & = & \sum_{\mathbf{p} \in \Gamma} \frac{1}{L}\n\frac{1}{\sqrt{2 E_{\mathbf{p}}}} \left( a_j(\mathbf{p}) u(\mathbf{p}) e^{i \mathbf{p} \cdot \mathbf{x}}\n + b_j^\dag(\mathbf{p}) v(\mathbf{p}) e^{-i \mathbf{p} \cdot \mathbf{x}} \right) \,,\\\n\label{psidagj}\n\bar{\psi}_j(\mathbf{x}) & = & \sum_{\mathbf{p} \in \Gamma} \frac{1}{L}\n\frac{1}{\sqrt{2 E_{\mathbf{p}}}} \left( a_j^\dag(\mathbf{p}) \bar{u}(\mathbf{p}) e^{-i \mathbf{p}\n \cdot \mathbf{x}} + b_j(\mathbf{p}) \bar{v}(\mathbf{p}) e^{i \mathbf{p} \cdot \mathbf{x}} \right) \,.\n\end{eqnarray}\nThe inverse transformation is\n\begin{eqnarray}\na_j(\mathbf{p}) & = & \frac{1}{\sqrt{2 E_{\mathbf{p}}}} u^\dag(\mathbf{p}) \sum_{\mathbf{x} \in \Omega}\na e^{-i \mathbf{p} \cdot \mathbf{x}} \psi_j(\mathbf{x}) \,, \label{adef}\\\nb_j^\dag(\mathbf{p}) & = & \frac{1}{\sqrt{2 E_{\mathbf{p}}}} v^\dag(\mathbf{p}) \sum_{\mathbf{x}\n \in \Omega} a e^{i \mathbf{p} \cdot \mathbf{x}} \psi_j(\mathbf{x}) \label{bdef}\,.\n\end{eqnarray}\nSubstituting \eq{psij} and \eq{psidagj} into \eq{h0} and \eq{hw} and\nneglecting the vacuum energy, we obtain\n\begin{equation}\nH_0 + H_W = \sum_{j=1}^N \sum_{\mathbf{p} \in \Gamma} \frac{1}{L} E^{(a)}_{\mathbf{p}}(m_0)\n\left( a_j^\dag(\mathbf{p}) a_j(\mathbf{p}) + b_j^\dag(\mathbf{p}) b_j(\mathbf{p}) \right) \,,\n\end{equation}\nwhere\n\begin{equation}\nE^{(a)}_{\mathbf{p}}(m_0) = \sqrt{\left( m_0 + \frac{2r}{a} \sin^2 \left(\n \frac{\mathbf{p} a}{2} \right) \right)^2 + \frac{1}{a^2} 
\\sin^2 (\\mathbf{p} a)} \\,.\n\\label{eq:Epa}\n\\end{equation}\nFrom the canonical fermionic anticommutation relations\n\\begin{eqnarray}\n\\label{anticanon1}\n\\{ \\psi_{j,\\alpha}(\\mathbf{x}), \\psi^\\dag_{k,\\beta}(\\mathbf{y})\\} & = & a^{-1}\n\\delta_{\\mathbf{x}, \\mathbf{y}} \\delta_{j,k} \\delta_{\\alpha, \\beta} \\mathds{1} \\,, \\\\\n\\label{anticanon2}\n\\{ \\psi^\\dag_{j,\\alpha}(\\mathbf{x}), \\psi^\\dag_{k,\\beta}(\\mathbf{y}) \\} & = &\n\\{\\psi_{j,\\alpha}(\\mathbf{x}), \\psi_{k,\\beta}(\\mathbf{y}) \\} = 0 \\,,\n\\end{eqnarray}\nit follows that\n\\begin{eqnarray}\n\\label{anticanonp1}\n\\{ a_j(\\mathbf{p}), a_k^\\dag(\\mathbf{q}) \\} & = & L \\delta_{\\mathbf{p}, \\mathbf{q}} \\delta_{j,k}\n\\mathds{1} \\,, \\\\\n\\label{anticanonp2}\n\\{ b_j(\\mathbf{p}), b_k^\\dag(\\mathbf{q}) \\} & = & L \\delta_{\\mathbf{p}, \\mathbf{q}} \\delta_{j,k}\n\\mathds{1} \\,,\n\\end{eqnarray}\nwith all other anticommutators involving $a$ and $b$ operators equal to\nzero. We thus have the following interpretation: there are $N$\nindependent fermion species, created (with momentum $\\mathbf{p}$) by\n$a_1^\\dag(\\mathbf{p}),\\ldots,a_N^\\dag(\\mathbf{p})$ and annihilated by\n$a_1(\\mathbf{p}),\\ldots,a_N(\\mathbf{p})$. Similarly, for each species $j$, \n$b_j^\\dag(\\mathbf{p})$ and $b_j(\\mathbf{p})$ are the creation and annihilation\noperators for a corresponding antifermion. Thus, $H$ acts\non a Hilbert space of dimension $2^{2N\\hat{L}}$.\n\n\\medskip\n\nWe can specify a basis for the Hilbert space of field states by\nchoosing a complete set of commuting observables. The basis is then\nindexed by the set of eigenvalues of these observables. 
\nThe fermionic anticommutation relations $ \\{a,a^\\dag\\} = \\mathds{1} ,\\,\n\\{a,a\\} = 0$ imply that the algebra generated by $a$ and $a^\\dag$ has\nthe irreducible representation\n$\na \\to \\left[ \\begin{array}{cc} 0 & 1 \\\\ 0 & 0 \\end{array} \\right]\n\\,,\\,\na^\\dag \\to \\left[ \\begin{array}{cc} 0 & 0 \\\\ 1 & 0 \\end{array} \\right]\n$,\nwhich is unique up to the choice of basis. Hence, the eigenvalues of\n$a^\\dag a$ are $0$ and $1$. The two basis vectors for the space on\nwhich $a$ and $a^\\dag$ act are interpreted as the presence or absence\nof a fermion.\n\nThus, by \\eq{anticanon1} and \\eq{anticanon2},\n\\begin{equation}\nS_x = \\{a \\psi^\\dag_{j,\\alpha}(\\mathbf{x})\n\\psi_{j,\\alpha}(\\mathbf{x})|j=1,\\ldots,N; \\ \\alpha=0,1; \\ \n\\mathbf{x} \\in \\Omega \\}\n\\end{equation}\nis a set of $2N\\hat{L}$ commuting observables, each of which has\neigenvalues zero and one. Similarly, by \\eq{anticanonp1} and\n\\eq{anticanonp2}, \n\\begin{equation}\nS_p = \\{ L^{-1} a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p})| j=1,\\ldots,N;\n\\ \\mathbf{p} \\in \\Gamma \\} \\cup \\{ L^{-1} b_j^\\dag(\\mathbf{p})\nb_j(\\mathbf{p})| j=1,\\ldots,N; \\ \\mathbf{p} \\in \\Gamma \\}\n\\end{equation}\nis a set of $2N\\hat{L}$ commuting observables, each with eigenvalues\nzero and one. In the non-interacting theory, the eigenvalues of the\nelements of $S_p$ are interpreted as the fermionic occupation numbers \nof different momentum modes.\n\nThe Hamiltonian $H_0+H_W$ is called the free theory. The\neigenstates of the number operators in $S_p$ are eigenstates of\n$H_0+H_W$, and thus the particles do not interact. The rest mass\nof these non-interacting particles is $E_0^{(a)}(m_0) =\nm_0$. It is not known how to solve for the spectrum of\n$H_0+H_W+H_g$ analytically, but the eigenvalue spectrum of\n$H_0+H_W+H_g$ can still be characterized in terms of particles. 
The\nrest mass $m$ of the particles in $H_0+H_W+H_g$ is equal to the\neigenvalue gap between the ground state (also called the vacuum) and\nthe first excited state. In the interacting theory, it is no longer\ntrue that $m = m_0$. Rather, $m$ depends in a non-trivial way on $m_0$,\n$g_0$, and $a$; the mass is said to be renormalized. \nA quantitative analysis of this effect contributes to our\nanalysis of adiabatic state preparation and is given in\n\\sect{massren}.\n\nOne can represent the quantum state of the fermionic fields using\n$2N\\hat{L}$ qubits to store the eigenvalues of the elements of either\n$S_x$ or $S_p$. The ground state of\nthe free theory in the $S_p$ representation is thus \n$\\ket{000\\ldots}$. However, the ground state of the interacting theory\nis non-trivial in both\nrepresentations. We define our qubit basis in terms of the elements of\n$S_x$, because the Gross-Neveu Hamiltonian is local in this basis,\nwhich improves the scaling of the Suzuki-Trotter formulae used to\nimplement time evolution. \nHowever, we do not simply store the eigenvalues of the\nelements of $S_x$ directly as the values of the qubits. \nThis representation would be somewhat inefficient to act upon, because \ndirect implementation of the fermionic minus signs requires $O(\\hat{L})$ \ngates. Instead,\nwe apply the method of \\cite{Bravyi_Kitaev} to reduce this overhead to\n$O(\\log \\hat{L})$, as described next.\n\n\n\\subsection{Simulating Fermionic Gates}\n\\label{BK}\n\nThe implementation of fermionic gates using qubits can present a technical\nchallenge \\cite{Bravyi_Kitaev}. As an example, consider the unitary \ntransformation $U_{j,\\alpha}(\\mathbf{x}) = \\sqrt{a} \\big(\n \\psi_{j,\\alpha}(\\mathbf{x}) + \\psi^\\dag_{j,\\alpha}(\\mathbf{x})\n\\big)$. This toggles the eigenvalue of $a\n\\psi_{j,\\alpha}(\\mathbf{x}) \\psi^\\dag_{j,\\alpha}(\\mathbf{x})$ between\nzero and one. Such a toggling can be implemented on qubits with the\nNOT gate. 
However, to satisfy the fermionic anticommutation relations\n\eq{anticanon1} and \eq{anticanon2}, the sign of the transition\namplitude between the zero and one state must depend on the occupation\nof other modes. A well-known way to satisfy\n\eq{anticanon1} and \eq{anticanon2} is to use a Jordan-Wigner\ntransformation, in which the modes are given an ordering and\n$U_{j,\alpha}(\mathbf{x})$ is represented by the operator $\sigma_x\n\otimes \sigma_z \otimes \ldots \otimes \sigma_z$, where the\n$\sigma_z$ operators apply to all preceding modes\footnote{Note that\n one can apply both the Jordan-Wigner and Bravyi-Kitaev methods for\n implementing fermionic operators on quantum computers\n in any number of spatial dimensions, using an arbitrary numbering of\n modes.\n}\n\cite{Jordan_Wigner}. Unfortunately, this method clearly has an $O(\hat{L})$\noverhead. In \cite{Bravyi_Kitaev}, Bravyi and Kitaev give a method \nwith only $O(\log \hat{L})$ overhead, which we briefly review\nhere. \n\nLet $n_i$ be the occupation number of the $i\th$ fermionic mode according \nto some chosen numbering of the modes from 1 to $2N\hat{L}$. To implement\nthe minus signs in $U_{j,\alpha}(\mathbf{x})$, one needs to know\n$\sum_i n_i$, where the sum is over all preceding modes. Thus, a\nnatural encoding of fermionic mode occupation numbers is to store\nthe quantities $t_i = \sum_{j=1}^i n_j$ instead of the quantities\n$n_i$. This encoding has the advantage that calculating the relevant signs has\nan $O(1)$ cost. However, it has the disadvantage that, if the\noccupation number of the $i\th$ mode changes, then the $t_j$ values for all\n$j \geq i$ must be updated. Thus, updates have an $O(\hat{L})$ cost. 
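Classically, the logarithmic compromise between these two extremes is a binary indexed (Fenwick) tree, which stores sums of occupation numbers over dyadic blocks so that both prefix-sum queries and single-mode updates touch only logarithmically many stored values. A minimal sketch (illustrative; the precise Bravyi-Kitaev ordering is described next):

```python
class Fenwick:
    """Binary indexed tree over occupation numbers n_1..n_size (1-indexed)."""

    def __init__(self, size):
        self.size = size
        self.tree = [0] * (size + 1)

    def update(self, i, delta):
        # Change the occupation of mode i; touches O(log size) tree nodes.
        while i <= self.size:
            self.tree[i] += delta
            i += i & (-i)

    def prefix(self, i):
        # t_i = n_1 + ... + n_i, assembled from O(log size) stored block sums.
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

f = Fenwick(8)
f.update(3, 1)      # occupy mode 3
f.update(5, 1)      # occupy mode 5
print(f.prefix(4))  # parity data needed by an operator on mode 5: t_4 = 1
```

Both loops iterate over at most one tree node per bit of the index, which is the source of the $O(\log \hat{L})$ overhead quoted above.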
\nThe Bravyi-Kitaev encoding uses the following compromise, \nin which the calculation of the relevant signs and the\nupdate steps can both be performed in time $O(\log \hat{L})$.\n\nThe mode index $i \in \{1,\ldots,2N\hat{L}\}$ can be represented by a\nbit string of length $l = \lceil \log_2 (2N\hat{L}) \rceil$. One can\ndefine the following partial order on these bit strings. Consider two\nbit strings $x = x_l x_{l-1} \ldots x_1$ and $y = y_l y_{l-1} \ldots\ny_1$. Then $x \preceq y$ if, for some $r$, $x_j = y_j$ for $j > r$ and\n$y_{r-1} = y_{r-2} = \ldots = y_1 = 1$. Now, let $k_j = \sum_{s\n \preceq j} n_s$. Any total\noccupation number $t_i$ can be computed from the $k_j$ quantities in $O(\log\n\hat{L})$ time, and changing the occupation of any mode $n_j$\nrequires updating only $O(\log \hat{L})$ of the $k_j$ quantities \n\cite{Bravyi_Kitaev}.\n\nIn fact, the Bravyi-Kitaev construction is\nrelevant only to the excitation of wavepackets\n(\sect{exciting}). In all other parts of our algorithm, we simulate a\nHamiltonian in which every term is a product of an even number of fermionic\nfield operators, all acting on the same site or on\nnearest-neighbor sites in one dimension. In this case, traditional\nJordan-Wigner techniques incur only $O(1)$ overhead.\n\n\n\n\n\subsection{Application of Suzuki-Trotter Formulae to Fermionic Systems}\n\label{trotter}\n\nIn this section, we describe how to construct efficient quantum\ncircuits that simulate time evolution induced by the Hamiltonian $H$\ndefined in \eq{h}, \eq{h0}, \eq{hg}, and\n\eq{hw}. We present the case in which $H$ is time-independent. By the results\nof \cite{Suzuki93}, the same analysis applies to the simulation of the\ntime-dependent Hamiltonians that we use in adiabatic state\npreparation. 
(See also \\cite{Wiebe}.)\n\nUsing a $k^{\\mathrm{th}}$-order Suzuki-Trotter formula, one can implement\nHamiltonian time evolution of duration $t$ using a number of quantum\ngates that scales as $t^{1+\\frac{1}{2k}}$ \\cite{Suzuki90, Cleve_sim}. \nGenerally, applying a Suzuki-Trotter formula directly to a Hamiltonian of \nthe form\n\\begin{equation}\n\\label{manyterms}\nH = \\sum_{i=1}^m H_i\n\\end{equation}\nyields an algorithm with $O(m^{1+o(1)})$ timesteps, and\nhence $O(m^{2+o(1)})$ gates, if the $H_i$ are not mutually commuting. \nThus, \nit is often\nadvantageous to group terms in a Hamiltonian like \\eq{manyterms}\ninto as small a collection as possible of sets of mutually\ncommuting terms \\cite{Raeisi,longversion}. \n\nConsider the problem of simulating the Hamiltonian $H$ defined in\n\\eq{h}, \\eq{h0}, \\eq{hg}, and \\eq{hw}. By\n\\eq{anticanon1} and \\eq{anticanon2}, one\nsees that \n\\begin{equation}\n\\label{sitecom}\n[\\bar{\\psi}_j(\\mathbf{x}) \\psi_j(\\mathbf{x}), \\bar{\\psi}_k(\\mathbf{y})\n\\psi_k(\\mathbf{y})] = 0 \\,,\n\\end{equation}\nregardless of whether $j=k$ or $\\mathbf{x}=\\mathbf{y}$. 
Thus, we start by\ndecomposing $H$ as a sum of two parts, the single-site terms and the\nterms that couple nearest neighbors:\n\begin{equation}\nH = H_{\mathrm{ss}} + H_{\mathrm{nn}} \,,\n\end{equation}\nwhere\n\begin{equation}\nH_{\mathrm{ss}} = \sum_{\mathbf{x} \in \Omega} a \Bigg[ \sum_{j=1}^N\n \left( m_0 \bar{\psi}_j(\mathbf{x}) \psi_j(\mathbf{x}) + \frac{r}{a}\n \bar{\psi}_j(\mathbf{x}) \psi_j(\mathbf{x}) \right) -\n \frac{g_0^2}{2} \bigg( \sum_{j=1}^N \bar{\psi}_j(\mathbf{x})\n \psi_j(\mathbf{x}) \bigg)^2 \Bigg] \,.\n\end{equation}\nBy \eq{sitecom}, $e^{-i H_{\mathrm{ss}} \delta t}$ decomposes into a\nproduct of local unitary transformations.\n\nAll terms in $H_{\mathrm{nn}}$ are of the form\n\begin{equation}\n\label{form}\n\psi_{j,\alpha}^\dag (\mathbf{x}) \psi_{j, \beta}(\mathbf{y}) +\n\psi^\dag_{j, \beta}(\mathbf{y}) \psi_{j,\alpha}(\mathbf{x}) \,,\n\end{equation}\nfor $\mathbf{x} = \mathbf{y} \pm a$. Terms with $\alpha = \beta$ and\nterms with $\alpha \neq \beta$ are both present in $H_{\mathrm{nn}}$.\n\nGiven an operator of the form \eq{form}, let us refer to the\nsubset of $\{1,\ldots,N\} \times \{0,1\} \times \Omega$ on which it\nacts as its support. Because they consist of a product of an even number \nof fermionic operators, any two operators of the form \eq{form} commute\nprovided they have disjoint support. Thus, we next decompose\n$H_{\mathrm{nn}}$ as\n\begin{equation}\n\label{fourcolors}\nH_{\mathrm{nn}} = H_1 + H_2 + H_3 + H_4 \,,\n\end{equation}\nwhere each of $H_1,\ldots,H_4$ consists of a sum of terms with\nnon-intersecting support.\n\nIn $H_{\mathrm{nn}}$ there is no coupling between different species,\nthat is, no products of $\psi_j$ and $\psi_k$ for $j \neq\nk$. Thus, we can ignore the index $j$. We now construct a graph whose\nvertices correspond to the elements of $\{0,1\} \times \Omega$. 
We\ndraw an edge between two vertices if there exists a term in\n$H_{\mathrm{nn}}$ with the corresponding support. One sees that this\ngraph is as shown in Fig.~\ref{coloring}. The graph is\nedge-colorable with four colors, and therefore $H_{\mathrm{nn}}$ is\ncorrespondingly decomposable as in \eq{fourcolors} with each of\n$H_1,H_2,H_3,H_4$ consisting of a sum of commuting terms. (Because of\nthe periodic boundary conditions, this works only if $\hat{L}$ is\neven, which we assume henceforth.)\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=0.5\textwidth]{coloring2.eps}\n\caption{\label{coloring} Vertices represent elements of $\{0,1\}\n \times \Omega$; two vertices are connected by an edge if\n $H_{\mathrm{nn}}$ couples these sites. (Different species are never\n coupled by $H_{\mathrm{nn}}$, so the full graph with vertices\n corresponding to elements of $\{1,\ldots,N\} \times \{0,1\} \times\n \Omega$ would consist of $N$ disconnected copies of the graph\n shown.) The edges can be colored with four colors such that each node\n has no more than one incident edge of each color. \n One can obtain the decomposition\n $H_{\mathrm{nn}} = H_1 + H_2 + H_3 + H_4$ \n by choosing $H_1$ to be the sum of all interaction terms\n along the edges labeled 1 (which are blue), $H_2$ to be the sum of\n all the interaction terms along edges labeled 2 (which are red), and\n so on.}\n\end{center}\n\end{figure}\n\nThe unitary time evolution induced by $H=H_{\mathrm{ss}} +\nH_1 + H_2 + H_3 + H_4$ can be approximately decomposed via high-order\nSuzuki-Trotter formulae into a sequence of \n\begin{equation}\n\label{trottersteps}\nn_{\mathrm{S-T}} = O\big((t\/a)^{1+o(1)} \hat{L}^{o(1)} \epsilon^{-o(1)} \big)\n\end{equation}\ntime evolutions induced by individual members of $\{H_{\mathrm{ss}},\nH_1, H_2, H_3, H_4\}$. The scaling with $t$ follows from\n\cite{Suzuki90, Suzuki93}. 
The scaling with $\\hat{L}$ is a consequence\nof the spatial locality of $H$ (see \\S 4.3 of\n\\cite{longversion}), that is, the property that only \nnearest-neighbor sites are coupled. \nThe scaling with $a$ is a consequence of the fact\nthat the individual terms in the Hamiltonian each have norm at most of\norder $a^{-1}$. This affects the magnitude of the error term in the\nSuzuki-Trotter decomposition, which arises from commutators of these\nterms.\n\nEach member of $\\{H_{\\mathrm{ss}}, H_1, H_2, H_3, H_4\\}$ is a sum of\n$O(\\hat{L})$ commuting terms. The time evolution $e^{-i \\sum_i\n M_i t}$ induced by \ncommuting terms $M_i$\ndecomposes as $e^{-i \\sum_i M_i t} = \\prod_i e^{-iM_i t}$. If each\n$H_i$ acts on only a constant number of qubits, then the individual\nfactors $e^{-iH_i t}$ in this product can each be simulated in\n$\\widetilde{O}(1)$ time, by the Solovay-Kitaev theorem \\cite{Kitaev97,\n Dawson_Nielsen}. Thus, including a logarithmic overhead for fermionic\nstatistics, the cost of implementing $e^{-iJt}$ for any\n$J \\in \\{H_{\\mathrm{ss}}, H_1, H_2, H_3, H_4\\}$ is $\\widetilde{O}(\\hat{L})$.\nBy \\eq{trottersteps}, the total cost of time evolution is\n$O \\big( \\left( \\frac{tL}{a^2} \\right)^{1+o(1)} \\epsilon^{-o(1)}\n\\big)$ quantum gates.\n\n\n\\section{State Preparation and Measurement}\n\nWe divide the problem of state preparation into three steps, described\nin \\sect{freeprep}--\\sect{exciting}:\npreparing the free vacuum,\ntransforming the free vacuum into the interacting vacuum, and exciting\nwavepackets on the background of the interacting vacuum.\nTwo possible measurement procedures are described in \n\\sect{measurements} and \\sect{sec:charge}.\n\n\\subsection{Preparing the Free Vacuum}\n\\label{freeprep}\n\nAlthough the free Hamiltonian $H_0 + H_W$ is exactly solvable,\npreparing its ground state in the $S_x$ representation on a quantum computer \nis non-trivial. We do so using adiabatic state preparation, as\nfollows. 
Let\n\\begin{equation}\nH(s) = \\sum_{\\mathbf{x} \\in \\Omega} a \\sum_{j=1}^N \\bar{\\psi}_j(\\mathbf{x}) \\left[\n -si\\gamma^1 \\frac{\\psi_j(\\mathbf{x} + a) - \\psi_j(\\mathbf{x} - a)}{2a} + m\n \\psi_j(\\mathbf{x}) \\right] + s H_W.\n\\end{equation}\nThe energy gap of this Hamiltonian is equal to the parameter $m$ for all\n$s$. We set this equal to the physical mass of the particles whose\nscattering we ultimately wish to simulate.\n\n$H(0)$ is a sum of separate Hamiltonians acting on each lattice\nsite and each species of particle. Its ground state is therefore the\ntensor product of the ground states of the\nfour-dimensional Hilbert spaces associated with each pair \n$(\\mathbf{x},j) \\in \\Omega \\times \\{1,\\ldots,N\\}$. \n(Specifically, the ground state for a given site is\n$\\frac{1}{\\sqrt{2}} \\left( \\ket{01} + i\\ket{10} \\right)$, where\n$\\ket{b_0 b_1}$ with $b_0,b_1 \\in \\{0,1\\}$ denotes the state\nsatisfying $a \\psi^\\dag_{j,0}(\\mathbf{x}) \\psi_{j,0}(\\mathbf{x})\n\\ket{b_0 b_1} = b_0 \\ket{b_0 b_1}$ and\n$a \\psi^\\dag_{j,1}(\\mathbf{x}) \\psi_{j,1}(\\mathbf{x}) \\ket{b_0 b_1} =\nb_1 \\ket{b_0 b_1}$.) The cost of\nproducing this tensor product of $N \\hat{L}$ local states, \nincluding the cost of fermionic\nantisymmetrization via the encoding of \\cite{Bravyi_Kitaev},\nis $O(N \\hat{L} \\log (N \\hat{L}))$.\n\nAfter the ground state of $H_0$ has been prepared, the complexity of the\nremaining adiabatic state preparation is determined by the\nadiabatic theorem \\cite{Ruskai, Goldstone}.\n\n\\begin{theorem}\n\\label{adiabaticthm}\nLet $H(s)$ be a finite-dimensional twice differentiable Hamiltonian on\n$0 \\leq s \\leq 1$ with a non-degenerate ground state $\\ket{\\phi_0(s)}$\nseparated by an energy gap $\\gamma(s)$. Let $\\ket{\\psi(t)}$ be\nthe state obtained by Schr\\\"odinger time evolution according to the \nHamiltonian $H(t\/T)$ from the state $\\ket{\\phi_0(0)}$ at $t = 0$. 
\nThen, with an appropriate choice of phase for $\\ket{\\phi_0(t)}$, \nthe error $\\Delta \\equiv \\| \\Ket{\\psi(T)} - \\Ket{\\phi_0(1)} \\|$ satisfies\n\\begin{equation}\n\\label{adiabateq}\n\\Delta \\leq\n\\frac{1}{T} \\left[ \\frac{1}{\\gamma(0)^2} \\left\\| \\frac{\\mathrm{d} H}{\\mathrm{d} s} \\right\\|_{s=0}\n+ \\frac{1}{\\gamma(1)^2} \\left\\| \\frac{\\mathrm{d} H}{\\mathrm{d} s} \\right\\|_{s=1}\n+ \\int_0^1 \\mathrm{d} s \\left( \\frac{5}{\\gamma^3} \\left\\| \\frac{\\mathrm{d} H}{\\mathrm{d} s}\n\\right\\|^2 + \\frac{1}{\\gamma^2} \\left\\| \\frac{\\mathrm{d}^2 H}{\\mathrm{d} s^2}\n\\right\\| \\right) \\right]. \n\\end{equation}\n\\end{theorem}\n\nAnalyzing the adiabaticity of this process is relatively easy, because\n\\eq{psij} and \\eq{psidagj} diagonalize $H(s)$\n(and $\\frac{dH}{ds}$) for all $s$. One finds that the eigenvalue gap\nof $H(s)$ throughout the adiabatic path $0 \\leq s \\leq 1$ is always\nprecisely $m$. Furthermore,\n\\begin{equation}\n\\frac{dH}{ds} = \\sum_{j=1}^N \\sum_{\\mathbf{p} \\in \\Gamma} \\frac{1}{L}\nE^{(a)}_{\\mathbf{p}}(0) \\left( a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p}) + b_j^\\dag(\\mathbf{p}) b_j(\\mathbf{p})\n\\right).\n\\end{equation}\nThus,\n\\begin{equation}\n\\left\\| \\frac{dH}{ds} \\right\\| = 2N \\sum_{\\mathbf{p} \\in \\Gamma} E^{(a)}_{\\mathbf{p}}(0) \\,.\n\\end{equation}\nFor large $L$, $\\sum_{\\mathbf{p} \\in \\Gamma} \\frac{1}{L}$ becomes well\napproximated by the integral $\\int_0^{2 \\pi\/a} d \\mathbf{p}$. 
Thus, using\n\\eq{eq:Epa}, we obtain \n\\begin{eqnarray}\n\\left\\| \\frac{dH}{ds} \\right\\| & \\simeq &\n2NL \\int_0^{2 \\pi\/a} d \\mathbf{p} E^{(a)}_{\\mathbf{p}}(0) \\\\\n& = & 2NL \\int_0^{2 \\pi\/a} d \\mathbf{p} \\sqrt{\\frac{4r^2}{a^2} \\sin^4 \\left(\n \\frac{\\mathbf{p} a}{2} \\right) + \\frac{1}{a^2} \\sin^2 (\\mathbf{p} a)} \\\\\n& = & \\frac{2NL}{a^2} \\eta(r) \\,, \\label{finalderiv}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\eta(r) = \\int_0^{2 \\pi} d \\hat{p} \\sqrt{4 r^2 \\sin^4 \\Big(\n \\frac{\\hat{p}}{2} \\Big) + \\sin^2 \\left( \\hat{p} \\right)} \\,.\n\\end{equation}\nWe can therefore substitute $\\frac{d^2 H}{ds^2} = 0$, $\\left\\|\n \\frac{dH}{ds} \\right\\| = O(L a^{-2})$ and $\\gamma = m$ into\n \\eq{adiabateq}. Theorem \\ref{adiabaticthm} then shows that we can\nprepare a state with distance no more than $\\epsilon_{\\mathrm{prep}}$\nfrom the exact state using\n\\begin{equation}\nT = O \\left( \\frac{L^2}{a^4 m^3 \\epsilon_{\\mathrm{prep}}} \\right).\n\\end{equation}\nNote that the adiabatic theorem applied here, though convenient because \nof its generality, may not yield a tight upper bound on the run time.\n\n\\subsection{Preparing the Interacting Vacuum}\n\\label{turnon}\n\nGiven the ground state of the free theory, we can prepare the ground\nstate of the interacting theory by adiabatically varying the parameters\n$g_0^2$ and $m_0$ in the massive Gross-Neveu Hamiltonian, starting from\n$g_0^2 = 0$. For adiabaticity to be maintained, the physical mass must\nnot vanish at any point in the adiabatic path. By \\sect{massren}, \nthe physical mass varies with $g_0^2$ according to\n\\begin{equation}\n\\label{two-orders}\nm = m_0 - c_1 g_0^2 - c_2 g_0^4 + O(g_0^6) \\,,\n\\end{equation}\nwhere $c_1,c_2>0$ are given by\n\\begin{eqnarray}\nc_1 & = & \\frac{m}{2\\pi} \\log\\Big(\\frac{1}{ma}\\Big) + \\cdots \\,, \\\\\nc_2 & \\simeq & \\frac{m}{16\\pi^3}\\big(9.3 N - 0.07\\big)\\log^2(ma) + \\cdots\\,. 
\n\\label{c2}\n\\end{eqnarray}\n(The coefficients in \\eq{c2} were determined numerically.)\nThe vanishing of the physical mass marks the location of a quantum phase \ntransition, which cannot be adiabatically crossed. \nEquation \\eq{two-orders} indicates that the phase diagram takes the\nschematic form as shown in Fig.~\\ref{paths}.\n\nAs in \\sect{freeprep}, we parametrize our adiabatic state\npreparation by a quantity $s$, which increases over time from $0$ to\n$1$. In this second adiabatic process, the Hamiltonian is the full\nmassive Gross-Neveu Hamiltonian with $s$-dependent parameters\n$g_0^2(s)$ and $m_0(s)$. We choose $g_0^2(0) = 0$ and $m_0(0)=m$ so\nthat the initial Hamiltonian of this adiabatic process matches the\nfinal Hamiltonian of the preceding adiabatic step. Thus, the ground\nstate at $s=0$ is the free vacuum prepared in the previous step of the\nalgorithm. To keep our analysis simple, we choose a linear adiabatic\npath, namely,\n\\begin{eqnarray}\ng_0^2(s) & = & s g_0^2 \\,, \\nonumber \\\\\nm_0(s) & = & m + s \\delta_m \\,. \\label{linearpath}\n\\end{eqnarray}\nWe choose $\\delta_m$ so that the physical mass at $s=1$ is\nequal to the physical mass at $s=0$. To second order in\n$g_0^2$,\n\\begin{equation}\n\\label{deltam}\n\\delta_m = c_1 g_0^2 + c_2 g_0^4 + \\cdots \\,,\n\\end{equation}\nas illustrated in Fig.~\\ref{paths}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.25\\textwidth]{paths.eps}\n\\caption{\\label{paths} Our perturbative calculations of the physical mass\n in the massive Gross-Neveu model indicate a phase diagram with the\n qualitative features illustrated above. The phase above the dashed\n curve is accessible adiabatically from the free theory but the phase below\n is not. The arrow depicts our linear adiabatic path, described in\n \\eq{deltam}. 
Our perturbative analysis shows that the first two\n derivatives of the phase transition curve with respect to $g_0^2$\n are both positive and diverge only as $\mathrm{poly}(\log(m_0 a))$\n in the limit $a \to 0$.\n}\n\end{center}\n\end{figure}\n\nBy \eq{linearpath}, $\frac{d^2 H}{ds^2} = 0$ and \n\begin{equation}\n\label{deriv}\n\frac{dH}{ds} = \sum_{\mathbf{x} \in \Omega} a \Bigg[ \sum_{j=1}^N\n \delta_m \bar{\psi}_j(\mathbf{x}) \psi_j(\mathbf{x}) - \frac{g_0^2}{2} \bigg( \sum_{j=1}^N\n\bar{\psi}_j (\mathbf{x}) \psi_j(\mathbf{x}) \bigg)^2 \Bigg] \,.\n\end{equation}\nFurthermore, the minimal eigenvalue gaps occur at $s=0$ and $s=1$ and\nare equal to the final physical mass $m$. Thus, to apply Theorem\n\ref{adiabaticthm} we need only bound $\left\| \frac{d H}{ds}\n\right\|$. \n\nWe can deduce the spectrum of $\frac{dH}{ds}$ by the following\ntransformation:\n\begin{eqnarray}\na_j(\mathbf{x}) & = & \frac{1}{\sqrt{2}} \big( \psi_{j,0}(\mathbf{x}) - i\n \psi_{j,1}(\mathbf{x}) \big) \,,\\\nb_j^\dag(\mathbf{x}) & = & \frac{1}{\sqrt{2}} \big( \psi_{j,0}(\mathbf{x}) + i\n \psi_{j,1}(\mathbf{x}) \big) \,.\n\end{eqnarray}\nThis corresponds to\n\begin{equation}\n\label{localtrans}\n\psi_j(\mathbf{x}) = \frac{1}{\sqrt{2m_0}} \left( a_j(\mathbf{x}) u(0) + b_j^\dag(\mathbf{x})\nv(0) \right) \,,\n\end{equation}\nwhere $u,v$ are defined in \eq{concrete}. 
Using \\eq{anticanon1} and\n\\eq{anticanon2}, one can verify that\n\\begin{eqnarray}\n\\{ a_j(\\mathbf{x}), a_k^\\dag(\\mathbf{y}) \\} = \\{ b_j(\\mathbf{x}), b_k^\\dag(\\mathbf{y}) \\} \n& = & a^{-1} \\delta_{j,k} \\delta_{\\mathbf{x}, \\mathbf{y}} \\mathds{1} \\,,\\\\\n\\{ a_j(\\mathbf{x}), a_k(\\mathbf{y}) \\} = \\{ b_j(\\mathbf{x}), b_k(\\mathbf{y}) \\} & = & 0 \\,,\\\\\n\\{ a_j(\\mathbf{x}), b_k(\\mathbf{y}) \\} = \\{a_j^\\dag(\\mathbf{x}), b_k(\\mathbf{y}) \\} & = & 0 \\,.\n\\end{eqnarray}\nThus, $a_j(\\mathbf{x}),a_j^\\dag(\\mathbf{x}),b_j(\\mathbf{x}),b_j^\\dag(\\mathbf{x})$ are\ncreation and annihilation operators for $2N$ species of fermions\nlocalized on the spatial lattice. By \\eq{localtrans},\n\\begin{equation}\n\\bar{\\psi}_j(\\mathbf{x}) \\psi_j(\\mathbf{x}) = a_j^\\dag(\\mathbf{x}) a_j(\\mathbf{x}) - b_j(\\mathbf{x})\nb_j^\\dag(\\mathbf{x}) \\,, \n\\end{equation}\nfrom which we obtain\n\\begin{equation}\n\\Bigg\\| \\sum_{j=1}^N \\bar{\\psi_j}(\\mathbf{x}) \\psi_j(\\mathbf{x}) \\Bigg\\| = 2Na^{-1},\n\\end{equation}\nand hence\n\\begin{equation}\n\\left\\| \\frac{dH}{ds} \\right\\| = \\delta_m 2N\\hat{L} + \\frac{2\n \\hat{L} g_0^2 N^2}{a} \\,. \n\\end{equation}\n\nBy the results of \\sect{massren}, we find that $\\delta_m =\nO(\\log^2(ma))$. 
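The operator identities above can be checked explicitly in a minimal single-site model. The following sketch is our own illustration (not the paper's code); it assumes a Jordan-Wigner encoding of the two spinor components on two qubits and takes $\gamma^0 = \sigma^2$, the representation used in the effective-field-theory section of this paper:

```python
# Illustrative single-site check (our own sketch): Jordan-Wigner encoding of
# psi_{j,0}, psi_{j,1} on two qubits, with gamma^0 = sigma^2 assumed.
a_lat = 0.5  # lattice spacing "a" (arbitrary test value)

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def sc(c, A):
    return [[c * v for v in row] for row in A]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(4)] for i in range(4)]

def anti(A, B):
    return add(mul(A, B), mul(B, A))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

I2, Z, sm = [[1, 0], [0, 1]], [[1, 0], [0, -1]], [[0, 1], [0, 0]]
psi0 = sc(a_lat ** -0.5, kron(sm, I2))  # psi_{j,0}(x)
psi1 = sc(a_lat ** -0.5, kron(Z, sm))   # psi_{j,1}(x), with Jordan-Wigner string
a_op = sc(2 ** -0.5, add(psi0, sc(-1j, psi1)))  # a = (psi_0 - i psi_1)/sqrt(2)
b_dg = sc(2 ** -0.5, add(psi0, sc(1j, psi1)))   # b^dag = (psi_0 + i psi_1)/sqrt(2)
b_op = dag(b_dg)
I4, zero = kron(I2, I2), sc(0, kron(I2, I2))

assert close(anti(a_op, dag(a_op)), sc(1 / a_lat, I4))  # {a, a^dag} = a^{-1} 1
assert close(anti(b_op, b_dg), sc(1 / a_lat, I4))       # {b, b^dag} = a^{-1} 1
assert close(anti(a_op, b_op), zero) and close(anti(a_op, b_dg), zero)
# psibar psi = psi^dag gamma^0 psi = -i psi_0^dag psi_1 + i psi_1^dag psi_0
pbp = add(sc(-1j, mul(dag(psi0), psi1)), sc(1j, mul(dag(psi1), psi0)))
assert close(pbp, add(mul(dag(a_op), a_op), sc(-1, mul(b_op, b_dg))))
print("single-site fermion algebra verified")
```

All anticommutators and the identity $\bar{\psi}_j\psi_j = a_j^\dag a_j - b_j b_j^\dag$ hold exactly in this representation.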
Hence, recalling that $\\hat{L} = L\/a$, we obtain\n\\begin{equation}\n\\left\\| \\frac{dH}{ds} \\right\\| = O \\left( \\frac{L}{a^2} \\right).\n\\end{equation} \nTherefore, by Theorem \\ref{adiabaticthm} the diabatic error is at most\n\\begin{eqnarray}\n\\epsilon & = & O \\left( \\frac{1}{T_{\\mathrm{turn-on}}} \\frac{ \\left\\|\n \\frac{dH}{ds} \\right\\|^2}{\\gamma^3} \\right) \\\\\n& = & O \\left( \\frac{L^2}{T_{\\mathrm{turn-on}} a^4 m^3} \\right).\n\\end{eqnarray}\nIt thus suffices to choose\n\\begin{equation}\nT_{\\mathrm{turn-on}} = O \\left( \\frac{L^2}{a^4 \\epsilon m^3} \\right).\n\\end{equation}\n\nIn the above procedure, we choose our adiabatic path so that the\ninitial and final physical masses equal some user-specified value\n$m$. To achieve this, one needs to tune the quantity\n$\\delta_m$ in accordance with \\eq{linearpath} and \\eq{deltam}. For\nsufficiently weak coupling, the proper choice of $\\delta_m$ can be\ndetermined by the perturbative calculations performed in\n\\sect{massren}. In the strongly coupled case, these perturbative\ncalculations no longer provide precise guidance as to a choice of\n$\\delta_m$. Instead, as previously discussed in \\cite{longversion},\nthe adiabatic path can be determined by the quantum\ncomputer. Specifically, one can measure the physical mass at a given\ncoupling strength $g_0$ by exciting a particle and measuring energy\nvia phase estimation. This measurement guides the choice of a suitable\nadiabatic path to a slightly larger coupling strength, at which\npoint the mass can be measured again. 
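For orientation, the adiabatic-theorem bookkeeping above can be condensed into a few lines. This is our own hypothetical sketch with every $O(1)$ constant set to one, not code from the algorithm:

```python
def turn_on_time(L, a, m, eps):
    """Turn-on time with all O(1) constants dropped:
    eps = ||dH/ds||^2 / (T gamma^3), with ||dH/ds|| = O(L/a^2) and gap gamma = m."""
    dH_norm = L / a ** 2
    gap = m
    return dH_norm ** 2 / (gap ** 3 * eps)

T = turn_on_time(L=10.0, a=0.1, m=1.0, eps=0.01)
# scaling checks: T ~ L^2 / (a^4 eps m^3)
assert abs(turn_on_time(20.0, 0.1, 1.0, 0.01) - 4 * T) < 1e-9 * T    # L -> 2L
assert abs(turn_on_time(10.0, 0.05, 1.0, 0.01) - 16 * T) < 1e-9 * T  # a -> a/2
assert abs(turn_on_time(10.0, 0.1, 2.0, 0.01) - T / 8) < 1e-9 * T    # m -> 2m
```

Doubling the volume quadruples the required time, while halving the lattice spacing costs a factor of sixteen, as in the $O(L^2/(a^4 \epsilon m^3))$ scaling above.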
Iterating this process, one\ncan reach any coupling strength for which the corresponding vacuum is\nin the same quantum phase as the free vacuum.\n\n\\subsection{Exciting Wavepackets}\n\\label{exciting}\n\nAfter preparing the interacting vacuum, $\\ket{\\mathrm{vac}}$, we\nexcite wavepackets by simulating a source that varies sinusoidally in\nspace and time so as to induce excitations of some particular total energy \nand momentum by resonance. Given the physical rest mass $m$\nof the particles, we can choose this energy and momentum so that the\nonly corresponding state is a single-particle state. (For a given total\nmomentum, an unbound state of two particles will have greater energy\nthan the corresponding state of one particle. In the ultrarelativistic\nlimit, $p \\gg m$, this energy difference scales as $m^2\/p$.)\nIn the remainder of this section, we show that, using a source of\nspatial extent $l$ and duration $\\tau$, one can ensure that\nexcitations off resonance are suppressed as $\\sim \\exp \\left[\n - \\frac{1}{4} \\left( l^2 (\\mathbf{p}-\\mathbf{p}_0)^2 + \\tau^2 (E-E_0)^2 \\right)\n\\right]$. Hence, by simulating a process of duration $\\tau \\sim p\/m^2$\nand spatial extent $l \\sim p\/m^2$, one can control the incoming momentum \nand ensure that the probability of obtaining more than one particle is small.\n\nThe creation of two incoming particles has only an\n$O(\\epsilon)$ success probability, which can be compensated for by\nrepeated attempts. (See the discussion following \\eq{w}.) \nThe total complexity of preparing two particles is\nthe cost of simulating the time evolution given in \\eq{bigR} a total\nof $1\/\\epsilon$ times. Thus, by the results of \\sect{trotter},\nthe complexity is $\\big( \\frac{\\tau l}{a^2 \\epsilon}\n\\big)^{1+o(1)}$. 
Thus, since $p \\sim a^{-1}$ for fixed\n$\\epsilon$ and $a \\sim \\epsilon$ for fixed $p$, the number of quantum\ngates $G_{\\mathrm{excite}}$ needed to excite the two initial particles\nis\n\\begin{equation}\nG_{\\mathrm{excite}} \\sim \\left\\{ \\begin{array}{ll}\n\\epsilon^{-3-o(1)}\\,, & \\textrm{as} \\,\\,\\, \\epsilon \\to 0 \\,, \\\\\np^{4+o(1)} \\,, & \\textrm{as} \\,\\,\\, p \\to \\infty \\,.\n\\end{array} \\right.\n\\end{equation}\nNote also that for the initial wavepackets to be well separated, $L$\nmust be larger than $2l$. Hence, in the high-momentum limit $L \\sim\np$, which affects the complexity of other steps of the algorithm.\n\n\\subsubsection*{Perturbative Expansion}\n\\label{dyson}\n\nThe resonant excitation can be analyzed with time-dependent perturbation\ntheory. Let\n\\begin{equation}\n\\label{bigR}\nR = T \\left\\{ \\exp \\left[-i \\int_0^\\tau \\mathrm{d} t \n\\left( H+ \\lambda W(t) \\right) \\right] \\right\\} \\,,\n\\end{equation}\nwhere $T\\{ \\cdot \\}$ denotes the time-ordered product, \n$H$ is given by \\eq{h},\n\\begin{equation}\n\\label{Wt}\nW(t) = \\int \\mathrm{d} \\mathbf{x} \\left( f(t,\\mathbf{x})\\psi_{i,\\alpha}(\\mathbf{x}) +\n f^*(t,\\mathbf{x})\\psi_{i,\\alpha}^\\dag(\\mathbf{x}) \\right),\n\\end{equation}\n$i$ and $\\alpha$ are chosen\naccording to the desired type of particle, and $f(t,\\mathbf{x})$ is an \noscillatory function whose form we optimize in the next\nsubsection. The end product of the excitation process is $R\n\\ket{\\mathrm{vac}}$. 
One can expand this quantity using the Dyson\nseries, as follows:\n\\begin{equation}\n\\label{Dyson}\nR = \\mathds{1} - i \\lambda \\int_0^\\tau \\mathrm{d} t_1 W_I(t_1) + (-i \\lambda)^2\n\\int_0^\\tau \\mathrm{d} t_1 \\int_0^{t_1} \\mathrm{d} t_2 W_I(t_1) W_I(t_2) + \\cdots \\,,\n\\end{equation}\nwhere \n\\begin{equation}\n\\label{WI}\nW_I(t) = e^{iHt} W(t) e^{-iHt}\n\\end{equation}\nand the $n\\th$-order term in $\\lambda$ is\n\\begin{equation}\n(-i \\lambda)^n \\int_0^\\tau \\mathrm{d} t_1 \\ldots \\int_0^{t_{n-1}} d t_n\nW_I(t_1) \\ldots W_I(t_n) \\,.\n\\end{equation}\nThe total contribution from orders $\\lambda^2$ and higher is bounded by\n\\begin{eqnarray}\n\\left\\| \\sum_{n=2}^\\infty (-i \\lambda)^n \\int_0^\\tau \\mathrm{d} t_1\n \\ldots\n\\int_0^{t_{n-1}} \\mathrm{d} t_n W_I(t_1) \\ldots W_I(t_n) \\right\\|\n& \\leq & \\sum_{n=2}^\\infty \\frac{\\lambda^n \\tau^n}{n!} w^n \\\\\n& = & \\exp[\\lambda \\tau w] - 1 - \\lambda \\tau w \\,,\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\label{w}\nw = \\max_{0 \\leq t \\leq \\tau} \\left\\| W(t) \\right\\|.\n\\end{equation}\n\nFrom the above analysis, one sees that the Dyson series converges\nrapidly. The single-particle excitation amplitude is of order\n$\\lambda$, and the dominant error, other than non-excitation, is the\ntwo-particle excitation amplitude, which is of order\n$\\lambda^2$. Setting the two-particle excitation probability to\n$\\epsilon$, one obtains a single-particle excitation with probability\n$p_1 \\sim \\sqrt{\\epsilon}$, and non-excitation with probability on the order of\n$1-\\sqrt{\\epsilon}$. In a standard scattering simulation, one wishes\nto prepare as an initial state single-particle excitations at two\nspatially separated locations. The fraction of simulations in which\nthis is achieved (rather than one or both particles failing to be produced) \nis thus of order $p_1^2 \\sim \\epsilon$. 
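The closed form used in the tail bound above is just the tail of the exponential series; a quick numerical check (our own, with arbitrary values of $x = \lambda\tau w$):

```python
import math

# Dyson-series tail bound: sum_{n>=2} x^n / n! = exp(x) - 1 - x,
# with x = lambda * tau * w. (Illustrative check with arbitrary x values.)
def tail(x, nmax=60):
    return sum(x ** n / math.factorial(n) for n in range(2, nmax + 1))

for x in (0.01, 0.1, 0.5, 1.0):
    closed = math.exp(x) - 1.0 - x
    assert abs(tail(x) - closed) <= 1e-12 * max(1.0, closed)
print("tail of the Dyson series matches exp(x) - 1 - x")
```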
One\ncan detect such instances and compensate by repeating the\nsimulation $O(1\/p_1^2)$ times and postselecting the instances in\nwhich both particles were produced.\n\nNext, we consider the first-order excitation amplitude in more\ndetail. Let $\\ket{E,\\mathbf{p}}$ be any state with total momentum $\\mathbf{p}$ and\nenergy $E$ above the vacuum energy, so that $P \\ket{E,\\mathbf{p}} = \\mathbf{p}\n\\ket{E,\\mathbf{p}}$ and $H \\ket{E,\\mathbf{p}} = E \\ket{E,\\mathbf{p}}$, where $P$ is the total\nmomentum operator. (Here, we rely on the fact that $[H,P] = 0$.) Then,\nto first order in $\\lambda$, by \\eq{Dyson} and \\eq{WI},\n\\begin{eqnarray}\n\\bra{E,\\mathbf{p}} R \\ket{\\mathrm{vac}} \n& \\simeq & - i \\lambda \\int_0^\\tau \\mathrm{d} t \\bra{E,\\mathbf{p}} W_I(t)\n\\ket{\\mathrm{vac}} \\\\\n& = & - i \\lambda \\int_0^\\tau \\mathrm{d} t \\, e^{-iEt} \\bra{E,\\mathbf{p}} W(t)\n\\ket{\\mathrm{vac}} \\,.\n\\end{eqnarray}\nRecalling that the momentum operator is the generator of spatial\ntranslations, one has $\\psi_{i,\\alpha}(\\mathbf{x}) = e^{iP\\mathbf{x}} \\psi_{i,\\alpha}(0)\ne^{-iP\\mathbf{x}}$. Thus, to first order in $\\lambda$,\n\\begin{equation}\n\\bra{E,\\mathbf{p}} R \\ket{\\mathrm{vac}} \n\\simeq - i \\lambda \\int_0^\\tau \\mathrm{d} t \\int\n\\mathrm{d} \\mathbf{x} e^{-i(Et+\\mathbf{p}\\mathbf{x})} \\left[ f(t,\\mathbf{x}) \\bra{E,\\mathbf{p}} \\psi_{i,\\alpha}(0)\n \\ket{\\mathrm{vac}} + f^*(t,\\mathbf{x}) \\bra{E,\\mathbf{p}} \\psi^\\dag_{i,\\alpha}(0)\n \\ket{\\mathrm{vac}} \\right] \\,.\\\\\n\\end{equation}\n(Here we have used $P \\ket{\\mathrm{vac}} = 0$.) Defining $f(t,\\mathbf{x}) = 0$\nfor $t \\notin [0,\\tau]$, we can extend the time integration to\ninfinity and express $\\bra{E,\\mathbf{p}} R \\ket{\\mathrm{vac}}$ in terms of\n$\\tilde{f}$, the Fourier transform of $f$. 
For our choice of $f$,\ngiven in the next subsection, $\\tilde{f}$ is real, and therefore\n\\begin{equation}\n\\label{amp}\n\\bra{E,\\mathbf{p}} R \\ket{\\mathrm{vac}} = - i \\lambda \n\\left[ \\tilde{f}(E,\\mathbf{p}) \\bra{E,\\mathbf{p}} \\psi_{i,\\alpha}(0) \\ket{\\mathrm{vac}}\n+ \\tilde{f}(-E,-\\mathbf{p}) \\bra{E,\\mathbf{p}} \\psi_{i,\\alpha}^\\dag(0)\n\\ket{\\mathrm{vac}} \\right] + O(\\lambda^2).\n\\end{equation}\n\n\\subsubsection*{Wavepacket Shaping}\n\\label{shaping}\n\nWe now show that a Gaussian wavepacket is a good choice for $f(t,\\mathbf{x})$.\nSpecifically, for chosen constants $\\alpha, \\beta > 0$, let\n\\begin{equation}\nf(t,\\mathbf{x}) = \\left\\{ \\begin{array}{cl}\n\\eta \\exp \\left[-(\\alpha t)^2 - (\\beta \\mathbf{x})^2 - i E_0 t + i \\mathbf{p}_0 \\mathbf{x} \\right] \\,, &\n-\\tau\/2 \\leq t \\leq \\tau\/2, -l\/2 \\leq \\mathbf{x} \\leq l\/2 \\,, \\\\\n0 \\,, & \\textrm{otherwise} \\,.\n\\end{array} \\right.\n\\end{equation}\n(For convenience, we have shifted the origin of the coordinate\nsystem.) Here $\\eta$ is a normalization factor\\footnote{It is \n reasonable to choose $\\eta$ so that $\\int_0^{\\tau} dt W_I(t)\n \\ket{\\mathrm{vac}}$ is a normalized state. In the ultrarelativistic\n limit this implies that $\\eta \\sim \\left( \\alpha^2\n \\beta^4 + \\alpha^4 \\beta^2 \\right)^{1\/4}$.} with mass dimension $3\/2$. \nWith this choice of $f$,\n\\begin{equation}\n\\label{1peak}\n\\tilde{f}(E,\\mathbf{p}) = \\eta q_{\\beta,l}(\\mathbf{p}-\\mathbf{p}_0) q_{\\alpha,\\tau}(E-E_0) \\,,\n\\end{equation}\nwhere\n\\begin{equation}\nq_{\\rho,r}(d) = \\int_{-r\/2}^{r\/2} \\mathrm{d} \\mathbf{x} \\, e^{id\\mathbf{x}-(\\rho \\mathbf{x})^2}.\n\\end{equation}\nIn the limit $r \\to \\infty$, the function $q_{\\rho,r}(d)$ converges to a\nGaussian peak of width $\\sim 1\/\\rho$. 
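Both the Gaussian limit of $q_{\rho,r}$ and its finite-$r$ truncation error can be verified numerically. The following sketch is our own illustration (arbitrary test values of $\rho$, $r$, $d$; trapezoidal quadrature), comparing $q_{\rho,r}(d)$ with the $r\to\infty$ Gaussian $(\sqrt{\pi}/\rho)\,e^{-d^2/4\rho^2}$ and with the error-function tail bound $2\int_{r/2}^{\infty} e^{-(\rho x)^2}\,\mathrm{d}x \leq \frac{2}{r\rho^2} e^{-(\rho r)^2/4}$ stated in this subsection:

```python
import math

# Our own numerical check (arbitrary test values): q_{rho,r}(d) from the text,
# its r -> infinity Gaussian limit, and the error-function tail bound.
def q(rho, r, d, n=200001):
    # trapezoidal quadrature; the imaginary part cancels by symmetry
    h = r / (n - 1)
    tot = 0.0
    for k in range(n):
        x = -r / 2 + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        tot += w * math.cos(d * x) * math.exp(-(rho * x) ** 2)
    return tot * h

rho, d = 1.0, 1.5
q_inf = (math.sqrt(math.pi) / rho) * math.exp(-d * d / (4 * rho * rho))
assert abs(q(rho, 20.0, d) - q_inf) < 1e-6  # Gaussian peak of width ~ 1/rho
# |q_{rho,r} - q_{rho,inf}| <= 2 int_{r/2}^inf e^{-(rho x)^2} dx
#                            = (sqrt(pi)/rho) erfc(rho r / 2)
#                           <= (2 / (r rho^2)) e^{-(rho r)^2 / 4}
for r in (2.0, 3.0, 4.0):
    err = abs(q(rho, r, d) - q_inf)
    bound = (2.0 / (r * rho ** 2)) * math.exp(-(rho * r) ** 2 / 4)
    assert err <= bound
print("Gaussian limit and truncation bound of q_{rho,r} verified")
```

The last inequality follows from the standard tail bound $\mathrm{erfc}(z) \leq e^{-z^2}/(z\sqrt{\pi})$.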
Since $E$ must be positive, the\n$\\tilde{f}(-E,-\\mathbf{p}) \\bra{E,\\mathbf{p}} \\psi^\\dag_{i,\\alpha}(0)\n\\ket{\\mathrm{vac}}$ term in \\eq{amp} is exponentially small. Hence,\none obtains\n\\begin{equation}\n\\label{amp2}\n\\bra{E,\\mathbf{p}} R \\ket{\\mathrm{vac}} \\simeq - i \\lambda \n\\tilde{f}(E,\\mathbf{p}) \\bra{E,\\mathbf{p}} \\psi_{i,\\alpha}(0) \\ket{\\mathrm{vac}}\n\\end{equation}\nfor $E \\gg 1\/\\tau$ and $\\lambda \\ll 1$. By \\eq{amp2} and \\eq{psij}, one\nsees that $R \\ket{\\mathrm{vac}}$ is an antifermion wavepacket with\nmomentum centered around $\\mathbf{p}_0$. To create a fermion, one interchanges\n$\\psi$ and $\\psi^\\dag$ in \\eq{Wt}.\n\nUsing the asymptotics of error functions, we can furthermore bound the\ncontributions due to $r$ being finite. One finds that\n\\begin{equation}\n\\big|q_{\\rho,r}(d) - q_{\\rho,\\infty}(d)\\big| \\leq \\frac{2}{r \\rho^2}\ne^{-(\\rho r)^2\/4}.\n\\end{equation}\n\n\\subsection{Measuring Number Operators}\n\\label{measurements}\n\nRecall from \\sect{rep} that the free theory ($g_0^2 = 0$) \nis exactly solvable, with the number operators\n$L^{-1} a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p})$ counting fermions of species\n$j$ in momentum-mode $\\mathbf{p}$ and $L^{-1} b_j^\\dag(\\mathbf{p})\nb_j(\\mathbf{p})$ similarly counting antifermions. \nThus, as one possible set of measurements to perform on the final state of \nthe simulation, we propose, as in \\cite{longversion}, adiabatically returning \nto the free theory and then measuring number operators via the \nphase-estimation algorithm. \nWe analyze this measurement procedure in this section. \nAn alternative set of measurements that is more suitable when bound states\nare present is analyzed in \\sect{sec:charge}.\n\n\nThe adiabatic return to the free theory is performed in the presence\nof particle wavepackets, so the state being adiabatically\ntransformed is not an energy eigenstate. 
Different energy eigenstates\nin the superposition will acquire different dynamical phases\nduring the adiabatic process and thus, in physical terms, the simulated\nparticles will propagate. Such propagation is undesirable because we do \nnot want any scattering to occur while the interaction is being turned off. \n\nHence, we apply the same technique proposed in \\cite{longversion} to\nsuppress particle propagation: we interleave (simulated) backwards\ntime evolutions governed by time-independent Hamiltonians into the\nadiabatic process. By an analysis similar to that performed in\n\\cite{longversion}, one finds that, to ensure that a particle\npropagates no further than a distance $\\mathcal{D}$, it suffices to use\n\\begin{equation}\nJ = \\widetilde{O} \\left( \\frac{\\sqrt{\\tau}}{p \\mathcal{D}} \\right)\n\\end{equation}\nbackwards evolutions, where $\\tau$ is the duration of the original\nadiabatic process and $p$ is the momentum of the particle. Further,\none finds that the total probability of diabatically exciting one or\nmore particles is\\footnote{This result is based on the\n adiabatic criterion of \\cite{Messiah} which\n appears to be applicable \\cite{longversion} to our Hamiltonian \n although it may not apply to all Hamiltonians.}\n\\begin{equation}\nP_{\\mathrm{diabatic}} = O\\left( \\frac{J^2 L p^2}{\\tau^2} \\right).\n\\end{equation}\nHence, setting $\\mathcal{D}$ to a constant and\n$P_{\\mathrm{diabatic}}$ to $\\epsilon$, one obtains\n\\begin{equation}\n\\tau = \\widetilde{O} \\left( \\frac{L}{\\epsilon} \\right). 
\n\\end{equation}\nA process of this duration can\nbe implemented with (\\sect{trotter})\n\\begin{equation}\nG_{\\mathrm{turn-off}} = O \\left( \\left(\n \\frac{L^2}{a \\epsilon} \\right)^{1+o(1)} \\right)\n\\end{equation}\nquantum gates.\n\nThe phase-estimation algorithm \\cite{Kitaev95} enables one to\nmeasure in the eigenbasis of $L^{-1} a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p})$,\nprovided one can efficiently implement $e^{-i L^{-1} a_j^\\dag(\\mathbf{p})\n a_j(\\mathbf{p}) t}$ for various $t$ using quantum circuits. By\n\\eq{adef} and \\eq{bdef}, one sees that the problems of simulating \n$e^{-i L^{-1} a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p}) t}$ and its\nantifermionic counterpart are largely similar to the problem of\nsimulating the time evolution $e^{-iHt}$, which was analyzed in detail\nin \\sect{trotter}. However, these number operators are spatially\nnonlocal, which means that the methods of \\sect{trotter} do not\nperform well as a function of $\\hat{L}$. Instead, it is more\nefficient to use recent techniques from \\cite{BCCKS}. \n\nIn \\cite{BCCKS}, a method is described for simulating sparse\nHamiltonians in which the matrix elements are given by an oracle. As\ndiscussed on pg. 2 of \\cite{BCCKS}, in the case where the sparse\nHamiltonian consists of a sum of $d$ terms each acting on $O(1)$ qubits,\nthe number of oracle queries and non-oracle-related quantum gates both\nscale as $O(d)$. A number operator for a momentum mode consists of\n$O(\\hat{L}^2)$ terms, acting between all pairs of spatial lattice sites.\nThus, if one ignored the fermionic statistics, the number of non-oracle-related\ngates needed to simulate the time-evolution induced by a number operator\nwould be $O(\\hat{L}^2 n) = O(\\hat{L}^3)$. The number of gates needed to\nimplement one oracle query to the sparse matrix defined by the number\noperator would be $O(n)$, and number of quantum gates needed to implement\nall of the oracle queries would be $O(\\hat{L}^3)$. 
Using the Bravyi-Kitaev\nencoding for fermionic statistics adds a logarithmic factor to the\ncomplexity. Measuring all $2N\\hat{L}$ of the number operators thus has total\ncomplexity $\\widetilde{O}(\\hat{L}^4) = \\widetilde{O}(L^4\/a^4)$.\n\n\\subsection{Measuring Local Charge}\n\\label{sec:charge}\n\nIn previous work \\cite{longversion}, we proposed measuring local\nenergy observables as an alternative to returning to the free theory\nand measuring number operators. This procedure has the advantage that\nit can detect bound states. It has the disadvantage that the local\nenergy observables have ultraviolet-divergent vacuum fluctuations \nthat represent a noise background above which particle excitations must be\ndiscerned.\nIn this paper, we instead measure simpler local observables, namely \ncharges, whose vacuum fluctuations are less difficult to\ncontrol. These observables can thus detect charged bound states,\nalthough they are blind to neutral ones.\n\nFrom the equation of motion of the massive Gross-Neveu model, one finds \nthat for each $j \\in \\{1,2,\\ldots,N\\}$ the\nquantity\n\\begin{equation}\nJ^\\mu_j(x) = \\bar{\\psi}_j(x) \\gamma^\\mu \\psi_j(x)\n\\end{equation}\nobeys\n\\begin{equation}\n\\partial_\\mu J_j^\\mu = 0.\n\\end{equation}\nHence,\n\\begin{equation}\n\\widetilde{Q}_j \\equiv \\sum_{\\mathbf{x}} J_j^0(\\mathbf{x}) = \\sum_{\\mathbf{x}}\n\\bar{\\psi}_j(\\mathbf{x}) \\gamma^0 \\psi_j(\\mathbf{x})\n\\end{equation}\nis a conserved charge. Note that, for any \n$b,c \\in \\mathbb{R}$, $Q_j = b \\widetilde{Q}_j + c$ is also\nconserved. We can calibrate the charge observable by demanding that\nthe vacuum have zero charge and that particle creation change the\ncharge by $\\pm 1$. 
One satisfies these criteria with the following\ndefinition:\n\\begin{equation}\nQ_j = \\sum_{\\mathbf{x} \\in \\Omega} a \\bar{\\psi}_j(\\mathbf{x}) \\gamma^0\n\\psi_j(\\mathbf{x}) - \\hat{L} \\mathds{1} \\,.\n\\end{equation}\nBy (\\ref{psij}), (\\ref{psidagj}), and (\\ref{anticanonp2}), one finds that\n\\begin{equation}\nQ_j = \\frac{1}{L} \\sum_{\\mathbf{p} \\in \\Gamma} \\left(\n a_j^\\dag(\\mathbf{p}) a_j(\\mathbf{p}) - b_j^\\dag(\\mathbf{p})\n b_j(\\mathbf{p})\\right).\n\\end{equation}\nFor any envelope function $f:\\Omega \\to [0,1]$, one can similarly\ndefine\n\\begin{equation}\nQ_j^{(f)} = \\sum_{\\mathbf{x} \\in \\Omega} f(\\mathbf{x}) \\left( a\n \\bar{\\psi}_j(\\mathbf{x}) \\gamma^0 \\psi_j(\\mathbf{x}) - \\mathds{1} \\right).\n\\end{equation}\nIf $f$ has support only in some region $R \\subset \\Omega$, then\n$Q_j^{(f)}$ can be thought of as an observable for the charge in that\nregion. \n\nThe most obvious choice of $f$ is a square function that is\nequal to one inside $R$ and zero elsewhere. However,\na better signal-to-noise ratio can be obtained by choosing $f$ to\ndecay from one to zero more smoothly at the boundary of $R$.\nSpecifically, calculations (in Appendix \\ref{fluctuations}) show that, \nwhen $f$ is chosen to be a\nGaussian of width $R$, the variance of the observable $Q_j^{(f)}$ in\nthe vacuum state is $O(1\/mR)$, independent of the lattice spacing $a$. \nHence the noise background above which particle excitations are to be\ndetected is nondivergent in $a$ and can be brought to an arbitrarily\nlow level at the cost of increasing the detector size. \nIn practice, one will use a truncated Gaussian, replacing the\nexponentially small tails with zero at distances greater than some constant\nmultiple of $R$. This modified $f$ then has support on a region of\nsize $O(R)$, but the corresponding operator is exponentially close to\nthe Gaussian case treated by our analysis.\n\n$Q_j^{(f)}$ has eigenvalues with $O(1)$ separations. 
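The effect of truncating the Gaussian tails can be illustrated directly. In the sketch below (our own; the envelope $f(x) = e^{-x^2/R^2}$ and all numerical values are assumptions for the test), the lattice weight discarded by cutting $f$ off at $|x| > cR$ falls off like $\mathrm{erfc}(c)$:

```python
import math

# Discarded envelope weight after truncating a Gaussian f(x) = exp(-x^2/R^2)
# at |x| > c*R, summed over a lattice of spacing a (illustrative values).
def discarded_fraction(c, R=1.0, a=0.01, halfwidth=20.0):
    n = int(halfwidth / a)
    xs = [k * a for k in range(-n, n + 1)]
    total = sum(math.exp(-(x / R) ** 2) for x in xs)
    tail = sum(math.exp(-(x / R) ** 2) for x in xs if abs(x) > c * R)
    return tail / total

assert discarded_fraction(3.0) < 1e-4   # ~ erfc(3) ~ 2e-5
assert discarded_fraction(5.0) < 1e-10  # ~ erfc(5) ~ 2e-12
print("truncated envelope is exponentially close to the Gaussian one")
```

Cutting the tails a few widths out therefore changes the corresponding operator only by an exponentially small amount, consistent with the discussion above.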
Thus, measuring\n$Q_j^{(f)}$ by phase estimation entails simulating the unitary\ntransformation $\\exp\\big[i Q_j^{(f)} t\\big]$ for $t$ of order one. Because\n$Q_j^{(f)}$ is the sum of local terms, these unitary\ntransformations can be implemented by techniques similar to those\nin \\sect{trotter} with complexity $O(a^{-1-o(1)}\n\\epsilon^{-o(1)})$.\n\n\n\\section{Some Field-Theoretical Aspects}\n\nThis section describes some quantum field-theoretical calculations:\nanalysis of the effect of discretizing the spatial dimension of the \nmassive Gross-Neveu model, and the perturbative renormalization of \nthe mass in the discretized theory.\n\nIn our complexity analysis (\\sect{sec:complexity}), our criterion for\nchoosing the lattice spacing $a$ is that the scattering cross sections \nfor processes at a momentum scale $p$ in the discretized theory should \ndiffer from their continuum values by at most a factor of $(1+\\epsilon)$. \nThe results of \\sect{EFT} show that one can satisfy this criterion \nby choosing $a \\sim \\epsilon\/p$.\nThis choice then affects the overall scaling of the algorithm in the\nlarge-momentum and high-precision limits. As one would expect,\nhigher energies and greater precision require a smaller lattice spacing\nand thus a larger number of lattice sites (for fixed $L$). \nConsequently, the number of quantum gates needed to simulate\ntime evolutions via Suzuki-Trotter formulae is larger.\n\nIn \\sect{massren}, we perturbatively calculate the relationship\nbetween the bare mass $m_0$, which is a parameter in the\nlattice Hamiltonian (see \\eq{h} and \\eq{h0}), and the physical mass\n$m$ of the particles in the theory. We need to know the behavior of\n$m$ in order to design and analyze the procedure for preparing the\ninteracting vacuum (\\sect{turnon}). 
In particular, a suitable\nadiabatic path must maintain a non-zero mass, the magnitude of which\naffects the algorithmic complexity, as indicated by the adiabatic\ntheorem.\n\n\n\n\\subsection{Effects of Non-zero Lattice Spacing}\n\\label{EFT}\n\nThe effects of a non-zero lattice spacing can be analyzed via effective\nfield theory. The discretized Lagrangian can be thought of as the leading \ncontribution to an effective field theory, neglected terms of which \ncorrespond to discretization errors. Hence, the scaling of the error \nwith the lattice spacing is given by the scaling of the coefficients of \nthose terms. \n\nThe symmetries of the continuum theory restrict the possible operators \nin the effective field theory.\nConsider the discrete transformations parity (denoted $P$), \ntime reversal ($T$), and charge conjugation ($C$).\nParity changes the handedness of space and hence reverses the momentum.\nThus,\n\\begin{equation}\nP a(\\mathbf{p}) P = a(-\\mathbf{p}) \\,,\\qquad\nP b(\\mathbf{p}) P = -b(-\\mathbf{p}) \\,.\n \\label{eq:P}\n\\end{equation}\nUsing \\eq{eq:psi} and \\eq{eq:P}, we then obtain\n\\begin{equation}\nP \\psi(t,\\mathbf{x}) P = \\gamma^0\\psi(t,-\\mathbf{x}) \\,,\\qquad\nP \\bar{\\psi}(t,\\mathbf{x}) P = \\bar{\\psi}(t,-\\mathbf{x})\\gamma^0 \\,.\n\\end{equation}\nLikewise, \n\\begin{equation}\nT a(\\mathbf{p}) T = a(-\\mathbf{p}) \\,,\\qquad\nT b(\\mathbf{p}) T = -b(-\\mathbf{p}) \\,.\n\\end{equation}\nIt turns out that time reversal needs to be an antilinear operator.\nThen\n\\begin{equation}\nT \\psi(t,\\mathbf{x}) T = \\gamma^1\\psi(-t,\\mathbf{x}) \\,,\\qquad\nT \\bar{\\psi}(t,\\mathbf{x}) T = -\\bar{\\psi}(-t,\\mathbf{x})\\gamma^1 \\,.\n\\end{equation}\nFinally, charge conjugation interchanges particles and antiparticles.\nThus,\n\\begin{equation}\nC a(\\mathbf{p}) C = b(\\mathbf{p}) \\,,\\qquad\nC b(\\mathbf{p}) C = a(\\mathbf{p}) \\,,\n\\end{equation}\nand\n\\begin{equation}\nC \\psi(t,\\mathbf{x}) C = \\psi^*(t,\\mathbf{x}) 
\\,,\\qquad\nC \\bar{\\psi}(t,\\mathbf{x}) C = \\psi^T(t,\\mathbf{x})\\gamma^0 \\,.\n\\end{equation}\nOne can verify that the Lagrangian (\\ref{eq:MGN}) is invariant under\neach of the transformations $P$, $T$ and $C$.\n\nNow consider the operator $\\psi^\\dagger \\mathbb{M} \\psi$, where\n$\\mathbb{M}$ is Hermitian. \nInvariance under $P$, $T$ and $C$ requires\n\\begin{eqnarray}\n\\mathbb{M} & = & \\gamma^0 \\mathbb{M} \\gamma^0 \\,, \\\\\n\\mathbb{M} & = & -\\gamma^1 \\mathbb{M}^* \\gamma^1 \\,, \\\\\n\\mathbb{M} & = & - \\mathbb{M}^T \\,. \n\\end{eqnarray}\nThese conditions imply that\n\\begin{equation}\n\\mathbb{M} = c \\gamma^0\\,, \\,\\, c \\in \\mathbb{R} \\,.\n\\end{equation}\nLikewise, for $i\\psi^\\dagger \\mathbb{M} \\partial_\\mu \\psi$,\nwhere $\\mathbb{M}$ is Hermitian, $P$, $T$ and $C$ invariance requires\n\\begin{eqnarray}\n\\mathbb{M} & = & (-1)^\\mu \\gamma^0 \\mathbb{M} \\gamma^0 \\,, \\\\\n\\mathbb{M} & = & -(-1)^\\mu \\gamma^1 \\mathbb{M}^* \\gamma^1 \\,, \\\\\n\\mathbb{M} & = & \\mathbb{M}^T \\,. 
\n\\end{eqnarray}\nThese conditions imply that, for $\\mu=0$,\n\\begin{equation}\n\\mathbb{M} = c \\mathds{1} = c (\\gamma^0)^2 \\,, \\,\\, c \\in \\mathbb{R} \\,,\n\\end{equation}\nwhile, for $\\mu=1$,\n\\begin{equation}\n\\mathbb{M} = c \\gamma^5 = -c\\gamma^0\\gamma^1 \\,, \\,\\, c \\in \\mathbb{R} \\,.\n\\end{equation}\nThus, the only $P$-, $T$- and $C$-invariant bilinears of Dirac fields are\n$\\bar{\\psi}\\psi$ and $i\\bar{\\psi}\\gamma^\\mu\\partial_\\mu \\psi$\n($\\mu=0$ or $1$).\n\nNow consider four-fermion operators, namely, products of two bilinears.\nThe set $\\{\\mathds{1},\\sigma^i\\}$ forms a complete basis, elements of which\nsatisfy the identity\n\\begin{eqnarray}\n\\delta_{\\alpha\\beta} \\delta_{\\gamma\\delta} \n& = & \\frac{1}{2}\\big(\\delta_{\\alpha\\delta} \\delta_{\\gamma\\beta} \n+ \\sum_{i=1}^{3} \\sigma^i_{\\alpha\\delta} \\sigma^i_{\\gamma\\beta}\\big)\n\\,.\n\\end{eqnarray}\nFor \n$\\gamma^0 = \\sigma^2$, $\\gamma^1 = -i\\sigma^1$, $\\gamma^5 = \\sigma^3$,\nthis is equivalent to\n\\begin{eqnarray}\n\\delta_{\\alpha\\beta} \\delta_{\\gamma\\delta} \n& = & \\frac{1}{2}(\\delta_{\\alpha\\delta} \\delta_{\\gamma\\beta} \n+ (\\gamma^\\mu)_{\\alpha\\delta} (\\gamma_\\mu)_{\\gamma\\beta}\n+ (\\gamma^5)_{\\alpha\\delta} (\\gamma^5)_{\\gamma\\beta})\n\\,.\n \\label{eq:Fierz0}\n\\end{eqnarray}\nEquation~(\\ref{eq:Fierz0}) can be used to obtain Fierz identities.\nFor example,\n\\begin{eqnarray}\n\\bar{\\psi}_i\\psi_j \\bar{\\psi}_j\\psi_i \n& = & (\\bar{\\psi}_i)_\\alpha(\\psi_j)_\\beta \n(\\bar{\\psi}_j)_\\gamma(\\psi_i)_\\delta \\delta_{\\alpha\\beta}\\delta_{\\gamma\\delta} \n\\nonumber \\\\\n& = & -\\frac{1}{2} \\big(\\bar{\\psi}_i\\psi_i \\bar{\\psi}_j\\psi_j\n+ \\bar{\\psi}_i\\gamma^\\mu\\psi_i \\bar{\\psi}_j\\gamma_\\mu\\psi_j\n+ \\bar{\\psi}_i\\gamma^5\\psi_i \\bar{\\psi}_j\\gamma^5\\psi_j \\big)\\,,\n\\end{eqnarray}\nwhere the minus sign comes from fermion anticommutation.\nThus, any operator of the form 
\n$\\bar{\\psi}_i\\tilde\\Gamma_1\\psi_j \\bar{\\psi}_j\\tilde\\Gamma_2\\psi_i$ \ncan be rewritten as a sum of operators of the form \n$\\bar{\\psi}_i\\Gamma_1\\psi_i \\bar{\\psi}_j\\Gamma_2\\psi_j$,\nwith $\\Gamma_{1,2} \\in \\{\\mathds{1}, \\gamma^\\mu, \\gamma^5 \\}$.\n\nIf $\\Gamma_1 \\neq \\Gamma_2$, then \n$\\bar{\\psi}_i\\Gamma_1\\psi_i \\bar{\\psi}_j\\Gamma_2\\psi_j$ will violate\nat least one of the discrete symmetries.\nFurthermore, the $O(N)$ symmetry\\footnote{In fact, the massive Gross-Neveu \nmodel has an $O(2N)$ symmetry. \n} \nassociated with the $N$ fermion species restricts \nthe allowed form of operators to functions of \n$\\sum_{i=1}^N \\bar{\\psi}_i \\Gamma \\psi_i$.\nFor $i\\neq j$, $\\bar{\\psi}_i\\gamma^5\\psi_i \\bar{\\psi}_j\\gamma^5\\psi_j$\nis ruled out by invariance under $P$ (or $C$) of any single\nfield $\\psi_i$, and thus \n$\\big(\\sum_{i=1}^N \\bar{\\psi}_i \\gamma^5 \\psi_i\\big)^2$ is ruled out. \nLikewise, \n$\\bar{\\psi}_i\\gamma^\\mu\\psi_i \\bar{\\psi}_j\\gamma_\\mu\\psi_j$ ($i\\neq j$)\nand consequently\n$\\big(\\sum_{i=1}^N \\bar{\\psi}_i \\gamma^\\mu \\psi_i\\big)\\big(\\sum_{j=1}^N \\bar{\\psi}_j \\gamma_\\mu \\psi_j\\big)$ \nare ruled out.\n\nWe conclude that the only four-fermion operator (without derivatives) \nin the effective field theory is \n$(\\sum_{i=1}^N \\bar{\\psi}_i \\psi_i)^2$.\n\nEach extra derivative or factor of $\\bar{\\psi}\\Gamma\\psi$ in an operator \nwill increase its mass dimension by one; correspondingly, it will be \nsuppressed by an extra power of $a$.\nWe therefore conclude that no new unsuppressed operators are\ninduced in the effective field theory.\nThe spatial derivative in the continuum theory is approximated by a \ndifference operator, with an error of order $a$, and the Wilson term is\nalso formally of order $a$.\nSpatial discretization errors are hence of order $a$.\n\n\n\n\\subsection{Mass Renormalization}\n\\label{massren}\n\nIn this subsection, we calculate the renormalized (or physical) mass of the\ndiscretized theory, using second-order 
perturbation theory. \nA convenient way to obtain a suitable expression is to use a partially \nrenormalized form of perturbation theory (as was done in \\cite{longversion}), \nin which one uses the bare coupling but the renormalized mass. \n\nTo perform perturbative calculations, we need the Feynman rules\nfor the discretized theory.\nThe propagator is\n\\begin{equation}\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{fermiprop.eps} \n\\end{array} \n= \n\\frac{\\gamma^\\mu\\tilde{p}_\\mu+\\widetilde{m}(p)}{\\tilde{p}^2-\\widetilde{m}(p)^2} \n\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n\\tilde{p}^\\mu = \\left( p^0,\\frac{1}{a}\\sin(a p^1) \\right) ,\\qquad \n\\widetilde{m}(p) = m + \\frac{2r}{a} \\sin^2\\left(\\frac{a p^1}{2}\\right)\n\\,.\n\\end{equation}\nFor convenience, we use the standard technique of introducing an auxiliary\nfield $\\sigma$ and rewrite the Lagrangian as\n\\begin{equation}\n{\\cal L} = {\\cal L}_0 + {\\cal L}_\\sigma \\,,\n\\end{equation}\nwhere ${\\cal L}_0$ is the discretized free Lagrangian and\n\\begin{equation} \n {\\cal L}_\\sigma = -\\frac{1}{2} \\sigma^2 - g \\sigma \\bar{\\psi}_j \\psi_j\n\\,.\n\\end{equation}\nThe corresponding Feynman rules are\n\\begin{equation} \\label{y}\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{scalarprop2.eps} \n\\end{array} \n = -i \\,,\\qquad\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{y.eps} \n\\end{array} \n = -ig \\,.\n\\end{equation}\n\nAt one-loop order,\n\\begin{eqnarray}\n-i M(p)\n& = & \n\\begin{array}{l} \\includegraphics[width=0.6in]{dia1b.eps} \n\\end{array} \n+\n\\begin{array}{l} \\includegraphics[width=0.6in]{countercircle.eps} \n\\end{array} \n\\,,\n \\label{eq:diags}\n\\end{eqnarray}\nwhere the second diagram is the counterterm.\n\nThe first diagram gives\n\\begin{eqnarray}\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{dia1b.eps} \n\\end{array} \n& = & -g_0^2 \\int_{-\\infty}^{\\infty} \\frac{dk^0}{2\\pi} \n\\int_{-\\pi\/a}^{\\pi\/a} 
\\frac{dk^1}{2\\pi}\n\\frac{\\gamma^\\mu\\tilde{k}_\\mu+\\widetilde{m}(k)}{\\tilde{k}^2-\\widetilde{m}(k)^2} \n\\\\\n& = & \\frac{ig_0^2}{4\\pi a} \\int_{-\\pi}^{\\pi} dk^1\n\\frac{ma + 2r \\sin^2\\big(\\frac{k^1}{2}\\big)}{\n\\sqrt{ \\sin^2 k^1 + \\big( ma + 2r \\sin^2\\big(\\frac{k^1}{2}\\big) \\big)^2}\n}\n\\,.\n\\label{eq:m1}\n\\end{eqnarray}\nThe term in \\eq{eq:m1} proportional to $r$ scales as $1\/a$ and gives the \nmass correction to the doubler (spurious fermion). The term proportional to \n$m$ gives the following:\n\\begin{equation}\nm_0 = m - \\frac{g_0^2 m}{2\\pi} \\log(ma) + \\cdots \\,.\n\\end{equation}\n\nAt two-loop order, the 1PI amplitude has the additional contributions\n\\begin{eqnarray}\n\\begin{array}{l} \\includegraphics[width=0.6in]{rainbow3.eps}\n\\end{array}\n+\n\\begin{array}{l} \\includegraphics[width=0.6in]{countersun3.eps} \n\\end{array} \n+\n\\begin{array}{l} \\includegraphics[width=0.6in]{dia2b.eps}\n\\end{array}\n+\n\\begin{array}{l} \\includegraphics[width=0.6in]{dubub2.eps} \n\\end{array} \n\\,.\n\\nonumber\n \\label{eq:diags2}\n\\end{eqnarray}\nThe renormalization condition satisfied at first order implies that\nthe first two diagrams cancel.\n\n\n\nThe last two diagrams give\n\\begin{equation}\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{dia2b.eps} \n\\end{array} \n = -\\frac{ig_0^4}{16\\pi^3}\\left(m I_1^{(a)} + \\frac{1}{a} I_1^{(b)}\\right)\n\\end{equation}\nand\n\\begin{equation}\n\\begin{array}{l}\n\\includegraphics[width=0.6in]{dubub2.eps} \n\\end{array} \n = \\,\\frac{ig_0^4 N}{16\\pi^3}\\left(m I_2^{(a)} + \\frac{1}{a} I_2^{(b)}\\right) \n\\,,\n\\end{equation}\nwhere $I_1^{(a)}$, $I_1^{(b)}$, $I_2^{(a)}$ and $I_2^{(b)}$ are given\nin Appendix \\ref{integrals}. 
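As a consistency check on the logarithm in the one-loop mass shift, one can evaluate the $m$-proportional part of \eq{eq:m1} numerically. The sketch below is our own (it assumes Wilson parameter $r = 1$ and arbitrary small values of $\mu = ma$); the constant offset cancels in the difference between two values of $\mu$, leaving the slope $-2$ in $\log\mu$ that produces the $\log(ma)$ term:

```python
import math

# Our own numerical check: the m-proportional part of the one-loop integral,
# F(mu) = int_{-pi}^{pi} dk mu / sqrt(sin^2 k + (mu + 2 sin^2(k/2))^2),
# with Wilson parameter r = 1 assumed, satisfies F(mu)/mu = -2 log(mu) + const.
def F(mu, n=40001):
    # substitute k = mu sinh(t) to resolve the peak at k = 0 (trapezoid rule)
    tmax = math.asinh(math.pi / mu)
    h = tmax / (n - 1)
    tot = 0.0
    for i in range(n):
        t = i * h
        k = mu * math.sinh(t)
        mt = mu + 2.0 * math.sin(0.5 * k) ** 2
        w = 0.5 if i in (0, n - 1) else 1.0
        tot += w * (mu / math.hypot(math.sin(k), mt)) * mu * math.cosh(t)
    return 2.0 * tot * h  # even integrand: integral over [-pi, pi]

r1 = F(1e-3) / 1e-3
r2 = F(1e-4) / 1e-4
# the constant drops out of the difference; the slope in log(mu) should be -2
assert abs((r2 - r1) - 2.0 * math.log(10.0)) < 0.1
```

With the prefactor $g_0^2/(4\pi a)$ from \eq{eq:m1}, this $-2\log(ma)$ slope reproduces the $-\frac{g_0^2 m}{2\pi}\log(ma)$ correction quoted below it.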
Numerical evaluation of these integrals\nreveals the forms\n\\begin{eqnarray}\nI_{i}^{(b)} & = & c^{(b1)} - c^{(b2)} \\, m a + \\cdots \\,, \\\\\nI_{i}^{(a)} & = & c_{i}^{(a1)} \\log^2(ma) - c_{i}^{(a2)} \\log(ma)\n+ c_{i}^{(a3)} + \\cdots \\,, \n\\end{eqnarray}\nwith $c_i^{(j)} > 0$.\nWe thus obtain\n\\begin{equation}\nm = m^{(1)} - \\frac{g_0^4m^{(1)}}{16\\pi^3}\\big(N c_2^{(a1)}-c_1^{(a1)}\\big) \n\\log^2(m^{(1)}a)\n+ \\cdots \\,,\n\\end{equation}\nwhere $m^{(1)}$ denotes the physical mass at one-loop\norder. \n\n\\bigskip\n\\bigskip\n\n\\noindent \\textbf{Acknowledgments:}\nWe thank William George for help with numerical calculations. This\nwork was supported by NSF grant PHY-0803371, DOE grant\nDE-FG03-92-ER40701, and NSA\/ARO grant W911NF-09-1-0442. IQC and\nPerimeter Institute are supported in part by the Government of Canada\nthrough Industry Canada and by the Province of Ontario through the\nMinistry of Research and Innovation. The Institute for Quantum\nInformation and Matter (IQIM) is an NSF physics Frontiers Center with\nsupport from the Gordon and Betty Moore Foundation. S.J. and K.L. are\ngrateful for the hospitality of the IQIM (formerly IQI), Caltech,\nduring parts of this work. 
Portions of this work are a contribution of\nNIST, an agency of the US Government, and are not subject to US\ncopyright.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently, there has been growing interest in electronic and photonic systems displaying\na dispersionless flat band in the band structure \\cite{flach1,tang,im3}.\nFor the modes belonging to the flat band, the particle energy or wave frequency does not depend on the momentum or wave vector and the group velocity vanishes.\nThis has a strong effect on the behavior of quasiparticles and waves and can cause many interesting phenomena by greatly amplifying\nthe effects of various perturbations such as interactions and disorder \\cite{liu,der,goda,shukla,ley,luck}.\nExamples include superconductivity in magic-angle twisted bilayer graphene, flat-band ferromagnetism, and anomalous Landau levels \\cite{cao,lieb,mielke1,mielke2,tasaki1,pons,balents,im1}.\n\nMany models having one or more flat bands have been studied theoretically.\nTwo-dimensional (2D) lattices such as the Lieb, dice, and kagome lattices and one-dimensional (1D) lattices\nsuch as the stub, sawtooth, and diamond lattices are among the examples \\cite{dora,vic,flach,mizo}.\nThe low-energy physics of the aforementioned 2D lattices can be described by two Dirac cones intersected by a flat band\nand modeled by the pseudospin-1 Dirac equation in 2D \\cite{shen,urban,ocam,chan,kima}.\nThere have also been many recent attempts to realize flat-band systems experimentally \\cite{vicen,muk,zong,bab,slot,xie}.\n\nIn this paper, we show that there exists another interesting phenomenon which has escaped\nthe attention of researchers until now, though it should occur generically in all flat-band systems unless forbidden by symmetry.\nIn plasma physics, the phenomenon termed (somewhat ambiguously) mode conversion has been known for a long time and has played\na crucial role in explaining a variety of 
processes including the heating of solar corona and fusion plasmas and\nthe sudden appearance or disappearance of specific wave modes in space plasmas \\cite{swan,mjo,hink,pp1,pp2,ehkim,ehkim2,yu1,yu2,jkps}. The simplest example is as follows.\nIn an inhomogeneous unmagnetized plasma where the plasma density $n$ varies smoothly along the $z$ direction,\nthe plasma frequency $\\omega_p$ ($=\\sqrt{4\\pi ne^2\/m}$), where $m$ and $e$ are the mass and charge of an electron, is also a function of $z$.\nLet us consider a situation where a $p$-polarized electromagnetic (EM) wave of frequency $\\omega$ is obliquely incident on this plasma and propagates within it.\nIf there exists a resonant region where $\\omega$ is matched to the local plasma frequency, then the local dielectric permittivity\nvanishes and the transverse wave excites\na longitudinal plasma oscillation there. Since the group velocity for the plasma oscillation mode is zero, the energy of the incident\nwave is continuously converted into that of the plasma oscillation mode and is accumulated at the resonant region. Ultimately this energy will be dissipated as heat\nand contribute to the heating of the plasma.\n\nWe point out that the plasma oscillation mode is an example of flat band. 
In an inhomogeneous plasma where this band crosses\nthe dispersive band describing EM waves, the energy can flow from the (fast) dispersive mode to the (slow) flat-band mode.\nWe will demonstrate that a precisely analogous phenomenon occurs in the systems described by the pseudospin-1 Dirac equation.\nFurthermore, we will\nshow that similar phenomena take place in many other systems with flat bands\nincluding pseudospin-2 Dirac systems, continuum models derived for 1D stub and sawtooth lattices, and a 2D model with a nearly flat band.\n\n\n\\section{Pseudospin-1 Dirac equation}\n\\label{sec_sp1}\n\n\\begin{figure}\n\\centering\\includegraphics[width=6cm]{fig1.eps}\n\\caption{Sketch of the configuration considered in Sec.~\\ref{sec_sp1}. A plane wave is incident at an angle $\\theta$ from the region\n$x>L$ where $U=0$ onto the nonuniform region in $0\\le x\\le L$ where $U=U(x)$ and then transmitted at an angle $\\theta_t$ to the uniform region $x<0$ where $U=U_t$.\nIf the wave is evanescent in the region $x<0$, then the transmittance $T$ vanishes and the angle $\\theta_t$ is undefined.}\n\\label{fig_c1}\n\\end{figure}\n\nThe effective Hamiltonian that describes massive pseudospin-1 Dirac particles moving in the 2D $xy$ plane\nin a 1D scalar potential $U=U(x)$ takes the form\n\\begin{eqnarray}\n{\\mathcal H}=v_F \\left(S_x p_x+S_y p_y\\right)+UI+M V,\n\\label{eq:ham1}\n\\end{eqnarray}\nwhere $v_F$ is the Fermi velocity and $M$ ($=m{v_F}^2$) is the mass energy.\nThe $x$ and $y$ components, $S_x$ and $S_y$, of the pseudospin-1 operator are represented by\n\\begin{eqnarray}\nS_x=\\frac{1}{\\sqrt{2}}\\begin{pmatrix} 0& 1& 0\\\\ 1& 0& 1\\\\ 0& 1& 0\\end{pmatrix},~~\nS_y=\\frac{1}{\\sqrt{2}}\\begin{pmatrix} 0& -i& 0\\\\ i& 0& -i\\\\ 0& i& 0\\end{pmatrix}\n\\end{eqnarray}\nand $I$ is the $3\\times 3$ unity matrix.\nThe $x$ and $y$ components of the momentum operator, $p_x$ and $p_y$, are\n\\begin{eqnarray}\np_x=\\frac{\\hbar}{i}\\frac{d}{dx},~~p_y=\\hbar 
k_y,\n\\end{eqnarray}\nwhere $k_y$ is the $y$ component of the wave vector.\nWe assume that the mass energy $M$ is a constant.\nThe mass term $MV$ describes the generation of the band gap between the conduction and valence bands and the position of the flat band.\nFor the matrix $V$, we choose\n\\begin{eqnarray}\nV=\\begin{pmatrix} 1& 0& 0\\\\ 0& -1& 0\\\\ 0& 0& 1\\end{pmatrix}.\n\\end{eqnarray}\nThen the flat band is located at the bottom of the conduction band if $M>0$\nand at the top of the valence band if $M<0$ \\cite{ocam}.\nThe size of the band gap is $2\\vert M\\vert$.\n\nThe time-independent Dirac equation in 2D for the three-component vector wave function $\\psi$ [$=\\left( \\psi_1, \\psi_2, \\psi_3\\right)^{\\rm T}$] is\n\\begin{eqnarray}\n{\\mathcal H}\\psi=E\\psi,\n\\end{eqnarray}\nwhere $E$ is the particle energy.\nWe can eliminate $\\psi_1$ and $\\psi_3$ using the equations\n\\begin{eqnarray}\n\\psi_1&=&-\\frac{i}{\\sqrt{2}}\\frac{\\hbar v_F}{E-M-U}\\left(\\frac{d}{dx}+k_y\\right)\\psi_2,\\nonumber\\\\\n\\psi_3&=&-\\frac{i}{\\sqrt{2}}\\frac{\\hbar v_F}{E-M-U}\\left(\\frac{d}{dx}-k_y\\right)\\psi_2,\n\\label{eq:ff}\n\\end{eqnarray}\nand obtain a single wave equation for $\\psi_2$ of the form\n\\begin{eqnarray}\n&&\\frac{d}{dx}\\left(\\frac{\\hbar v_F}{E-M-U}\\frac{d\\psi_2}{dx}\\right)\\nonumber\\\\&&~~~~+\\left[\\frac{E+M-U}{\\hbar v_F}\n-\\frac{\\hbar v_F{k_y}^2}{E-M-U}\\right]\\psi_2=0.\n\\label{eq:we1}\n\\end{eqnarray}\nWe assume that a plane wave described by $\\psi_2$ is incident obliquely from the region $x>L$ where $U=0$\nonto the nonuniform region in $0\\le x\\le L$ where $U=U(x)$ and then transmitted to the uniform region $x<0$ where $U=U_t$.\nThen the wave number $k$ and the {\\it negative} $x$ component of the wave vector, $p$, in the incident region\nand the constant of motion $k_y$ are given by\n\\begin{eqnarray}\nk=\\frac{\\sqrt{E^2-{M}^2}}{\\hbar v_F},~~ p=k\\cos\\theta, ~~k_y=k\\sin\\theta,\n\\end{eqnarray}\nwhere we assume that $E >M 
\\ge 0$ and $\\theta$ is the incident angle. A sketch of the configuration considered here is shown in Fig.~\\ref{fig_c1}.\n\nWe introduce the dimensionless parameters $\\epsilon$ and $\\mu$ defined by\n\\begin{eqnarray}\n\\epsilon=1-\\frac{U}{E-M},~~\\mu=1-\\frac{U}{E+M},\n\\end{eqnarray}\nwhich are equal to each other in the massless case.\nIn the incident region, we have $\\epsilon=\\mu=1$.\nIn terms of the parameters $\\epsilon$ and $\\mu$, the wave equation, Eq.~(\\ref{eq:we1}), can be written as\n\\begin{eqnarray}\n\\frac{d}{dx}\\left(\\frac{1}{\\epsilon}\\frac{d\\psi_2}{dx}\\right)+k^2\\left(\\mu-\\frac{\\sin^2\\theta}{\\epsilon}\\right)\\psi_2=0.\n\\label{eq:we0}\n\\end{eqnarray}\nWe notice that if we replace $\\psi_2$, $\\epsilon$, and $\\mu$ with the $z$ component of the magnetic field $H_z$, the dielectric permittivity,\nand the magnetic permeability, this equation has precisely the same form as the wave equation for $p$-polarized EM waves\npropagating in the $xy$ plane. In Table \\ref{tab:table1}, we make a comparison between the pseudospin-1 Dirac equation and the $p$ wave equation in a plasma.\n\n\\begin{table*}\n\t\\caption{\\label{tab:table1} Comparison between the pseudospin-1 Dirac equation and the $p$ wave equation in a plasma.}\n\t\\begin{ruledtabular}\n\t\t\\begin{tabular}{cccccc}\n\t\t\t& & $\\epsilon$ & $\\mu$ & flat band & local oscillation\\\\\n\t\t\t\\hline\n\t\t\t& pseudospin-1 Dirac equation & $1-\\frac{U}{E-M}$ & $1-\\frac{U}{E+M}$ & $E=U+M$ & compact localized states \\\\\n\t\t\t& $p$ wave equation in a plasma & $1-\\frac{{\\omega_p}^2}{\\omega^2}$ & 1 & $\\omega=\\omega_p$ & plasmon \\\\\n\t\t\\end{tabular}\n\t\\end{ruledtabular}\n\\end{table*}\n\nWe solve the wave equation in the presence of an arbitrary potential using\nthe invariant imbedding method \\cite{kly,epl,sk1}. 
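As a consistency check on the band structure described above, the following sketch (numpy, in units $\\hbar=v_F=1$; all names are ours) diagonalizes the Hamiltonian of Eq.~(\\ref{eq:ham1}) for a constant potential and verifies the two dispersive bands $E=U\\pm\\sqrt{M^2+k^2}$ together with the flat band pinned at $E=U+M$ for every wave vector:

```python
import numpy as np

SQ = 1.0 / np.sqrt(2.0)
Sx = SQ * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = SQ * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
V = np.diag([1.0, -1.0, 1.0])

def bands(kx, ky, U=0.3, M=0.2):
    """Eigenvalues of the pseudospin-1 Hamiltonian (hbar = v_F = 1)."""
    H = Sx * kx + Sy * ky + U * np.eye(3) + M * V
    return np.linalg.eigvalsh(H)    # ascending order

U, M = 0.3, 0.2
for kx, ky in [(0.0, 0.0), (0.5, 0.0), (0.3, -0.7), (1.2, 0.4)]:
    k = np.hypot(kx, ky)
    expected = [U - np.hypot(M, k), U + M, U + np.hypot(M, k)]
    assert np.allclose(bands(kx, ky, U, M), expected)
print("flat band pinned at E = U + M for all k")
```

For $M>0$ the middle eigenvalue is always $U+M$, since $\\sqrt{M^2+k^2}\\ge M$, in agreement with the statement that the flat band sits at the bottom of the conduction band.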
In this method, we first calculate the reflection and transmission coefficients $r$ and $t$ defined\nby the wave functions in the incident and transmitted regions:\n\\begin{eqnarray}\n\\psi_2\\left(x,L\\right)=\\left\\{\\begin{array}{ll}\n e^{ip\\left(L-x\\right)}+r(L)e^{ip\\left(x-L\\right)}, & x>L \\\\\n t(L)e^{-ip^\\prime x}, & x<0\n \\end{array},\\right.\n\\end{eqnarray}\nwhere $p^\\prime$ is the negative $x$ component of the wave vector in the region $x<0$ and $r$ and $t$ are regarded as functions of $L$.\nFollowing the procedure described in \\cite{sk1}, we derive the exact differential equations for $r$ and $t$:\n\\begin{eqnarray}\n&&\\frac{1}{k}\\frac{dr}{dl}=-\\frac{i\\cos\\theta}{2}\\epsilon\\left(r-1\\right)^2\\nonumber\\\\&&~~~~~~~~~\n+\\frac{i}{2\\cos\\theta}\\left(\\mu-\\frac{\\sin^2\\theta}{\\epsilon}\\right)\\left(r+1\\right)^2,\\nonumber\\\\\n&&\\frac{1}{k}\\frac{dt}{dl}=-\\frac{i\\cos\\theta}{2}\\epsilon\\left(r-1\\right)t\\nonumber\\\\&&~~~~~~~~~\n+\\frac{i}{2\\cos\\theta}\\left(\\mu-\\frac{\\sin^2\\theta}{\\epsilon}\\right)\\left(r+1\\right)t.\n\\label{eq:imb1}\n\\end{eqnarray}\nFor any functional form of $U$ and for any values of $kL$ and $\\theta$,\nwe can integrate these equations from $l=0$ to $l=L$ using the initial conditions\n\\begin{eqnarray}\nr(0)=\\frac{\\epsilon_2\\cos\\theta-\\tilde p}{\\epsilon_2\\cos\\theta+\\tilde p},~~t(0)=\\frac{2\\epsilon_2\\cos\\theta}{\\epsilon_2\\cos\\theta+\\tilde p},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&&\\tilde p=\\left\\{\\begin{matrix} \\mbox{sgn}(\\epsilon_2)\\sqrt{\\epsilon_2\\mu_2-\\sin^2\\theta} &\\mbox{if }\\epsilon_2\\mu_2\\ge \\sin^2\\theta \\\\\ni\\sqrt{\\sin^2\\theta-\\epsilon_2\\mu_2} & \\mbox{if }\\epsilon_2\\mu_2< \\sin^2\\theta \\end{matrix}\\right.,\\nonumber\\\\\n&&\\epsilon_2=1-\\frac{U_t}{E-M},~~\\mu_2=1-\\frac{U_t}{E+M},\n\\end{eqnarray}\nand obtain $r(L)$ and $t(L)$.\nThe reflectance $R$ and the transmittance $T$ are obtained using\n\\begin{eqnarray}\nR=\\vert 
r\\vert^2,~~T=\\left\\{\\begin{matrix} \\frac{\\tilde p}{\\epsilon_2\\cos\\theta}\\vert t\\vert^2\n&\\mbox{if }\\epsilon_2\\mu_2\\ge \\sin^2\\theta \\\\ 0 & \\mbox{if }\\epsilon_2\\mu_2< \\sin^2\\theta \\end{matrix}\\right..\n\\end{eqnarray}\nIn the absence of dissipation and mode conversion, the identity $R+T=1$ is satisfied.\n\nThe initial conditions $r(0)$ and $t(0)$ are the reflection and transmission coefficients for the case where there is no inhomogeneous layer (that is, $L=0$),\nand therefore the incident region with $\\epsilon=\\mu=1$ and the transmitted region with $\\epsilon=\\epsilon_2$ and $\\mu=\\mu_2$ have a single interface at $l=0$.\nThey are derived from the continuity of $\\psi_2$ and $\\epsilon^{-1}d\\psi_2\/dx$\nat the interface and are nothing but the well-known Fresnel coefficients. In the simplest case where the incident and transmitted regions have the same potential,\nthe initial conditions are trivially given by $r(0)=0$ and $t(0)=1$.\n\nWe point out that the invariant imbedding equations, Eq.~(\\ref{eq:imb1}), become {\\it singular} at the resonance point $x_r$ where $\\epsilon=0$, which corresponds to $E=M+U(x_r)$.\nThis singularity causes mode conversion in a very similar manner to that\nof transverse EM waves to longitudinal plasma oscillations in an inhomogeneous unmagnetized plasma. When a wave described by $\\psi_2$ with finite group velocity\nis incident obliquely on the inhomogeneous layer in $0\\le x\\le L$, it propagates up to the resonance point $x=x_r$, where the dispersive wave mode is strongly and resonantly coupled to the local flat-band state and the wave energy flows to the latter. 
Since\nthe group velocity associated with the flat-band state is zero, the energy is accumulated locally and is ultimately converted into heat.\nIn the steady state, a finite fraction of the energy of the incident wave is converted into that of the flat-band state.\n\nIn order to regularize the singularity,\nwe introduce a small imaginary part of $\\epsilon$, $\\epsilon_I$ ($>0$), in Eq.~(\\ref{eq:imb1}) when calculating $r$ and $t$.\nWe find numerically that the absorptance $A$ ($=1-R-T$) converges to a finite value\nin the limit $\\epsilon_I\\rightarrow 0$, if there exists a value of $x$ such that ${\\rm Re}~\\epsilon(x)=0$ in the region $0\\le x\\le L$.\nWe emphasize that this kind of absorption is not due to dissipation but due to the conversion of a propagating wave mode into a local\noscillating mode associated with the flat band. From now on, we will refer to $A$ as the mode conversion coefficient.\n\nA clear signature of mode conversion is the occurrence of a singularity in the invariant imbedding equations, such as the $(\\sin^2\\theta)\/\\epsilon$ term in Eq.~(\\ref{eq:imb1}).\nIn systems with no flat band, no singularity appears in those equations and mode conversion does not occur. In the case of the pseudospin-1\/2\nDirac equation in the presence of inhomogeneous scalar and vector potentials, which describes single-layer graphene and does not have a flat band in its spectrum,\nthe invariant imbedding equations have been derived previously in \\cite{sk11}. 
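To make the regularization concrete, the sketch below (a minimal illustration, not the code used for the figures; the fixed step size and the relatively large $\\epsilon_I=10^{-2}$ are chosen only so that a simple fixed-step scheme suffices) integrates Eq.~(\\ref{eq:imb1}) with a fourth-order Runge-Kutta scheme for a linear ramp $U(l)=U_0(1-l\/L)$, in units $\\hbar v_F=U_0=1$, and evaluates $A=1-R-T$:

```python
import math

# Illustrative parameters (units hbar*v_F = U_0 = 1): E and M are chosen so
# that the resonance Re(eps) = 0 lies inside the layer; zeta = L = 20.
E, M, L = 0.4, 0.2, 20.0
theta = math.radians(30.0)
eps_I = 1e-2                     # regularizing imaginary part (the paper uses 1e-8)
k = math.sqrt(E * E - M * M)
c, s2 = math.cos(theta), math.sin(theta) ** 2

def rhs(l, r, t):
    """Right-hand sides of Eq. (eq:imb1) for the ramp U(l) = 1 - l/L."""
    U = 1.0 - l / L
    eps = 1.0 - U / (E - M) + 1j * eps_I
    mu = 1.0 - U / (E + M)
    f = -0.5j * c * eps
    g = 0.5j * (mu - s2 / eps) / c
    return k * (f * (r - 1) ** 2 + g * (r + 1) ** 2), k * (f * (r - 1) + g * (r + 1)) * t

# Fresnel-type initial conditions at the single interface with U_t = U_0
eps2, mu2 = 1.0 - 1.0 / (E - M), 1.0 - 1.0 / (E + M)
pt = math.copysign(1.0, eps2) * math.sqrt(eps2 * mu2 - s2)  # eps2*mu2 > sin^2(theta) here
r = (eps2 * c - pt) / (eps2 * c + pt)
t = 2.0 * eps2 * c / (eps2 * c + pt)

n = 40_000
h = L / n
for i in range(n):               # classical fixed-step RK4 in the layer thickness l
    l = i * h
    k1r, k1t = rhs(l, r, t)
    k2r, k2t = rhs(l + h / 2, r + h / 2 * k1r, t + h / 2 * k1t)
    k3r, k3t = rhs(l + h / 2, r + h / 2 * k2r, t + h / 2 * k2t)
    k4r, k4t = rhs(l + h, r + h * k3r, t + h * k3t)
    r += h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    t += h / 6 * (k1t + 2 * k2t + 2 * k3t + k4t)

R, T = abs(r) ** 2, pt / (eps2 * c) * abs(t) ** 2
A = 1.0 - R - T                  # mode conversion (plus the small eps_I dissipation)
print(f"R = {R:.3f}, T = {T:.3f}, A = {A:.3f}")
```

A clearly nonzero $A$ signals mode conversion at the point where ${\\rm Re}~\\epsilon=0$; repeating the run with an energy above $U_0+M$, for which no resonance exists inside the layer, should drive $A$ down to the small residual set by $\\epsilon_I$.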
It has been verified that there appears no singularity and therefore no mode conversion.\n\n\\begin{figure}\n\\centering\\includegraphics[width=8cm]{fig2.eps}\n\\caption{Mode conversion coefficient $A$ obtained by solving Eq.~(\\ref{eq:imb1}) for the configuration given by Eq.~(\\ref{eq:slow}) plotted versus incident angle $\\theta$,\n(a) when $\\zeta\\equiv U_0L\/(\\hbar v_F)=20$, $M\/U_0=0.2$, and $E\/U_0=0.4$, 0.7, 1 and (b) when $E\/U_0=0.4$, $M\/U_0=0.2$, and $\\zeta=1$, 10, 100, 1000.\nIn all calculations, $\\epsilon_I$ is chosen to be $10^{-8}$.}\n\\label{fig1}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=8.5cm]{fig3.eps}\n\\caption{Color graph of the mode conversion coefficient $A$ obtained by solving Eq.~(7) for the configuration given by Eq.~(8) as a function of $\\theta$\nand $(E-M)\/U_0$, when $M\/U_0=0.2$, $\\zeta=20$, and $\\epsilon_I=10^{-8}$.\n$A$ vanishes for all $\\theta$ if $(E-M)\/U_0> 1$.}\n\\label{sfig1}\n\\end{figure}\n\nTo illustrate the mode conversion phenomenon, we consider a simple linear configuration of the potential\n\\begin{eqnarray}\n \\frac{U(x)}{U_0}=\\left\\{ \\begin{array}{ll}\n 1,& \\mbox{if } x<0\\\\\n 1-\\frac{x}{L},& \\mbox{if } 0\\le x\\le L\\\\\n 0, & \\mbox{if } x>L\n \\end{array} \\right..\n\\label{eq:slow}\n\\end{eqnarray}\nThe resonance occurs inside the region $0\\le x\\le L$ if\nthe energy satisfies $M 1$.}\n\\label{f1}\n\\end{figure}\n\n\\section{Pseudospin-2 Dirac equation}\n\\label{sec_sp2}\n\nThe band structure of pseudospin-$N$ Dirac systems with $N$ a positive integer consists of $2N$ dispersive\nbands (that is, Dirac cones) and one flat band \\cite{dora}. 
Therefore we expect all of these systems to display mode conversion.\nWe consider here the case of pseudospin-2 Dirac systems \\cite{feng}.\nThe Hamiltonian that describes massless pseudospin-2 Dirac particles in 2D\nin a 1D scalar potential $U=U(x)$ has a similar form as Eq.~(\\ref{eq:ham1}), but with $M=0$ and\n$S_x$ and $S_y$ given by\n\\begin{eqnarray}\n&&S_x=\\frac{1}{2}\\begin{pmatrix} 0& 2& 0 &0 &0\\\\ 2& 0& \\sqrt{6} &0 &0\\\\ 0& \\sqrt{6}& 0 &\\sqrt{6} &0\\\\ 0& 0& \\sqrt{6} &0 &2\\\\ 0 &0 &0 &2 &0 \\end{pmatrix},\\nonumber\\\\\n&&S_y=\\frac{i}{2}\\begin{pmatrix} 0& -2& 0 &0 &0\\\\ 2& 0& -\\sqrt{6} &0 &0\\\\ 0& \\sqrt{6}& 0 & -\\sqrt{6} &0\\\\ 0& 0& \\sqrt{6} &0 & -2\\\\ 0 &0 &0 &2 &0 \\end{pmatrix}.\n\\end{eqnarray}\n\nIn the uniform case where the potential $U$ is constant, the eigenvalues of the Hamiltonian are given by\n\\begin{eqnarray}\n&&E=U,\\nonumber\\\\\n&&E=U\\pm \\hbar v_F\\sqrt{{k_x}^2+{k_y}^2},\\nonumber\\\\\n&&E=U\\pm 2\\hbar v_F\\sqrt{{k_x}^2+{k_y}^2}.\n\\end{eqnarray}\nTherefore the spectrum consists of two pairs of Dirac cones with different slopes which are intersected at the common apex by the flat band.\nWe now allow for the $x$ dependence of the potential $U$.\nStarting from the pseudospin-2 Dirac equation for the five-component vector wave function $\\psi$ [$=\\left( \\psi_1, \\psi_2, \\psi_3, \\psi_4, \\psi_5 \\right)^{\\rm T}$],\nwe can eliminate $\\psi_1$, $\\psi_3$,\nand $\\psi_5$ and derive two coupled wave equations for $\\psi_2$ and $\\psi_4$ of the form\n\\begin{eqnarray}\n\\frac{d}{dx}\\left(A\\frac{d\\Psi}{dx} +B\\Psi\\right)+C\\left(A\\frac{d\\Psi}{dx} +B\\Psi\\right)+D\\Psi=0,\\nonumber\\\\\n\\label{eq:s2}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&& \\Psi=\\begin{pmatrix} \\psi_2 \\\\ \\psi_4 \\end{pmatrix},~~A=\\frac{1}{\\epsilon}\\begin{pmatrix} 1 & 0 \\\\ 0 & 1 \\end{pmatrix},\\nonumber\\\\ && B=\\frac{k_y}{4\\epsilon}\\begin{pmatrix} 1 & 3 \\\\ -3 & -1 \\end{pmatrix},~~\nC=\\frac{k_y}{8}\\begin{pmatrix} 7 & 9\\\\ 
-9 & -7 \\end{pmatrix},\\nonumber\\\\ && D=\\frac{\\epsilon {k_0}^2}{8}\\begin{pmatrix} 5 & -3 \\\\ -3 & 5 \\end{pmatrix}+\\frac{3{k_y}^2}{2\\epsilon}\\begin{pmatrix} -1 & 1 \\\\ 1 & -1 \\end{pmatrix},\\nonumber\\\\\n&&\\epsilon=1-\\frac{U}{E},~~k_0=\\frac{E}{\\hbar v_F}.\n\\label{eq:abc}\n\\end{eqnarray}\nIn the uniform region where $U$ is constant, there are four solutions for the $x$ component of the wave vector, $p$, obtained from Eq.~(\\ref{eq:s2}), which are\n\\begin{eqnarray}\np=\\pm\\sqrt{{k_0}^2\\epsilon^2-{k_y}^2},~~p=\\pm\\sqrt{\\frac{1}{4}{k_0}^2\\epsilon^2-{k_y}^2}.\n\\end{eqnarray}\nThe $\\pm$ signs represent the direction of the phase velocity.\nWe notice that there are two orthogonal eigenmodes obtained as linear combinations of $\\psi_2$ and $\\psi_4$, which are associated with the inner and outer cones and called\nhere as $a$ and $b$ modes respectively. These modes are characterized by different effective refractive indices\n$\\epsilon$ and $\\epsilon\/2$ and the system is birefringent. 
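The five-band spectrum quoted above, a flat band at $E=U$ intersecting two pairs of Dirac cones, is easy to verify numerically. A minimal numpy sketch (units $\\hbar=v_F=1$; names are ours):

```python
import numpy as np

s6 = np.sqrt(6.0)
Sx = 0.5 * np.array([[0, 2, 0, 0, 0],
                     [2, 0, s6, 0, 0],
                     [0, s6, 0, s6, 0],
                     [0, 0, s6, 0, 2],
                     [0, 0, 0, 2, 0]], dtype=complex)
Sy = 0.5j * np.array([[0, -2, 0, 0, 0],
                      [2, 0, -s6, 0, 0],
                      [0, s6, 0, -s6, 0],
                      [0, 0, s6, 0, -2],
                      [0, 0, 0, 2, 0]])

U = 0.3
for kx, ky in [(1.0, 0.0), (0.6, -0.8), (0.2, 0.9)]:
    k = np.hypot(kx, ky)
    ev = np.linalg.eigvalsh(Sx * kx + Sy * ky + U * np.eye(5))
    # flat band at U, plus cones with slopes 1 and 2
    assert np.allclose(ev, [U - 2 * k, U - k, U, U + k, U + 2 * k])
print("spectrum: flat band at E = U plus two pairs of Dirac cones")
```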
The $a$ and $b$ modes can alternatively be called the $h=1$ and $h=2$ modes, respectively, where $h$ refers to the helicity.\nThe effect of the flat band is absorbed into the coefficients of Eq.~(\\ref{eq:s2}).\nIn inhomogeneous media, $a$ and $b$ modes\ninteract with each other and with the local flat band mode.\n\nIn the uniform region, we can show that there exists a linear proportionality relation between $\\psi_2$ and $\\psi_4$, which is different for $a$ and $b$ modes and for the left-moving\nand right-moving waves.\nWe obtain\n\\begin{eqnarray}\n&&{\\psi_4}^{(la)}=\\eta_{la}{\\psi_2}^{(la)},~~{\\psi_4}^{(lb)}=\\eta_{lb}{\\psi_2}^{(lb)},\\nonumber\\\\&& {\\psi_4}^{(ra)}=\\eta_{ra}{\\psi_2}^{(ra)},~~{\\psi_4}^{(rb)}=\\eta_{rb}{\\psi_2}^{(rb)},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&&\\eta_{la}=-\\frac{{k_0}^2\\epsilon^2}{\\left(-\\sqrt{{k_0}^2\\epsilon^2-{k_y}^2}-ik_y\\right)^2},\\nonumber\\\\\n&&\\eta_{lb}=\\frac{\\frac{1}{4}{k_0}^2\\epsilon^2}{\\left(-\\sqrt{\\frac{1}{4}{k_0}^2\\epsilon^2-{k_y}^2}-ik_y\\right)^2},\n\\nonumber\\\\ && \\eta_{ra}=-\\frac{{k_0}^2\\epsilon^2}{\\left(\\sqrt{{k_0}^2\\epsilon^2-{k_y}^2}-ik_y\\right)^2},\\nonumber\\\\\n&&\\eta_{rb}=\\frac{\\frac{1}{4}{k_0}^2\\epsilon^2}{\\left(\\sqrt{\\frac{1}{4}{k_0}^2\\epsilon^2-{k_y}^2}-ik_y\\right)^2}.\n\\label{eq:eta}\n\\end{eqnarray}\nThe wave function is expanded in terms of $a$ and $b$ modes as\n\\begin{eqnarray}\n\\Psi=\\begin{pmatrix} \\psi_2 \\\\ \\psi_4 \\end{pmatrix}&=&\\begin{pmatrix} {\\psi_2}^{(la)}+{\\psi_2}^{(lb)}+{\\psi_2}^{(ra)}+{\\psi_2}^{(rb)}\n\\\\ {\\psi_4}^{(la)}+{\\psi_4}^{(lb)}+{\\psi_4}^{(ra)}+{\\psi_4}^{(rb)} \\end{pmatrix}\\nonumber\\\\&=&N_l \\begin{pmatrix} {\\psi_2}^{(la)} \\\\ {\\psi_2}^{(lb)} \\end{pmatrix}\n+N_r \\begin{pmatrix} {\\psi_2}^{(ra)} \\\\ {\\psi_2}^{(rb)} \\end{pmatrix},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nN_l=\\begin{pmatrix} 1 & 1 \\\\ \\eta_{la} & \\eta_{lb} \\end{pmatrix},~~N_r=\\begin{pmatrix} 1 & 1 \\\\ \\eta_{ra} & \\eta_{rb} 
\\end{pmatrix}.\n\\label{eq:cf}\n\\end{eqnarray}\nSince there are two propagating wave modes, we need to define the reflection and transmission coefficients $r$ and $t$ as $2\\times 2$ matrices.\nIn our notation, $r_{21}$ ($r_{11}$) is the reflection coefficient when the incident wave is $a$ mode and the reflected wave is $b$ ($a$) mode.\nSimilarly, $r_{12}$ ($r_{22}$) is the reflection coefficient when the incident wave is $b$ mode and the reflected wave is $a$ ($b$) mode.\nSimilar definitions are applied to the transmission coefficients.\n\nFollowing the procedure given in \\cite{sk1}, we derive the invariant imbedding equations for $r$ and $t$:\n\\begin{widetext}\n\\begin{eqnarray}\n&&\\frac{dr}{dl}={N_{ri}}^{-1}\\left\\{-A^{-1}B+i\\left(N_{li}+N_{ri}r\\right)\\left(A_iN_{li}P_i+A_iN_{ri}P_i{N_{ri}}^{-1}N_{li}\\right)^{-1}\n\\left[-\\left(iA_iN_{ri}P_i{N_{ri}}^{-1}+B_i\\right)A^{-1}B+D\\right]\\right\\}\\nonumber\\\\\n&&~~~~~~~~~\\times\\left(N_{li}+N_{ri}r\\right)\\nonumber\\\\\n&&~~~~~~+{N_{ri}}^{-1}\\left\\{A^{-1}+i\\left(N_{li}+N_{ri}r\\right)\\left(A_iN_{li}P_i+A_iN_{ri}P_i{N_{ri}}^{-1}N_{li}\\right)^{-1}\n\\left[\\left(iA_iN_{ri}P_i{N_{ri}}^{-1}+B_i\\right)A^{-1}+C\\right]\\right\\}\\nonumber\\\\\n&&~~~~~~~~~\\times\\left[iA_i\\left(-N_{li}P_i+N_{ri}P_ir\\right)+B_i\\left(N_{li}+N_{ri}r\\right)\\right],\\nonumber\\\\\n&&\\frac{dt}{dl}=it\\left(A_i N_{li}P_i+A_i N_{ri}P_i{N_{ri}}^{-1}N_{li}\\right)^{-1}\\Big\\{\\left[-\\left(iA_i N_{ri}P_i{N_{ri}}^{-1}+B_i\\right)A^{-1}B+D\\right]\\left(N_{li}+N_{ri}r\\right)\\nonumber\\\\\n&&~~~~~~+\\left[\\left(iA_iN_{ri}P_i{N_{ri}}^{-1}+B_i\\right)A^{-1}+C\\right]\\left[iA_i\\left(-N_{li}P_i+N_{ri}P_ir\\right)+B_i\\left(N_{li}+N_{ri}r\\right)\\right]\\Big\\},\n\\label{eq:imb2}\n\\end{eqnarray}\n\\end{widetext}\nwhere $A_i$, $B_i$, $N_{li}$, and $N_{ri}$ are the values of $A$, $B$, $N_l$, and $N_r$ in the incident region obtained by setting $\\epsilon=\\epsilon_i=1$ in Eqs.~(\\ref{eq:abc}) and (\\ref{eq:eta}).\nThese 
equations are integrated using the initial conditions of the form\n\\begin{widetext}\n\\begin{eqnarray}\n&&r(0)=\\left(A_i N_{ri}P_i+A_t N_{lt}P_t{N_{lt}}^{-1}N_{ri}-i B_i N_{ri}+iB_t N_{ri}\\right)^{-1}\n\\left(A_i N_{li}P_i+A_i N_{ri}P_i{N_{ri}}^{-1}N_{li}\\right)-{N_{ri}}^{-1}N_{li},\\nonumber\\\\\n&&t(0)=\\left(A_i N_{ri}P_i{N_{ri}}^{-1}N_{lt}+ A_t N_{lt}P_t-i B_i N_{lt}+i B_t N_{lt}\\right)^{-1}\n\\left( A_i N_{li}P_i+A_i N_{ri}P_i{N_{ri}}^{-1}N_{li}\\right),\n\\label{eq:ic2}\n\\end{eqnarray}\n\\end{widetext}\nwhere $A_t$, $B_t$, and $N_{lt}$ are the values of $A$, $B$, and $N_l$ in the transmitted region obtained by setting $\\epsilon=\\epsilon_t$ in Eqs.~(\\ref{eq:abc}) and (\\ref{eq:eta}).\nThe matrices $P_i$ and $P_t$ in Eqs.~(\\ref{eq:imb2}) and (\\ref{eq:ic2}) are defined by\n\\begin{eqnarray}\nP_i=\\begin{pmatrix} p_{ai} & 0 \\\\ 0 & p_{bi} \\end{pmatrix},~~P_t=\\begin{pmatrix} p_{at} & 0 \\\\ 0 & p_{bt} \\end{pmatrix},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&&p_{ai}=\\sqrt{{k_0}^2{\\epsilon_i}^2-{k_y}^2},~~p_{bi}=\\sqrt{\\frac{1}{4}{k_0}^2{\\epsilon_i}^2-{k_y}^2},\\nonumber\\\\&&\np_{at}=\\sqrt{{k_0}^2{\\epsilon_t}^2-{k_y}^2},~~p_{bt}=\\sqrt{\\frac{1}{4}{k_0}^2{\\epsilon_t}^2-{k_y}^2}.\n\\label{eq:wns}\n\\end{eqnarray}\nThese initial conditions have been obtained following the procedure and using Eq.~(18) given in \\cite{sk1}.\nIf the incident and transmitted regions have the same potential, they reduce to very simple $2\\times 2$ matrices $r(0)=0$ and $t(0)=I$.\n\nWe assume that the incident waves are propagating waves with a real-valued wave vector. The effective refractive index associated with the $a$ mode is twice as large as that of the $b$ mode. When an $a$ mode wave is incident from the region where $\\epsilon_i=1$, $p_{ai}$ is\nreal and the incident angle $\\theta$ is related to $k_y$ by $k_y=k_0\\sin\\theta$. 
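As a sanity check on Eq.~(\\ref{eq:ic2}), one can verify numerically that the initial conditions reduce to $r(0)=0$ and $t(0)=I$ when the incident and transmitted regions have the same potential ($\\epsilon_t=\\epsilon_i=1$). A sketch (numpy; the helper names are ours, and $\\theta<30^\\circ$ is chosen so that both modes propagate):

```python
import numpy as np

k0 = 1.0
theta = np.radians(20.0)          # below 30 deg: both a and b modes propagate
ky = k0 * np.sin(theta)
pa = np.sqrt(k0**2 - ky**2)       # a- and b-mode wave numbers for eps = 1
pb = np.sqrt(0.25 * k0**2 - ky**2)

# Proportionality factors of Eq. (eq:eta) with eps = 1
eta_la = -k0**2 / (-pa - 1j * ky) ** 2
eta_lb = 0.25 * k0**2 / (-pb - 1j * ky) ** 2
eta_ra = -k0**2 / (pa - 1j * ky) ** 2
eta_rb = 0.25 * k0**2 / (pb - 1j * ky) ** 2
Nl = np.array([[1, 1], [eta_la, eta_lb]])
Nr = np.array([[1, 1], [eta_ra, eta_rb]])

A = np.eye(2)                     # A = (1/eps) I with eps = 1
B = (ky / 4.0) * np.array([[1.0, 3.0], [-3.0, -1.0]])
P = np.diag([pa, pb])
inv = np.linalg.inv

# Eq. (eq:ic2) with identical incident and transmitted regions
# (A_t = A_i, B_t = B_i, N_lt = N_li, P_t = P_i; the B terms cancel)
lhs_r = A @ Nr @ P + A @ Nl @ P @ inv(Nl) @ Nr - 1j * B @ Nr + 1j * B @ Nr
lhs_t = A @ Nr @ P @ inv(Nr) @ Nl + A @ Nl @ P - 1j * B @ Nl + 1j * B @ Nl
common = A @ Nl @ P + A @ Nr @ P @ inv(Nr) @ Nl
r0 = inv(lhs_r) @ common - inv(Nr) @ Nl
t0 = inv(lhs_t) @ common
print(np.allclose(r0, 0), np.allclose(t0, np.eye(2)))
```

Both conditions hold to machine precision, confirming the stated reduction to $r(0)=0$ and $t(0)=I$.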
In this case, if $\\theta$ is greater than $30^\\circ$, we note that $p_{bi}$ becomes imaginary\nand the reflected $b$ wave is evanescent, while the reflected $a$ wave is propagative. However, when a $b$ mode wave is incident, the incident angle $\\theta$ is related to $k_y$ by $k_y=(k_0\\sin\\theta)\/2$. Then $p_{ai}$ is always real regardless of the incident angle and both reflected $a$ and $b$ waves are propagative.\nThe other quantities\n$p_{at}$ and $p_{bt}$ can be either real or imaginary depending on the value of $U_t$. For instance, when $p_{at}$ is imaginary, we use $p_{at}=i\\sqrt{{k_y}^2-{k_0}^2{\\epsilon_t}^2}$ instead of\nthe expression in Eq.~(\\ref{eq:wns}). A sketch of the configuration considered in this section is shown in Fig.~\\ref{fig_cf}.\n\nFinally, from the consideration of the probability currents, we obtain the expressions for the reflectance and transmittance matrices $R_{ij}$ and $T_{ij}$ ($i,j=1,2$),\nwhich are applicable when $p_{bi}$, $p_{at}$, and $p_{bt}$ are real:\n\\begin{eqnarray}\n&&R_{11}=\\vert r_{11}\\vert^2,~~R_{22}=\\vert r_{22}\\vert^2,\\nonumber\\\\&&\nR_{21}=\\frac{4p_{bi}}{p_{ai}}\\vert r_{21}\\vert^2,~~R_{12}=\\frac{p_{ai}}{4p_{bi}}\\vert r_{12}\\vert^2,\\nonumber\\\\\n&&T_{11}=\\frac{\\vert\\epsilon_i\\vert p_{at}}{\\vert\\epsilon_t\\vert p_{ai}}\\vert t_{11}\\vert^2,~~T_{22}=\\frac{\\vert\\epsilon_i\\vert p_{bt}}{\\vert\\epsilon_t\\vert p_{bi}}\\vert t_{22}\\vert^2,\\nonumber\\\\&&\nT_{21}=\\frac{4\\vert\\epsilon_i\\vert p_{bt}}{\\vert\\epsilon_t\\vert p_{ai}}\\vert t_{21}\\vert^2,~~T_{12}=\\frac{\\vert\\epsilon_i\\vert p_{at}}{4\\vert\\epsilon_t\\vert p_{bi}}\\vert t_{12}\\vert^2.\n\\end{eqnarray}\nIf $p_{at}$ ($p_{bt}$) is imaginary, we have to set $T_{11}$ and $T_{12}$ ($T_{21}$ and $T_{22}$) to be identically zero.\nIf $p_{bi}$ is imaginary, we have to set $R_{21}$ to zero and the mode conversion coefficient $A_2$ (see below) is undefined.\nWith these definitions, if there is no dissipation or mode conversion, 
the law of energy conservation $R_{11}+R_{21}+T_{11}+T_{21}=R_{12}+R_{22}+T_{12}+T_{22}=1$ should\nbe satisfied. When mode conversion occurs,\nthe mode conversion coefficients $A_1$ and $A_2$ are defined by\n\\begin{eqnarray}\n&&A_1=1-R_{11}-R_{21}-T_{11}-T_{21},\\nonumber\\\\&& A_2=1-R_{12}-R_{22}-T_{12}-T_{22}.\n\\end{eqnarray}\n\nIn Fig.~\\ref{fig3}, we plot the mode conversion coefficients $A_1$ and $A_2$ versus $\\theta$, when $\\zeta=15$ and $E\/U_0=0.3$, 0.6, 0.9. $A_1$ ($A_2$) is obtained by calculating the absorptance when the incident wave is $a$ ($b$) mode.\nWe find that there exists a wide range of incident angles in both the $A_1$ and $A_2$ curves in which the mode conversion is substantially strong.\nSince there are two propagating modes interacting\nwith the local flat band mode, these curves display multiple peaks and cusps associated with various cutoffs.\nInside the inhomogeneous layer, the $a$ and $b$ modes are coupled to each other. When an $a$ mode wave is incident, it can propagate directly to the resonance region and\nconvert to the flat-band state, or it can take an indirect route, first converting to a $b$ wave and then to the flat-band state at the resonance region.\nSince the mode conversion coefficient attains its maximum at different parameter values in these two cases, the curves\nof $A_1$ often show two peaks as a function of the incident angle or the energy, as illustrated in Fig.~\\ref{fig3}(a).\nThe case where a $b$ mode wave is incident is significantly different.\nSince the refractive index associated with the $a$ mode is twice as large as that of the $b$ mode, the $a$ wave converted from the incident $b$ wave propagates\nat a much smaller angle with respect to the $x$ axis than the incident angle. Mode conversion is not efficient for small propagation angles and therefore this\nindirect process does not contribute greatly to mode conversion. 
Therefore, when a $b$ wave is incident, mode conversion is dominated by the direct conversion from the $b$ wave to the flat-band state and one usually obtains a single peak for $A_2$ as shown in Fig.~\\ref{fig3}(b).\nIn addition, the occurrence of various cutoffs causes cusps to appear in the curves. For example, the cusp at $\\theta=30^\\circ$ in Fig.~\\ref{fig3}(a) comes from the\ncutoff condition that the reflected $b$ wave becomes evanescent.\n\nIn Fig.~\\ref{f1}, we show color graphs of $A_1$ and $A_2$ as a function of the incident angle and the particle energy when $\\zeta=15$.\nWhen the energy satisfies $01 \\\\ 0 & \\mbox{if }{\\varepsilon_2}^2\\le 1 \\end{matrix}\\right..\n\\end{eqnarray}\nWe notice that the invariant imbedding equations, Eq.~(\\ref{eq:iest}), have\na singularity at $\\varepsilon=0$, which corresponds to $E=U$.\n\nThe tight-binding equations for the sawtooth lattice at energy $E$ are written as\n\\begin{eqnarray}\n&& E \\psi_n^{\\rm A}= v_n^{\\rm A}\\psi_n^{\\rm A}+\\tau\\psi_{n-1}^{\\rm A}+\\tau\\psi_{n+1}^{\\rm A}+\\tau^\\prime\\psi_{n-1}^{\\rm B}+\\tau^\\prime\\psi_{n}^{\\rm B}, \\nonumber\\\\\n&& E \\psi_n^{\\rm B}= v_n^{\\rm B}\\psi_n^{\\rm B}+\\tau^\\prime\\psi_{n}^{\\rm A}+\\tau^\\prime\\psi_{n+1}^{\\rm A},\n\\end{eqnarray}\nwhere A and B indicate the sublattice sites A and B shown in Fig.~\\ref{fig4}(b) and $v_n^{\\rm A}$ and $v_n^{\\rm B}$ are the potentials at each site.\nIn order for this model to have a flat band, the hopping integral $\\tau^\\prime$ should be fine-tuned to satisfy $\\tau^\\prime=\\sqrt{2}\\tau$.\nIn the homogeneous case with no potential, the spectrum of this model consists of a flat band at $E\/\\tau=-2$ and a dispersive band\n$E\/\\tau=2[1+\\cos(qa)]$. 
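The fine-tuning condition $\\tau^\\prime=\\sqrt{2}\\tau$ can be checked directly from the two-band Bloch Hamiltonian of the sawtooth chain. A minimal sketch (numpy; our convention for the two-site unit cell, with $\\tau=1$ and $a=1$):

```python
import numpy as np

tau, taup = 1.0, np.sqrt(2.0)        # flat band requires tau' = sqrt(2) tau

def bands(q):
    """Bloch Hamiltonian of the sawtooth chain (A-A hopping tau, A-B hopping tau')."""
    h_ab = taup * (1.0 + np.exp(-1j * q))
    H = np.array([[2.0 * tau * np.cos(q), h_ab],
                  [np.conj(h_ab), 0.0]])
    return np.linalg.eigvalsh(H)     # ascending order

for q in np.linspace(-np.pi, np.pi, 13):
    lo, hi = bands(q)
    assert np.isclose(lo, -2.0 * tau)                      # flat band at E/tau = -2
    assert np.isclose(hi, 2.0 * tau * (1.0 + np.cos(q)))   # dispersive band
print("flat band at E/tau = -2, dispersive band E/tau = 2(1 + cos q)")
```

Replacing `taup` by any other value lifts the $q$-independence of the lower band, which is why the hopping must be fine-tuned.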
Following a similar procedure as that for the stub lattice, it is straightforward to derive\n\\begin{eqnarray}\na^2\\frac{d}{dx}\\left(\\frac{\\varepsilon+2}{\\varepsilon}\\frac{d{\\psi_A}}{dx}\\right)+(\\varepsilon+2)\\psi_A=0,\n\\label{eq:saw}\n\\end{eqnarray}\nwhere $\\psi_A$ describes the wave function at the A sites in Fig.~\\ref{fig4}(b).\nWe apply the invariant imbedding method to it and obtain the equations for the reflection and transmission coefficients:\n\\begin{eqnarray}\n&&\\frac{dr}{dl}=2ip\\frac{\\varepsilon\\left(\\varepsilon_1+2\\right)}{\\varepsilon_1\\left(\\varepsilon+2\\right)}r\n+\\frac{ip}{2}\\left[\\frac{\\varepsilon+2}{\\varepsilon_1+2}-\\frac{\\varepsilon\\left(\\varepsilon_1+2\\right)}{\\varepsilon_1\\left(\\varepsilon+2\\right)}\\right]\\left(1+r\\right)^2,\\nonumber\\\\\n&&\\frac{dt}{dl}=ip\\frac{\\varepsilon\\left(\\varepsilon_1+2\\right)}{\\varepsilon_1\\left(\\varepsilon+2\\right)}t\n+\\frac{ip}{2}\\left[\\frac{\\varepsilon+2}{\\varepsilon_1+2}-\\frac{\\varepsilon\\left(\\varepsilon_1+2\\right)}{\\varepsilon_1\\left(\\varepsilon+2\\right)}\\right]\\left(1+r\\right)t,\\nonumber\\\\\n\\label{eq:iesaw}\n\\end{eqnarray}\nwhere $\\varepsilon_1$ [$=(E-U_1)\/\\tau=E\/\\tau$] is the value of $\\varepsilon$ in the incident region and $p$ ($=\\sqrt{\\varepsilon_1}\/a$) is the wave number of the incident wave.\nThese equations are integrated using the initial conditions\n\\begin{eqnarray}\n&&r(0)=\\frac{p\\varepsilon_2\\left(\\varepsilon_1+2\\right)-p^\\prime\\varepsilon_1\\left(\\varepsilon_2+2\\right)}\n{p\\varepsilon_2\\left(\\varepsilon_1+2\\right)+p^\\prime\\varepsilon_1\\left(\\varepsilon_2+2\\right)},\\nonumber\\\\\n&&t(0)=\\frac{2p\\varepsilon_2\\left(\\varepsilon_1+2\\right)}\n{p\\varepsilon_2\\left(\\varepsilon_1+2\\right)+p^\\prime\\varepsilon_1\\left(\\varepsilon_2+2\\right)},\n\\end{eqnarray}\nwhere $\\varepsilon_2$ [$=(E-U_2)\/\\tau$] is the value of $\\varepsilon$ in the transmitted region and $p^\\prime$ 
($=\\sqrt{\\varepsilon_2}\/a$) is the wave number of the transmitted wave. The reflectance $R$ and the transmittance $T$ are given by\n\\begin{eqnarray}\nR=\\vert r\\vert^2,~~T=\\left\\{\\begin{matrix} \\frac{\\sqrt{\\varepsilon_1}\\left(\\varepsilon_2+2\\right)}{\\sqrt{\\varepsilon_2}\\left(\\varepsilon_1+2\\right)}\\vert t\\vert^2\n&\\mbox{if }\\varepsilon_2> 0 \\\\ 0 & \\mbox{if }\\varepsilon_2\\le 0 \\end{matrix}\\right..\n\\end{eqnarray}\nWe notice that the invariant imbedding equations, Eq.~(\\ref{eq:iesaw}), have\na singularity at\n$\\varepsilon=-2$, that is, $E=U-2\\tau$.\n\nIn Fig.~\\ref{fig5}(a), we plot the\nmode conversion coefficient $A$ obtained by solving Eq.~(\\ref{eq:stub}) when $L\/a=25$ and $U_0\/\\tau=50$ versus normalized energy $E\/\\tau$.\nIn Fig.~\\ref{fig5}(b), we plot $A$ obtained by solving Eq.~(\\ref{eq:saw})\nwhen $L\/a=25$ and $U_0\/\\tau=250$. The configuration of the potential is given by Eq.~(\\ref{eq:slow})\nin both cases. In Fig.~\\ref{fig5}(a), $A$ is nonzero in the range $11$\nand the resonance point exists when $049$, the wave number in the transmitted region becomes imaginary and the wave gets strongly reflected, which\ncauses a sharp peak to occur in the region $491$, whether $P$ is a\n$c$-chain in $O\\left(n^{2.5}\\ {\\rm polylog}\\ n\\right)$\n expected time and $O(n\\log n)$ space.\n (ii)~As a corollary, there is a randomized algorithm that finds, for a polygonal chain\n $P$ with $n$ vertices, the minimum $c\\geq 1$ for which $P$ is a $c$-chain\n in $O\\left(n^{2.5}\\ {\\rm polylog}\\ n\\right)$ expected time and $O(n\\log n)$ space.\n\n\n\n\\section{Upper Bounds} \\label{sec:upper}\n\nAt first glance, one might expect the stretch factor of a $c$-chain, for $c\\geq 1$, to be bounded by\nsome function of $c$. 
For example, the stretch factor of a $1$-chain is necessarily $1$.\nWe derive three upper bounds on the stretch factor of a $c$-chain with $n$ vertices\nin terms of $c$ and $n$ (cf.~Theorems~\\ref{thm:log{c}}--\\ref{thm:1\/2});\nsee Fig.~\\ref{fig:upperbds} for a visual comparison between the bounds.\nFor large $n$, the bound in Theorem~\\ref{thm:log{c}} is the best for $1 \\leq c \\leq 2^{1\/2}$,\nwhile the bound in Theorem~\\ref{thm:1\/2} is the best for $c > 2^{1\/2}$.\nIn particular, the bound in Theorem~\\ref{thm:log{c}} is tight for $c=1$.\nThe bound in Theorem~\\ref{thm:linear} is the best for $c\\geq 2$ and $n\\leq 111c^2$.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.35\\textwidth]{upperbds}\n\\caption{The values of $n$ and $c$ for which (i) Theorem~\\ref{thm:log{c}}, (ii) Theorem~\\ref{thm:linear},\nand (iii) Theorem~\\ref{thm:1\/2} give the current best upper bound.\\label{fig:upperbds}}\n\\end{figure}\n\nOur first upper bound is obtained by a recursive application of the\n$c$-chain property. It holds for any positive distance\nfunction that\nmay not even\nsatisfy the triangle inequality.\n\\begin{theorem}\\label{thm:log{c}}\nFor a $c$-chain $P$ with $n$ vertices, we have $\\delta_P \\leq c(n-1)^{\\log c}$.\n\\end{theorem}\n\\begin{proof}\nWe prove, by induction on $n$, that\n\\begin{equation}\\label{eq:logc}\n\\delta_P\\leq c^{\\left\\lceil\\log (n-1)\\right\\rceil},\n\\end{equation}\nfor every $c$-chain $P$ with $n \\geq 2$ vertices. In the base case, $n=2$, we\nhave $\\delta_P=1$ and $c^{\\left\\lceil\\log (2-1)\\right\\rceil}=1$.\nNow let $n \\geq 3$, and assume that (\\ref{eq:logc}) holds for every $c$-chain\nwith fewer than $n$ vertices.\nLet $P = (p_1, \\dots, p_n)$ be a $c$-chain with $n$ vertices. 
Then,\napplying (\\ref{eq:logc}) to the first and second half of $P$, followed\nby the $c$-chain property for the first, middle, and last vertex of $P$, we get\n\\begin{align*}\n\\sum_{i=1}^{n-1}|p_{i}p_{i+1}|\n&\\leq \\sum_{i=1}^{\\lceil n\/2\\rceil-1}|p_{i}p_{i+1}| + \\sum_{i=\\lceil n\/2\\rceil}^{n-1} |p_{i}p_{i+1}|\\\\\n&\\leq c^{\\left\\lceil\\log (\\lceil n\/2\\rceil-1) \\right\\rceil} \\left( |p_1p_{\\lceil n\/2\\rceil}| + |p_{\\lceil n\/2\\rceil}p_n|\\right)\\\\\n&\\leq c^{\\left\\lceil\\log (\\lceil n\/2\\rceil-1) \\right\\rceil}\\cdot c|p_1p_n|\\\\\n&\\leq c^{\\left\\lceil\\log (n-1)\\right\\rceil} |p_1p_n|,\n\\end{align*}\nso (\\ref{eq:logc}) holds also for $P$.\nConsequently,\n\\[ \\delta_P \\leq c^{\\left\\lceil\\log (n-1)\\right\\rceil} \\leq c^{\\log (n-1) +1} =c \\cdot c^{\\log (n-1)}\n= c \\, (n-1)^{\\log{c}}, \\]\nas required.\n\\end{proof}\n\nOur second bound interprets the $c$-chain property geometrically and\nmakes use of the fact that $P$ resides in the Euclidean plane.\n\\begin{theorem}\\label{thm:linear}\nFor a $c$-chain $P$ with $n$ vertices, we have $\\delta_P\\leq c(n-2)+1$.\n\\end{theorem}\n\\begin{figure}[htpb]\n\\centering\n\\begin{tikzpicture}[scale=0.45]\n\\draw (5,0) circle [x radius=5, y radius=4];\n\\draw (0,0) --node[above]{$\\frac{c-1}{2}$} (2,0)node[circle, fill, inner sep=1pt, label=below:$p_1$]{}\n-- node[above]{$1$} (8,0)node[circle, fill, inner sep=1pt, label=below:$p_n$]{}\n-- node[above]{$\\frac{c-1}{2}$} (10,0);\n\\draw[dashed] (2,0) -- node[above]{$\\frac{c}{2}$} (5, 4)\n-- node[above]{$\\frac{c}{2}$} (8, 0);\n\\end{tikzpicture}\n\\caption{The entire chain $P$ lies in an ellipse with foci $p_1$ and $p_n$.}\\label{fig:ellipse}\n\\end{figure}\n\n\\begin{proof}\nWithout loss of generality, assume that $|p_1p_n|=1$.\nSince $P$ is a $c$-chain, for every $10$, we can set $c=\\frac{2^{2\\eps+1}}{2^{2\\eps}-1}$,\nand then the chains above have stretch factor\n$(n-1)^{\\frac{1+\\log(c-2)-\\log 
c}{2}}=(n-1)^{1\/2-\\eps}=\\Omega(n^{1\/2-\\eps})$.\n\nWe first construct a family $\\P_c=\\{P^k\\}_{k\\in\\NN}$ of polygonal chains.\nThen we show, in Lemmata~\\ref{lemma:simple} and~\\ref{lemma:c-chain},\nthat every chain in $\\P_c$ is simple and indeed a $c$-chain.\nThe theorem follows since the claimed stretch factor is a consequence of the construction.\n\n\\subparagraph*{Construction of $\\P_c$.}\nThe construction here is a generalization of the iterative construction of the \\emph{Koch curve};\nwhen $c=6$, the result is the original Ces\\`aro fractal (which is a variant of the Koch curve)~\\cite{Ces05}.\nWe start with a unit line segment $P^0$, and for $k=0, 1, \\dots$,\nwe construct $P^{k+1}$ by replacing each segment in $P^k$ by four segments such that\nthe middle three points achieve a stretch factor of $c_*=\\frac{c-2}{2}$ (this choice will be justified\nin the proof of Lemma~\\ref{lemma:c-chain}). Note that $c_*\\geq 1$, since $c\\geq 4$.\n\nWe continue with the details. Let $P^0$ be the unit line segment from $(0,0)$ to $(1,0)$;\nsee Figure~\\ref{fig:p0p1}\\,(left).\nGiven the polygonal chain $P^k$ $(k=0,1,\\dots$), we construct $P^{k+1}$ by replacing\neach segment of $P^k$ by four segments as follows. Consider a segment of $P^k$,\nand denote its length by $\\ell$. 
Subdivide this segment into three segments of\nlengths $(\\frac{1}{2}-\\frac{a}{c_*})\\ell$, $\\frac{2a}{c_*}\\ell$, and $(\\frac{1}{2}-\\frac{a}{c_*})\\ell$,\nrespectively, where $0\\frac{c_*}{2(c_*+1)^2}$.\n\\end{claim*}\n\n\\begin{claimproof}\nAs noted above, we assume that $p_i$ is in $\\conv(g_2(P^{m}) \\setminus g_5(P^{m}))=\n\\Delta q_1q_2q_3$ in Figure~\\ref{fig:c-chain-case4}.\nIf $p_k\\in g_5(P^m) \\cap g_3(P^m)=\\Delta q_7q_6q_5$, then the configuration is illustrated in\nFigure~\\ref{fig:c-chain-case4}\\,(left).\nNote that $\\Delta q_1q_2q_3$ and $\\Delta q_7q_6q_5$ are reflections of each other with respect to\nthe bisector of $\\angle q_3q_4q_5$.\nHence the shortest distance between $\\Delta q_1q_2q_3$ and $\\Delta q_7q_6q_5$ is\n$\\min\\{|q_3q_5|, |q_2q_6|, |q_1q_7|\\}$. Since $c_*\\geq 1$, we have\n\\[|q_1q_7|>|q_7q_9|=|q_3q_5|=a^{3\/2}=\\left(\\frac{c_*}{2(c_*+1)}\\right)^{3\/2}\\geq \\frac{c_*}{2(c_*+1)^2}.\\]\nFurther note that $q_2q_4q_6q_8$ is an isosceles trapezoid, so the length of its diagonal is bounded by\n$|q_2q_6|>|q_2q_4|=\\frac{c_*}{2(c_*+1)^2}$.\nTherefore the claim holds when $p_k\\in\\Delta q_7q_6q_5$.\n\nOtherwise $p_k\\in g_3(P^m) \\setminus g_5(P^m)=\\Delta q_9q_8q_7$: see\nFigure~\\ref{fig:c-chain-case4}\\,(right).\nNote that $\\Delta q_1q_2q_3$ and $\\Delta q_9q_8q_7$ are reflections of each other with respect to\nthe bisector of $\\angle q_4q_5q_6$.\nSo the shortest distance between the shaded triangles is $\\min\\{|q_3q_7|, |q_2q_8|, |q_1q_9|\\}$.\nHowever, all three candidates are strictly larger than $|q_4q_6|=\\frac{c_*}{2(c_*+1)^2}$.\nThis completes the proof of the claim.\n\\end{claimproof}\n\\begin{figure}[!ht]\n \\centering\n\\begin{tikzpicture}[scale=0.5]\n\\fill[black!25] (4.545, 0.000) -- (2.727, 2.467) -- (4.752, 2.056)--cycle;\n\\fill[black!25] (5.000, 4.623) -- (5.207, 2.467) -- (7.273, 2.467)--cycle;\n\\draw[red] (4.752, 2.056) -- (4.793, 2.467) -- (5.207, 2.467);\n\\draw (0.000, 0.000) -- (2.066, 0.000)\n-- 
(2.273, 2.056) -- (2.479, 0.000)\n-- (4.545, 0.000)node[circle, fill, inner sep=1pt, label=135:$q_1$]{}\n-- (4.752, 2.056)node[circle, fill, inner sep=1pt, label=225:$q_2$]{}\n-- (2.727, 2.467)node[circle, fill, inner sep=1pt, label=135:$q_3$]{}\n-- (4.793, 2.467)node[circle, fill, inner sep=1pt, label=135:$q_4$]{}\n-- (5.000, 4.523)node[circle, fill, inner sep=1pt, label=above:$q_5$]{}\n-- (5.207, 2.467)node[circle, fill, inner sep=1pt, label=45:$q_6$]{}\n-- (7.273, 2.467)node[circle, fill, inner sep=1pt, label=45:$q_7$]{}\n-- (5.248, 2.056)node[circle, fill, inner sep=1pt, label=-45:$q_8$]{}\n-- (5.455, 0.000)node[circle, fill, inner sep=1pt, label=45:$q_9$]{}\n-- (7.521, 0.000) -- (7.727, 2.056) -- (7.934, 0.000) -- (10.000, 0.000);\n\\end{tikzpicture}\n\\hspace*{.1mm}\n\\begin{tikzpicture}[scale=0.5]\n\\fill[black!25] (4.545, 0.000) -- (2.727, 2.467) -- (4.752, 2.056)--cycle;\n\\fill[black!25] (7.273, 2.467) -- (5.248, 2.056) -- (5.455, 0.000)--cycle;\n\\draw[red] (4.752, 2.056) -- (4.793, 2.467) -- (5.207, 2.467);\n\\draw (0.000, 0.000) -- (2.066, 0.000)\n-- (2.273, 2.056) -- (2.479, 0.000)\n-- (4.545, 0.000)node[circle, fill, inner sep=1pt, label=135:$q_1$]{}\n-- (4.752, 2.056)node[circle, fill, inner sep=1pt, label=225:$q_2$]{}\n-- (2.727, 2.467)node[circle, fill, inner sep=1pt, label=135:$q_3$]{}\n-- (4.793, 2.467)node[circle, fill, inner sep=1pt, label=135:$q_4$]{}\n-- (5.000, 4.523)node[circle, fill, inner sep=1pt, label=above:$q_5$]{}\n-- (5.207, 2.467)node[circle, fill, inner sep=1pt, label=45:$q_6$]{}\n-- (7.273, 2.467)node[circle, fill, inner sep=1pt, label=45:$q_7$]{}\n-- (5.248, 2.056)node[circle, fill, inner sep=1pt, label=-45:$q_8$]{}\n-- (5.455, 0.000)node[circle, fill, inner sep=1pt, label=45:$q_9$]{}\n-- (7.521, 0.000) -- (7.727, 2.056) -- (7.934, 0.000) -- (10.000, 0.000);\n\\end{tikzpicture}\n\\caption{$p_i\\in\\Delta q_1q_2q_3$,\n Left: $p_k\\in\\Delta q_7q_6q_5$;\n Right: $p_k\\in \\Delta 
q_9q_8q_7$.}\\label{fig:c-chain-case4}\n\\end{figure}\n\nNow the diameter of $g_2(P^{m}) \\cup g_3(P^{m})$ is $a=\\frac{c_*}{2(c_*+1)}$\n(note that there are three diameter pairs), so\n\\[\n\\frac{|p_ip_j|+|p_jp_k|}{|p_ip_k|}<\\frac{2\\cdot \\frac{c_*}{2(c_*+1)}}{\\frac{c_*}{2(c_*+1)^2}}= 2c_*+2=c,\n\\]\nas required.\nThis concludes the proof of Lemma~\\ref{lemma:c-chain} and Theorem~\\ref{thm:lower-bound}.\n\\end{proof}\n\n\\section{Algorithm for Recognizing $c$-Chains}\n\\label{sec:algo}\n\nIn this section, we design a randomized Las Vegas algorithm to recognize $c$-chains.\nMore precisely, given a polygonal chain $P=(p_1,\\dots, p_n)$ and a parameter $c\\geq 1$,\nthe algorithm decides whether $P$ is a $c$-chain, in $O\\left(n^{2.5}\\ {\\rm polylog}\\ n\\right)$\nexpected time.\nBy definition, $P=(p_1,\\dots, p_n)$ is a $c$-chain if\n$|p_ip_j| + |p_jp_k|\\leq c\\ |p_ip_k|$ for all $1\\leq i<j<k\\leq n$.\n\n\\begin{theorem}\\label{thm:alg}\nThere is a randomized algorithm that decides, given a polygonal chain $P$ with $n$ vertices and a real number $c>1$, whether $P$ is a $c$-chain in $O\\left(n^{2.5}\\ {\\rm polylog}\\ n\\right)$\n expected time and $O(n\\log n)$ space.\n\\end{theorem}\n\nAgarwal, Matou\\v{s}ek and Sharir~\\cite[Theorem~1.4]{AMS13} constructed, for a set $S$ of $n$ points\nin $\\RR^2$, a data structure that can answer ellipse range searching queries: it reports the number\nof points in $S$ that are contained in a query ellipse.\nIn particular, they showed that, for every $\\eps>0$, there is a constant $B$\nand a data structure with $O(n)$ space, $O\\left(n^{1+\\eps}\\right)$ expected preprocessing time,\nand $O\\left(n^{1\/2}\\log^B n\\right)$ query time. The construction was later simplified by\nMatou\\v{s}ek and Pat\\'akov\\'a~\\cite{MP15}. 
Using this data structure, we can quickly\ndecide whether a given polygonal chain is a $c$-chain.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:alg}.]\n Subdivide the polygonal chain $P=(p_1,\\dots , p_n)$ into two subchains of equal or almost equal sizes,\n $P_1=(p_1,\\dots, p_{\\lceil n\/2\\rceil})$ and $P_2=(p_{\\lceil n\/2\\rceil},\\dots, p_n)$;\n and recursively subdivide $P_1$ and $P_2$ until reaching 1-vertex chains.\n Denote by $T$ the recursion tree. Then, $T$ is a binary tree of depth $\\lceil \\log n\\rceil$.\n There are at most $2^i$ nodes at level $i$; the nodes at level $i$ correspond to edge-disjoint subchains of $P$,\n each of which has at most $n\/2^i$ edges. Let $W_i$ be the set of subchains on level $i$ of $T$;\n and let $W=\\bigcup_{i\\geq 0}W_i$. We have $|W|\\leq 2n$.\n\n For each polygonal chain $Q\\in W$, construct an ellipse range searching data structure\n $\\DS(Q)$ described above~\\cite{AMS13} for the vertices of $Q$, with a suitable parameter $\\eps>0$.\n Their overall expected preprocessing time is\n\\[\n\\sum_{i=0}^{\\lceil \\log n\\rceil} 2^i\\cdot O\\left( \\left(\\frac{n}{2^i}\\right)^{1+\\eps} \\right)\n=O\\left(n^{1+\\eps}\\sum_{i=0}^{\\lceil \\log n\\rceil} \\left(\\frac{1}{2^i}\\right)^{\\eps}\\right)\n=O\\left(n^{1+\\eps}\\right),\n\\]\ntheir space requirement is\n$\\sum_{i=0}^{\\lceil \\log n\\rceil} 2^i\\cdot O\\left(n\/2^i\\right)=O(n\\log n)$,\nand their query time at level $i$ is $O\\left(\\left(n\/2^i\\right)^{1\/2}\\ {\\rm polylog}\\ \\left(n\/2^i\\right)\\right)\n= O\\left(n^{1\/2}\\ {\\rm polylog}\\ n\\right)$.\n\nFor each pair of indices $1\\leq ic|p_ip_k|$, witnessing that $P$ is not a $c$-chain.\n\nThe query time over all pairs $1\\leq i0$\ndepends on $c$?\n\n\\item Our algorithm in Section~\\ref{sec:algo} can recognize $c$-chains with $n$ vertices\n in $O\\left(n^{2.5}\\ {\\rm polylog}\\ n\\right)$ expected time and $O(n\\log n)$ space,\n using ellipse range searching data structures.\nIt is likely that the running 
time can be improved in the future, perhaps at the expense of increased space,\nwhen suitable time-space trade-offs for semi-algebraic range searching become available.\nThe existence of such data structures is conjectured~\\cite{AMS13}, but currently remains open.\n\\end{enumerate}\n\n\\bibliographystyle{plainurl\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nFor a finite group $G$, write $G$ as the quotient of a free group $F$ by a normal\nsubgroup $R$; then the $c$-nilpotent multiplier $\\mathcal{M}^{(c)}(G)$ is defined as\n\\[R\\cap\\gamma_{c+1}(F)\/\\gamma_{c+1}[R,F],\\]\nin which $\\gamma_{c+1}[R,F]=[\\gamma_{c}[R,F],F]$ for $c\\geq 1$. It is a special case of the Baer invariant \\cite{ba} with respect to the variety of nilpotent\ngroups of class at most $c$. When $c=1$, the abelian group $\\mathcal{M}(G)=\\mathcal{M}^{(1)}(G)$ is better known as the Schur multiplier of $G$ and has been studied much more, for instance in \\cite{ka, ni1, ni9}.\n\nSince determining the $c$-nilpotent multiplier of groups can be used for the classification of groups into\nisoclinism classes $($see \\cite{bay}$)$, there are multiple papers concerning this subject.\n\nRecently, several authors have worked to extend results from the group theory case to Lie algebras.\nIn \\cite{sal}, analogous to the $c$-nilpotent multiplier of groups, the $c$-nilpotent multiplier of a given Lie algebra $L$ is defined as\n\\[\\mathcal{M}^{(c)}(L)=R\\cap F^{c+1}\/[R,F]^{c+1},\\]\nin which $L$ is presented as the quotient of a free Lie algebra $F$ by an ideal $R$, $F^{c+1}=\\gamma_{c+1} (F)$ and $[R,F]^{c+1}=\\gamma_{c+1}[R,F]$.\nSimilarly, for the case $c=1$, the abelian Lie algebra $\\mathcal{M}(L)=\\mathcal{M}^{(1)}(L)$ has been studied further by the first author and others\n$($see for instance \\cite{es, es2, bos, el2, har1, har, ni7, ni5, ni6, yan}$)$.\n\nThe $c$-nilpotent multiplier of a finite dimensional nilpotent\nLie algebra $L$ is a new field of 
interest in the literature.\nThe present paper concerns the $2$-nilpotent multiplier of a finite dimensional nilpotent Lie algebra $L$.\nThe aim of the current paper is divided into several steps.\nIn \\cite[Corollary 2.8]{sal}, by a result parallel to the group theory case, it was shown that for every finite nilpotent Lie algebra $L$ of dimension $n$ we have\n\\begin{equation}\\label{e1}\n\\mathrm{dim}(\\mathcal{M}^{(2)}(L))+\\mathrm{dim}(L^3)\\leq\\frac{1}{3}n(n-1)(n+1).\n\\end{equation}\nHere we prove that only abelian Lie algebras attain the bound \\ref{e1}. This shows that $\\mathrm{Ker}~\\theta=0$ always holds in \\cite[Corollary 2.8 (ii)a]{sal}.\n\nSince Heisenberg algebras $H(m)$ $($Lie algebras of dimension $2m+1$ with $L^2=Z(L)$ and\n$\\mathrm{dim}~(L^2) = 1)$ are of interest in several areas of Lie algebra, similar to the results of\n\\cite[Example 3]{es2} and \\cite[Theorem 24]{mo}, and in a quite different way, we give the explicit structure of the $2$-nilpotent multiplier of these algebras.\nAmong the other results, since the Lie algebras attaining the upper bound \\ref{e1} are completely described in Lemma \\ref{ab} $($they are just abelian Lie algebras$)$, by obtaining some new inequalities on the dimension of $\\mathcal{M}^{(2)}(L)$,\nwe reduce the bound \\ref{e1} for non-abelian Lie algebras as much as possible.\n\nFinally, among the class of Heisenberg algebras, we show which of them are $2$-capable, that is, which of them are isomorphic to\n$H\/Z_2(H)$ for a Lie algebra $H$. For more information about the capability of Lie algebras see \\cite{ni9, sal1}. This generalizes the recent results for the group theory case in \\cite{ni10}.\n\\section{Further investigation on the $2$-nilpotent multiplier of finite dimensional nilpotent Lie algebras}\nThe present section develops further results on the $2$-nilpotent multiplier of finite dimensional nilpotent Lie algebras. 
We first give basic definitions and known results for the convenience of the reader.\n\nLet $F$ be a free Lie algebra on an arbitrary totally ordered set $X$. Recall from \\cite{sh} that\nthe basic commutators on the set $X$, which are defined as follows, form a basis of $F$.\n\nThe elements of $X$ are basic commutators of length one and are ordered relative to the total order previously chosen.\nSuppose all the basic commutators $a_i$ of length less than $k \\geq 1$\nhave been defined and ordered. Then the basic commutators of length $k$ are defined as\nall commutators of the form $[a_i , a_j ]$ such that the sum of lengths of $a_i$ and $a_j$ is $k$, $a_i > a_j$, and if\n$a_i = [a_s, a_t ]$, then $a_j \\geq a_t$.\nAlso the number of basic commutators of length $n$ on a set $X$ with $d$ elements, namely $l_d(n)$, is\n\\[\\frac{1}{n}\\sum_{m|n}\\mu(m)d^{\\frac{n}{m}},\\] where $\\mu$ is the M\\\"{o}bius function. For instance, $l_2(3)=2$ and $l_2(4)=3$.\n\nFrom \\cite{el2}, let $F$ be a fixed field, $L, K$ be two Lie algebras and $[ \\ , \\ ]$ denote the Lie bracket. By an action of $L$\non $K$ we mean an $F$-bilinear map \\[(l,k) \\in L\\times K \\mapsto\n~^lk \\in K~\\text{satisfying}~\\]\n\\[^{[l,l']}k= ~^l(~^{l'}k)- ~^{l'}(~^lk)~\\text{and}~^l[k,k']=[~^lk,k']+[k,~^lk'], ~\\text{for all}~ l, l' \\in L, k, k' \\in K.\\]\nWhen $L$ is a subalgebra of a Lie algebra $P$ and $K$ is an ideal in $P$, then $L$\nacts on $K$ by Lie multiplications $~^lk=[l,k]$.\nA crossed module is a Lie homomorphism\n$\\sigma: K\\rightarrow L$ together with an action of $L$ on $K$ such that\n\\[\\sigma(^lk) = [l,\\sigma(k)]~\\text{and}~ ^{\\sigma(k)}k' = [k,k'] ~\\text{for all}~k,k'\\in K ~\\text{and}~ l\\in L.\\]\n\nLet $\\sigma: L\\rightarrow M $ and $\\eta: K\\rightarrow M$ be two crossed modules, where $L$ and $K$ act on each other and on themselves by Lie multiplication. 
Then these actions are called compatible provided that\n\\[~^{~^kl}k'=~^{k'}(~^lk)~\\text{and}~^{~^lk}l'=~^{l'}(~^kl).\\]\n\nThe non-abelian tensor product $L\\otimes K$ of $L$ and $K$ is\nthe Lie algebra generated by the symbols $l\\otimes k$ with defining\nrelations\n\\[c(l \\otimes k)=cl \\otimes k = l \\otimes ck,\n(l+l')\\otimes k = l \\otimes k + l' \\otimes k,\\]\n\\[l \\otimes (k+k') = l\\otimes k + l \\otimes k',\n~^ll' \\otimes k = l \\otimes ~^{l'}k -l' \\otimes ~^lk,~ l \\otimes ~^kk'= ~^{k'}l \\otimes k - ~^kl\n\\otimes k',\\]\n\\[[l\\otimes k, l' \\otimes k']=- ~^kl \\otimes ~^{l'}k',~\\text{for all}~c \\in F, l, l' \\in L, k, k' \\in K.\\]\n\n\nThe non-abelian tensor square of $L$ is a special case of tensor product\n$L\\otimes K$ when $K=L$. Note that we denote the usual\nabelian tensor product $L \\otimes_\\mathbb{Z} K$, when $L$ and $K$\nare abelian and the actions are trivial.\n\nLet $L\\square K$ be the submodule of $L\\otimes K$ generated by the elements $l\\otimes k$ such that $\\sigma(l)=\\eta(k)$. The factor Lie algebra $L\\wedge K\\cong L\\otimes K\/L\\square K$ is called the exterior product of $L$ and $K$, and the image of $l\\otimes k$\nis denoted by $l\\wedge k$ for all $l\\in L,k \\in K$. 
Throughout the paper, $\\Gamma$ denotes the universal quadratic functor $($see \\cite{el2}$)$.\n\nRecall from \\cite{ni9} the exterior centre $Z^{\\wedge}(L)=\\{l\\in L~|~l\\wedge l'=0_{L\\wedge L},~\\forall~l'\\in L \\}$ of a Lie algebra $L$.\nIt is shown in \\cite{ni9} that the exterior centre of $L$ is a central ideal of $L$, which allows us to decide when\nthe Lie algebra $L$ is capable, that is, whether $L\\cong H\/Z(H)$ for a Lie algebra $H$.\n\nThe following Lemma is a consequence of \\cite[Lemma 3.1]{ni9}.\n\\begin{lem} \\label{ca} Let $L$ be a finite dimensional Lie algebra. Then $L$ is capable if and only if $Z^{\\wedge}(L)=0$.\n\\end{lem}\nThe next two lemmas are special cases of \\cite[Proposition 2.1 (i)]{sal} when $c=2$ and are useful for proving the next theorem.\n\\begin{lem}\\label{ln}Let $I$ be an ideal in a Lie algebra $L$. Then the following sequences are exact.\n\\begin{itemize}\n\\item[(i)]$\\mathrm{Ker}(\\mu_I^2)\\rightarrow \\mathcal{M}^{(2)}(L)\\rightarrow\\mathcal{M}^{(2)}(L\/I)\\rightarrow \\frac {I\\cap L^3}{[[I,L],L]}\\rightarrow 0.$\n \\item[(ii)]$(I\\wedge L\/L^3)\\wedge L\/L^3 \\rightarrow \\mathcal{M}^{(2)}(L)\\rightarrow\\mathcal{M}^{(2)}(L\/I)\\rightarrow I\\cap L^3\\rightarrow 0,$ when $[[I,L],L]=0$.\n\\end{itemize}\n\\end{lem}\n\\begin{lem}\\label{l3}Let $I$ be an ideal of $L$, and put $K=L\/I$. Then\n\\begin{itemize}\n\\item[(i)] $\\mathrm{dim}~\\mathcal{M}^{(2)}(K)\\leq \\mathrm{dim}~\\mathcal{M}^{(2)}(L)+ \\displaystyle\\mathrm{dim}~\\frac{I\\cap L^3}{[[I,L],L]}.$\n\\item[(ii)] Moreover, if $I$ is a $2$-central subalgebra, then\n\n$(a).$ $(I\\wedge L)\\wedge L\\rightarrow \\mathcal{M}^{(2)}(L)\\rightarrow ~\\mathcal{M}^{(2)}(K)\\rightarrow I\\cap L^3\\rightarrow 0$.\n\n$(b).$ $ \\mathrm{dim}~\\mathcal{M}^{(2)}(L)+\\mathrm{dim}~I\\cap L^3 \\leq \\mathrm{dim}~\\mathcal{M}^{(2)}(K)+\\mathrm{dim}~(I\\otimes L\/L^3)\\otimes L\/L^3$.\n\\end{itemize}\\end{lem}\n\\begin{proof} $(i)$. 
Using Lemma \\ref{ln} (i).\n\n$(ii)(a)$. Since $[I,L]\\subseteq Z(L)$, $\\mathrm{Ker}~\\mu^2_I=(I\\wedge L)\\wedge L$ and $[[I,L],L]=0$ by Lemma \\ref{ln}. The result follows.\n\n$(ii)(b)$. Since there is a natural epimorphism $(I\\otimes L\/L^3)\\otimes L\/L^3\\rightarrow (I\\wedge L\/L^3)\\wedge L\/L^3 $, the result follows from\nLemma \\ref{ln} (ii).\n\\end{proof}\nThe following theorem gives the explicit structure of the Schur multiplier of all Heisenberg algebras.\n\\begin{thm}\\cite[Example 3]{es2} $\\mathrm{and}$ \\cite[Theorem 24]{mo}\\label{h}\nLet $H(m)$ be the Heisenberg algebra of dimension $2m+1$. Then\n\\begin{itemize}\n\\item[(i)]$\\mathcal{M}(H(1))\\cong A(2)$.\n\\item[(ii)]$\\mathcal{M}(H(m))=A(2m^2-m-1)$ for all $m\\geq 2$.\n\\end{itemize}\n\\end{thm}\nThe following result comes from \\cite[Theorem 2.8]{ni8} and shows the behavior of the 2-nilpotent multiplier with respect to the direct sum of two Lie algebras.\n\\begin{thm}\\label{ds}\nLet $A$ and $B$ be finite dimensional Lie algebras. Then\n\\[\\begin{array}{lcl}\\mathcal{M}^{(2)}(A\\oplus B) &\\cong& \\mathcal{M}^{(2)}(A)\\oplus ~\\mathcal{M}^{(2)}(B)\n\\oplus \\big((A\/A^2\\otimes_{\\mathbb{Z}} A\/A^2)\\otimes_{\\mathbb{Z}} B\/B^2\\big )\\vspace{.3cm}\\\\&\\oplus&\\big((B\/B^2\\otimes_{\\mathbb{Z}} B\/B^2)\\otimes_{\\mathbb{Z}} A\/A^2\\big).\\end{array}\\]\n\\end{thm}\nThe following theorem is proved in \\cite{sal}; at this point, we give a short proof in a way quite different from that of \\cite[Proposition 1.2]{sal}, as follows.\n\\begin{thm}\\label{ab}Let $L= A(n)$ be an abelian Lie algebra of dimension $n$. Then $\\mathcal{M}^{(2)}(L)\\cong A(\\frac{1}{3}n(n-1)(n+1))$.\n\\end{thm}\n\\begin{proof}\nWe perform induction on $n$. Assume $n=2$. 
Then Theorem \\ref{ds} allows us to conclude that\n\\[\\begin{array}{lcl}\\mathcal{M}^{(2)}(L) &\\cong& \\mathcal{M}^{(2)}(A(1)) \\oplus\\mathcal{M}^{(2)}(A(1))\n\\oplus \\big(A(1)\\otimes_{\\mathbb{Z}} A(1)\\otimes _{\\mathbb{Z}}A(1)\\big )\\vspace{.3cm}\\\\&\\oplus&\\big((A(1)\\otimes_{\\mathbb{Z}} A(1))\\otimes _{\\mathbb{Z}}A(1)\\big)\\cong A(1)\\oplus A(1)\\cong A(2).\\end{array}\\]\n\nNow assume that $L\\cong A(n)\\cong A(n-1)\\oplus A(1)$. By using the induction hypothesis and Theorem \\ref{ds}, we have\n\\[\\begin{array}{lcl}\\mathcal{M}^{(2)}(A(n-1)\\oplus A(1))&\\cong& \\mathcal{M}^{(2)}(A(n-1))\\oplus\\big(A(n-1)\\otimes_{\\mathbb{Z}} A(n-1)\\otimes _{\\mathbb{Z}} A(1)\\big)\n\\vspace{.3cm}\\\\&\\oplus&\\big(A(1)\\otimes_{\\mathbb{Z}} A(1)\\otimes_{\\mathbb{Z}} A(n-1)\\big)\\vspace{.3cm}\\\\&\\cong&\nA(\\frac{1}{3}n(n-1)(n-2))\\oplus A((n-1)^2)\\oplus A(n-1)\\vspace{.3cm}\\\\&\\cong& A(\\frac{1}{3}n(n-1)(n+1)).\\end{array}\\]\n\\end{proof}\n\nThe main strategy, in the next contribution, is to give an argument similar to that of Theorem \\ref{h} for the $2$-nilpotent multiplier.\nIn the first theorem, we obtain the structure of $\\mathcal{M}^{(2)}(L)$ when $L$ is a non-capable Heisenberg algebra.\n\\begin{thm} Let $L=H(m)$ be a non-capable Heisenberg algebra. 
Then\n\\[\\mathcal{M}^{(2)}(H(m))\\cong A(\\frac{8m^3-2m}{3}).\\]\n\\end{thm}\n\\begin{proof} Since $L$ is non-capable, Lemma \\ref{ca} implies $Z^{\\wedge}(L)=L^2=Z(L)$.\nInvoking Lemma \\ref{l3} by putting $I=Z^{\\wedge}(L)$, we have $\\mathcal{M}^{(2)}(H(m))\\cong \\mathcal{M}^{(2)}(H(m)\/H(m)^2)$.\nNow the result follows from Theorem \\ref{ab}.\n\\end{proof}\nThe following theorem from \\cite[Theorem 3.4]{ni9} shows which members of the class of Heisenberg algebras are capable.\n\\begin{thm}\\label{ca1}$H(m)$ is capable if and only if $m = 1$.\n\\end{thm}\n\\begin{cor} \\label{ca11}$H(m)$ is not $2$-capable for all $m\\geq 2$.\n\\end{cor}\n\\begin{proof} Since every $2$-capable Lie algebra is capable, the result follows from Theorem \\ref{ca1}.\n\\end{proof}\nSince $H(m)$ for all $m\\geq 2$ is not $2$-capable, we only need to discuss the $2$-capability of $H(1)$. Here, we obtain the 2-nilpotent multiplier of $H(1)$, and in the next section we show $H(1)$ is $2$-capable.\n\\begin{thm} Let $L=H(1)$. Then\n\\[\\mathcal{M}^{(2)}(H(1))\\cong A(5).\\]\n\\end{thm}\n\\begin{proof}\nWe know that $H(1)$ is in fact the free nilpotent Lie algebra of rank 2 and class 2. That is, $H(1)\\cong F\/F^3$ in which $F$ is the free Lie algebra on 2 letters $x,y$. The second nilpotent multiplier of $H(1)$ is $F^3\\cap F^3\/[F^3,F,F]$, which is isomorphic to $F^3\/F^5$, and the latter is the abelian Lie algebra on the set of all basic commutators of weights 3 and 4, which is the set $\\{[y,x,x],[y,x,y],[y,x,x,x],[y,x,x,y],[y,x,y,y]\\}$. So the result holds.\n\\end{proof}\nWe summarize our results below.\n\\begin{thm}\\label{th1}\nLet $H(m)$ be the Heisenberg algebra of dimension $2m+1$. 
Then\n\\begin{itemize}\n\\item[(i)]$\\mathcal{M}^{(2)}(H(1))\\cong A(5)$.\n\\item[(ii)]$\\mathcal{M}^{(2)}(H(m))=A(\\frac{8m^3-2m}{3})$ for all $m\\geq 2$.\n\\end{itemize}\n\\end{thm}The following lemma lets us obtain the structure of the $2$-nilpotent multiplier of all nilpotent Lie algebras with $\\mathrm{dim}~\nL^2=1$.\n\\begin{lem}\\cite[Lemma 3.3]{ni7}\\label{l1} Let $L$ be an $n$-dimensional Lie algebra and\n$\\mathrm{dim}~\nL^2=1$. Then \\[L\\cong H(m)\\oplus A(n-2m-1).\\]\n\\end{lem}\n\\begin{thm}\\label{mt1}Let $L$ be an $n$-dimensional Lie algebra with\n$\\mathrm{dim}~L^2=1$, so that $L\\cong H(m)\\oplus A(n-2m-1)$ as in Lemma \\ref{l1}. Then \\[\\mathcal{M}^{(2)}(L) \\cong \\left\\{\\begin{array}{lcl} A(\\frac{1}{3}n(n-1)(n-2)) & if\\ m>1 ,\\\\\n\\\\\nA(\\frac{1}{3}n(n-1)(n-2)+3) & if\\ m=1.\\end{array}\\right.\\]\n\\end{thm}\n\\begin{proof}By using Lemma \\ref{l1}, we have $L\\cong H(m)\\oplus A(n-2m-1)$. Using the behavior of the $2$-nilpotent multiplier with respect to direct sums,\n\\[\\begin{array}{lcl}\\mathcal{M}^{(2)}(L) &\\cong& \\mathcal{M}^{(2)}(H(m)) \\oplus~\\mathcal{M}^{(2)}(A(n-2m-1))\n\\vspace{.3cm}\\\\&\\oplus&\\big((H(m)\/H(m)^2\\otimes_{\\mathbb{Z}} H(m)\/H(m)^2)\\otimes_{\\mathbb{Z}} A(n-2m-1)\\big )\\vspace{.3cm}\\\\&\\oplus&\\big((A(n-2m-1)\\otimes_{\\mathbb{Z}} A(n-2m-1))\\otimes _{\\mathbb{Z}}H(m)\/H(m)^2\\big).\\end{array}\\]\nFirst assume that $m=1$; then by virtue of Theorems \\ref{ab} and \\ref{th1}\n\\[\\mathcal{M}^{(2)}(H(1))\\cong A(5)~\\text{and}~\\mathcal{M}^{(2)}(A(n-3))\\cong A(\\frac{1}{3}(n-2)(n-3)(n-4)).\\]\nThus\n\\[\\begin{array}{lcl}\\mathcal{M}^{(2)}(L) &\\cong& A(5) \\oplus~A(\\frac{1}{3}(n-2)(n-3)(n-4))\n\\vspace{.3cm}\\\\&\\oplus&\\big((A(2)\\otimes_{\\mathbb{Z}} A(2))\\otimes_{\\mathbb{Z}} A(n-3)\\big )\\vspace{.3cm}\\\\&\\oplus&\\big((A(n-3)\\otimes_{\\mathbb{Z}} A(n-3))\\otimes_{\\mathbb{Z}} A(2)\\big)\n\\vspace{.3cm}\\\\&\\cong& A(\\frac{1}{3}n(n-1)(n-2)+3).\\end{array}\\]The case $m>1$ is obtained in a similar fashion.\n\\end{proof}\n\n\\begin{thm} Let $L$ be an 
$n$-dimensional nilpotent Lie algebra such that $\\mathrm{dim}~L^2=m$ $(m\\geq 1)$. Then\n\\[\\mathrm{dim}~\\mathcal{M}^{(2)}(L)\\leq \\frac{1}{3}(n-m)\\big((n+2m-2)(n-m-1)+3(m-1)\\big)+3.\\]\nIn particular, $\\mathrm{dim}~\\mathcal{M}^{(2)}(L) \\leq \\frac{1}{3}n(n-1)(n-2)+3$. Equality holds in the last inequality if and only if $L\\cong\nH(1)\\oplus A(n-3)$.\n\\end{thm}\n\\begin{proof} We do induction on $m$. For $m=1$, the result follows from Theorem \\ref{mt1}. Let $m\\geq 2$, and take $I$ to be a $1$-dimensional central\nideal of $L$. Since $I$ and $L\/L^3$ act on each other trivially, we have $(I\\otimes L\/L^3)\\otimes L\/L^3\\cong \\big(I\\otimes_{\\mathbb{Z}} \\frac{L\/L^3}{(L\/L^3)^2}\\big)\\otimes_{\\mathbb{Z}} \\frac{L\/L^3}{(L\/L^3)^2}$. Thus by Lemma \\ref{l3} $(ii)(b)$\n\\[\\mathrm{dim}~\\mathcal{M}^{(2)}(L)+\\mathrm{dim}~I\\cap L^3 \\leq \\mathrm{dim}~\\mathcal{M}^{(2)}(L\/I)+\\mathrm{dim}~\\big(I\\otimes_{\\mathbb{Z}} \\frac{L\/L^3}{(L\/L^3)^2}\\big)\\otimes_{\\mathbb{Z}} \\frac{L\/L^3}{(L\/L^3)^2}.\\]\nSince \\[\\mathrm{dim}~\\mathcal{M}^{(2)}(L\/I)\\leq \\frac{1}{3}(n-m)\\big((n+2m-5)(n-m-1)+3(m-2)\\big),\\] we have\n\\[\\begin{array}{lcl}\\mathrm{dim}~\\mathcal{M}^{(2)}(L)&\\leq& \\frac{1}{3}(n-m)\\big((n+2m-5)(n-m-1)+3(m-2)\\big)+3+(n-m)^2\n\\vspace{.3cm}\\\\&=&\\frac{1}{3}(n-m)\\big((n+2m-2)(n-m-1)+3(m-1)\\big)+3,\\end{array}\\] as required.\n\\end{proof}\nThe following corollary shows that the converse of \\cite[Proposition 1.2]{sal} for $c=2$ is also true. In fact, it proves that\n$\\mathrm{Ker}~\\theta=0$ always holds in \\cite[Corollary 2.8 (ii)a]{sal}.\n\\begin{cor}\\label{lab}Let $L$ be an $n$-dimensional nilpotent Lie algebra. If $\\mathrm{dim}~\\mathcal{M}^{(2)}(L)=\\frac{1}{3}n(n-1)(n+1)$, then $L\\cong A(n)$.\n\\end{cor}\n\\section{2-capability of Lie algebras}\nFollowing the terminology of \\cite{ell2} for groups, a Lie algebra $L$ is said to be $2$-capable provided that\n$L\\cong H\/Z_2(H)$ for a Lie algebra $H$. 
The concept $Z_2^{*}(L)$ was defined in \\cite{salria}, where it was proved that if $\\pi:F\/[R,F,F]\\rightarrow F\/R$ is the natural Lie epimorphism, then\n\\[Z^{*}_2(L)=\\pi(Z_2(F\/[[R,F],F])).\\]\n\nThe following proposition gives the close relation between $2$-capability and $Z^{*}_2(L)$.\n\\begin{prop}\nA Lie algebra $L$ is $2$-capable if and only if $Z^{*}_2(L)=0.$ \\end{prop}\n\\begin{proof}Let $F\/R$ be a free presentation of $L$, and suppose $Z^{*}_2(L)=0$. Consider the natural epimorphism $\\pi: F\/[[R,F],F]\\twoheadrightarrow F\/R$.\nObviously\\[\\mathrm{Ker}~\\pi=R\/[[R,F],F]=Z_2(F\/[[R,F],F]),\\] and hence $L\\cong F\/[[R,F],F]\/Z_2(F\/[[R,F],F])$, so that $L$ is $2$-capable.\n\nConversely, let $L$ be $2$-capable, so that $H\/Z_2(H)\\cong L$ for a Lie algebra $H$. Put $F\/R\\cong H$ and $Z_2(H)\\cong S\/R$. There is a natural\nepimorphism $\\eta: F\/[[S,F],F]\\twoheadrightarrow F\/S\\cong L$. Since $Z_2(F\/[[S,F],F])\\subseteq \\mathrm{Ker}~\\eta$, $Z^{*}_2(L)=0$, as required.\n\\end{proof}\nThe following theorem gives an instrumental tool for proving the main result.\n\\begin{thm}\\label{ti}Let $I$ be an ideal of $L$ such that $I\\subseteq Z^{*}_2(L)$. Then the natural Lie homomorphism\n$\\mathcal{M}^{(2)}(L)\\rightarrow \\mathcal{M}^{(2)}(L\/I)$ is a monomorphism.\n\\end{thm}\n\\begin{proof} Let $F\/R$ be a free presentation of $L$ and let $S$ be the ideal of $F$ with $I\\cong S\/R$. Considering the natural homomorphism\n\\[\\phi:\\mathcal{M}^{(2)}(L)\\cong R\\cap F^3\/[[R,F],F]\\rightarrow \\mathcal{M}^{(2)}(L\/I)\\cong S\\cap F^3\/[[S,F],F]\\] and the fact that\n$S\/R\\subseteq Z_2(F\/R)$ shows that $\\phi$ has trivial kernel. The result follows.\n\\end{proof}\n\\begin{thm} A Heisenberg Lie algebra $H(m)$ is $2$-capable if and only if $m=1$.\n\\end{thm}\n\\begin{proof} For $m\\geq 2$, by Corollary \\ref{ca11}, $H(m)$ is not $2$-capable. Hence we may assume that $L\\cong H(1)$. Let $I$ be an ideal of $L$ of dimension 1. 
Then $L\/I$ is abelian of dimension $2$, and hence $\\mathrm{dim}~\\mathcal{M}^{(2)}(L\/I)=2$. On the other hand, Theorem \\ref{th1} implies $\\mathrm{dim}~\\mathcal{M}^{(2)}(L)=5$, so the natural homomorphism $\\mathcal{M}^{(2)}(L)\\rightarrow \\mathcal{M}^{(2)}(L\/I)$ cannot be a monomorphism. Hence, by Theorem\n\\ref{ti}, $I\\not\\subseteq Z^{*}_2(L)$, which forces $Z^{*}_2(L)=0$, so $L$ is $2$-capable, as required.\n\\end{proof}\n\\section{Introduction}\nIn smart factories, machine learning algorithms are increasingly used to extract value from the multi-sensor data that monitor manufacturing processes, whose characteristics are often complex and non-linear. In typical manufacturing applications, the trust in and safety of predictive models should be improved. As such, it is crucial to quantify and explain the confidence of the outcomes of the predictive models. An emerging area of research is the uncertainty quantification of deep learning models \\cite{ghahramani2015probabilistic}. Primarily, there are two types of uncertainties, epistemic and aleatoric: the former is the uncertainty in the model parameters due to limited data availability, while the latter arises from the noise in the data \\cite{kendall2017uncertainties}. Nonetheless, the uncertainty of deep learning models remains understudied in most realistic Industry 4.0 applications. \n\nDue to the dynamic and ad-hoc environment of factories, the data collected by multiple sensors are often non-stationary \\cite{yong2019multi}. In condition monitoring and quality prediction, machine learning (ML) methods rely on quantifying and detecting real changes in the environment and object of interest (real drift).
As sensors degrade over time with increasing noise and drift levels, ML methods which rely on the measurements of these sensors are affected (virtual drift).\n\nWhen drifts (real or virtual) occur, the noise level of the underlying distribution may change. Hence, the model must capture this change, which is termed heteroscedastic aleatoric uncertainty. In contrast, in a model where the estimated noise level is assumed constant, it is called homoscedastic aleatoric uncertainty \\cite{kendall2017uncertainties}.\n\nUnsupervised deep learning models such as autoencoders have been shown to perform well for detecting real drifts in quality prediction \\cite{wang2019generative} and prognosis applications \\cite{martinez2019visually}. The models can also detect virtual drifts that are caused by faults in the sensors \\cite{yang2018convolutional}. It is crucial to distinguish between real and virtual drifts so that operators can take appropriate mitigation actions depending on the source of anomalies.\n\nThis study is the first to shed light on the behaviour of quantified epistemic and aleatoric uncertainties in autoencoders for unsupervised learning within the context of real and virtual drift in sensor data. \n\nThe paper is structured as follows. Section~\\ref{autoencoder} provides the required background and our proposed approach. The performance evaluation, including the dataset, the reproducibility of the experimental results and the evaluation criteria, is covered in Section~\\ref{experiments} and Section~\\ref{results}. We conclude the paper and explain the future directions of our research in Section~\\ref{conclusion}.\n\n\\section{Bayesian Autoencoder}\n\\label{autoencoder}\n\nThe general structure of an autoencoder maps a given set of unlabelled training data $X = \\{x_1,x_2,x_3,\\ldots,x_N\\}$, $x_{i} \\in \\rm I\\!R^{D}$, into an output $\\hat{x}$ (i.e.\\ the reconstructed signal) through a latent representation $h$~\\cite{goodfellow2016deep}.
Structurally, every autoencoder consists of two parts: an encoder $f$ for mapping the original data $x$ into $h$ (i.e.\\ $h= f(x)$) and a decoder $g$ for mapping $h$ to a reconstructed signal of the input $\\hat{x}$ (i.e.\\ $\\hat{x} = g(h) = g(f(x))$). \n\nBy Bayes' rule,\n\\begin{equation}\\label{eq_posterior}\n p(\\theta|X) = \\frac{p(X|\\theta)\\ p(\\theta)}{p(X)} \\\\\n\\end{equation}\n\nwhere $p(X|\\theta)$ is the data likelihood, which can be modelled as a diagonal Gaussian distribution under an i.i.d.\\ assumption, with the likelihood mean given by the Bayesian Autoencoder's output. $p(\\theta)$ is the prior distribution of the Bayesian Autoencoder's parameters. For simplicity, one can assume a diagonal Gaussian prior, which corresponds to an L2 regularisation. \\Cref{eq_posterior} is analytically intractable for a deep neural network, which is highly non-linear and has a large number of parameters compared to classical statistical models. To this end, various approximate methods were developed, such as Markov Chain Monte Carlo (MCMC) \\cite{chen2014stochastic}, variational inference \\cite{blundell2015weight}, MC Dropout \\cite{gal2016dropout} and ensembling \\cite{pearce2018uncertainty}, to sample from the posterior distribution. Although these methods have been explored within supervised neural networks, to the best of our knowledge, they have not been extensively applied to autoencoders, which are unsupervised models. Within these methods, the marginal distribution $p(X)$ (or evidence) is constant with respect to $\\theta$ and is therefore ignored. \n\nIn this paper, we employ a sampling method, `anchored ensembling' \\cite{pearce2018uncertainty}, for approximating the posterior distribution while training the autoencoders.
In anchored ensembling, posteriors are approximated by Bayesian inference under the family of methods called randomised maximum a posteriori (MAP) sampling, where model parameters are regularised by values drawn from a distribution (the so-called anchor distribution), which can be set equal to the prior.\n\nAssume our ensemble consists of $M$ independent autoencoders and each $j$-th autoencoder contains a set of parameters $\\theta_j$, where $j\\in\\{1,2,3,\\ldots,M\\}$. The autoencoders are trained by minimising the loss function, which is the negative sum of the log-likelihood (based on an i.i.d.\\ assumption) and the log prior, where both are assumed to be Gaussian. The loss due to the likelihood is: \n\n\\begin{equation} \\label{eq_lik_loss}\n\\mathcal{L}(X, \\hat{X}) = \\frac{1}{N} \\sum^{N}_{i=1}\\Big({\\frac{1}{2\\sigma_i^2}}||x_i-\\hat{x_i}||^2 + \\frac{1}{2}\\log{\\sigma_i^2}\\Big)\n\\end{equation}\n\nwhere $\\sigma_i^2$ is the variance of the data point, which is also known as the aleatoric uncertainty for regression tasks. Note that a typical autoencoder minimises the reconstruction loss, $||x_i-\\hat{x_i}||^2$, which corresponds to a diagonal Gaussian likelihood with a fixed variance of 1. Instead of a fixed variance, by `learning' the variance term as an output of the autoencoder, the model can estimate the noise level for every data point $x_i$. Following the method proposed by \\cite{kendall2017uncertainties} to compute heteroscedastic aleatoric uncertainty for regression tasks, an extra layer is added to the final layer of the autoencoder, with dimensions equal to the size of the inputs, to predict the log variance, $\\log{\\sigma_i^2}$, corresponding to each data point $x_i$. \n\nIn anchored ensembling for approximating the posterior distribution, the `anchored weights' for each autoencoder are unique and sampled during initialisation from a prior distribution $\\theta_{anc,j} \\sim N(\\mu_{anc,j},\\sigma_{anc,j}^2)$ and remain fixed throughout the training procedure.
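To make the training objective concrete, the heteroscedastic likelihood loss with a learned log-variance output and the anchored regularisation of the parameters can be sketched as follows. This is a minimal NumPy illustration; the function names and toy values are our own choices, not the paper's released implementation:

```python
import numpy as np

def heteroscedastic_nll(x, x_hat, log_var):
    # Gaussian negative log-likelihood with a per-point learned log variance:
    # mean over data of 0.5 * ||x - x_hat||^2 / sigma^2 + 0.5 * log(sigma^2).
    # With log_var = 0 (sigma^2 = 1) this reduces to half the usual
    # reconstruction loss of a plain autoencoder.
    return np.mean(0.5 * np.exp(-log_var) * (x - x_hat) ** 2 + 0.5 * log_var)

def anchor_loss(theta, theta_anchor, lam, n):
    # Anchored regulariser: scaled squared distance of the parameters to the
    # fixed anchor values drawn once from the prior at initialisation.
    return (lam / n) * np.sum((theta - theta_anchor) ** 2)

# Total loss for one ensemble member on a toy batch:
x = np.array([0.5, -0.2, 1.0])
x_hat = np.array([0.4, -0.1, 0.9])
log_var = np.zeros(3)                      # sigma_i^2 = 1 for every point
theta = np.array([0.3, -0.7])
theta_anchor = np.array([0.1, -0.5])
total = heteroscedastic_nll(x, x_hat, log_var) + anchor_loss(theta, theta_anchor, lam=0.1, n=3)
```

Each of the $M$ ensemble members draws its own anchors, so minimising this loss independently per member yields approximate posterior samples.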
To scale the regulariser term arising from the prior, $\\lambda$ is set as a hyperparameter. The loss due to the prior is:\n\n\\begin{equation} \\label{eq_prior_loss}\n\\mathcal{L}(\\theta_j) = \\frac{\\lambda}{N} \\sum^{N}_{i=1}||\\theta_j-\\theta_{anc,j}||^2\n\\end{equation}\n\n\nWith \\cref{eq_lik_loss} and \\cref{eq_prior_loss}, the resulting loss function to be minimised is:\n\n\\begin{equation} \\label{eq_final_loss}\n\\mathcal{L}(X, \\hat{X}, \\theta_j) = \\mathcal{L}(X, \\hat{X}) + \\mathcal{L}(\\theta_j)\n\\end{equation}\n\n\nFor model prediction, the predictive posterior distribution of an unseen test input $X^*$ is calculated by integrating over all possible $\\theta$: \n\n\\begin{equation}\\label{eq_predictive_full}\np(X^*|X) = \\int{p(X^*|\\theta,X)\\ p(\\theta|X)\\ d\\theta}\n\\end{equation}\n\nAlthough \\cref{eq_predictive_full} is intractable, we can estimate it with the samples of $p(\\theta|X)$ obtained by training the ensemble: \n\n\\begin{equation}\\label{eq_predictive_estimate}\n\\hat{p}(X^*|X) = \\frac{1}{M}\\sum_{j=1}^{M}{p(X^*|\\theta_j,X)}\n\\end{equation}\n\nTo compute the epistemic uncertainty on a new single data point, $x^*$, the variance of the reconstructed signals from the ensemble is computed: \n\n\\begin{equation}\n\\mathrm{Var}(\\hat{x}^*) = \\frac{\\sum_{j=1}^M{(\\hat{x}^*_j-\\bar{x})^2}}{M} \n\\end{equation}\nwhere $M$ is the number of autoencoders in the ensemble and $\\bar{x}$ is the mean of the reconstructed signals $\\hat{x}^*_j$.\n\nIn addition to the reconstructed signals, the Bayesian Autoencoder also outputs the log variance of the data, $\\log{\\sigma_i^2}$, from which we can recover the heteroscedastic aleatoric uncertainty $\\sigma_i^2$ with the exponential function.\n\n\n\\section{Experimental Evaluation}\n\\label{experiments}\nThis section explains the real-world dataset used in our evaluation, the reproducibility of our results and the evaluation criteria of our proposed method.
\n\n\\subsection{Dataset}\nWe have tested our proposed approach on a publicly available dataset for condition monitoring of hydraulic system \\cite{helwig2015}. The dataset is obtained from a hydraulic test rig which permits safe and non-destructive changes to the states of various components (cooler, valve, pump and accumulator) to emulate faults and degradation. Redundant sensors are equipped on the test system on multiple locations to measure pressure, flow, temperature, power and vibration. There is a total of 17 sensors and various sampling frequencies of 1Hz, 10Hz and 100Hz. In the dataset, there is a total of 2205 cycles, and each working cycle of the hydraulic system lasts for 60 seconds. The methods developed in our study are not limited to condition monitoring and can be applied to other Industry 4.0 use cases. \n\n\\subsection{Data processing}\nDue to the inconsistent sampling frequencies, we resample the data to 1Hz. As such, this results in 60 (time points) * 17 (sensors) for each cycle. The features are then normalised using a standard scaler for each sensor with careful implementations to prevent train-test bias. We do not compute specific features from the data but instead we feed the resampled and rescaled raw signals to the Bayesian Autoencoder. By doing so, we are able to visualise and gain insights into the full reconstructed signals as predicted by the deep model.\n\n\\subsection{Experiment setup}\nWe set the number of hidden nodes and layers of the Bayesian Autoencoder to 1020-500-250-3-250-500-1020 with 10 samples in the ensemble. The Bayesian Autoencoder is trained and tested with 70\\% and 30\\%, respectively of the sensor data where the cooler condition is known to be healthy. Then, for the case of real drift, we test the model on data which the cooler condition has degraded to 20\\% and 3\\% (near failure) efficiency. 
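The preprocessing described above, resampling every channel to a common 1\,Hz rate and standardising each sensor with statistics fitted on the training split only, can be sketched as follows; the array shapes and helper names are illustrative assumptions rather than the exact released code:

```python
import numpy as np

def resample_to_1hz(signal, native_hz):
    # Average each one-second window of a 60-second cycle down to 60 points.
    return signal.reshape(60, native_hz).mean(axis=1)

def fit_scaler(train):                      # train: (n_cycles, 60, n_sensors)
    mu = train.mean(axis=(0, 1))            # one mean and std per sensor,
    sd = train.std(axis=(0, 1)) + 1e-12     # fitted on training data only
    return mu, sd

def apply_scaler(data, mu, sd):
    # Standardise with the training statistics to avoid train-test bias.
    return (data - mu) / sd

rng = np.random.default_rng(0)
raw_100hz = rng.normal(size=6000)           # one sensor channel at 100 Hz
cycle = resample_to_1hz(raw_100hz, 100)     # -> 60 time points
train = rng.normal(loc=2.0, scale=3.0, size=(10, 60, 17))
mu, sd = fit_scaler(train)
scaled = apply_scaler(train, mu, sd)
```

Fitting `mu` and `sd` on the training cycles alone is what prevents the train-test bias mentioned above.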
To simulate virtual drift scenarios, we create two datasets from the `healthy' test set and artificially inject a range of noise from 5-25\\% (i.e.\\ injected noise drawn from a uniform distribution) and a constant sensor drift of 5-25\\% of the mean in each one of the sensors (i.e.\\ injected drift). \n\nTo ensure the reproducibility of our results, we have\nmade the code of our implementation available and have also provided details of a configurable experimental set-up at~\\url{https:\/\/github.com\/bangxiangyong\/bae-drift-detection-zema-hydraulic}.\n\n\\section{RESULTS AND DISCUSSION}\n\n\\label{results}\nWe have conducted three sets of experiments: 1) real drift, 2) injected noise, and 3) injected drift. The reconstruction loss, epistemic and aleatoric uncertainties for these experiments are summarised in Fig.~\\ref{fig-real-drift}, Fig.~\\ref{fig-injected-noise} and Fig.~\\ref{fig-injected-drift} respectively. Although we show the results for only one of the pressure sensors, denoted PS1, we have extended the experiment to every sensor and found similar results. One limitation of our analysis is that we do not explore virtual drifts on combinations of sensors.\nIn general, we note that the mean of the reconstruction loss, epistemic and aleatoric uncertainties increase in both cases of real and virtual drift conditions. \n\nThe reconstruction loss for the cooler condition of 3\\% has a longer tail which overlaps with the less faulty conditions (Fig.~\\ref{fig-real-drift}a). In practice, when using it for anomaly detection, this may lead to false positives. Additionally, the reconstruction loss and aleatoric uncertainty increase exponentially with the degrading condition of the cooler, whereas the epistemic uncertainty increases linearly in the same scenarios (Fig.~\\ref{fig-real-drift}). \n\nMoreover, the epistemic uncertainty is generally less affected by noise in the sensor than the reconstruction loss (Fig.~\\ref{fig-injected-noise}).
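For reference, the two virtual-drift perturbations used in the experiment setup, injected uniform noise and a constant offset drift of the sensor mean, can be sketched as follows; the exact scaling conventions here are our own illustrative assumptions:

```python
import numpy as np

def inject_noise(signal, level, rng):
    # Add zero-mean uniform noise whose amplitude is `level` (0.05-0.25)
    # times the signal's standard deviation (the scaling is illustrative).
    amp = level * signal.std()
    return signal + rng.uniform(-amp, amp, size=signal.shape)

def inject_drift(signal, level):
    # Add a constant offset of `level` times the signal mean (sensor drift).
    return signal + level * signal.mean()

rng = np.random.default_rng(42)
healthy = rng.normal(loc=10.0, scale=1.0, size=(5, 60))  # healthy test cycles
noisy = inject_noise(healthy, 0.25, rng)
drifted = inject_drift(healthy, 0.25)
```

Applying these perturbations to one sensor at a time produces the injected-noise and injected-drift test sets evaluated above.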
Unexpectedly, however, in the event of increasing sensor noise, the aleatoric uncertainty does not increase, as shown in Fig.~\\ref{fig-injected-noise}c. Intuitively, we expect the estimated variance to be proportional to the level of sensor noise. In contrast, we note that the aleatoric uncertainty increases dramatically for the degrading cooler condition since multiple sensors are affected simultaneously. Therefore, to compare these two situations, a natural next step is to investigate the effects of perturbations applied to a combination of sensors. With that, we can develop a feature importance ranking based on the sensitivity of the model's uncertainties. \n\nIn the case of injected sensor drift (Fig.~\\ref{fig-injected-drift}), the reconstruction loss increases exponentially, whereas the epistemic uncertainty increases almost linearly. In contrast, the aleatoric uncertainty shows a convex behaviour. Unfortunately, since the aleatoric uncertainty is computed using a black-box model, we do not have an intuitive explanation for it \\cite{venkatesh2019heteroscedastic}.
\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=0.6]{figures\/real_drift.png}}\n\\caption{a) Reconstruction loss, b) Epistemic uncertainty, c) Aleatoric uncertainty under real drift of degrading cooler condition}\n\\label{fig-real-drift}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=0.6]{figures\/injected_noise.png}}\n\\caption{a) Reconstruction loss, b) Epistemic uncertainty, c) Aleatoric uncertainty under virtual drift of increasing noise in a pressure sensor}\n\\label{fig-injected-noise}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=0.6]{figures\/injected_drift.png}}\n\\caption{a) Reconstruction loss, b) Epistemic uncertainty, c) Aleatoric uncertainty under virtual drift of increasing drift in a pressure sensor}\n\\label{fig-injected-drift}\n\\end{figure}\n\nBy solely relying on the reconstruction loss, we are unable to distinguish real and virtual drifts. Thus, we posit that, by capturing these patterns of uncertainties, novel methods can potentially be developed to distinguish real and virtual drifts in sensor data, as shown in Fig.~\\ref{fig-3d-scatter-bae}. From a qualitative perspective, we note that the points form clusters which are separable. This implies we can apply a clustering algorithm (e.g.\\ $k$-means or hierarchical clustering) on these three metrics: reconstruction loss, epistemic uncertainty and aleatoric uncertainty of every data point to achieve unsupervised classification. To the best of our knowledge, we are the first to propose this application within Bayesian Autoencoders.\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=0.70]{figures\/bae_3d_plot.png}}\n\\caption{Scatter plot of Bayesian Autoencoder's outputs under various conditions: healthy condition, degrading cooler condition, noisy and drifting pressure sensor.
This illustrates the separability of the types of drifts based on the trio: reconstruction loss, epistemic uncertainty and aleatoric uncertainty.}\n\\label{fig-3d-scatter-bae}\n\\end{figure}\n\nWe have conducted further experiments (in Fig.~\\ref{fig-recon-sig}) to gain more insights about the actual, reconstructed values and their uncertainties. For the nearly faulty cooler condition (Fig.~\\ref{fig-recon-sig}b), the reconstruction loss shows an insignificant increase compared to the normal actual signal (Fig.~\\ref{fig-recon-sig}a). However, epistemic and aleatoric uncertainties increase significantly. Despite the presence of noise and drifts (Fig.~\\ref{fig-recon-sig}c \\& d), we note that the Bayesian Autoencoder is able to reconstruct the shape of the normal signal. In such a case, the reconstruction loss increases rapidly; this is due to the large difference between the actual and reconstructed values. \nMeanwhile, the uncertainties do not show a significant increase in these situations. By observing the uncertainties of the reconstructed signal, operators can gain more interpretable insights into the model's predictions. Since the uncertainties are computed on a feature level, the uncertainty of every sensor on every time step can be leveraged for further decision making.\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[scale=0.65]{figures\/RECON-SIG.png}}\n\\caption{Measured signal and reconstructed signal of pressure sensor with epistemic and aleatoric uncertainties in a working cycle}\n\\label{fig-recon-sig}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{conclusion}\nDistinguishing between a real and virtual drift is of importance, especially in ML for manufacturing where the environments are highly dynamic. Our conducted experiments show that the reconstruction loss typically used in autoencoders is unable to distinguish a real drift in the environment and virtual drift due to sensor degradation. 
By observing the epistemic and aleatoric uncertainties, a difference is noticed in the quality of prediction in each case, which can be leveraged for distinguishing real and virtual drifts in sensor data. Since uncertainty quantification using Bayesian Autoencoders is mostly unexplored in the industrial context, this appears to be a promising field of research which deepens our understanding and trust of these deep models. We leave the detailed analysis of these observations for future studies. \n\nFuture work will involve using a Gaussian likelihood with a full covariance matrix, instead of a diagonal only (as in our conducted experiments), which may reveal more insights in interpreting the model's aleatoric uncertainty measures. Other than a Gaussian likelihood, the effects of using different likelihood distributions can also be explored. Moreover, we can leverage the Bayesian Autoencoder's outputs for a novel unsupervised classification method. We will also extend the experiments to study the effect of variant Bayesian Autoencoder architectures on various datasets for identifying the real and virtual drifts and their uncertainties. \n\n\\section*{Acknowledgment}\n\nThe research presented was supported by European Metrology Programme for Innovation and Research (EMPIR) under the project Metrology for the Factory of the Future (MET4FOF), project number 17IND12 as well as the PITCH-IN (Promoting the Internet of Things via Collaborations between HEIs and Industry) project funded by Research England. We express our gratitude to Tim Pearce for his inputs.
(In this connection it should be mentioned\nthat the $q$-log-concavity or weighted log-concavity considered in the\nliterature implies log-concavity if the weights are non-negative.)\nRecently Kalmykov and Karp~\\cite{KalmykovKarp} established log-concavity\nresults (or equivalently, Tur\\'an type inequalities) for specific basic\nhypergeometric series.\n\nFor $a,b,q\\in\\C$ we define the $a,b;q$-extension\nof a complex number $x$ as follows:\n\\begin{equation}\\label{abqNum}\n[x]_{a,b;q} :=\n\\frac{(1-q^x)(1-aq^x)(1-bq)(1-aq\/b)}{(1-q)(1-aq)(1-bq^x)(1-aq^x\/b)},\n\\end{equation}\nwhere the variables are chosen such that none of the denominator factors vanish.\n(This definition corresponds to that of the $a,b;q$-numbers considered\nby the first author and Yoo in\n\\cite{SchlosserYoo1,SchlosserYoo2,SchlosserYoo3,SchlosserYoo5}.\nIn the latter $b$ has to be replaced by $bq^{-1}$ to match the definition\nused in \\eqref{abqNum}.)\nLetting $a\\to 0$ followed by $b\\to 0$ (or $b\\to 0$ followed by $a\\to\\infty$),\nthe $a,b;q$-numbers reduce to the basic numbers.\n\nIn Section~\\ref{SectionDef} we list some elementary properties of the\n$a,b;q$-numbers and explain the various notions of log-concavity we are\nconcerned about in this paper. Section~\\ref{SectionNum} deals with results\nabout the log-concavity of the $a,b;q$-numbers. The lemma proved in that\nsection involves a multiplicative analogue of Tur\\'an's inequality and plays\na key role in proving results involving $a,b;q$-numbers and $a,b;q$-binomial\ncoefficients, and in proving results in the (more general) elliptic setting.\nSection~\\ref{SectionBin} is devoted to log-concavity results for uni- and\nbiparametric extensions of the $q$-binomial coefficient. 
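As a quick numerical sanity check of \eqref{abqNum} (our own illustration, not part of the original text), the following sketch evaluates $[x]_{a,b;q}$ and confirms that letting $a\to 0$ followed by $b\to 0$ recovers the $q$-number $[x]_q$:

```python
def abq_number(x, a, b, q):
    # [x]_{a,b;q}: the well-poised and balanced quotient defining the a,b;q-number.
    return ((1 - q**x) * (1 - a * q**x) * (1 - b * q) * (1 - a * q / b)) / (
           (1 - q) * (1 - a * q) * (1 - b * q**x) * (1 - a * q**x / b))

def q_number(x, q):
    # The classical q-number [x]_q = (1 - q^x) / (1 - q).
    return (1 - q**x) / (1 - q)

# a -> 0 first, then b -> 0 (numerically: a several orders smaller than b):
x, q = 2.5, 0.7
approx = abq_number(x, a=1e-12, b=1e-6, q=q)
```

With `a` much smaller than `b`, the factors involving $a/b$ stay close to $1$ and the value approaches $[x]_q$ to high precision.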
The\n$a,b;q$-numbers can be further extended to the elliptic numbers that\nappeared in \\cite{SchlosserYoo1,SchlosserYoo2,SchlosserYoo3,SchlosserYoo5},\nand are the subject of study in Sections~\\ref{sec:ell-prim} and\n\\ref{sec:ell-results}.\nOur elliptic numbers are indeed elliptic functions (i.e., they are meromorphic\nand doubly periodic); they are expressed as certain ratios of theta functions.\nAccordingly, the analysis in Sections~\\ref{sec:ell-prim} and\n\\ref{sec:ell-results} involves some machinery from the theory of Jacobi\ntheta functions (or, equivalently, of the Weierstra{\\ss} sigma function),\nwhich is classical but not so well known in the community of $q$-series;\nthis is the reason why we cover this material in separate sections of our\npaper. Finally, in Section~\\ref{sec:con} we conclude with an outlook on\nfurther open problems.\n\n\\section{Preliminaries}\\label{SectionDef}\n\nIt is a matter of simple algebra to verify for arbitrary\n$x$ and $y$ the following addition formula\nfor the $a,b;q$-numbers\ndefined in \\eqref{abqNum}:\n\\begin{subequations}\\label{abq_addition}\n\\begin{equation}\n[x]_{a,b;q} + W_{a,b;q}(x) [y-x]_{aq^{2x},bq^x;q} = [y]_{a,b;q},\\end{equation}\nwhere $W_{a,b;q}(x)$ is the \\textit{$a,b;q$-weight}, defined by\n\\begin{equation}\\label{single_weight}\nW_{a,b;q}(x) = \\frac{(1-aq^{1+2x})(1-b)(1-bq)(1-a\/b)(1-aq\/b)}\n{(1-aq)(1-bq^x)(1-bq^{1+x})(1-aq^x\/b)(1-aq^{1+x}\/b)}q^x.\n\\end{equation}\n\\end{subequations}\n\nNow if we impose $0<q<1$, $0\\le a<b<1$ and $x>0$ we have\n\\begin{equation}\\label{positivity_dom}\n[x]_{a,b;q}>0\\qquad\\text{and}\\qquad W_{a,b;q}(x)>0,\n\\end{equation}\nas all the factors appearing in the respective quotients\nare manifestly positive.\n\nIt is also easy to observe the following three properties of the\n$a,b;q$-numbers and the associated $a,b;q$-weights:\n\\begin{subequations}\\label{properties}\n\\begin{align}\n\\label{zero_property}[0]_{a,b;q} &= 0\\qquad\\text{and}\\qquad W_{a,b;q}(0)=1,\\\\\n\\label{order_rel}[x]_{a,b;q}
&\\geq [y]_{a,b;q}\\qquad\n\\text{for}\\quad 0<y\\le x,\\\\\n\\label{negative_numbers_def}[-x]_{a,b;q} &=\n-[x]_{aq^{-2x},bq^{-x};q}\\big\/W_{aq^{-2x},bq^{-x};q}(x).\n\\end{align}\n\\end{subequations}\nThe relation \\eqref{negative_numbers_def} follows from the addition formula\n\\eqref{abq_addition} by setting $y=0$ there and then replacing $(a,b)$\nby $(aq^{-2x},bq^{-x})$.\nThe order relation \\eqref{order_rel} holds since for $x>y>0$ the difference\n$[x]_{a,b;q}-[y]_{a,b;q}$ is\n$W_{a,b;q}(y)\\,[x-y]_{aq^{2y},bq^y;q}$, which is positive by\n\\eqref{positivity_dom}.\nWhile we could use the above relation \\eqref{negative_numbers_def}\nto deal with the $a,b;q$-numbers of negative argument,\nin this paper we shall restrict our attention to the case that the\narguments are non-negative real numbers.\n\nThere are two intermediate extensions from the basic numbers\nto the $a,b;q$-numbers. These two intermediate extensions correspond to the\nlimits $b\\to 0$, and to $a\\to 0$, in the $a,b;q$-numbers, respectively.\nSpecifically, one can let $b\\to 0$ (or $b\\to\\infty$) in \\eqref{abqNum},\nby which one obtains the \\textit{$a;q$-numbers}\n(studied in \\cite{SchlosserYoo2,SchlosserYoo4,SchlosserYoo5}):\n\\begin{equation}\\label{a_q_numbers}\n[x]_{a;q}:=[x]_{a,0;q} = \\frac{(1-q^x)(1-aq^x)}{(1-q)(1-aq)}q^{1-x}.\n\\end{equation}\nThese not only generalize the standard $q$-numbers $[x]_q$\nobtained by letting $a\\to 0$ in \\eqref{a_q_numbers},\nbut also the \\textit{quantum} numbers\n$\\langle x\\rangle_q:=(q^x-q^{-x})\/(q-q^{-1})$\n(which frequently appear in physical models),\nobtained by letting $a\\to -1$ in \\eqref{a_q_numbers}.\n\nOne can also let $a\\rightarrow 0$ (or $a\\to\\infty$) in \\eqref{abqNum}\nand arrive at the \\textit{$(b;q)$-numbers}\n\\begin{equation}\\label{b_q_numbers}\n[x]_{(b;q)}:=[x]_{0,b;q} = \\frac{(1-q^x)(1-bq)}{(1-q)(1-bq^x)}.\n\\end{equation}\nWe decided to put parentheses in ``$(b;q)$-numbers'' but none in\n``$a;q$-numbers'' to distinguish them in notation and thus avoid confusion.\n(For instance, we have $[x]_{(0;q)}=[x]_q$ but $[x]_{0;q}=[x]_{q^{-1}}$.)\n\nIn terms of standard terminology for basic hypergeometric series\n(cf.\\ \\cite{GasperRahman}), the basic hypergeo\\-met\\-ric expression\non the right-hand side of \\eqref{a_q_numbers} is \\textit{well-poised}\nand that on the right-hand side of \\eqref{b_q_numbers}
is \\textit{balanced}.\nIt should come as no surprise that the right-hand side of\n\\eqref{abqNum} is well-poised and balanced (while the corresponding\nexpression for the weight in \\eqref{single_weight} is even\n\\textit{very-well-poised} and balanced).\n\nWe now explain different notions of log-concavity.\n\\begin{definition}\\label{log_concavity}\nA sequence of real numbers\n$(a_k)_{k=0}^{\\infty}$ (indexed by non-negative integers)\nis called \\textit{log-concave} if\n\\begin{equation}\\label{log_conc_def}\na_k^2 \\geq a_{k+1}a_{k-1},\n\\end{equation} for all $k\\geq 1$.\nSimilarly, one calls a sequence $(a_k)_{k=0}^{\\infty}$\n\\textit{strongly log-concave} if\n\\begin{equation}\\label{st_log_conc_def} a_k a_l \\geq a_{k+1}a_{l-1}\\end{equation}\nfor all positive integers $k$ and $l$ with $k\\geq l$.\n\\end{definition}\nIt is clear that if the $a_k$ are all positive (or all negative),\nlog-concavity implies strong log-concavity, since \\eqref{log_conc_def} is\nthen equivalent to\n\\begin{equation*}\n\\frac{a_k}{a_{k+1}}\\ge\\frac{a_{k-1}}{a_k},\n\\end{equation*}\nwhich can be iterated to establish \\eqref{st_log_conc_def}.\n\nWe will also use the notions of log-concavity and strong log-concavity\nin the continuous setting.\n\\begin{definition}\\label{log_concavity-func}\nA function $a(x)$ depending on a non-negative real variable $x$\nis called \\textit{continuously log-concave} if\n\\begin{equation}\\label{log_conc_def-func}\na(x)^2 \\geq a(x+r)a(x-r),\n\\end{equation}\nfor all $x\\geq r\\ge 0$, and \\textit{continuously\nstrongly log-concave} if\n\\begin{equation}\\label{st_log_conc_def-func}\na(x) a(y) \\geq a(x+r)a(y-r)\\end{equation}\nfor all real $x\\geq y\\geq r\\ge 0$.\n\\end{definition}\nAgain it is easy to see that if $a(x)>0$ for all $x\\ge 0$ (or\n$a(x)<0$ for all $x\\ge 0$), log-concavity implies strong log-concavity,\nsince the down-shift by $r$ of the arguments,\n$a(x)\/a(x+r)\\ge a(x-r)\/a(x)$, can be iterated with an additional\ndown-shift by $x-y$ to establish
\\eqref{st_log_conc_def-func}.\n\nFor a positive (or negative)\nfunction $a(x)$, $x\\ge 0$, one can equivalently express the continuous\nstrong log-concavity as\n\\begin{equation}\\label{log_conc_equivalent}\n\\frac{a(x+r)a(y-r)}{a(x) a(y)} \\leq 1,\n\\end{equation}\nwhere $x\\geq y\\geq r\\geq 0$.\nSimilar reformulations can be applied to the other notions of\nlog-concavity considered above, and we will use them when convenient.\n\n\\begin{example}\nPerhaps the simplest example of a continuously strongly log-concave function\nis the identity function on $[0,\\infty)$. Indeed, assuming $x\\geq y\\geq r>0$\n(the case $r=0$ of \\eqref{st_log_conc_def-func} is trivial),\n\\begin{equation}\\label{eq:triv}\n\\frac{(x+r)(y-r)}{xy}<1\n\\end{equation}\nof course holds, since $xy-(x+r)(y-r)=r(r+x-y)>0$.\nThis simple fact is already responsible for the continuous strong log-concavity\nof the \\textit{continuous binomial coefficients} (which were recently studied\nby Salwinski~\\cite{Salwinski}, who proved identities they satisfy,\namong them a continuous binomial theorem),\ndefined by\n\\begin{equation}\\label{cont-bin1}\n\\binom xk=\\frac{\\Gamma(1+x)}{\\Gamma(1+k)\\Gamma(1+x-k)}\n\\qquad\\text{for $x,k\\in\\C$, $x\\notin\\{-1,-2,\\ldots\\}$.}\n\\end{equation}\nBy virtue of Euler's product formula for the gamma function,\n\\begin{equation}\n\\Gamma(1+x)=\\prod_{j=1}^\\infty\\frac{j^{1-x}(1+j)^x}{x+j}\n\\qquad\\text{for $x\\in\\C$, $x\\notin\\{-1,-2,\\ldots\\}$},\n\\end{equation}\nwe may rewrite \\eqref{cont-bin1} in the following convenient product form:\n\\begin{equation}\n\\binom xk=\\prod_{j=1}^\\infty\\frac{(k+j)(x-k+j)}{j(x+j)}\n\\qquad\\text{for $x,k\\in\\C$, $x\\notin\\{-1,-2,\\ldots\\}$.}\n\\end{equation}\nIt is now easy to deduce the following result:\n\\textit{For any real $x,y,k,l,r$ satisfying $x\\geq y$,\n$k\\geq l\\ge r\\ge 0$, and $y-l\\geq x-k$, we have the continuous\nstrong
log-concavity}\n\\begin{equation}\\label{binom_inequality}\n\\binom{x}{k}\\binom{y}{l} \\geq \\binom{x}{k+r}\\binom{y}{l-r}.\n\\end{equation}\n\\begin{proof}\nThe $r=0$ case is trivial, so assume $r>0$. After canceling common factors\nwe see that\n\\begin{equation*}\n\\frac{\\binom x{k+r}\\binom y{l-r}}{\\binom x{k}\\binom y{l}}=\n\\prod_{j=1}^\\infty\\frac{(k+r+j)(x-k-r+j)(l-r+j)(y-l+r+j)}{(k+j)(x-k+j)(l+j)(y-l+j)}<1,\n\\end{equation*}\nsince by taking different instances of \\eqref{eq:triv}, we have\n\\begin{equation*}\n\\frac{(k+r+j)(l-r+j)}{(k+j)(l+j)}<1\\qquad\\text{and}\\qquad\n\\frac{(y-l+r+j)(x-k-r+j)}{(y-l+j)(x-k+j)}<1,\n\\end{equation*}\nfor each $j\\geq 1$, which establishes the claim.\n\\end{proof}\n\\end{example}\n\n\\section{Log-concavity of $a,b;q$-numbers}\\label{SectionNum}\n\nTo deal with the continuous log-concavity\nof the $a;q$-, the $(b;q)$- and the $a,b;q$-numbers we will establish\nthe following result.\n\n\\begin{proposition}\\label{Proposition_1}\nFor $0<q<1$ and real $a,b$ with $0<a\\le b^2$ and $b<1$,\nthe $a,b;q$-numbers are continuously log-concave, that is,\n\\begin{equation*}\n[x]_{a,b;q}^2\\geq [x+r]_{a,b;q}\\,[x-r]_{a,b;q}\n\\qquad\\text{for all}\\quad x\\geq r\\geq 0.\n\\end{equation*}\n\\end{proposition}\nAfter cancellation of the factors not depending on $x$,\nthis inequality takes the equivalent form\n\\begin{equation}\\label{long_fractions}\n\\frac{(1-q^{x+r})(1-q^{x-r})}{(1-q^x)^2}\\,\n\\frac{(1-aq^{x+r})(1-aq^{x-r})}{(1-aq^x)^2}\\,\n\\frac{(1-bq^x)^2}{(1-bq^{x+r})(1-bq^{x-r})}\\,\n\\frac{(1-aq^x\/b)^2}{(1-aq^{x+r}\/b)(1-aq^{x-r}\/b)}\\leq 1.\n\\end{equation}\nIts proof relies on the following elementary result, a multiplicative\nanalogue of Tur\\'an's inequality.\n\n\\begin{lemma}\\label{lem:mult_Turan}\nLet $0\\le\\delta\\le a<\\lambda a\/b\\le b<\\lambda$ and let $f$ be a\nnon-negative, decreasing, differentiable and concave function on\n$[\\delta,\\lambda]$. Then\n\\begin{equation*}\nf(a)\\,f(\\lambda)\\leq f(b)\\,f(\\lambda a\/b).\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nLet $x:=f(\\lambda a\/b)-f(\\lambda)$ and $z:=f(a)-f(b)$; both are\nnon-negative since $f$ is decreasing. Writing $f(a)=f(b)+z$ and\n$f(\\lambda)=f(\\lambda a\/b)-x$, we find\n\\begin{equation*}\nf(a)f(\\lambda)-f(b)f(\\lambda a\/b)=z\\,f(\\lambda a\/b)-x\\,f(b)-xz.\n\\end{equation*}\nIf $z\\le x$, then since $f(\\lambda a\/b)\\ge 0$ the right-hand side is at most\n$x\\big(f(\\lambda a\/b)-f(b)-z\\big)=x\\big(f(\\lambda a\/b)-f(a)\\big)\\le 0$\n(as $x\\ge 0$, $f$ is decreasing, and $a<\\lambda a\/b$, so that\n$f(\\lambda a\/b)\\le f(a)$),\nwhile the case $z>x$ is actually vacuous.\nIndeed, $z>x$, i.e.
$f(a)-f(b)>f(\\lambda a\/b)-f(\\lambda)$,\nwould be equivalent to\n\\begin{equation*}\n\\frac{f(a)-f(\\lambda a\/b)}{\\lambda-b}>\\frac{f(b)-f(\\lambda)}{\\lambda-b},\n\\end{equation*}\nwhich again is equivalent to\n\\begin{equation}\\label{feq}\n \\frac ab\\cdot\\frac{f(\\frac{\\lambda a}b)-f(a)}\n {\\frac{\\lambda a}b-a}<\\frac{f(\\lambda)-f(b)}{\\lambda-b}.\n\\end{equation}\nRecall that $a<\\lambda a\/b\\le b<\\lambda$.\nBy two applications of the mean value theorem there exist\n$c\\in(a,\\lambda a\/b)$ and $d\\in(b,\\lambda)$ (in particular,\n$c<d$) such that\n\\begin{equation*}\nf'(c)=\\frac{f(\\frac{\\lambda a}b)-f(a)}{\\frac{\\lambda a}b-a}\n\\qquad\\text{and}\\qquad\nf'(d)=\\frac{f(\\lambda)-f(b)}{\\lambda-b},\n\\end{equation*}\nso that \\eqref{feq} would amount to $\\frac ab\\,f'(c)<f'(d)$.\nHowever, $f$ is concave, whence $f'$ is decreasing and $f'(d)\\le f'(c)$;\nmoreover, $f'(c)\\le 0$ and $a\/b<1$ give\n$\\frac ab\\,f'(c)\\ge f'(c)\\ge f'(d)$, contradicting \\eqref{feq}.\nHence $z\\le x$ must hold, and the lemma is proved.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{Proposition_1}]\nThe case $r=0$ is trivial. For $r>0$, let\n\\begin{equation}\\label{f_func} f(u) := f_{q,x,r}(u) =\n\\frac{(1-uq^{x+r})(1-uq^{x-r})}{(1-uq^x)^2}\\qquad\\text{for $u\\in[0,1]$}.\n\\end{equation}\nWe have\n\\begin{equation*}\nf'(u)=-\\frac{(1-q^r)^2(1+uq^x)}{(1-uq^x)^3}q^{x-r}<0,\n\\end{equation*}\nand\n\\begin{equation*}\nf''(u)=-\\frac{2(1-q^r)^2(2+uq^x)}{(1-uq^x)^4}q^{2x-r}<0,\n\\end{equation*}\nfor any $u\\in (0,1)$,\nwhich allows us to apply Lemma~\\ref{lem:mult_Turan} with $f$ as defined\nin \\eqref{f_func}, with $\\delta=0$ and $\\lambda=1$\n(note that $0<a<a\/b\\le b<1$ by our assumptions on $a$ and $b$).\nSince the left-hand side of \\eqref{long_fractions} equals\n$f(1)f(a)\/\\big(f(b)f(a\/b)\\big)$, this immediately establishes that the\nexpression in \\eqref{long_fractions} is $\\leq 1$.\n\\end{proof}\n\nSince the $a,b;q$-numbers are positive for positive arguments on this\nparameter domain by \\eqref{positivity_dom}, the iteration argument from\nSection~\\ref{SectionDef} shows that they are then also continuously\nstrongly log-concave.\n\n\\section{$a,b;q$-Binomial coefficients and\nlog-concavity}\\label{SectionBin}\n\nIn this section, we recall the definition of the $a,b;q$-analogue of binomial\ncoefficients. These first appeared in work of the first\nauthor~\\cite{Schlosser1} in the context of enumeration of weighted lattice\npaths and were subsequently studied in further work of the first\nauthor~\\cite{Schlosser2} in connection with a non-commutative binomial\ntheorem, and in joint work of the first author with\nYoo~\\cite{SchlosserYoo1, SchlosserYoo2}; in particular, these papers\ncontain their recurrence relations and combinatorial interpretations,\nalso for the more general $a,b;q,p$- or elliptic binomial coefficients,\nwhich include the $a,b;q$-binomial coefficients obtained by letting\n$p\\to 0$.\n\nFor the following definition
we restrict the base $q$ to satisfy $0<|q|<1$\n(while for studying inequalities we further assume $q$ to be real and\nsatisfy $0<q<1$); the $a,b;q$-binomial coefficients are then defined as\nin \\cite{SchlosserYoo1,SchlosserYoo2}.\n\n\\section{Elliptic preliminaries}\\label{sec:ell-prim}\n\nThe elliptic extension of the $a,b;q$-numbers is built from the modified\nJacobi theta function\n\\begin{equation*}\n\\theta(u;p):=\\prod_{j=0}^\\infty\\big(1-p^ju\\big)\\big(1-p^{j+1}\/u\\big),\n\\qquad u\\neq 0,\\quad 0<|p|<1.\n\\end{equation*}\nThe \\textit{elliptic numbers} $[x]_{a,b;q,p}$ are obtained from\n\\eqref{abqNum} by replacing each factor of the form $(1-z)$ by\n$\\theta(z;p)$:\n\\begin{equation*}\n[x]_{a,b;q,p}:=\n\\frac{\\theta(q^x;p)\\,\\theta(aq^x;p)\\,\\theta(bq;p)\\,\\theta(aq\/b;p)}\n{\\theta(q;p)\\,\\theta(aq;p)\\,\\theta(bq^x;p)\\,\\theta(aq^x\/b;p)}.\n\\end{equation*}\nThe key technical ingredient for the elliptic setting is the following\nmonotonicity property of the logarithmic derivative of the theta function.\n\n\\begin{lemma}\\label{lem:theta_logder}\nFor $u>0$ and $0<p<u^2q^{2x}<1$ we have\n\\begin{equation*}\n2\\frac{\\mathrm d}{{\\mathrm d}u}\\log\\theta(u^2q^{2x};p)>\n4\\frac{\\mathrm d}{{\\mathrm d}u}\\log\\theta(uq^x;p).\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nSince the logarithmic derivative of a product is the sum\nof the logarithmic derivatives, we thus need to show\n\\begin{align*}\n&2\\sum_{j=0}^\\infty\\frac{\\mathrm d}{{\\mathrm d}u}\\log\n\\big[(1-p^ju^2q^{2x})(1-p^{j+1}u^{-2}q^{-2x})\\big]\\\\\n&>\n4\\sum_{j=0}^\\infty\\frac{\\mathrm d}{{\\mathrm d}u}\\log\n\\big[(1-p^juq^{x})(1-p^{j+1}u^{-1}q^{-x})\\big].\n\\end{align*}\nThe inequality actually holds term-wise, for each $j$.\nIndeed, comparison of the logarithmic derivatives for fixed $j$ amounts to\n\\begin{equation*}\n \\frac{-4p^juq^{2x}}{1-p^ju^2q^{2x}}+\n \\frac{4p^{j+1}u^{-3}q^{-2x}}{1-p^{j+1}u^{-2}q^{-2x}}>\n \\frac{-4p^jq^{x}}{1-p^juq^{x}}+\n \\frac{4p^{j+1}u^{-2}q^{-x}}{1-p^{j+1}u^{-1}q^{-x}},\n\\end{equation*}\nwhich is easy to verify: the first summand on the left-hand side is\ngreater than the first summand on the right-hand side (this reduces to\n$uq^x<1$), and the analogous comparison of the second summands on both\nsides reduces to $uq^x<1$ as well.\n\\end{proof}\n\nHaving set up all the ingredients, the results now\nimmediately extend from the $a,b;q,p$-case to the elliptic case.\n\n\\begin{theorem}\\label{direct_log_concavity_ell}\nThe elliptic numbers $[x]_{a,b;q,p}$ are continuously\nstrongly log-concave. In particular, for all real numbers $q,p,x,y,r,a,b$\nsatisfying $0